Over the past few decades, we’ve focussed our energies on migrating information from pen and paper in the physical world into digital formats. In the process, we’ve created vast troves of information on the Internet, using computers to organise and store the data we collect. So far, computers have merely been helping us process this information.
But the future lies in the converse: computers that begin to observe, assimilate and report on the physical world themselves.
Of course, computers don’t perceive the physical world the way we do, so we build tools that let them. Take barcodes, for instance. A thin beam of light (usually a laser) scans a pattern of vertical lines, allowing a computer to identify a book or a product without anyone having to type that information in. We see this every day at checkout counters in grocery stores, supermarkets and libraries. The future of the Web will be somewhat similar but far more sophisticated: computers will identify our physical world by looking at it, much as we do, with ‘digital eyes’ in the form of semiconductor-based cameras.
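To make the barcode idea concrete: the widely used EAN-13 format encodes a 13-digit number whose last digit is a check digit, so a scanner can tell a clean read from a misread. Below is a small Python sketch of that check-digit rule (the function names are my own, not from any scanner library); the weighting scheme itself is the standard EAN-13 one.

```python
def ean13_check_digit(first12: str) -> int:
    """Compute the EAN-13 check digit from the first 12 digits."""
    # Digits in odd positions (1st, 3rd, ...) are weighted 1,
    # digits in even positions are weighted 3.
    total = sum(int(d) * (1 if i % 2 == 0 else 3)
                for i, d in enumerate(first12))
    return (10 - total % 10) % 10

def is_valid_ean13(code: str) -> bool:
    """A decoder accepts a scan only if the check digit matches."""
    return (len(code) == 13 and code.isdigit()
            and ean13_check_digit(code[:12]) == int(code[12]))

# Example with a well-formed EAN-13 number:
print(is_valid_ean13("4006381333931"))  # True
print(is_valid_ean13("4006381333930"))  # False: corrupted last digit
```

The point of the check digit is exactly the “digital eyes” problem in miniature: the physical world is noisy, so the encoding carries enough redundancy for the computer to verify what it saw.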