It all started 50 years ago.
Gordon Moore conjectured that the new technology of photolithography on silicon would allow component density to grow exponentially for an indefinite number of years.
At Stanford University, two computer science research laboratories opened. In one, Douglas Engelbart set out to augment the human mind with computer tools; in the other, John McCarthy foresaw an artificial intelligence that would exceed human intelligence.
J. C. R. Licklider first, and Lawrence Roberts after him, envisioned a global computer network that might one day connect, even wirelessly, to personal devices.
50 years later, fast-forward to today.
Moore’s law still seems to hold: everything has changed, yet we still observe exponential growth in the power of computer systems.
Over one and a half billion humans live symbiotically with a personal computer called a “smartphone”, which has a multimedia interface and a permanent connection to a global network. They depend on it to communicate with other humans, to express their emotions and experiences, to choose where to go, eat, sleep, shop, and pay, to stay informed about the world and their own community, and to decide whom to vote for.
The visionaries’ eyes sparkled at the prospect of a future in which, with the help of computer science, we would live more fully and augment our intellectual abilities.
We do not know where this world, midway between the physical and the immaterial, is headed; we only perceive that it is moving faster and faster.
To paraphrase Clemenceau: [the digital world] “is too serious a matter to be entrusted to technologists”.