In this article, I briefly explore this possible scenario and some of its implications for (cyber)security. Scenarios are narratives of alternative environments in which today's decisions may be played out. They are neither predictions nor strategies. They are alternative futures: descriptions of future strategic contexts that stimulate our imagination and can help us prepare for what is coming and make more robust decisions in the present.
Let's imagine, then, that the idea of cyberspace will have no future in the medium/long term. That it will disappear as a concept. Let's imagine that, when it becomes universal, the virtual will be diluted in the real, and that all our sensory experience will take place in a single and indistinguishable environment.
Today we still clearly understand the differences between non-cyberspace and cyberspace. We still turn on our computers to go online, or click on a link or an app on our cell phone or some other device. Most of the time, we can still tell at which moment we "entered" cyberspace. And when we are there, we know, most of the time, that we are there. We still understand the difference between entering a bank and entering the bank's website, between setting foot in the Colombo Shopping Center in Lisbon and clicking on Amazon's website.
However, there are already many signs that this distinction, this perceived difference between cyberspace and non-cyberspace, is blurring. For all of us, for our senses, it will be increasingly difficult to distinguish the real from the virtual, the physical object from the hologram. Algorithmization, augmented reality, and virtual reality, coupled with the exponential acceleration of computing speed, are rapidly transforming the way we live. Rather than always being connected (which would imply, after all, the possibility of disconnecting), we are always "there". If it is not possible to distinguish the "real" from the "virtual", does it make sense even to try to discuss the difference between one and the other?
Long gone are the times when it was "only" Instagram replacing the paper photograph, Airbnb revolutionizing the world of tourism, or Uber worrying thousands of cab operators around the world. Access to information is increasingly immediate and embedded. Virtual and augmented reality systems such as Reality Labs / Oculus (Facebook), Microsoft HoloLens, Google Cardboard, or Magic Leap are examples of technologies and platforms that transform our perception of the world and the interactions it is composed of. I am also thinking of new ways of exploring virtual reality developed from these frameworks, like Fearless (to fight our fears through increasing doses of virtual reality...), Lowe's Innovation Labs projects (to experience changes at home, learn how to do small construction works, etc.), Neuro Rehab VR (rehabilitation / physiotherapy), ClassVR (education), or the almost infinite world of immersive gaming (see, for example, the augmented reality sport HADO).

To better understand what is already happening, imagine you are at a table with two cups of coffee in front of you. The challenge (without touching or smelling, for now) is to choose which one is real. Imagine that, looking very carefully, it is impossible to notice any difference. That is the world we will live in, and it won't take long. A little later, we will also be able to feel the virtual cup (with gloves or small stickers on our hands, or through magnetic fields), "smell" it (through representations of odors that trigger our sense of smell), and maybe even "drink" it (when we put the cup to our mouth, we will have the exact perception that we are drinking its contents, flavor included).
Let’s go back to my first hypothesis. The day will come when trying to make the distinction between "real" and "virtual" will be impossible and anachronistic. Everything will be real, equally real.
In this scenario, in the medium/long term, talking about cybersecurity will be the same as talking "only" about security, with some significant differences compared to today's world. Some examples:
1) Internet-of-Things-related attacks will be extremely common. The ability to respond to them will be part of the routine work of public safety.
2) The visibility and impact of cyber-attacks will grow exponentially, accelerated not only by the spread of autonomous cars but also by the expansion of the metaverse and the value of virtual inventories (as just one example, there are skins in Counter-Strike that cost well over 100,000 euros), as well as the pervasiveness of cryptocurrencies.
3) The prevalence of deepfakes and the fluidity and fragmentation of identities will require certification mechanisms with the same level of security that we have today in financial transactions. Today's zoombombers will literally try to break into our homes.
4) Redundancy will be key. Not only in traditionally critical systems (health, transportation, energy, finance, defense, etc.), but also in systems that are not traditionally considered critical, such as housing or education. This expansion of the criticality of systems, and of the need for disaster recovery, seems to lead almost inevitably to a society with a higher prevalence of risks, more security-conscious and with a greater focus on prevention. We won't be in the world of Steven Spielberg and Tom Cruise's Minority Report (at least in criminal matters), but we may be close.
5) Accountability will present new challenges: who is responsible for a crime (or accident) directly linked to a machine learning system? Perhaps whoever developed it, but what if it was developed by another algorithm? Perhaps whoever trained it or provided the data, but what if the system trained itself? Perhaps whoever operated it, but what if no one operated or owned it? Similar questions arise today concerning autonomous driving. Who is responsible for accidents? How does the algorithm decide in the event of conflicting risks between vehicle occupants and people on the public road? Answers to these questions, and to others where algorithms act on their own, will have to be provided by society, by regulation, by courts, or even by other algorithms.
6) The risks will be "less nice." With the exponential acceleration of innovation in attacks, many will become "singular," that is, structurally different from previous waves of attacks (in method, in technology, in targets). This uniqueness turns risks into structural uncertainties: their probabilities cannot be estimated objectively, because they are not grounded in an event that repeats itself over and over again. That will make such risks difficult to integrate into decision-making and risk-hedging processes.
I consider that the scenario described above is possible, but not predetermined, nor necessarily the most plausible of possible scenarios. It would certainly be a completely new and, perhaps, "brave" world. However, even though it is imaginable, it is very difficult to evaluate its qualities and defects with the eyes of today. Our world would also be tremendously frightening to our great-grandparents or grandparents born in the transition from the 19th to the 20th century. For a large part of that generation, even the combustion engine in automobiles or the electric car seemed like evil tools. Imagine the amazement the Internet or augmented reality would have caused. Part of the future is not yet written; it will be built by us. And this is exactly why it is worth exploring it in the present, visualizing the possible consequences of the choices we make today, and preparing ourselves better. Both virtually and for real.
This article is part of Nova SBE Digital Experience Lab's annual cycle of reflection on technology, business, and sustainability in the month dedicated to cybersecurity, in which we are focusing on the impact that cybersecurity has on business, society, and our future. Join us on day 17 to think about data privacy and on day 24 about cybersecurity.
To receive more news about the events and articles that the Digital Experience Lab is organizing, subscribe to our monthly newsletter.
Ogilvy, Jay, and Peter Schwartz. "Plotting Your Scenarios." GBN, 2004.