The technological forces unleashed by the "digital revolution" will fundamentally transform our economy and society within a very short time. We need to prepare ourselves for this transformation and the new era to come. In fact, we may have just 20 or 30 years to adapt. Even though we may dislike such socio-economic change, we will probably not be able to stop it. In order to cope with surprises, we need to design our socio-economic systems in a resilient way. Moreover, if we learn to understand the fundamentally new logic of our digital future and harness the driving forces behind it for our purposes, we can benefit tremendously and fix a number of long-standing problems that humanity has struggled with!
The world seems to be changing at a rapidly increasing pace and getting ever more complex. Today's democratic institutions, however, are slow, and they are expensive, too. In the meantime, most industrialized countries have piled up debt levels that can hardly be dealt with. This calls for a new and more efficient approach to decision-making. Two alternative concepts have been advanced: (1) the "China model", as China has recently developed much faster than Western countries,1 and (2) giving more control to multi-national corporations, as they tend to be more efficient than government institutions.
5.1 The China Model
Was the French Revolution a tragic historical accident that is now making life in Western democracies more difficult? Are citizens and public institutions obstacles, or have we simply not yet learned to use them as a valuable resource? Is democratic decision-making outdated, blocking our way into a better future? Would more top-down control be better?
Given that countries such as Singapore and China are experiencing higher economic growth rates, should other countries copy their model? Most likely, this wouldn't be a good idea. First, a political system must be culturally fitting. Second, even though top-down control might accelerate decision-making, it is questionable whether the resulting decisions would be better or more sustainable.2 If we implemented the Chinese approach in Europe or the USA, we could easily decide to build a road over here, a shopping mall over there, and even new city quarters or entire towns. Moreover, these projects could be realized in just a few years. This may sound tempting to decision-makers. However, when space and other resources are limited, improving a system becomes increasingly difficult as it evolves: given the many interdependencies, improvements in one part of a system may come with undesired side effects in others. Over time, many decisions will turn out to be less favorable than expected, and quick decisions will often result in mistakes. For example, there are many "ghost malls" in China, which almost nobody wants to use, and empty "ghost towns", too. Moreover, environmental pollution has become a serious issue. Some cities are now suffering from smog levels that imply considerable health risks and make them almost dysfunctional.
Even though China has developed quickly, it should also be considered that the satisfaction of its people has not increased everywhere. Social unrest is frequent. Due to the world economic crisis, China is now even confronted with a dangerous reduction in its growth rate and with financial turmoil, so that it must reorganize its economy. To reduce the likelihood of a revolution, information flows to and from the country are increasingly controlled by the government.
In other words, China shows worrying signs of destabilization.3 India’s democracy, in contrast, is doing increasingly well.4 Also in Europe, federally organized systems such as Germany and Switzerland are performing better than more centrally governed countries such as France. In fact, it can be said that the most advanced economies in the world are the most diverse and complex economies.5
5.2 Can Corporate Control Fix the World?
If today's democratic systems are facing difficult times and autocratic systems struggle, too, could more corporate control solve our problems? In fact, this is often claimed, and companies keep demanding more control. This is probably what the free trade and service agreements, which have recently been negotiated in secrecy, are about: governments will give up some of their power and hand it over to corporations.
So, would it be better to let multi-national corporations run the world? In fact, companies are often more efficient than governments in accomplishing specific tasks that can be well monetized. However, if we look at a map displaying which companies control which regions of the world, it doesn't look less fragmented than the map illustrating the hypothetical "clash of cultures". So, we can't expect that more corporate control will produce more agreement in the world. We may just see more "economic wars".
There are other issues, too.6 While large corporations certainly have a lot of power to move things ahead, they often show surprisingly low innovation rates and tend to obstruct the innovations of others.7 There are many examples where even the value of a company's own inventions wasn't recognized. For instance, Xerox did not see the value of the windowed graphical user interface its researchers invented. The value of the mp3 music file format was totally underestimated, and nobody expected that text messaging would become important. To compensate for their innovation weakness, large corporations buy innovative small and medium-sized companies. Nevertheless, they often fail to stay on top for long. Within a period of just 10 years, 40–50% of the top 500 companies are predicted to disappear. Given such high takeover and "death" rates, societies would be extremely unstable if run by corporations. Countries and cities, by contrast, persist for hundreds of years, precisely because they are governed in a more participatory way than corporations.
In summary, there is little evidence that more corporate control would solve the problems of the world. I don't deny that many companies have laudable goals. Self-driving cars, for example, are intended to eliminate accidents, which have killed a lot of people in the past. Moreover, by means of personalized medicine, genetic engineering and biological enhancements, companies are even trying to overcome death altogether. However, none of these ambitious goals have been accomplished yet. I might start to believe in corporate control of our globe if we had a perfect world everywhere within a 100 km radius of Silicon Valley, but we are far from this. In Silicon Valley, there is a lot of light, but a lot of shadow, too. So far, not a single US city has appeared in the top-10 list of the most livable cities. In other words, we need entirely new remedies to heal the world's ills, in particular as we are now confronted with another unsolved challenge: the destabilization of our economy and society by the digital revolution.
5.3 The Digital Revolution on Its Way
The digital revolution arguably deserves even more attention than climate change because it will dramatically affect almost every aspect of our economy and society within our lifetimes. Just about ten years ago, most of us didn’t have the slightest idea that Facebook, Twitter, and iPhones would be an integral part of modern life. Even large companies such as Microsoft, Yahoo and Nokia didn’t see some important developments coming. Now, we are in the middle of a socio-economic transformation, which will create a world ruled by a different logic, and the progress of digital technologies is further accelerating. What can we expect to happen in the next ten, twenty, or fifty years? In the analysis that follows, I offer some suggestions which are not wide-eyed futurology, but considerations based on existing evidence and trends. However, before we discuss these trends, let us look at how the current situation emerged.
According to a well-known anecdote, Thomas John Watson Sr. (1874-1956), the then chairman of IBM, said in 1943:
“I think there is a world market for maybe five computers.”
Although the statement looks surprisingly inaccurate with the benefit of hindsight, at that time almost nobody could imagine a mass market for computers. For decades, computers were of little use to ordinary people. Even in 1968, an engineer at the Advanced Computing Systems Division of IBM asked of the microchip: "what … is it good for?" In 1981, Bill Gates is claimed to have said that a computer memory of 640 kilobytes "ought to be enough for anybody". Today, our smartphones have a hundred thousand times more memory than that, and an iPhone has more processing power than the guidance computer that flew the Apollo missions to the moon and back. Nevertheless, the early innovators who pushed for the acceptance of information technology didn't have an easy life. The co-founder of Apple Computer Inc., Steve Jobs (1955–2011), remembered his failed attempts to interest Atari and Hewlett-Packard in the personal computer he co-developed with Steve Wozniak:
“So we went to Atari and said, ‘Hey, we’ve got this amazing thing, even built with some of your parts, and what do you think about funding us? Or we’ll give it to you. We just want to do it. Pay our salary, we’ll come work for you.’ And they said, ‘No.’ So then we went to Hewlett-Packard, and they said, ‘Hey, we don’t need you. You haven’t got through college yet.’”
Later, Steve Jobs even found himself forced out of Apple for a number of years. But when he returned, he oversaw the development of the iPod, iPhone and iPad, as well as iTunes and the App Store, i.e. products which made Apple the most valuable company in the world for some time.
This is just a small part of an even larger story. There was, in fact, another important development. In the late sixties, the Arpanet was created to allow a few military computers to exchange information with each other. It was later opened up for public use and became the Internet. This eventually unleashed the power of information. Later, to support the collaboration between the Swiss and French teams contributing to the elementary particle accelerator at CERN, the physicist Tim Berners-Lee (*1955) invented a hyperlink protocol that allowed web pages to be linked with each other. This gave rise to the World Wide Web (WWW) and eventually made computers useful for ordinary people rather than just experts. It also made the Internet attractive for doing business and enabled a multi-billion-dollar market. However, Tim Berners-Lee's ideas did not have an easy start either. At CERN he didn't get the support he requested, so he went to the Massachusetts Institute of Technology (MIT), where he founded the World Wide Web Consortium (W3C).
Now, there are about 3 billion Internet users in the world. Eventually, the creation of Facebook in 2004 and Twitter in 2006 linked more than a billion people together in giant social networks. We are no longer just users of this system. We are also (co-)creators of data and services. In some sense, we are now “human processors” and nodes in a globe-spanning information system.
5.4 Computers More Intelligent Than Humans?
For decades, the processing power of computers has doubled roughly every 18 months. If this trend continues, some information systems may surpass the power of a human brain sooner than we think. Our brain has approximately 100 billion neurons, a maximum firing rate of about 200 times a second, and a signal speed of 120 meters per second. Compare this with the current generation of supercomputers, which have about 100 times fewer transistors than the brain has neurons, but operate 20 million times faster and have a signal speed that is two million times higher.
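To make this comparison concrete, here is a back-of-envelope calculation, a sketch using illustrative assumed figures (the operation counts below are rough order-of-magnitude guesses, not measurements): if processing power doubles every 18 months, closing a gap of four orders of magnitude takes roughly two decades.

```python
import math

def years_to_reach(current_ops, target_ops, doubling_time_years=1.5):
    """Years until capacity reaches target_ops, assuming steady
    exponential doubling (Moore's-law-style growth)."""
    doublings_needed = math.log2(target_ops / current_ops)
    return doublings_needed * doubling_time_years

# Illustrative assumptions: ~1e12 operations/second for a present-day
# commodity system, ~1e16 for a crude brain estimate (100 billion
# neurons x 200 firings/second x ~500 active synaptic events).
print(years_to_reach(1e12, 1e16))  # about 20 years
```

Note how insensitive the result is to the assumptions: shifting either estimate by a factor of ten moves the answer by only about five years, which is why such projections tend to cluster in the 10–30-year range despite very different starting figures.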
Therefore, computers are outperforming humans in increasingly many ways. Whereas a century ago some companies maintained departments with hundreds of employees to perform business calculations, these were later replaced by simple calculators. Computers are far better than us at doing calculations, and even at chess. In 1997, IBM's Deep Blue computer defeated the best chess player in the world at that time, Garry Kasparov (*1963). Computers can now also beat the best backgammon players, the best Scrabble players and the best players of many other strategy games. In 2011, IBM's Watson, a "cognitive computer" able to judge the relevance of information from the Internet, even beat the best human experts at answering questions in the game show "Jeopardy!"8 Now, Watson is taking care of customer hotlines, as the computer is better at managing all the knowledge required to answer customer queries. Watson understands natural language and comes increasingly close to what humans can do. In many respects, Watson is even superior to humans. Having invested 1 billion dollars in cognitive computing technology, IBM hopes to earn 100 times that amount in the years to come.
Europe is investing a comparable sum, 1 billion euros, to build a supercomputer that can simulate the human brain. The USA has even decided to spend 3 billion dollars on a brain project. The Google Brain project, led by the technology guru Ray Kurzweil, is another attempt to turn the Internet into an intelligent entity that can think and decide.9
While building computers with performance comparable to humans may still take a few years, the Google car, a driverless vehicle developed by the tech giant, is already here. It may soon navigate our roads more safely than human drivers. Furthermore, about 70% of all financial transactions in the world's stock markets are now performed by autonomous trading algorithms. Computers and robots are thus increasingly doing our work, and they will replace many jobs that can be performed according to rules and routines. We may even lose highly qualified jobs that depend on skilled judgment, including those of medical doctors, care workers, scientists, lawyers, managers, and politicians, and to some degree even teachers and parents. For example, South Korea recently invested in robotic childcare, and Japan is building robots to take care of elderly people. All of these developments raise the question: what role will humans play in the future? To stay competitive, would we have to become more robotic ourselves, or would we do better to distinguish ourselves by engaging in less predictable activities and more creative jobs?
5.5 When Will We See Artificial Superintelligences and Superhumans?
Only two years ago, most people would have considered it impossible that algorithms, computers, or robots would ever challenge humans as the "crown of creation". This has changed.10 Intelligent machines are learning by themselves. For example, Google's DeepMind learned to play and win 49 Atari video games.11 One could also recently read about a computer that passed the Turing test, i.e. it was mistaken for a human.12 Furthermore, a robot recently mastered a university entrance exam.13 Some robots even show self-awareness,14 and they might build other robots that are superior to themselves. The resulting evolutionary progress is accelerating quickly, and it is just a matter of time until there are machines smarter than us.15 Perhaps such superintelligences already exist.
Some notable people have recently commented on this new situation. For example, Silicon Valley’s technology genius, Elon Musk of Tesla Motors, tweeted that Artificial Intelligence is “potentially more dangerous than nukes” and warned16:
“I think we should be very careful about artificial intelligence. If I had to guess at what our biggest existential threat is, it’s probably that. So we need to be very careful. … I am increasingly inclined to think that there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don’t do something very foolish. …”
Similar critique comes from Nick Bostrom at Oxford University.17
Stephen Hawking (1942–2018), perhaps the most famous physicist of recent times, voiced concerns as well18:
“Humans who are limited by slow biological evolution couldn’t compete and would be superseded. … The development of full artificial intelligence could spell the end of the human race. … It would take off on its own, and re-design itself at an ever increasing rate.”
Bill Gates of Microsoft contributed to the discussion on Artificial Intelligence, too19:
“I am in the camp that is concerned about super intelligence. … I agree with Elon Musk and some others on this and don’t understand why some people are not concerned.”
Last but not least, Steve Wozniak, co-founder of Apple, formulated his worries as follows20:
“Computers are going to take over from humans, no question … Like people including Stephen Hawking and Elon Musk have predicted, I agree that the future is scary and very bad for people … If we build these devices to take care of everything for us, eventually they’ll think faster than us and they’ll get rid of the slow humans to run companies more efficiently … Will we be the gods? Will we be the family pets? Or will we be ants that get stepped on? I don’t know …”
So, what will our future look like? Will superintelligent robots enslave us? Or will we upgrade our biological hardware in order to become superhumans? I openly admit that this sounds like science fiction, but such scenarios can no longer be ignored, given the current pace of technological advances. The first cyborgs, i.e. humans who have been technologically enhanced, already exist. The best known of them is Neil Harbisson.21 With CRISPR technology, it is now even possible to genetically engineer humans.22 But there is an alternative: if suitable information platforms and personal digital assistants support us in creating collective intelligence, we might be able to keep up with the exponentially increasing performance of Artificial Intelligence.23
It is clear, however, that new socio-economic forces will be unleashed when, within the next 10–30 years, computers, algorithms and robots increasingly take over our classical roles and jobs. In the past, we witnessed the transition from the "economy 1.0", characterized by an agricultural society, to the "economy 2.0", driven by industrialization, and finally to the "economy 3.0", based on the service sector. Now, we are seeing the emergence of the "economy 4.0", a "digital" sector driven by information and knowledge production. This third socio-economic transformation, which is currently under way, is fueled by the spread of Big Data, Artificial Intelligence, and the Internet of Things.24 Unfortunately, history tells us that the emergence of a new economic sector tends to cause social and economic disruptions, as the innovation that feeds the new sector undermines the basis of the old ones.25 Financial and economic crises, revolutions, or wars may be the result. If we want to avoid this, we must quickly adapt our perspective on the world and the way we organize it. I am totally convinced that we can master the digital revolution, but only if we manage to create a win-win-win situation for politics, business, and citizens alike.
5.6 Everything Will Change
The digital revolution is fundamentally changing many of our activities and institutions. This includes how we educate (using massive open on-line courses), how we do research (through the analysis of Big Data), how we move around (with self-driving cars), how we transport goods (with drones), how we shop (with Amazon and eBay, for example), how we produce goods (with 3D printers), and how we run our health systems (with personalized medicine). Our political institutions are likely to change as well, for example, through data-driven decision-making and demands for more citizen participation. Likewise, the basis of our economy is being altered by a new wave of automation and a number of other trends. Financial transactions, which used to be performed by bankers, are being replaced by algorithmic trading, PayPal, Bitcoin, Google Wallet, and so on. Moreover, the biggest share of the insurance business, amounting to a multiple of the global gross domestic product, is now in financial derivatives traded on stock markets. According to military experts, war may fundamentally change, too, with states preparing for cyberwar. If someone succeeded in building a "digital God", even religion may change.
Let us now look at the problem of the coming economic transformation more closely. Are our societies well prepared for the digital revolution? In some industrialized countries, the proportion of jobs in the agricultural ("primary") sector is now just about 3%, whereas 300 years ago it stood at 80% or more. How did we get there? Among other developments, the great success of James Watt's steam engine, patented in 1769, gave rise to the industrial revolution. As a result, many people lost their jobs to automation, in the production of textiles, for example. However, the industrial revolution also gave rise to consumerism, meaning that industrial production eventually created a secondary sector with many new jobs. In fact, during its heyday, industrial production provided more than 30% of all jobs in developed countries. But later, efficiency increases in industrial production led to a loss of half of all industrial jobs. Unfortunately, the number of industrial jobs will continue to fall, probably to below 10%. Very soon, therefore, only about 10 to 15% of jobs will be provided by the primary and secondary sectors of the economy.
The tertiary sector, services, is the result of another big innovation: the expansion of education to all levels of society from the 1870s onwards. Reading and writing were no longer restricted to elites, and many more people could perform jobs requiring special qualifications. As a result, the service sector developed and brought about an increase in planning, administrative and management tasks. In modern societies, up to 70% of jobs are currently in the service sector. But history will repeat itself, and more than half of these employees will be replaced, this time by algorithms, computers and robots.26 What will happen if computer power and Artificial Intelligence progress so far that humans can no longer keep up?
5.7 The Third Economic Revolution
The digital revolution is driving another dramatic wave of automation, leading to a "second machine age".27 Not only philosophers but also leading experts are becoming increasingly worried about our future. The techno-revolutionaries are losing control of their revolution. If we don't find a new way to create jobs quickly, we will face unemployment rates far beyond the levels that our current socio-economic system can handle. Indeed, all of the previous socio-economic transitions were accompanied by mass unemployment, which is one of the major forces transforming societies. Given that computers will match the capabilities of the human brain within 10–25 years, we have very little time to adapt to the current societal transition. If we don't adapt fast enough, the transition will be sudden, harsh, and discontinuous: we should expect a bumpy road ahead!
In control theory, it is a well-established fact that delayed adaptation causes systems to become unstable. The consequences of such instabilities for our society may include financial and economic crises, social and political conflicts, and even wars. In fact, there are already signs of destabilization in many countries around the world. These might be interpreted as early warning signals of the socio-economic transitions to come. The list of countries that have recently encountered social unrest is surprisingly long. It includes Afghanistan, Pakistan, Iraq, Syria, Somalia, Nigeria, South Sudan, Uganda, Congo, the Central African Republic, Egypt, Bahrain, Libya, Lebanon, Yemen, Tunisia, Turkey, China, India, Malaysia, Thailand, Brazil, Mexico, Ukraine, Spain, Greece, Romania, the United Kingdom, and even Sweden, Switzerland and the USA. Note, however, that these countries are faced with different scenarios. While many African countries are now in transition from an agricultural to an industrial society, some Asian countries are in transition from an industrial to a service society. Finally, the US, Europe, Japan and a few other countries are in transition from a service to a digital society. These transitions may also interact with each other, of course. In the worst case, they might all occur around the same time, and pretty soon.
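The control-theory point can be illustrated with a toy feedback model (a minimal numerical sketch, not a model of any real economy): a system that corrects deviations immediately settles down, while the very same corrective force applied with a time delay produces oscillations that grow.

```python
def simulate(gain, delay, steps=60):
    """Linear feedback loop: each step applies a correction based on
    the state observed `delay` steps in the past."""
    x = [1.0] * (delay + 1)  # start from a unit disturbance
    for _ in range(steps):
        x.append(x[-1] - gain * x[-1 - delay])
    return x

undelayed = simulate(gain=0.5, delay=0)
delayed = simulate(gain=1.0, delay=2)
print(abs(undelayed[-1]))            # immediate correction: decays toward zero
print(max(abs(v) for v in delayed))  # delayed correction: oscillates and grows
```

The gain and delay values are illustrative; the qualitative point is that reacting to stale information turns a stabilizing correction into a destabilizing one, which is the formal version of "adapting too slowly makes the transition harsher".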
Can we avoid or suppress these transitions? I don't think so. Stopping technological innovations does not seem to be compatible with most economic systems we currently have, and it wouldn't be effective, either. Countries trying to obstruct technological progress by means of legal regulations would have to find other ways to create comparable economic efficiency gains, as all countries are exposed to international competition in the world market. And even if a global moratorium on the development of Artificial Intelligence were signed, it is not clear how compliance could be ensured.
Therefore, we may be able to delay historical developments, but we cannot stop them, and it wouldn't be reasonable to try. Delaying these transitions makes them more costly and implies competitive disadvantages. In the long run, no country can afford to miss the opportunities resulting from these transitions, which will eventually lead to more efficient systems and a higher quality of life. This is all part of the cultural evolution which takes place as societies progress. It is driven by socio-economic forces such as increases in complexity, diversity, Big Data and computer-supported (artificial and collective) intelligence.
In the case of the digital revolution, the genie is out of the bottle. Information and communication systems have grown increasingly powerful, but a single mistake could be disastrous. We must, therefore, learn to make the genie work for us. How can we do this? We must learn to understand the new logic implied by digital technologies and the trends they are producing. And we must learn it soon.
5.8 The New Logic of Prosperity and Leadership
Most likely the twenty-first century will be governed by principles which are fundamentally different to those of the twentieth century. That is why we must change our way of thinking about socio-economic systems. We need to recognize some fundamental trends and truths.
Overall, our anthropogenic systems are becoming more connected and more complex. As a result, they are often more variable, less predictable, and less controllable. Consequently, much of our personal knowledge about the world is outdated, and digitally literate people may gain informational advantages over today's experts. Classical hierarchies may therefore dissolve.
It is also important to recognize that we are entering an increasingly non-material age, characterized by an abundance of information and ideas rather than a shortage of goods. Data can be replicated as often as we like. It’s a virtually unlimited resource, which may help us to overcome conflicts over scarce resources. In fact, this opens up almost unlimited possibilities for new value generation.
In the digital society, ideas will spread more quickly. Information is both ubiquitous and instantly available almost everywhere, such that borders tend to dissolve. Furthermore, the more data we produce, the more difficult it is to keep secrets. Data will also become cheaper, i.e. one will be able to make ever smaller profits on data. In addition, the rapid growth in data volumes will result in potential information overload. So, we need to learn how to convert raw data into useful information and actionable knowledge. Most value will be derived from algorithms rather than data, and from individualized, personalized, user-centric products and services. I further believe that the organization of our future socio-economic system will increasingly be based on collective intelligence.
Finally, what used to be regarded as science fiction may become reality one day, possibly much more quickly than we think. As Sir Arthur C. Clarke observed:
“Any sufficiently advanced technology is indistinguishable from magic.”
The countries which are first to recognize these new principles and use them to their advantage will be the leaders of the digital age. Conversely, those which fail to adapt to these trends in time will be in trouble. Given that computers may soon match the capabilities of the human brain, we may only have two or three decades to adapt our societies—a very short time considering that the planning and building of a road often takes 30 years or more.
5.9 Creating a Resilient Society
The best way to prepare for a future that is hard to predict is to create a society capable of flexibly adapting to new kinds of situations. The digital revolution certainly has the potential to precipitate disruptive change. But are our societies well prepared for such shocks? Our response to September 11, 2001, calls this into question. Even though it was initially a local event, it has changed the face of the entire world. In the aftermath, we built a security architecture to protect us from terrorism. But even with mass surveillance and armed police, are we now safe? The bomb attack at the Boston marathon and the Charlie Hebdo attack in Paris, for example, cast doubt on this. Moreover, less than 10 years later, an even more serious event changed the face of the world: the financial crisis. Again, this started locally, but ultimately had a major global impact.
How, then, can we better prepare for future challenges? A whole range of measures is at our disposal, including risk assessment, probabilistic prediction, prevention, intervention, insurance, and hedging (which basically means choosing a portfolio strategy rather than betting on a single horse). Nevertheless, we must accept that problems will sometimes occur and accidents will sometimes happen in a world that is not entirely predictable. That is why we need resilient systems.
But what exactly does resilience mean? Resilience is the ability of a system to absorb shocks and to recover from them both quickly and thoroughly. If we fall and hurt ourselves, we usually recover quickly because our body is resilient to such shocks. So, how can we build resilient systems that are not prone to undesired cascading effects, but recover quickly and well from disruptions? This is primarily a matter of systems design and management. Safety margins, reserves, backups, and alternatives (a "plan B", "plan C") can certainly help. Furthermore, modularization is a well-known principle to make the complexity of a system manageable. This basically means that the organization of a system is broken down into substructures or "units", between which there is a lower level of connectivity or interaction than within the units. This allows one to reduce the complexity within substructures to a manageable level. Furthermore, it decreases interaction effects between units. Well-designed systems have "engineered breaking points", "shock absorbers", or "dynamic decoupling strategies", which can counter the amplification of problems and undesirable cascading effects. For example, think of the electrical fuses in your flat or the crumple zones of a car, which are there to protect the sensitive parts of the system (such as our home or our life).
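In software systems, the "engineered breaking point" has a well-known analogue: the circuit-breaker pattern, which decouples a repeatedly failing component so that its problems cannot cascade through the rest of the system. Below is a minimal sketch (the class and its threshold are illustrative, not taken from any particular library):

```python
class CircuitBreaker:
    """Stops calling a failing component after `threshold`
    consecutive failures, decoupling it from the rest of the system."""

    def __init__(self, threshold=3):
        self.threshold = threshold
        self.failures = 0
        self.open = False

    def call(self, fn):
        if self.open:
            raise RuntimeError("circuit open: failing fast, not cascading")
        try:
            result = fn()
            self.failures = 0  # a success resets the failure counter
            return result
        except Exception:
            self.failures += 1
            if self.failures >= self.threshold:
                self.open = True  # the engineered breaking point triggers
            raise
```

Like an electrical fuse, the breaker sacrifices one interaction path to protect the whole; production implementations usually add a timed "half-open" state that probes whether the failed unit has recovered.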
In principle, of course, the modular units of a system can be organized in a hierarchical way. This can be efficient when the units (and the interactions between them, including information flows and chains of command) work reliably, with very few errors. However, as much as hierarchical structures help to define accountability and to concentrate power, control may already be lost if a single node or link in the hierarchy is dysfunctional. This problem can be mitigated by redundancies and decentralization. In particular, if the dynamics of a system is hard to predict, local autonomy can improve adaptation, as it produces solutions that fit local needs well. More autonomy, of course, requires the decision-makers to take more responsibility. This calls for high-level education and suitable tools supporting a greater awareness of potential problems, in particular reliable information systems.
A further important principle that often supports resilience is diversity. The benefits of diversity are manifold. First, diversity makes it more likely that some units stay functional when the system is disrupted, and that solutions to many kinds of problems already exist somewhere in the system when they are needed. Second, diversity supports collective intelligence, as we will see later. Third, the innovation rate typically grows with diversity, too. However, diversity also poses challenges, as we know, for example, from intercultural settings. For this reason, interoperability is important. I will come back to this issue when we discuss “digital assistants” and “externalities”, i.e. the external effects of decisions and (inter-)actions. Finally, using the principles of assisted self-organization, it is possible to control complex dynamical systems in a distributed way.
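A back-of-the-envelope calculation illustrates the first benefit of diversity: if all units share one design, a single flaw can bring down everything at once, whereas independent designs must all fail simultaneously. The numbers below (a 10% per-design failure probability, five designs) are purely illustrative assumptions:

```python
def prob_total_failure(num_designs: int, p: float) -> float:
    """Probability that ALL designs fail, assuming each of the
    num_designs independent designs fails with probability p."""
    return p ** num_designs

p = 0.1  # illustrative failure probability of a single design

# One shared design: a single flaw is fatal with probability p itself.
print(prob_total_failure(1, p))  # 0.1
# Five independent designs: total collapse requires all five to fail at once,
# which is four orders of magnitude less likely (about 1e-05).
print(prob_total_failure(5, p))
```

The sketch assumes independent failures, which real systems only approximate; correlated shocks weaken, but do not erase, the advantage of diverse designs.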
In the light of the principles discussed above, can we be confident that we are currently well prepared to master the digital revolution? Does mass surveillance, combined with armed police, create a resilient society? On the contrary (see Appendix 5.1)! Sustainable political power requires legitimacy, and this requires trust. However, undermining privacy reduces trust (see Appendix 5.2). Mass surveillance also promotes fear and self-censorship, thereby obstructing innovation and the ability of our society to adapt to a changing world.
I personally believe that we must—and can—prepare ourselves much better for the challenges ahead. To illustrate this, let us discuss the problem of traffic safety. We know that accidents keep happening and that this is bad. But how would it be to live in a world without traffic? Our economy would not work, and we would not be able to live in modern societies. Therefore, we have developed measures to reduce the number of accidents and their severity. How have we done this? We have improved traffic rules, developed better technology, created emergency services, and built hospitals. We have developed better brakes and driver assistance systems that empower drivers to do a better job. We provide traffic news to drivers in order to prevent accidents from disrupting the entire traffic system. And we have designed better cars. Decades ago, the body of a car was scarcely damaged by an accident, but the driver and passengers were often killed. Nowadays, cars are constructed to absorb shocks, so that even a small accident may damage the entire car, but the risk of injuries and deaths is dramatically reduced.
This serves as a useful metaphor to guide our thinking about how to deal with societal challenges. It shows us that trying to protect institutions from changing is wholly counterproductive and would, in fact, make societies less resilient. Many institutional structures today are like old cars with a rigid body—they are constructed to stay the same rather than to flexibly adjust to a changing world. But in order to be resilient, our institutions must learn to flexibly adapt in ways that serve the citizens best.28
It is, therefore, alarming to see so many global crises. Within just a few months, our world was confronted with several major challenges: worrying epidemics (such as Ebola and MERS), the crisis in Ukraine endangering world peace, and the rise of the Islamic State, to mention just a few. While all these problems started locally, their implications have become globally relevant. The question is how much more has to happen before we prepare ourselves better and start building a more resilient society.
A society should be prepared for all eventualities. It should hedge its risks and diversify its capacities to respond. A society should, therefore, have a portfolio of options to (re)act. But as I will show, we have seriously neglected some of the most promising options. It’s time to do something about this!
5.10 Time for a New Approach
Importantly, in contrast to the approach followed today, simplifying our world by homogenizing or standardizing it would not fix our problems. It would rather undermine cultural evolution and innovation, thereby causing a failure to adjust to our ever-changing world. So, do we have alternatives? Actually, yes: rather than fighting the properties of complex systems, we can use them to our advantage, if we learn to understand their nature. The fact that the complexity of our world has surpassed our capacity to grasp it, even with all the computers and information systems assisting us, does not mean that our world must end in chaos. While our current system is based on administration, planning, and optimization, our future world may be built on (co-)evolutionary principles and collective intelligence, i.e. intelligence surpassing that of the brightest people and best expert systems.
How can we get there? In the second part of this book, I will show how the choice of suitable local interaction mechanisms can create surprising and desirable outcomes through the process of self-organization. Information and communication systems will enable us to let things happen in a favorable way. This is the path we should take, because there are currently no better alternatives. The proposed approach will create more efficient socio-economic institutions and new opportunities for everyone: politics, business, science, and citizens alike. As a positive side effect, our society will become more resilient to future challenges and shocks, which we will surely face.
5.11 Appendix 1: Side Effects of Massive Data Collection
Like any technology, Big Data has not only great potential but also harmful side effects. Of course, not all Big Data applications come with the problems below, but they are not uncommon. We must focus, in particular, on those problems that can lead to large-scale effects and major crises.
5.11.1 Crime
In the past years, cybercrime has increased exponentially, now costing the world around 3 trillion dollars a year. Some of this has resulted from undermining security standards for the purpose of surveillance (e.g. by creating hardware and software “backdoors”). Other common problems are financial, data or identity theft, data manipulation, and the fabrication of false evidence. These crimes are often committed by means of “Trojan horses”—computer programs that can steal passwords and PIN codes. Further problems are caused by computer viruses or worms that damage software or data.
5.11.2 Military Risks
Because most of our critical infrastructures are now connected with other systems via information and communication networks, they have become quite vulnerable to cyber attacks. In principle, malicious intruders can manipulate the production of chemicals, energy (including nuclear power stations), as well as financial and communication networks. Attacks are sometimes possible even if the computers controlling such critical infrastructures are not connected to the Internet. Given our dependence on electricity, information and money flows as well as other goods and services, this makes our societies more vulnerable than ever before. Coordinated cyber attacks might be launched within microseconds and bring the functioning of our economy and societies to a halt.
Digital weapons (so-called “D weapons”) are now considered to be as dangerous as ABC (atomic, biological, and chemical) weapons.29 Therefore, the US government reserves the right to respond to cyber war with a nuclear counter-strike. Everywhere in the world, we are now seeing a digital arms race for the most powerful information-based surveillance and manipulation technologies. It is doubtful whether governments will be able to prevent a serious misuse of such powerful tools. Just imagine that a crystal ball, a magic wand, or other similarly powerful digital tools existed. Then, of course, everyone would want to use them, including criminals and our enemies. It is obvious that, sooner or later, these powerful tools would get into the wrong hands and finally out of control. If we don’t take suitable precautions, mining massive data may (intentionally or not) create problems of any scale—including digital weapons of mass destruction. Therefore, international efforts towards confidence-building and digital disarmament are crucial and urgent.30
5.11.3 Economic Risks
Similarly, cybercrime harms our economy, as does illicit access to sensitive business secrets or the theft of intellectual property. Furthermore, a loss of customer trust in products can reduce sales worth billions of dollars; this has recently been experienced, for example, by companies selling cloud-based data storage. Many digital systems and services can only work with a sufficient level of trust, including electronic banking, eBusiness, eVoting, social media, and sensitive communication by email. Yet more than two thirds of all Germans say they no longer trust government authorities and Big Data companies not to misuse their personal data. More than 50% even feel threatened by the Internet.31 Similarly, trust in governments has dramatically dropped in the USA and elsewhere.32 The success of the digital economy is further undermined by information pollution, resulting from spam and undesired ads, for example, and recently from fake news as well.
5.11.4 Social and Societal Risks
To contain “societal ills” such as terrorism and organized crime, it often seems that surveillance is needed. However, the effectiveness of mass surveillance in improving the level of security is frequently questioned: the evidence is missing or weak.33 At the same time, mass surveillance undermines privacy and thereby signals the government’s distrust of its citizens. This, in turn, undermines the citizens’ trust in their government, which is the basis of its legitimacy and power.
The saying that “trust is good, but control is better” is not entirely correct: control cannot fully replace trust.34 A well-functioning and efficient society needs a suitable combination of both. In particular, the perceived loss of privacy is likely to promote conformism and to endanger diversity and useful criticism. Independent judgment and decision-making could be undermined. In the long run, this would impair a society’s ability to innovate and adapt, which could finally make it fail.35
For such reasons, the constitutions of many countries consider it of fundamental importance to protect privacy, informational self-determination, private communication, and the presumption of innocence until guilt is proven. These principles are also considered essential preconditions for human dignity and for democracies to function well.36
However, today the Internet lacks good mechanisms for forgetting, forgiveness, and re-integration. There are also concerns that the increasing use of Big Data could lead to greater discrimination, which in turn could promote increasing fragmentation of our society into subcultures.37 For example, it is believed that the spread of social media and personalized information has promoted the polarization of US society and politics.38
5.11.5 Political Risks
It is often pointed out that leaking confidential communication can undermine the success of sensitive political negotiations. This is probably true, but there are other political problems, too, which are no less serious. For example, if incumbent governments have better access to Big Data applications than opposition parties, this can result in unfair competition, biased election outcomes and non-representative governments. This could seriously undermine democracies. The greatest danger to society, however, is that powerful information systems will attract malicious agents and sooner or later get into the hands of criminals, terrorists or extremists. Unfortunately, this scenario is quite likely, given that the information systems of almost all companies and institutions have been hacked, including those of the military, the Pentagon, the White House, and the German Parliament. Therefore, collecting huge masses of personal and other sensitive data in one (or few) place(s) could eventually turn democracies into totalitarian regimes, even if everyone had good intentions.
5.12 Appendix 2: Why Privacy Is Still Needed
Should we still care about privacy, even though many people don’t mind giving a lot of personal data away? I believe we do need to care. The feeling of being exposed scares many people, particularly minorities. The success of our society depends a lot on vulnerable minorities (be they critical intellectuals or artists, billionaires or politicians, teachers or judges, religious or other minorities), which should be protected. As the „Volkszählungsurteil“39 correctly concludes, the continuous and uncontrolled recording of data about individual behavior undermines the chances of personal, but also of societal, development. Society needs innovation to adjust to change (such as demographic, environmental, technological or climate change). And innovation needs a cultural setting that allows people to experiment and make mistakes.40 In fact, many fundamental inventions have been made by accident (porcelain, for example, resulted from attempts to produce gold). Experimenting is also needed to become an adult who is able to judge situations and take responsible decisions.
Therefore, society needs to be run in a way that is tolerant of mistakes. But today, any little mistake can be detected and punished, and this is increasingly being done. Will this end democracy and freedom? And if only a sample of people is punished, wouldn’t this be arbitrary and undermine justice? Furthermore, wouldn’t the presumption of innocence be gone, which is based on the idea that the majority of us are good citizens, and only a few are malicious and to be found guilty?
Furthermore, “public” without “private” wouldn’t work well. Privacy provides opportunities to explore new ideas and solutions. It helps to recover from the stress of daily adaptation and reduces conflict in a dense population of people with diverse preferences and cultural backgrounds.
Public and private are two sides of the same coin. If everything is public, this will eventually undermine social norms.41 In the long run, the consequence could be a shameless society—or, if any deviation from established norms is sanctioned, a totalitarian society.
Therefore, while the effects of mass surveillance and privacy intrusion are not immediately visible, they might still cause long-term damage by undermining the fabric of our society: social norms and culture. It is highly questionable whether the economic benefits would really outweigh this, and whether a control-based digital society would work at all. I rather expect such societal experiments to end in disaster.