The end is near… We have arrived at the end of the book, and it is time to summarize our findings and to wrap up. The great potential of the digital revolution cannot be denied. However, we need to think twice about how to use this technology, so that it will be at our service and not endanger what humanity has built over centuries. The lessons of wars and revolutions should not be forgotten. We need technology that empowers people, while helping us to coordinate our actions, so that conflict is avoided. In this book, I have presented quite a lot of ideas about how this can be done, and how to avoid the dark sides of the digital revolution. I therefore hope we will soon see humanistic digital technology that helps us live in harmony with nature.
14.1 AI on the Rise
There are probably not many people who would doubt that we have arrived in an age of Big Data and Artificial Intelligence (AI). Of course, this opens up many previously untapped opportunities, ranging from production to automated driving and everyday applications. In fact, not only digital assistants such as Google Home, Siri or Alexa, but also many Web services and apps, smartphones and home appliances, cleaning robots and even toys already use AI or at least some kind of machine learning.
Technology visionaries such as Ray Kurzweil (*1948) predicted that AI would have the power of an insect brain in 2000, the power of a mouse brain around 2010, human-like brainpower around 2020 and the power of all human brains on Earth before the middle of this century. Many do not share this extremely techno-optimistic view, but it cannot be denied that back in 1997 IBM’s Deep Blue computer beat the chess genius Garry Kasparov (*1963), that IBM’s Watson computer won the knowledge game Jeopardy in 2011, and that Google’s AlphaGo system beat the world champion Lee Sedol (*1983) in the highly complex strategy game “Go” in 2016—about 10–20 years before many experts had expected this to happen. When AlphaGo Zero managed to outperform AlphaGo without human training shortly thereafter, just by playing Go many times against itself, the German news journal Der Spiegel wrote on October 29, 2017: “Gott braucht keine Lehrmeister”1 [“God does not need teachers.”].
14.1.1 AI as God?
This was around the time when Anthony Levandowski (*1980), a former head of Google’s self-driving car project, founded a religion that worships an AI God.2 By that time, many considered Google to be almost all-knowing. With the Google Loon project, it was also working on omnipresence. Omnipotence was still a bit of a challenge, but as the world learned by the end of 2015, it was possible to manipulate people’s attention, opinions, emotions, decisions, and behaviors with personalized information.3 In a sense, our brains had been hacked. However, only in summer 2017 did Tristan Harris (*1984), who had previously worked at Google, reveal this in his TED talk “How a handful of tech companies control billions of minds every day”.4 At the same time, Google was trying to build superintelligent systems and to become something like an emperor over life and death, namely with its Calico project.5
14.1.2 Singularity
So, was Google about to give birth to a digital God—or had it done so already?6 Those believing in the “singularity”7 and AI as “our final invention”8 already saw humanity’s days as numbered. “Humans, who are limited by slow biological evolution, couldn’t compete and would be superseded,” said the world-famous physicist Stephen Hawking (1942–2018).9 Elon Musk (*1971) warned: “We should be very careful about Artificial Intelligence. If I had to guess at what our biggest existential threat is, it’s probably that.”10 Bill Gates (*1955) stated he was “in the camp that is concerned about super intelligence”.11 And Apple co-founder Steve Wozniak (*1950) asked: “Will we be the Gods? Will we be the family pets? Or will we be ants that get stepped on?”12 AI pioneer Jürgen Schmidhuber (*1963) wants to be the father of the first superintelligent robot and appears to believe that we would be like cats shortly after the singularity happens.13 Sophia, the female humanoid robot, who is now counted as a citizen of Saudi Arabia, seems to see things similarly.14
Therefore, it is time to ask what will happen to humans and humanity after the singularity. There are different views on this. Schmidhuber claims superintelligent robots will be as little interested in humans as we are in ants, but this, of course, does not mean we would not be in competition with them for material resources. Others believe the next wave of automation will make millions or even billions of people unemployed. Combined with the world’s expected sustainability crisis (i.e. the predicted shortage of certain resources), this is not good news. Billions of humans might (have to) die early, if the predictions of the Club of Rome’s “Limits to Growth” and other studies were right.15 This makes the heated debate on ethical dilemmas, algorithms deciding about life and death, as well as killer robots and autonomous weapons16 understandable17 (see the discussion below).
Will we face something like a “digital holocaust”, where autonomous systems decide about our lives based on a Citizen Score or some other approach? Not necessarily. Others believe that, in the “Second Machine Age”,18 humans will experience unprecedented prosperity, i.e. we would enter some kind of technologically enabled “paradise”, where we would finally have time for friends, hobbies, culture and nature rather than being exploited for work. But even these optimists have often warned that societies would need a new framework in the age of AI, such as a universal basic income.19 We would certainly need a new social contract.
14.1.3 Transhumanism
Opinions are not only divided about the future of humanity, but also about the future of humans. Some, like Elon Musk, believe that humans would have to upgrade themselves with implanted computer chips in order to stay competitive with AI (and actually merge with it).20 In the long run, we would become cyborgs and replace organs (degraded by aging, disability, or disease) with technological solutions. Over time, a bigger and bigger part of our body would be technologically upgraded, and it would become increasingly impossible to tell humans and machines apart.21 Eventually, some people believe, it might even become possible to upload the memories of humans into a computer cloud and thereby allow humans to live there forever, potentially connected to several robot bodies in various places.22
Others think we would genetically modify humans and upgrade them biologically to stay competitive.23 Genetic manipulation might also extend life spans considerably (at least for those who can afford it). However, in a so-called “over-populated world” this would increase the existential pressure on others, which brings us back to the life and death decisions mentioned before and, actually, to the highly problematic subject of eugenics.24
Overall, it appears to many technology visionaries that humans as we know them today will not continue to exist much longer. All these arguments, however, are based on the extrapolation of past technological trends into the future, while we may also experience unexpected developments, for example, something like “networked thinking”—or even an ability to perceive the world beyond our own body, thanks to the increasingly networked nature of our world. (“Links” are expected to become more important than the system components they connect, thereby also changing our perception of our world.)
14.1.4 Is AI Really Intelligent?
The above expectations may also be wrong for another reason: perhaps the extrapolations are based on wrong assumptions. Are humans and robots comparable at all, or did we fall prey to our hopes and expectations, to our definitions and interpretations, to our approximations and imitations? Computers process information, while humans think and feel. But is this actually the same? Or are we comparing apples and oranges?
Are today’s robots and AI systems really autonomous to the degree that humans are autonomous? I would say “no”; at least, the systems that are publicly known still depend heavily on human maintenance and on external resources provided by humans.
Are today’s AI systems capable of emotions? I would say “no”. Changing color when its head is gently touched, as the robot “Pepper” does, has nothing to do with emotions. Being able to smile or look surprised or angry, or being able to read our facial expressions, is also not the same as “feeling” emotions. And sex robots, I would say, are able neither to feel love nor to love humans or other living beings. They can just “make love”, i.e. have sex and talk as if they had emotions. But this is not the same as having emotions, e.g. feeling pain.
Are today’s AI systems creative? I would say “hmmm”. Yes, we know AI systems that can mix cocktails, generate music that sounds similar to Bach or any other composer, or create “paintings” that look similar to a particular artist’s body of work. However, these creations are, in a sense, variations of the many inputs that were fed into the system before. Without these inputs, I do not expect such systems to be creative by themselves. Did we really see some AI-created piece of art that blew our minds, something entirely new, not seen or heard before? I am not sure.
Are today’s AI systems conscious? I would say “no”. We do not even know exactly what consciousness is and how it works. Some people think consciousness emerges when many neurons interact, much as waves emerge when many water molecules interact in an environment exposed to wind. Other people, among them some physicists, think that consciousness is related to perception—a measurement process rooted in quantum physics. If this were the case, as the famous hypothetical “cat experiment” of Erwin Schrödinger (1887–1961) suggests, reality would be created by consciousness, not the other way round. Then, the “brain” would be something like a projector producing our perceived reality.
So far, we do not even know whether AI systems understand the texts they process. When they generate text or translations, they typically combine existing elements of a massive database of human-generated texts. Without this massive database of texts produced by humans, AI-based texts and translations would probably not sound like human language at all. It seems obvious that our brain works in a very different way. While we do not have nearly as many texts stored in our brain, we can nevertheless speak fluently. Suggesting that we approximately understand the brain because we can now build deep learning neural networks that communicate with humans seems misleading.
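To make this point more tangible, consider a toy sketch of the statistical principle at work (the corpus and code below are purely illustrative, and real systems are vastly more sophisticated): even a primitive bigram model produces fluent-looking word sequences simply by recombining fragments of human-written text, without anything resembling understanding.

```python
# A minimal sketch: a bigram "language model" that generates text purely by
# recombining fragments of human-written input (toy corpus, illustration only).
import random
from collections import defaultdict

corpus = ("the brain works in a very different way than a machine "
          "the machine combines existing elements of a database "
          "the brain can speak fluently without a massive database").split()

# map each word to the words that followed it somewhere in the corpus
successors = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    successors[current_word].append(next_word)

random.seed(3)
word, output = "the", ["the"]
for _ in range(12):
    if word not in successors:
        break
    word = random.choice(successors[word])  # no understanding, just statistics
    output.append(word)
print(" ".join(output))
```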
Are today’s AI systems at least intelligent? I would say “no”. The systems that are publicly known are “weak” AI systems, which are very powerful at particular tasks and often super-human in specific respects. However, so far, “strong” AI systems that can flexibly adjust to all sorts of environments and tasks, as humans can, are not publicly known. I am also not aware of any AI system that has discovered something like the physical laws of electrodynamics or quantum mechanics.
In conclusion, I would say that, today, even humanoid robots are not like humans. They are a simulation, an emulation, an imitation, or an approximation, but it is hard to tell how similar they really are. Recent reports even suggest that humans are often involved in generating “AI” services—in which case it would be more appropriate to speak of “pseudo-AI”.25 I would not be surprised at all if our attempts to build human-like beings in silico finally made us aware of how different humans are, and of what makes us special. In fact, this is the lesson I expect to be learned from technological progress. What is consciousness? What is love? We might get a better understanding of who we really are and what our role in the universe is, if we cannot just build it the way we have tried so far.
Of course, I could be wrong in my judgment that true Artificial Intelligence does not currently exist. However, I have been waiting for years for someone to allow me to put their most advanced AI system to the test. They might, of course, be hiding it from me. To convince me of the existence of true Artificial Intelligence, I would have to see more than a chatbot that is capable of self-diagnosis (simulated “self-awareness”). I would need to have scientific and philosophical conversations with the system, and it should not just reply to my questions and statements by trying to find the best response in a huge database of human statements.
14.1.5 What Is Consciousness?
I sometimes speculate that our (3+1)-dimensional world might actually be an interpretation of higher-dimensional data (as elementary particle physics suggests as well). Then, there could be different kinds of interpretations, i.e. different ways of perceiving the world, depending on how we learn to perceive it. Imagine the brain working like a filter—and think of Plato’s allegory of the cave,26 where people see only two-dimensional shadows of a three-dimensional world and, hence, would find very different “natural laws” governing what they see. What if our world were not three-dimensional with changes in time, but we saw only a three-dimensional projection of a higher-dimensional world? Then, one day, we might start seeing the world in an entirely different way,27 for example, from the perspective of quantum logic rather than binary logic. For such a transition in consciousness to take place, we would probably have to learn to interpret weaker signals than those our five senses send to our brain. The permanent distractions of the attention economy would then be just the opposite of what is needed to advance humanity.
Let me make this thought model a bit more plausible: Have you ever wondered why Egyptian and other ancient paintings looked flat, i.e. two-dimensional, for hundreds of years, until suddenly three-dimensional-looking perspective was invented and became the new standard in art? What if ancient people really saw the world “with different eyes”? And what if this were to happen again? Remember that the invention of photography was not the end of painting. Instead, this invention freed art from naturalistic representation, and entirely new painting styles were invented, such as impressionism and expressionism. Hence, will the creation of humanoid robots finally free us from our mechanistic, materialistic view of the world, as we learn how different we really are?
If we were not just “biological robots”, other fields besides science, engineering, and logic would be a lot more important than some may think, such as psychology, sociology, history, philosophy, the humanities, ethics, and maybe even religion. Trying to reinvent society and humanity without the proper consideration of such fields of knowledge could easily end in major mistakes of historical proportions.
14.2 Can We Trust It?
14.2.1 Big Data Analytics
In a 2008 article in “Wired” magazine, Chris Anderson (*1961) claimed that we would soon see the end of theory, and that the data deluge would make the scientific method obsolete.28 If one just had enough data, certain people started to believe, data quantity could be turned into data quality, and Big Data would therefore be able to reveal the truth by itself. In the past few years, however, this paradigm has been seriously questioned. As data volume increases much faster than processing power, there is the phenomenon of “dark data”, which will never be processed; hence, it will take scientists to decide which data should be processed and how.29 In fact, it is not trivial at all to distill raw data into useful information, knowledge and wisdom. In the following, I will describe some of the problems related to the question of “how to connect the dots”.
It is frequently assumed that more data and more model parameters are better for getting an accurate picture of the world. However, it often happens that people “can’t see the forest for the trees”. “Overfitting”, where one happens to fit models to random fluctuations or otherwise meaningless data, can easily occur. “Sensitivity”, where the outcomes of data analyses change significantly when some data points are added or removed, or when another algorithm or computer hardware is used, is another problem. A third problem is errors of the first and second kind, i.e. “false positives” (“false alarms”) and cases where alarms should go off but fail to do so. A typical example is “predictive policing”, where false positives are overwhelming (often above 99 percent), and dozens of people are needed to clean the suspect lists.30 And these are by far not all the problems …
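A simple calculation illustrates how false alarms can come to dominate suspect lists (all numbers below are hypothetical, chosen only to demonstrate the base-rate effect):

```python
# A minimal sketch (hypothetical numbers): why "predictive policing" style
# classifiers can produce suspect lists that are almost entirely false positives.
population = 1_000_000      # people screened
prevalence = 0.0001         # 1 in 10,000 is an actual target (assumed)
sensitivity = 0.99          # true positive rate of the classifier (assumed)
false_positive_rate = 0.01  # 1% of innocent people get flagged anyway (assumed)

actual_positives = population * prevalence
true_positives = actual_positives * sensitivity
false_positives = (population - actual_positives) * false_positive_rate

flagged = true_positives + false_positives
share_false = false_positives / flagged
print(f"flagged: {flagged:.0f}, of which {share_false:.1%} are false alarms")
# -> about 99% of the flagged people are innocent, despite a "99% accurate" test
```

Even with a seemingly impressive 99 percent detection rate, almost everyone on the resulting list is innocent, simply because the people the system is looking for are extremely rare.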
14.2.2 Correlation Versus Causality
In Big Data, it is easy to find patterns and correlations. But what do they actually mean? Say one finds a correlation between two variables A and B. Does A then cause B, or B cause A? Or is there a third factor C, which causes both A and B? For example, consider the correlation between the number of ice-cream-eating children and the number of forest fires. Forbidding children to eat ice cream will obviously not reduce the number of forest fires at all—despite the strong correlation. It is, of course, a third factor, namely outside heat, which causes both increased ice cream consumption and forest fires.
Finally, there may be no causal relationship between A and B at all. The bigger a data set, the more patterns will be found just by coincidence, and these could be wrongly interpreted as meaningful or, as some people would say, as a signal rather than noise.31 In fact, spurious patterns and correlations are quite frequent.32
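A short simulation can illustrate how such spurious correlations arise (purely synthetic random data, for illustration only): if one scans enough variables, some of them will correlate “noticeably” with any target, even when no causal structure exists at all.

```python
# A minimal sketch: the more variables we scan, the more "significant"
# correlations appear purely by chance (all numbers below are random noise).
import numpy as np

rng = np.random.default_rng(42)
n_samples, n_variables = 100, 1000
data = rng.normal(size=(n_samples, n_variables))
target = rng.normal(size=n_samples)  # pure noise, unrelated to the data

# correlation of every random variable with the random target
corrs = np.array([np.corrcoef(data[:, j], target)[0, 1]
                  for j in range(n_variables)])
print(f"strongest spurious correlation: {np.abs(corrs).max():.2f}")
print(f"variables with |r| > 0.2: {(np.abs(corrs) > 0.2).sum()}")
# Despite zero causal structure, dozens of variables correlate "noticeably".
```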
Nevertheless, it is, of course, possible to run a society based on correlations. The application of predictive policing may be seen as an example. However, the question is whether this would really serve society well. I don’t think so. Correlations are frequent, while causal relationships are not. Therefore, using correlations as the basis of certain kinds of actions unnecessarily restrains our freedom (effectively introducing new laws through code).
14.2.3 Trustable AI
There has been the dream that Big Data is the “new oil” and Artificial Intelligence something like a “digital motor” running on it. So, if it is difficult for humans to make sense of Big Data, AI might be able to handle it “better than us”. Would AI be able to automate Big Data analytics? The answer is: partly.
In recent years, it was discovered that AI systems often discriminate against women, non-white people, or minorities. This is because these systems are typically trained on data from the past. That is problematic, since learning from the past may stabilize old societal paradigms that we should actually replace by something else, given that today’s world is not sustainable.
Lack of explanation is another important issue. For example, you may get into the situation that your application for a loan or life insurance is turned down, but nobody can explain to you why. The salespeople would just be able to tell you that their AI system recommended them to do so. The reason may be that two of your neighbors had difficulties paying back their loans. But this again confuses correlations with causal relationships. Why should you suffer from this? Hence, experts have recently pushed for explainable results under labels such as “trustable AI”. So far, however, one may say that we are still living in a “blackbox society”.33
14.2.4 Profiling, Targeting, and Digital Twins
Since the revelations of Edward Snowden (*1983), we know that we have all been targets of mass surveillance and that, based on our personal data, “profiles” of us have been created—whether we wanted this or not.34 In some cases, such profiles have been not only databases or unstructured data about us. Instead, “digital doubles”, i.e. computer agents emulating us, have been created for (as far as possible) all people in the world. You may imagine this like a black box created for everyone, which is continuously fed with surveillance data and learns to behave like the human it is imitating. Such platforms may be used to simulate countries or even the entire world. One of these systems is known under the name “Sentient World”.35 It contains highly detailed profiles of individuals. Services such as “Crystal Knows” may give you an idea.36 Detailed personal information can be used to personalize information and to manipulate our attention, opinions, emotions, decisions and behaviors.37 Such “targeting” has been used for “neuromarketing”38 and also to manipulate elections.39 Furthermore, it plays a major role in today’s information wars and the current fake news epidemic.40
14.2.5 Data Protection?
It seems the EU General Data Protection Regulation (GDPR) should have protected us from mass surveillance, profiling and targeting. In fact, the European Court of Human Rights has ruled that past mass surveillance as performed by the GCHQ was unlawful.41 However, it appears that lawmakers have come up with new reasons for surveillance, such as being the friend of a friend of a friend of a suspected criminal, where riding a train without a ticket might be enough to consider someone a criminal or suspect.
Moreover, it is now basically impossible to use the Internet without agreeing to Terms of Use beforehand, which typically force you to agree to the collection of personal data, even if you don’t like this—otherwise you will usually not get the service. The personal data collected by companies, however, is often aggregated by secret services, as Edward Snowden’s revelations about the NSA have shown. In other words, it seems that the GDPR, which claims to protect us from the unwanted collection and use of personal data, has actually enabled it. Consequently, there are huge amounts of data about everyone, which can be used to create digital doubles.
Are our personal profiles reliable at least? How similar to us are our digital twins really? Some skepticism is in order. We actually don’t know exactly how well the learning algorithms fed with our personal data converge. Social networks often have features similar to power laws. As a result, the convergence of learning algorithms may not be guaranteed. Moreover, when measurements are noisy (which is typically the case), the chances that digital twins behave identically to us are not very high. Hence, we may easily be misjudged. This does not, of course, exclude that averages or distributions of behaviors may be rather accurate (but there is no guarantee).
14.2.6 Scoring, Citizen Scores, Superscores
The approach of “scoring” goes a step further. It assesses people based on personal (e.g. surveillance) data and attributes a certain economic or societal value to them. People would be treated according to their score. Their lives would be “curated”. Only people with a high enough score would get access to certain products or services, while others would not even see them on their digital devices, or would see a downgraded offer. Personalized pricing is just one example of the personalization of our digital world.
According to my assessment, scoring is not compatible with human rights, particularly human dignity (see below).42 However, you can imagine that there are currently quite a lot of scores about you. Each company working with personal data may have several of them.43 You may have a consumer score, a health score, an environmental footprint, a social media score, a Tinder score, and many more. These scores may then be used to create a superscore, by aggregating different scores into an index.44 In other words, individuals would be represented by one number, which is often referred to as a “Citizen Score”.45 This appears to be done, for example, by the “Karma Police Program”,46 which was revealed on September 25, 2015—the day on which Pope Francis called for the Agenda 2030, a set of 17 Sustainable Development Goals, in front of the UN General Assembly.47 China is currently testing a Citizen Score approach named the “Social Credit Score”.48 The program may be seen as an attempt to make citizens obedient to the government’s wishes. This has been criticized as data dictatorship49 or technological totalitarianism.50
A Citizen Score establishes, in principle, a Big-Data-driven neo-feudalistic order.51 Those with a high score will get access to good offers, products, and services, and basically get anything they desire. Those with a low score may lose their human rights and may be deprived of certain opportunities. This may include such things as travel visas to other countries, job opportunities, permission to use a plane or fast train, Internet speed, or certain kinds of medical treatment. In other words, high-score citizens will live kind of “in heaven”, while a large number of low-score citizens may experience something like “hell on Earth”. It is something like a digital judgment day scenario, where you get negative points when you cross a pedestrian light at red, if you read critical political news, or if you have friends who read critical political news, to give some realistic examples.
The idea behind this is to establish “total justice” (in the language of the Agenda 2030: “strong institutions”), particularly for situations in which societies are faced with scarce resources. However, a Citizen Score will not do justice to people at all, as they are different by nature. All the arbitrariness of a superscore lies in the weights of the different measurements that go into the underlying index.52 These weights will be to the advantage of some people and to the disadvantage of others.53 A one-dimensional index brutally oversimplifies the nature and complexity of people and tries to make everyone the same, while societies thrive on differentiation and diversity. It is clear that scoring will treat some people very badly, in many cases without good reason (remember my notes on predictive policing above). This would affect societies in a negative way. Creativity and innovation (which challenge established ways of doing things) are expected to suffer. This will be detrimental to changing the way our economy and society work and will thereby obstruct the implementation of a better, sustainable system.
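The arbitrariness of the weights can be demonstrated with a minimal sketch (all people, score dimensions and weights below are invented): two equally plausible-looking weightings of the same underlying scores produce contradictory rankings of the same individuals.

```python
# A minimal sketch (all weights and data invented): how the ranking produced
# by a "superscore" depends on the arbitrary choice of weights.
import numpy as np

# three fictitious citizens, scored on three dimensions (consumer, health, social)
scores = np.array([
    [0.9, 0.4, 0.6],   # person A
    [0.5, 0.9, 0.5],   # person B
    [0.6, 0.6, 0.8],   # person C
])

weights_1 = np.array([0.6, 0.2, 0.2])  # one plausible-looking weighting
weights_2 = np.array([0.2, 0.6, 0.2])  # another, equally arbitrary

for w in (weights_1, weights_2):
    superscore = scores @ w
    ranking = np.argsort(-superscore)  # best first
    print(f"weights {w} -> ranking {['ABC'[i] for i in ranking]}")
# The same people end up in a different order; the "index" encodes the
# weight-setter's preferences, not an objective value of a person.
```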
14.2.7 Automation Versus Freedom
![../images/468986_2_En_14_Chapter/468986_2_En_14_Fig1_HTML.png](../images/468986_2_En_14_Chapter/468986_2_En_14_Fig1_HTML.png)
Full automation scenario
![../images/468986_2_En_14_Chapter/468986_2_En_14_Fig2_HTML.png](../images/468986_2_En_14_Chapter/468986_2_En_14_Fig2_HTML.png)
Creating room for free human choice in an age of automation
The suggested semi-automated approach would reduce the human decision workload through automation where it does not make sense to bother people with time-consuming decisions that are clear. Moreover, it would give people more time to decide important affairs. I believe this is important for choosing a future that works well for humans, who derive happiness, in particular, from exercising autonomy and maintaining good relationships.55
14.2.8 Learning to Die?
Indices based on multiple measurements or data sets are also being used to recommend prison sentences,56 to advise on whether a certain medical operation is economical, given the overall health and age of a person, or even to make life-or-death decisions.57 One scenario that has recently been obsessively discussed is the so-called “trolley problem”.58 Here, an extremely rare situation is assumed, in which an autonomous vehicle cannot brake quickly enough, and one person or another will die (e.g. one pedestrian or another, or the driver). Given that, thanks to camera technology, pattern recognition and Big Data, a future car might distinguish between age, race, gender, social status etc., controversial questions arise. For example, should the car save a grandmother while sacrificing an unemployed person, or vice versa? Note that such considerations are not in accordance with the equality principle of democratic constitutions59 and are also not compatible with classical medical ethics.60 Nevertheless, questions like these have recently been raised in what was framed as the “Moral Machine Experiment”,61 and recent organ transplant practices already apply new policies considering previous lifestyle.62
Now, assume that we were to put such data-based life-and-death decisions into laws. Can you imagine what would happen if such principles, originally intended to save human lives, were applied in possible future scenarios characterized by scarce resources? Then, the algorithm would turn into a killer algorithm setting a “digital holocaust” in motion, which might sort out old and ill people, and probably also people of low social status, including certain minorities. This is the true danger of creating autonomous systems that may seriously interfere with our lives.63 It does not take killer robots for autonomous systems to become a threat to our lives, if the supply of resources falls short.
14.2.9 A Revolution from Above?
The above scenario is, unfortunately, not just a fantasy. It has been seriously discussed in certain circles for some time already. One book even proposes “Learning to Die in the Anthropocene”.64 In this connection, it should be remembered that the Club of Rome’s “Limits to Growth” study suggests we will see a serious economic and population collapse in the twenty-first century.65 Accordingly, billions of people would die early. I have heard similar assessments from various experts, so we should take these forecasts seriously. Some PhD theses have even discussed AI-based euthanasia.66 There is also a possible connection with the eugenics agenda.67 Sadly, it seems that the argument to “save the planet” is now increasingly being used to justify the worst violations of human rights. China, for example, has recently declined to sign a human rights declaration in a treaty with Switzerland,68 and has even declined to receive an official German human rights delegation.69 Certain political forces in other countries (such as Turkey, Japan, the UK and Switzerland) have also started to propose human rights restrictions.70 In the meantime, many cities and even some countries (including Canada and France) have declared a state of “climate emergency”, which may eventually end with emergency laws and human rights restrictions.
Such political trends remind one of the “revolution from above” that the Club of Rome demanded decades ago.71 In the meantime, such a system—based on technological totalitarianism such as mass surveillance and citizen scores—has been created.72 Note, however, that the negative climate impact of the oil industry was already known about 40 years ago,73 and so was the problem of limited resources.74 Nevertheless, it was decided to export the non-sustainable economic model of industrialized countries to basically all areas of the world, and energy consumption has (been) dramatically increased since then.75 I would consider this reckless behavior and say that politics has clearly failed to ensure responsible and accountable business practices around the world, so that human rights now seem to be at stake.
14.3 Design for Values
14.3.1 Human Rights
Let us now recall why human rights were established in the first place. They actually resulted from terrible experiences such as totalitarian regimes, horrific wars, and the Holocaust. The establishment of human rights as the foundation of modern civilization, reflected also in the UN’s Universal Declaration of Human Rights,76 was an attempt to prevent the repetition of such horrors and evil. However, the promotion of materialistic, consumption-driven societies by multi-national corporations led to the current (non-)sustainability crisis, which has become an existential threat that might cause the early death of hundreds of millions, if not billions, of people.
Had industrialized countries reduced their resource consumption by just 3% annually since the early 1970s, our world would be sustainable by now, and discussions about overpopulation, climate crises and ethical dilemmas would be baseless. Had we engaged in the creation of a circular economy and used alternative energy production schemes early on, rather than following the wishes of big business, our planet could easily support our world’s population.
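A back-of-the-envelope calculation indicates the scale of this missed opportunity: a 3% annual reduction sustained over the roughly 50 years since the early 1970s would have compounded to 0.97^50 ≈ 0.22, i.e. resource consumption would have shrunk to about a fifth of its starting level.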
14.3.2 Happiness Versus Capitalism
Psychological research shows that autonomy and good relationships—one might also call it “love”—are the main factors promoting happiness.77 It is also known that (most) people have social preferences.78 I believe that if our society were designed and managed in a way that supports the happiness of people, it would also be more sustainable, because happy people do not consume as much. From the perspective of businesses and banks, of course, less consumption is a problem, and consumption is therefore being pushed in various ways: by promoting individualism through education, which causes competition rather than cooperation between people and makes them frustrated; they will then often consume to compensate for their frustration. Furthermore, using personalized methods such as “big nudging” and neuromarketing, advertisements have become extremely effective in driving us into more consumption. These conditions could, of course, be changed.
![../images/468986_2_En_14_Chapter/468986_2_En_14_Fig3_HTML.png](../images/468986_2_En_14_Chapter/468986_2_En_14_Fig3_HTML.png)
Illustration of the transition from democracy to surveillance capitalism, which appears to be quickly becoming an all-encompassing “operating system” of many societies
For such reasons, I am also critical of utilitarian ethics, which makes numerical optimization, i.e. business-like thinking, the foundation of “ethical” decision-making. Such an approach gives everyone and everything (including humans and their organs) a certain value or price and implies that it would even be justified to kill people in order to save others, as the “trolley problem” seems to suggest. In the end, this would probably turn peaceful societies into a state of war or “unpeace”, in which only one thing would count (namely money, or whatever is chosen as the utility function—it could also be power or security, for example, or any kind of Citizen Score).
This is the main problem of the utilitarian approach: while it aims to optimize society, it will actually destroy it little by little. The reason is that a one-dimensional optimization and control approach is not suited to handle the complexity of today’s world. Societies cannot be steered like a car, where everyone moves right or left, as desired.80 For societies to thrive, one needs to be able to steer in different directions at the same time, such as better education, improved health, reduced consumption of non-renewable energy, more sustainability, and increased happiness. This requires a multi-dimensional approach rather than one-dimensional optimization, which is why cities and countries are managed differently from companies (and live much longer!). The aforementioned requirement of pluralism has so far been best fulfilled by democratic forms of organization. However, a digital upgrade of democracies is certainly overdue. For this reason, it has been proposed to build “Digital Democracies” (or Democracies x.0, where x is a natural number greater than 1).81
14.3.3 Human Dignity
It is for the above reasons that human dignity has been put first in many constitutions. Human dignity is considered to be a right given to us at birth, and it is considered to be the very foundation of many societies. It is the human right that stands above everything else. Politics and other institutions must take action to protect human dignity from violations by public institutions and private actors, also abroad. Societal institutions lose their legitimacy if they do not engage effectively in this protection.82 Improving human dignity on short, medium and long time scales should, therefore, be the main goal of political and human action. This is obviously not just about protecting or creating jobs; it also calls for many other things, such as sustainability.
The question is: what is human dignity really about? It means, in particular, that humans are not supposed to be treated like animals, objects or data. They have the right to be involved in decisions and affairs concerning them, including the right of informational self-determination. Exposing people to surveillance while not giving them the possibility to easily control the use of their personal data is a significant violation of human dignity. Moreover, human dignity implies certain kinds of freedoms, which serve not only to protect individual interests, but also to avoid the abuse of power and to ensure the functioning and well-being of society altogether.
Of course, freedoms should be exercised with responsibility and accountability, and this is where our current economic system fails. It diffuses, reduces and dissolves responsibility, in particular with regard to the environment. So-called externalities, i.e. external (often unwanted, negative) side effects, are not sufficiently considered in current pricing and production schemes. This is what has driven our world to the edge, as the current environmental crisis shows in many areas of the world.
Furthermore, human dignity, like consciousness, creativity and love, cannot be quantified well. In other words: what makes us human is not well reflected by Big Data accumulated about our world. Therefore, a mainly data-driven and AI-controlled world largely ignores what is particularly important for us and our well-being. This is not expected to lead to a society that serves humans well.
While many people are talking about human-centric AI and a human-centric society, they often mean personalized information, products and services, based on profiling and targeting. This gets things wrong. The justification of such targeting is often to induce behavioral change towards better environmental and health conditions, but it is highly manipulative, often abused, and violates privacy and informational self-determination. In the ASSET project [EU Grant No. 688364], in contrast, we have shown that behavioral change (e.g. towards the consumption of more sustainable and healthy products) is also possible based on informational self-determination and respect for privacy.83 I therefore urge public and private actors to push informational self-determination forward quickly.
14.3.4 Informational Self-determination
Informational self-determination is a human right that follows directly from human dignity, and it cannot be given away under any circumstances (in particular, not by accepting certain Terms of Use). Nevertheless, in times of Big Data and AI, we have largely lost self-determination, little by little. This must be corrected quickly.
![../images/468986_2_En_14_Chapter/468986_2_En_14_Fig4_HTML.png](../images/468986_2_En_14_Chapter/468986_2_En_14_Fig4_HTML.png)
Main suggested features of a platform for informational self-determination
The platform would also create a level playing field: not only big businesses, but also small and medium-sized enterprises (SMEs), spinoffs, non-government organizations (NGOs), scientific institutions and civil society could work with the data treasure, if people approved their access to the data (and many people might actually grant this by default). Overall, such a platform for informational self-determination would promote a thriving information ecosystem that catalyses combinatorial innovation.
Government agencies and scientific institutions would be allowed to run statistics. Even a benevolent superintelligent system would be possible—one that helps desirable activities (such as social and environmental projects and the production of public goods) to succeed more easily, while not interfering with people’s free will. Such a system should be designed for values such as human dignity, sustainability and fairness, as well as further constitutional and cultural values that support the development of creativity and human potential with societal and global benefits in mind.
Data management would be done by means of a personalised AI system running on our own devices, i.e. digital assistants that learn our privacy preferences and which companies and institutions we trust or don’t trust. Our digital assistants would conveniently preconfigure personal data access, and we could always adapt it.
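To make this a bit more concrete, here is a minimal sketch of what such a learned preference store might look like (all class names, fields and parties below are hypothetical; a real system would add authentication, audit trails and actual learning from user feedback):

```python
# A minimal sketch (hypothetical design): a personal digital assistant that
# stores data-access preferences and applies them to incoming requests.
from dataclasses import dataclass, field

@dataclass
class AccessPolicy:
    data_category: str           # e.g. "mobility", "health", "purchases"
    trusted_parties: set[str]    # organizations the user trusts with this data
    purpose_whitelist: set[str]  # allowed purposes, e.g. "aggregate_statistics"

@dataclass
class PrivacyAssistant:
    policies: list[AccessPolicy] = field(default_factory=list)

    def decide(self, party: str, category: str, purpose: str) -> bool:
        """Grant access only if some stored policy explicitly allows it."""
        return any(
            p.data_category == category
            and party in p.trusted_parties
            and purpose in p.purpose_whitelist
            for p in self.policies
        )

assistant = PrivacyAssistant([
    AccessPolicy("mobility", {"city_statistics_office"}, {"aggregate_statistics"}),
])
print(assistant.decide("city_statistics_office", "mobility", "aggregate_statistics"))  # True
print(assistant.decide("ad_network", "mobility", "profiling"))                         # False
```

The key design choice in this sketch is that access is denied by default and granted only by explicit, user-controlled policies, which the assistant could preconfigure and the user could adapt at any time.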
Over time, if implemented well, such an approach would establish a thriving, trustable digital age that empowers people, companies and governments alike, while making quick progress towards a sustainable and peaceful world.84
14.3.5 Design for Values
As our current world is challenged by various existential threats, innovation seems to be more important than ever. But how can we guide innovation in a way that creates large societal benefits while keeping undesirable side effects at bay? Regulation often does not appear to work well. It is often quite restrictive and typically comes late, which is problematic for businesses and society alike.
However, it is clear that we need responsible innovation.85 This requires pro-actively addressing relevant constitutional and ethical, social and cultural values already in the design phase of new technologies, products, services, spaces, systems, and institutions.
There are several reasons for adopting a design for values approach:86 (1) the avoidance of technology rejection due to a mismatch with the values of users or society, (2) the improvement of technologies by better embodying these values, and (3) the stimulation of value-oriented behavior by design.
Value-sensitive design87—and ethically aligned design88—have quickly become quite popular. It is important to note, however, that one should not focus on a single value. Value pluralism is important.89 Moreover, the kinds of values chosen may depend on the functionality or purpose of a system. For example, considering findings in game theory and computational social science, one could design next-generation social media platforms in ways that promote cooperation, fairness, trust and truth.90 Also note that a list of 12 values to support flourishing information societies has recently been proposed (see Appendix 14.1).91
14.3.6 Democracy by Design
Among the social engineers of the digital age, it seems that democracy has often been framed as “outdated technology”.92 Larry Page (*1973) once said that Google wanted to carry out experiments but could not do so, because laws were preventing this.93 Later, in fact, Google experimented in Toronto, for example, but people did not like it much.94 Peter Thiel (*1967), on the other hand, claimed there was a deadly race between politics and technology, and one had to make the world safe for capitalism.95 In other words, it seems that, among many leading tech entrepreneurs, there has been little love and respect for democracy (not to mention their interference with democratic elections by trying to manipulate voters).
![../images/468986_2_En_14_Chapter/468986_2_En_14_Fig5_HTML.png](../images/468986_2_En_14_Chapter/468986_2_En_14_Fig5_HTML.png)
Values that matter for democracies, in particular
14.3.7 Fairness
I want to end this contribution with a discussion of the power of fairness. People have often asked: Is it good that everyone has one vote? Shouldn’t we have a system in which smart people have more weight? Shouldn’t we replace the democratic “one man one vote” principle by a “one dollar one vote” system? (By the way, today we probably have the latter, because of voter manipulation by “big nudging”.) Experimental evidence about the “wisdom of crowds” surprisingly suggests that giving people different weights, whatever the criteria, does not improve results.98 On the contrary, studies in collective intelligence show that largely unequal influence on a debate will reduce social intelligence.99 Diversity of information sources, opinions, and solution approaches is what makes collective intelligence work.
In conclusion, it seems a fair system based on the principle of equality is the best. In fact, it can be mathematically shown for many complex model systems that they evolve towards an optimal state if and only if interactions are symmetrical.100 If symmetry is broken, all sorts of things can happen. However, one can safely say that a hierarchical system, or one controlled by utilitarian principles, is very unlikely to achieve the best systemic performance. For example, replacing tree-like supply systems with a circular economy could potentially improve the quality of life for everyone while making our economy more sustainable.
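The wisdom-of-crowds effect behind this argument can be illustrated with a small simulation (synthetic data; it assumes, importantly, that individual estimates are independent and equally accurate): concentrating voting weight on a few individuals increases the collective error.

```python
# A minimal sketch (synthetic data): equal-weight averaging of independent
# estimates versus concentrating weight on a few dominant voices.
import numpy as np

rng = np.random.default_rng(0)
truth = 100.0
n_people, n_trials = 50, 10_000

# everyone is equally (in)accurate: unbiased estimates with the same noise
estimates = truth + rng.normal(0, 20, size=(n_trials, n_people))

equal = estimates.mean(axis=1)             # one person, one vote
w = np.ones(n_people)
w[:5] = 20                                  # five people dominate the vote
weighted = estimates @ (w / w.sum())

print(f"equal weights:   RMSE = {np.sqrt(((equal - truth) ** 2).mean()):.2f}")
print(f"unequal weights: RMSE = {np.sqrt(((weighted - truth) ** 2).mean()):.2f}")
# With identical individual accuracy, equal weighting averages out more
# errors; dominance by a few voices increases the collective error.
```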
14.3.8 Network Effects for Prosperity, Peace and Sustainability
In our increasingly networked world, we are currently experiencing a transition from component-dominated to interaction-dominated systems. The resulting network effects can change everything. Combinatorial innovation (i.e. innovation ecosystems), which would be enabled by a platform for informational self-determination, could boost our economy. Supporting collective intelligence (which should be the foundational principle of digital democracies) would benefit society. And a multi-dimensional real-time feedback system (a novel, socio-ecological finance system, which we sometimes call Finance 4.0,101 FIN4, FIN 4+ or just FIN+) would be able to promote a sustainable circular and sharing economy and thereby help improve the state of nature. In this way, harmony between humans and nature could be reached, and everyone could benefit.
Finally, I would like to draw attention to a computer simulation we performed in order to understand the evolution of “homo economicus”, the utility-maximizing selfish man assumed in economics. To our surprise, we found that, after dozens of generations, people would develop other-regarding preferences and cooperative behavior, if parents raise children in their close neighborhood, as humans do.102 In fact, compared to other species, it is quite exceptional how many years of their lives children spend with their parents. This makes a big difference, as it makes (most) people social.
![../images/468986_2_En_14_Chapter/468986_2_En_14_Fig6_HTML.png](../images/468986_2_En_14_Chapter/468986_2_En_14_Fig6_HTML.png)
Results of computer simulations showing the evolution of “homo socialis” with other-regarding preferences and “networked thinking”, starting with a selfish type of human, called “homo economicus”. The transition is expected to take dozens of generations, but it may just be happening… (Fig. 1A of Grund et al. [33]. Reproduction with kind permission of the Springer Nature Publishing Group.)
Interestingly, such networked thinking and benevolent behavior can now be supported with digital technologies.103 I believe, however, that technological telepathy104 is not needed for this—it might even have negative effects. But altogether, it is entirely in our hands to create a better world. “Love your neighbor as yourself” (i.e. be fair and give the concerns of others [and nature] as much weight as your own) is the simple success principle, which will eventually be able to create prosperity and peace. We could have known this before, but now we have the scientific evidence for it…
14.4 Appendix 1: Success Principles for Our Future
In this book, I have argued that we need to allow diverse sets of rules, both to create a socio-economic system that serves a large variety of functions and to allow companies and people to experiment in order to find better rules for the future. Nevertheless, it would be favorable to have a number of globally shared fundamental principles—a guiding set of rules small enough for everyone to remember, which would support interoperability and peaceful co-existence.
As I have demonstrated before, in a strongly connected world, maximizing individual payoffs does not produce the best results. To avoid undesirable systemic instabilities and tragedies of the commons, superior principles are needed. The following set of fundamental rules is the result of extensive discussions I have had with many people. The similarity of these principles with those advocated by philosophers and world religions is probably not by chance. It is clear that these principles have been the foundation on which the success of many societies has been based for thousands of years. As I pointed out before, these cultural principles are more persistent than steel and more powerful than wars. They also create social capital, which is one of the preconditions for economic well-being. However, the rules below are particularly attuned to the problems implied by complex interdependencies, strong interactions, and the increasing importance of information, which are characteristic of our current and future world.
- 1.
Respect: Treat all forms of life respectfully; protect and promote their (mental, psychological and physical) well-being.
- 2.
Diversity and non-discrimination: Support socio-economic diversity and pluralism (also by the ways in which Information and Communications Technologies are designed and operated). Counter discrimination and repression, prioritize signaling and rewards over punishment.
- 3.
Freedom: Support the principle of informational self-determination; respect creative freedom (opportunities for individual development) and the freedom of non-intimidating expression; abstain from mass surveillance.
- 4.
Participatory opportunities: Enable self-determined decisions, offer participatory opportunities and a choice of good options. Ensure to properly balance the interests of all relevant (affected) stakeholders, particularly political and business interests, and those of citizens.
- 5.
Self-organization: Create a framework to support flexible, decentralized, self-organized adaptation, e.g. by using suitable reputation systems.
- 6.
Responsibility: Commit yourself to timely, responsible and sustainable actions, by considering their externalities.
- 7.
Quality and awareness: Commit yourself to honest, high-quality information and good practices and standards; support transparency and awareness.
- 8.
Fairness: Reduce negative externalities that are directly or indirectly caused by your own decisions and actions, and fully compensate the disadvantaged parties (in other words: “pay your bill”); reward others in a fair way for positive externalities.
- 9.
Protection: Protect others from harm, damage, and exploitation; refrain from aggressive or war-like activities (including cybercrime, cyberwar, and misuse of information).
- 10.
Resilience: Reduce the vulnerability of systems and increase their resilience.
- 11.
Sustainability: Promote sustainable systems and long-term societal benefits; increase systemic benefits.
- 12.
Compliance: Engage in protecting and complying with these fundamental principles.
To summarize the above even more briefly, the most important rule is to increase positive externalities, reduce negative ones, and ensure fair compensation. Some would just say “Love nature and your neighbor as yourself!”
This fundamental principle takes care of the implications of our interactions, and is probably enough to create a better world that will benefit everyone! Mastering our future isn’t that complicated, after all!
This work was partially funded by the European Community’s H2020 Program under the funding scheme “FETFLAG-01-2018 (CSA)”, grant agreement #820437, “Toward AI Systems that Augment and Empower Humans by Understanding us, our Society and the World Around Us—Humane AI” (https://www.humane-ai.eu).