Fear came early. It arrived when the Internet became popular, and those who were already coming and going there asked themselves how to organize this deluge of information from everyone and everywhere. There were lists of links on websites and in newspapers, and there were even printed brochures with web addresses to provide a bit of clarity. The irony of this form of presentation is unmistakable, but it is not the only paradox to speak of. These lists of links on paper or online were recommendations from experts and the new know-it-alls who wanted to tame the new in an old way: with judgment.
Today it’s hard to imagine, but back then Yahoo had two dozen employees who carefully looked over each website. The information was arranged in a hierarchy determined by the offline world. Thus the website of the Messianic Jewish Alliance of America was placed in the category “Judaism,” which was part of the category “Religion,” which in turn was part of the category “Society and Culture.” It was an attempt to address the complexity of the digital world using the classification methods of the analog world. And it brought all of the problems of such a method along with it—for example, the protests of “real” Jews against the inclusion of the Messianic Jewish Alliance of America. Most Messianic Jews were born to a Jewish mother, but they believe in Jesus. That makes them a heretical sect in the eyes of many, one that doesn’t fit in the category of Judaism, or rather shouldn’t be found in it, and in any case shouldn’t be listed there by Yahoo, so the complaints went.
Then came Google and the end of the hierarchy. You no longer rifled through cabinet A, drawer B, file C to find something; you simply entered the search term. The old ontological model of organization was replaced by the relational database, which could assign the Messianic Jewish Alliance of America the tag “Jewish” as well as “heretical” and let the public decide. It was the transition from the order of experts to the referendum of the masses, which determined the characterization of a website based on its links and tags, dictating its popularity and its visibility along with it: the more people link to a site, the more important it is, according to Google’s simple but multibillion-dollar idea.
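The counting logic behind that idea can be sketched in a few lines. The following is only an illustrative toy with invented site names, not Google's actual ranking method, which weights links far more elaborately:

```python
# Toy illustration of "the more people link to a site, the more important it is".
# The site names and link graph are invented; real search ranking is far more complex.
links = {
    "site_a": ["site_b", "site_c"],   # site_a links to site_b and site_c
    "site_b": ["site_c"],
    "site_c": ["site_a"],
    "site_d": ["site_c", "site_a"],
}

inbound = {}                          # count inbound links per site
for source, targets in links.items():
    for target in targets:
        inbound[target] = inbound.get(target, 0) + 1

# Rank purely by the count: the "referendum of the masses" in miniature.
ranking = sorted(inbound.items(), key=lambda item: item[1], reverse=True)
print(ranking)   # site_c ranks first because the most sites point to it
```

No editor judges any of these sites; the tally alone decides which one appears at the top.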
This switch from qualitative to quantitative assessment was based on one final qualitative decision: that quantity is the more reliable criterion. Numbers, according to the solution of the hour, say more than a thousand words. It didn’t matter what anyone thought about an idea or a person, only whether they were popular. That’s how it is with every vote whose winners are determined by a count. It’s a deeply democratic process. That’s precisely the point: the switch from Yahoo to Google wasn’t just a transition of power in the online search-engine market segment. It also wasn’t just a technological paradigm shift. It was a political revolution that replaced the judgment of a few with the power of the masses, expertocracy with numerocracy.
Long before the term “wisdom of the crowd” became popular, there was a book with the title Collective Intelligence. Its author, the French philosopher Pierre Lévy, would later describe the Internet as a knowledge community that extended beyond all borders, and as “universality without totality.”1 The Internet is antitotalitarian because content can no longer control its environment. Instead, everything is connected to everything, and must prove itself in relation to the other with messages, comments, and observations. Lévy viewed this as an expansion of the project of the Enlightenment. At least as far as Wikipedia is concerned, he proved to be correct.
But the masses can also be wise when they vote, not just when they discuss something. On a fall day in 1906, the British polymath Francis Galton proved this when he had eight hundred visitors at an agricultural exhibition guess the weight of an ox. Of course, there were many wrong guesses, but since they missed in both directions (some too low, some too high), the errors evened each other out, and taken together, they ultimately led to an average that was only one pound over the correct number. It was a moment of glory for the wisdom of the crowd, but also for democracy. As Galton later wrote about the experiment, the average participant in this estimate was as well prepared for it as the average voter is prepared to assess political issues. If the crowd reliably hit on the weight of the ox, according to Galton’s logical conclusion, their judgment in democratic decisions was also to be trusted.2
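The arithmetic behind the anecdote is easy to replay. A small sketch with assumed numbers, not Galton's original figures, shows how guesses that miss in both directions cancel out in the average:

```python
import random

# Hypothetical guesses, not Galton's data: 800 visitors each miss an assumed
# true weight in one direction or the other, yet the average lands very close to it.
random.seed(0)
true_weight = 1200                                    # assumed weight in pounds
guesses = [true_weight + random.gauss(0, 60) for _ in range(800)]

average_guess = sum(guesses) / len(guesses)
print(round(average_guess, 1))                        # typically within a few pounds of 1200
```

The individual guesser may be sixty pounds off; the crowd as a whole is almost never more than a few.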
Galton was more confident about the matter in 1906 than Nicolas de Condorcet over a century before him. In his jury theorem, Condorcet emphasized that the odds of getting a correct answer to a yes-or-no question from a group increase along with the size of the group, and that the impartiality of laypeople is even a benefit—which, I would note, also ultimately defines the concept of trial by jury. However, when faced with the masses’ lack of education and voting competence, he preferred representative democracy to direct democracy, so that power remains with those who, through reflection and discussion, know what they are talking about and what they must consider when making their decision.
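Condorcet's claim can be checked numerically. The simulation below follows the theorem's standard assumptions, with a competence value and group sizes chosen purely for illustration: if each voter independently answers a yes-or-no question correctly with a probability above one half, the chance that the majority is right climbs toward certainty as the group grows.

```python
import random

# Illustrative simulation of Condorcet's jury theorem; p and the group sizes
# are assumptions made for the example, not figures from the text.
random.seed(1)

def majority_correct_rate(n_voters, p=0.6, trials=10_000):
    """Share of trials in which a simple majority of n_voters answers correctly."""
    hits = 0
    for _ in range(trials):
        correct_votes = sum(random.random() < p for _ in range(n_voters))
        if correct_votes > n_voters / 2:
            hits += 1
    return hits / trials

for n in (1, 11, 101, 1001):
    print(n, round(majority_correct_rate(n), 3))
# With p = 0.6 the rate rises from about 0.6 for a single voter toward 1.0 for large groups.
```

The theorem cuts both ways, which is Condorcet's caveat: if each voter is slightly more likely to be wrong than right, the same arithmetic drives the majority toward certain error.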
This demand for enlightened voting competence seems to have been diminished by the Internet, since the world can be reduced to whatever seems personally obvious to us based on algorithms’ filters and our own preferences. The most recent answer to the plethora of information in the information age ensnares people in a “me-loop” of auto-propaganda. Contrary to what Galton and Condorcet surmised, the result is the folly of the crowd: a majority of like-minded people mutually affirming and encouraging each other, instead of balancing their opinions with numerous others.3
The Internet means a changing of the gatekeepers. Before, experts, administrators, and what Michel Foucault called the “discourse police” determined who could communicate what in public, but since the advent of the Internet you no longer need the approval of an editor, publisher, or organizer. What’s needed now is attention, which generally used to be assured as soon as you passed the gatekeepers. At the same time, this entails a change in the way success is measured. Before, you were already successful if you had access to the public space because it meant you had passed the experts’ test. Now the only test is that of public reception. In the currency of social networks, the power of the superior argument is assumed based on the higher number of likes; quality equals quantity.
The criteria for assessment in the digital meritocracy are not subject-specific, but rather based on the attention economy. Whatever you do, however good or bad it is, its success is based on views, shares, likes, followers, and retweets. As we know from popularity-driven websites like Reddit and Digg, the deepest content is rarely to be found under the highest number. “Social bookmarking” (the specialist term for recommendation lists created by Internet users) robs the logic of rating of its last opposition, which editors, directors, publishers, and other traditional gatekeepers still voice based on professional ethics. The much-touted wisdom of crowds degenerates into the power of the many, who push articles on Kim Kardashian or cute dogs to the top of the “must read/see” list. But that isn’t the only problem with the change of gatekeepers. The paradox lies in the fact that being measurable doesn’t prevent the system from being manipulated.
This is the case when invisible editors in the backrooms of platforms “correct” the results of social bookmarking. This secret return of the experts caused a stir—and rightly so—in May 2016 in reaction to Facebook’s manipulation of its statistical trend notifications. It was an instance of applying the old principle in the garb of the new, except this time without the rules and proof of qualifications that the old system brought with it. The discourse police had been privatized in a sense, and as a secret rapid deployment force, they answered only to the owners of the platform: the state’s monopoly on force had been transferred to the head of Facebook.
In this case, the statistics were manipulated after the fact by human intervention, but before that they had already been distorted by the conditions of communication that produce the data. Given the short time that young people today spend on the rapidly turning communication carousel, they often like a video report or newspaper article that already has many likes. Some of them will look briefly at the video or article to be sure. But most of them trust blindly in the fact that quantity means quality. Likes breed likes, as Karl Marx would have said, had he written his Capital under the influence of the attention economy.
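The feedback loop the paragraph describes, likes breeding likes, can be made visible with a toy "rich get richer" simulation; all parameters here are invented for the example. Five equally good items start level, and each arriving user likes one of them with a probability proportional to the likes it has already collected.

```python
import random

# Toy "likes breed likes" loop; the item count and user count are invented.
# Each new user likes one of five equally good items, with probability
# proportional to the likes the item has already collected.
random.seed(2)

likes = [1, 1, 1, 1, 1]                       # everyone starts level
for _ in range(10_000):                       # users arrive one at a time
    pick = random.choices(range(len(likes)), weights=likes)[0]
    likes[pick] += 1

print(likes)   # early random advantages get amplified; the final ranking owes much to luck
```

Nothing distinguishes the items themselves; the hierarchy emerges from the counting alone, which is precisely why the count is so easy to mistake for a verdict on quality.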
The phenomenon of numeric populism is older than the Internet. Quantification is the magic word in any form of administration, and it has always rendered experts unnecessary. Under the banner of numbers, even the clueless can reach a decision, because everyone knows that five is more than four. It’s well known that important political discussions have already fallen victim to entertaining talk shows through the logic of the ratings, as is the fact that politicians base their statements on survey results. Numerical superiority always gave the masses dominance over the old gatekeepers. But this feedback loop isn’t the only factor, and certainly not the most absurd aspect of numeric populism.
Quantifiability has also become the basis for assessment in universities. Hence the introduction of standardized tests and grade-point averages for students, income scales for graduates, citation indexes, impact factors, and seminar evaluation statistics, as well as ranking lists for universities. The craze for scoring and ranking aims at accountability through accounting, allowing even freshly minted administrative assistants to make statements about the reputations and professional success of veteran professors in their department. With the digitization of society and the cult of the interaction paradigm, this accounting model has now extended to all possible domains. Not even art curation has been spared.
At the end of 2014, the Wall Street Journal published an article titled “When the Art Is Watching You.” The lead image showed sculptures of torsos with cameras for heads.4 German literary scholars will be immediately reminded of Rainer Maria Rilke’s famous poem “Archaic Torso of Apollo” (1908) when they think of artists gazing expectantly out of their works (in Rilke’s case a headless statue) at observers. The well-known closing verse has given generations of literature students goosebumps: “for here there is no place / that does not see you. You must change your life.”5 But the article isn’t about Rilke’s poem, or at least it’s about the poem only insofar as the modern, technical eye inverts his conclusion into its complete opposite.
The audience isn’t being watched by the art, but rather by the mediators of art, the data analysts who evaluate the public’s behavior: how often do they come to the museum, to which exhibitions, how long do they stand in front of which works of art, what do they buy in the museum shop? Then the tech people show the curators and art educators diagrams and tables to demonstrate which topics will ensure more visitors. And so in this case, ratings also rule in favor of the economy. The question is no longer “What is meaningful in the history of art?” but rather “What’s trending?” It is the death of experts at the very center of high culture.
As is so often the case, what became a societal problem through the Internet had its beginnings in art. Loss of privacy, for example, is an idea that began with the avant-garde, who conceived of transparency as an attack on bourgeois culture, which was still the case for self-surveillance projects in the Internet’s pioneering days. Similarly, the death of experts was also proclaimed in art as the death of the artist, which some artists began to press for in the 1960s under the names of chance, behaviorist, and participatory art. These “suicide theorists” demanded that the artist be less involved in the production of art, promoting a corresponding relocation of power from the artist to the audience.6
One example of this transfer of power was the interactive installation, which reacted to the audience’s behavior—that is, the audience “completed” (as it was grandiosely termed) the installation. Another is participatory art with a focus on cultural activism, which since the end of the twentieth century has aimed to generate more open social situations. Dialogue and respect are the central tenets of this art form, an empathetic identification with the other rather than an “arrogant” appeal to change your life. It no longer employs the shock with which the Dadaists and many other artists alienated audiences; this is a group cuddle session. Art critics are inevitably included in the cuddling, since, if the goal is to free art from the aesthetic or pedagogical objectives of “know-it-all” artists, there can hardly be positions from which to evaluate works of art. Instead, the measure of success is how many people participate, which ultimately makes the art inherently numerocratic.7
The death of experts ultimately occurs at the expense of those who are supposed to benefit from it. That becomes evident first and foremost in the realm of art, which serves the public by challenging it, unsettling it, making demands on it, and thus pushing the observers to move beyond themselves—as Adorno would later translate Rilke’s verse. But if art is made to please people, it deprives them of the opportunity to ever be more than they already are. Adorno recognized this deception in the culture industry; now it is the unavoidable side effect of the death of experts, which is the Internet’s general effect.
If we are no longer pestered by experiences that challenge us, and if we always have the option to choose what’s fun now, we will never experience what a joy it can be to comprehend complex ideas and even be able to produce them ourselves. As in sports, the endorphins can’t be had without effort and perseverance. But these virtues are dying in the culture of immediate gratification. Who has the patience anymore for something that isn’t immediately comprehensible? In such cases, who thinks anymore that the problem lies with themselves? Along with the experts, we’ve also gotten rid of the aspiration that they brought to their audiences. There are no longer “chaperons” who admonish people to see things through or try a change of course: this is cowardice disguised as democracy.