9
ALGORITHMS
At the heart and soul of all software are algorithms, the fundamental components, composed of simple instructions, of the computer world: If A then B. The “if-then” chain can be complex and may even produce new “ifs” from “thens” over several consecutive steps. But we are always dealing with a finite number of steps that finally result in a certain output of information from a certain input. Algorithms are as old as arithmetic and geometry; Euclid’s procedure for calculating the greatest common divisor of two numbers is nothing but the processing of certain if-then relationships in order to solve a mathematical problem. However, the golden era for algorithms arrived with the computer and the Internet, that is, once a machine based on if-then compositions could be fed with significant and now vast amounts of data. Everyone is familiar with Amazon’s recommendations based on the statistical analysis of existing sales: If you are interested in such and such a book, then it is very likely you will also be interested in their recommendation. This if-then constellation—which I invoke here as composed of socially determinative formulas, over and above the procedural if-then clauses of programming languages—is Pandora’s gift from the silent revolution.
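To make the if-then character of even the oldest algorithm concrete, here is a minimal sketch, in Python, of Euclid’s procedure written deliberately as a chain of if-then steps (the function name, wording, and comments are mine, not Euclid’s):

```python
def euclid_gcd(a: int, b: int) -> int:
    """Greatest common divisor computed as a finite chain of if-then steps."""
    while True:
        if b == 0:           # if nothing remains to divide by,
            return a         # then the current value of a is the answer;
        a, b = b, a % b      # if not, then replace (a, b) with (b, remainder) and repeat.

print(euclid_gcd(1071, 462))  # prints 21
```

However complex the chain becomes, the structure is the same: a finite number of if-then steps leading from a given input to a determined output.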
From the perspective of cultural theory, the term algorithm is of course understood in a more general way than it is in the computer sciences, whose representatives might object that even within the field’s discourse the term is ambiguous.1 On the other hand, it is precisely this distancing from the discourse of computer science as well as the metaphorical honing of the term that allows for the current gathering of different themes and debates and the development of a more generative and critical view of media technology’s effects on culture. In this context I would like to note three distinct points:
1.    Algorithms serve as tools to which one can turn for the solution of certain problems, adjusting them, if necessary, to concrete purposes. They are “logic plus control,” as the famous 1979 dictum of the computer scientist Robert Kowalski put it.
2.    With regard to computers, algorithms are not only instructions guiding technological processes (rather than human behavior, as in Euclid’s case); they are always also the implementation of such instructions outside human control or regulation.
3.    Algorithms interact in combination with other algorithms in complex systems and can increasingly operate according to their own directives: “Algorithms act, but they do so as part of an ill-defined network of actions upon actions, part of a complex of power-knowledge relations, in which unintended consequences, like the side effects of a program’s behavior, can become critically important.”2
This means that even though, in principle, algorithms are mathematical manifestations of cultural insight (for Amazon an insight into “similar tastes”), on a theoretical level, they develop a life of their own through their complex interactions, so that their output no longer necessarily represents a cultural input. The questionable social effect of algorithms lies in these automatic forms of data evaluation. They cannot be understood solely and instrumentally as procedures for solving problems; rather, from the perspective of social philosophy they are also the source of new problems.
But apart from the problem of algorithms’ unwanted consequences, their evaluation remains ambivalent, depending on social perspective. A famous example is the anecdote at the beginning of Eli Pariser’s book The Filter Bubble: How the New Personalized Web Is Changing What We Read and How We Think (2011), which tells us that Pariser does not receive current-events updates from his conservative friends on Facebook because Facebook’s newsfeed algorithm EdgeRank recognizes that he is more interested in such updates from his left-wing friends. The logic of the algorithm (which is a statistical one) thus overwrites Pariser’s decision (which is an ideological one) to keep tabs on the opinions of the other side. In a world of advertisements and recommendations by Amazon, Netflix, Last.fm, and online dating sites, it may well be a welcome phenomenon that Facebook and other portals on the net are striving toward personalized information design on the basis of statistically calculated self-interest. But when search engines like Google present different results for the same word depending on the user’s recent searches and online activities, things become problematic. If the consumer wants to be a zoon politikon but the filter bubble operates as an information cocoon, shielding the consumer from contradiction, this is just as bad for the individual as books that no longer spark controversy.
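EdgeRank itself is proprietary, but the mechanism the anecdote describes can be sketched in a few lines of Python; the data and the ranking rule below are invented purely for illustration and make no claim to reproduce Facebook’s actual weighting:

```python
from collections import Counter

# Invented interaction counts: whom the user has clicked on in the past.
past_clicks = Counter({"left_wing_friend": 24, "conservative_friend": 2})

def rank_feed(posts):
    """Order posts so that authors with more past interactions come first."""
    return sorted(posts, key=lambda post: past_clicks[post[0]], reverse=True)

feed = rank_feed([
    ("conservative_friend", "link to an op-ed from the other side"),
    ("left_wing_friend", "another link the user will probably like"),
])
print(feed[0][0])  # prints 'left_wing_friend'; the other side slips down the feed
```

No one decides to exclude the conservative friend; the exclusion is simply the statistical by-product of ranking by inferred interest.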
Those who consider the confrontation of different points of view the elixir of democracy will not regard the Internet as the promised land of democratic communication but as the cozy home of an “autopropaganda”3 that permanently validates the individual and incessantly excludes all that she or he is not (yet). Personalization algorithms suppress chance and encounters with the Other and thus can be said to generate a kind of information-specific xenophobia of which, for the most part, one is not even aware. Hence—with respect to the social—modernity’s impulse to measure turns away from the wish to know “whatever binds the world’s innermost core together” and becomes a tool for narcissism. It is high time to reread Bertolt Brecht’s Keuner story “Meeting Again”: “A man who had not seen Mr. K for a long time greeted him with the words: ‘You haven’t changed a bit.’ ‘Oh!’ said Mr. K and turned pale.”4
This judgment concerning the filter bubble is not undisputed. After all, it is the Internet that offers space for the most contradictory views as well as the means for finding these perspectives through links and search engines. But if political blogs link to similarly opinionated websites 90 percent of the time,5 one has to ask: To what extent is the possibility of escaping the bubble ever really taken advantage of? What if the real problem is not the algorithm, which, after all, could be counteracted by human intelligence? What if the problem is people themselves? This concept of a far more existential “enemy” motivated discussion of the filter bubble long before Pariser made the term popular and turned it against the algorithm. Around the turn of the millennium, Internet theoreticians assumed that humanity was an obstacle to itself insofar as it excluded all that it did not want to hear or read. They saw in digital media the possibility for people to control the information confronting them more effectively than was possible under the classical mass media of newspaper and television. Although from a philosophical and political perspective this increased control is cause for concern, from a psychological perspective it fulfills the human desire for cognitive consistency. The filter bubble thus merely automates and stabilizes, through the algorithm, something that is a human impulse. In the last analysis, it is not a technological problem but an anthropological one.6
Personalized recommendations and the filter bubble illustrate the deployment of algorithms as tools for information management. The effect of this kind of information management on society is controversial because its motivation is controversial. There is also controversy as to whether the algorithm should be used as a tool for the production of information, a use in which, with a contrary impetus, it is no longer controlled by human beings but instead becomes their controller.
Every platform for intercommunication on the net becomes a technology of surveillance and control through algorithmic analysis. Linguistic analysis of the data collected by Twitter and then given over to third parties provides access to social behavior, political mood, and thematic interests. This is useful as a barometer or early-warning signal not only in journalism and finance but also in healthcare, business management, and public policy. With the webcam observing us as we surf websites, every eye movement will become a contribution to statistics, and with Google Glass (or Microsoft’s HoloLens, or whatever technology succeeds it), not only glances at the monitor but every glance will count and be counted. The algorithm turns Web 2.0—whose predecessor was regarded as anarchic—into a network for access and control. It radicalizes sociology’s inner conflict: the measurement of social processes always also enables and optimizes their manipulation.
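The kind of linguistic analysis alluded to here can be crude in principle even when it is sophisticated in practice. The following toy sketch, with invented word lists and invented tweets, only illustrates the idea of a mood “barometer”; it does not describe any real system:

```python
POSITIVE = {"hope", "great", "win"}     # invented word lists, far smaller than
NEGATIVE = {"fear", "crisis", "angry"}  # anything an actual analysis would use

def mood_score(tweets):
    """Return a crude mood balance: positive words minus negative words."""
    score = 0
    for tweet in tweets:
        words = tweet.lower().split()
        score += sum(word in POSITIVE for word in words)
        score -= sum(word in NEGATIVE for word in words)
    return score

print(mood_score(["Great win for the city", "Budget crisis looms"]))  # prints 1
```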
Within the context of digitization and datafication in sociology—which has aspired to be an empirical, quantitative, “hard” science at least since the theoretical impetus of Gabriel Tarde (1843–1904)—a new movement has been announced: computational social science.7 This branch of social science grounds its understanding of social processes on new ways of recording data, such as the “sociometer” (which compares the physical proximity of colleagues with their efficiency in communicating),8 and on the analysis of large amounts of data, from social-network profiles to patterns of movement. As it becomes possible to make credible statements about different if-then constellations, the question of intervention (in order to eliminate the if-basis of unwelcome then-effects) becomes more central. Interventions may certainly occur, even in the interest of the individual, when, for example, certain substances or occupations are recognized as health hazards. However, because of dwindling social-services funds and the aging of society, interventions are likely to be justified more often in the “interest of society as a whole.” Already today, accident reports and traffic tickets lead to higher insurance premiums for speeding drivers. Higher rates, however, can be avoided with the help of a “black box” in the car that sends data about driving habits to the servers of the insurance company (thereby, of course, also making possible the tracking of movements). In the same way, people who have unhealthy eating habits, who move too little, or who abuse drugs may be called to account. A foretaste of future debates took place in the discussion following the attempt of former New York City mayor Michael Bloomberg to ban supersized soft-drink bottles in 2012. While opponents of such paternalism referred to John Stuart Mill’s essay On Liberty (1859), according to which the individual knows best what is good for him or her, advocates of the policy pointed to the “present-bias” problem (that is, the lack of foresight) as a justification for governmental intervention in the interest of the individual and of society.9
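The black-box tariff is, at bottom, an if-then schedule applied to streamed driving data. A hedged sketch follows; the thresholds and discounts are invented for illustration and are not taken from any actual insurer:

```python
def adjust_premium(base_premium: float, speeding_events: int, night_km: float) -> float:
    """Apply invented if-then pricing rules to telematics data."""
    premium = base_premium
    if speeding_events == 0:
        premium *= 0.85          # then: discount for the compliant driver
    elif speeding_events > 10:
        premium *= 1.30          # then: surcharge for the risky one
    if night_km > 1000:
        premium *= 1.10          # then: surcharge for driving at riskier hours
    return round(premium, 2)

print(adjust_premium(600.0, speeding_events=0, night_km=200))  # prints 510.0
```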
This juridical-political discussion may be the unwelcome but inevitable consequence and future of data love. Recognized correlations confront society with the question of whether it can knowingly accept dangerous or unwanted effects or whether it is obliged to intervene. The consequence is an inversely proportional relationship between collective knowledge and individual freedom: individual freedom rests on social ignorance—or on the socially accepted refusal to apply knowledge that has been retrieved. While algorithms bring forth more and more knowledge in the context of big-data mining, the dark side of this illumination is rarely discussed or problematized. The question of which preventive measures are justified from a social perspective must be addressed, particularly when, in the wake of the currently much-discussed predictive analytics (a sort of statistical update of the science fiction of Minority Report), recognizable patterns lead to the profiling of perpetrators and to actions against “potential criminals.” To a certain extent, society is held captive by the data scientists, whose discoveries can no more be reversed than those established as natural laws. Knowledge not only empowers; it also creates responsibilities. This is something we have known, and not only since the publication of Friedrich Dürrenmatt’s 1961 tragicomedy The Physicists.
However, the problematic nature of the algorithm exceeds its actual consequences. Aside from the question of how to deal with the retrieved knowledge, danger lies in the fact that social processes are increasingly defined by the algorithm’s logic of if-then. If reason is reduced to formal logic, then any discussion becomes unnecessary, because if-then relations leave no room for the “but” or the “nevertheless,” nor for ambivalence, irony, or skepticism; algorithms are indifferent to context, delegating decisions to predetermined principles. As soon as algorithms operate on the basis of so-called decision-tree learning, developing classifications that are no longer defined by the programmer, the process eludes an understanding that can be expressed in human language. The prospect of algorithmic analyses and then, based on these analyses, algorithmic regulations whose parameters cannot be understood but whose results are nevertheless binding brings to mind Kafka’s scenarios of alienation more than Orwell’s dystopia of surveillance. The technocratic rationality of these procedures undermines the openness of thought and thus also eradicates two of its basic virtues: the potentiality for misunderstanding and for seduction. If one clears away the gaps and strategic ambivalences from communication, thinking becomes calculation, and all that remains are determinative decisions—or simply mistakes in the combinatorial calculations.
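What the decision-tree learning mentioned above means in practice can be shown in a short sketch, using the scikit-learn library (which must be installed) and invented toy data: the programmer supplies only labeled examples, and the if-then splits are derived by the learning procedure itself:

```python
from sklearn.tree import DecisionTreeClassifier, export_text

# Invented toy data: [age, soft-drink litres per day] -> 1 means "flagged as at risk".
X = [[25, 0.2], [40, 1.5], [35, 2.0], [60, 0.1], [50, 1.8], [30, 0.3]]
y = [0, 1, 1, 0, 1, 0]

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# The learned splits were not written by any programmer:
print(export_text(tree, feature_names=["age", "litres_per_day"]))
print(tree.predict([[45, 1.7]]))  # the classification follows rules the machine derived
```

With six rows the rules are still legible; with millions of rows and thousands of features, the resulting cascade of if-then splits is exactly the kind of binding yet humanly unreadable regulation described above.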
From the point of view of the humanities-oriented sociology that Adorno opposed to an empirical-positivistic sociology, the algorithm is not an object of love but of enmity. It reduces the social to mathematics and blocks divergent, alternative positions.10 The desire for control that underlies the concept of an algorithmic, cybernetic society is reminiscent of Ernst Bloch’s utopias of order, which—in contrast to the utopias of liberty and of course always in the interest of society as a whole—aim at the efficient regulation of social life. Such desire also recalls the attempts of former socialist countries to realize order by preventing unwelcome then-results through early intervention, suppression, and censorship at the if level. To be sure, it would be historically unjustified to consider the ascendancy of the algorithmic regime as Stalin’s belated revenge. However, one wonders how and under what circumstances the algorithmic analysis and regulation of social behavior will, eventually, prove to be different from socialist state paternalism.
Until we can answer this question, we may consider another historical analogy. The algorithm is the “Robespierre of the twenty-first century.” To the hero of the present technological revolution, as to the most famous hero of the most famous political revolution, anything human is alien; both are determined to follow a categorical idea and principle, leaving no room for misunderstandings and negotiations. The Robespierre-like steadfastness of the algorithmic is illustrated in Evgeny Morozov’s book The Net Delusion, which compares the Stasi officer in Florian Henckel von Donnersmarck’s film The Lives of Others to an algorithm. Not only does the algorithm carry out the same orders of surveillance more effectively than the officer; it is also reliably immune to the human. In the film, the unethical interests of the officer’s superior (who desires the wife of the supposed dissident) and the musical tastes of the victim under surveillance eventually move the officer to side with his victim. Algorithms that understand music only as data to be analyzed and that do not question the moral constitution of their creators never judge their own undertaking. Their operations are not subject to any theories of good and right that might be discredited. Algorithms are the final destination of an “adiaphorized” society. They free human activity from moral concerns by letting machines take over action.11
Reducing rationality to calculation and logic is not simply a theoretical problem. With the Internet of things—when the floor talks to the lighting system and the swimming pool to the calendar—many situations requiring a decision will soon be (and already are) regulated by predetermined instructions: If barbecue, then heat up pool; if weight on tiles, then lights on. The Internet platforms ifttt.com and Zapier.com invite users to produce and share “Personal Recipes” for if-then processes: “Mail me if it rains tomorrow,” “When you are tagged on Facebook the photo will be downloaded to Google Drive,” “If a reminder is completed, append a note in Evernote.”12 At first this can be understood as a sort of man-computer symbiosis, as the computer scientist J. C. R. Licklider had foreseen in 1960: A person sets tasks and aims, and the computer carries out the routine work. However, eventually the Internet of things will also take over the process of evaluating and making decisions or, rather, have people other than the individual—the software developers, entrepreneurs, and the nerd next door—configure the different if-then relationships. The technology magazine Wired already sees a market for apps: “Users and developers can share their simple if-then apps and, in the case of more complex relationships, make money off of apps.”13 One no longer sells or buys objects or information but models of information processing that will inevitably disregard specific contexts.
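The recipe pattern behind such services can be sketched in a few lines of Python; the function and event names below are illustrative only and are not taken from either platform’s actual interface:

```python
recipes = []

def on(trigger: str, action):
    """Register an if-then recipe: when `trigger` occurs, run `action`."""
    recipes.append((trigger, action))

def fire(event: str, payload=None):
    """Announce an event; every recipe whose trigger matches is executed."""
    for trigger, action in recipes:
        if event == trigger:      # if the named event occurs,
            action(payload)       # then carry out the prearranged action.

on("rain_forecast", lambda _: print("Mail: it will rain tomorrow"))
on("weight_on_tiles", lambda _: print("Lights on"))
on("barbecue_started", lambda _: print("Heating up the pool"))

fire("weight_on_tiles")  # prints 'Lights on'
```

The decisive point is not the triviality of the code but who writes and shares the recipes: once registered, the rule runs without anyone reconsidering whether the reaction fits the situation.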
The standardization taking place with smart things naturally develops according to the instructions of programmers. Even if users can adjust the if-then mechanism, there is always a default setting. Hackers may be able to manipulate this if-then automatism; however, the chief, essential manipulation occurs when decisions are outsourced on the grounds that one no longer has to reflect on, or be responsible for, the appropriateness of a certain reaction to a particular situation. Delegating responsibility may lie outside any moral significance when we are dealing with an automatic lighting system, which by now has become standard in almost every car and many public buildings. However, this delegation becomes questionable when software programs react automatically to our environment or when the maps app on our iPhone, for example, shows street names in Bangkok in Thai script. The outsourcing of decisions becomes highly charged when, with the increasing complexity of cybernetic feedback loops, it is less and less possible to comprehend on the basis of what data, and on what grounds, the if-then loops are generated and with what consequences. It is the apotheosis of “adiaphorization” when people no longer feel responsible for the effects of their actions, even when these effects and actions concern them individually.