2
The Coming of the Information Society
American efficiency is that indomitable force which neither knows nor recognizes obstacles; which continues on a task once started until it is finished, even if it is a minor task; and without which serious constructive work is impossible.
Hughes, 2004: 251
Speed is everything. It is the indispensable ingredient of competitiveness.
Jack Welch, former CEO of General Electric Corporation, in Jack Welch Speaks (2001)
The capitalist ‘mode of production’ and the role of speed
The first question we need to consider is: how did it come to be that the societies that emerged out of the post-1970s era contracted a severe case of what Doug Henwood (1995) calls ‘info fetishism’? To put it another way, what is it that motivated the widespread conviction that computers and computerization are the solution to just about every problem that confronts us, and that the faster they are made to run, the better we all will be for it?
To help answer these questions, we need to understand something about how economies work (or sometimes don’t work). In the late 1700s Adam Smith, Chair of Moral Philosophy at Glasgow University, was one of the first modern thinkers to construct a theory of what we now call ‘economics’. In his The Wealth of Nations (1776) he elaborated upon the processes of ‘supply and demand’ and how self-interest, or what we would later call ‘individualism’, would, if left to its own devices, eventually work in the interests of everybody. Smith was concerned to show that market processes were guided by an ‘invisible hand’ that, if unrestricted, led to a natural equilibrium. His central point was that trade and production needed to be free from government and other institutional meddling or encroachment so as to find their natural levels. Smith also pointed out that a ‘division of labour’ in society was the key to unlocking its productive potential. Here, tasks were divided up into an assembly-line process involving many people, as opposed to one or two persons completing the whole job – the dominant method in previous guild and craft-based production systems. In his famed discussion of the pin factory, for example, Smith was able to show how the new-fangled manufacturing processes of a nascent capitalist system that functioned through a division of labour were able to achieve astronomical gains in productivity. He calculated that if one man were to try to make pins by himself he could perhaps make twenty per day. Visiting an actual factory where eighteen individuals were assigned specific tasks such as drawing, cutting, sharpening, etc., Smith observed that 4,800 pins were produced in the same time (Smith, 1776/1965: 69).
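The scale of the gain can be made explicit with a quick calculation. The sketch below uses the figures as quoted in the passage above (the per-worker comparison it derives is implied by, rather than stated in, the text):

```python
# Smith's pin-factory comparison, using the figures quoted in the passage.
solo_output = 20            # pins one unassisted worker might make in a day
factory_workers = 18        # workers, each assigned a specialized task
factory_output = 4_800      # pins the factory produced in the same time

per_worker = factory_output / factory_workers   # average output per worker
gain = per_worker / solo_output                 # productivity multiple
print(f"{per_worker:.0f} pins per worker, a {gain:.1f}-fold gain")
```

Even on this conservative reading, each specialized worker out-produces the solitary pin-maker more than thirteen times over, which is the nub of Smith's argument for the division of labour.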
Smith wrote at the time of the European Enlightenment, when the decline of the power of religious authority compelled thinkers to pose new explanations for the nature of the momentous social changes that were taking place. A term used for the evolving and cumulative method that sought to understand the processes of an increasingly rational and industrializing Europe (and North America) was political economy. This was an intellectual framework for a new way of considering society and it concerned itself primarily with the causes and effects of aggregate economic activity. The political economy theorizing of men such as Smith, David Hume and Adam Ferguson of the so-called ‘Scottish Enlightenment’ began to view the evolution of human society as developing in a historical process. This was along a steady progressive trajectory wherein both the individual and society were constantly improving. Indeed, ‘improvement’, especially in agriculture, was something of a catchphrase of the time, indicating the attitude of ‘change for the better’ across all realms of life. They developed what has become known as the ‘stages’ theory where, over historical time, societies moved progressively through stages of development. These were hunting, pastoralism, feudalism and (now) commerce. The development of production was key to allowing society to ‘progress’ from one stage to the next. Enlightenment contemporaries viewed the transition from feudal agriculture to capitalist industry as particularly revolutionary. Given that historiography and political economy were nascent disciplines at this time, the works of these philosophers may be seen as little more than bold speculation. However, they developed a tremendously influential set of ideas that shaped much subsequent thinking concerning what it meant to be modern.
Indeed, ‘modern’ came to be the generic Enlightenment term for a humanity seen to be freed from the shackles of religion and superstition, and set upon a course of progressive improvement of all its social, economic, political and cultural conditions (Berman, 1981).
To align oneself during the Industrial Revolution with the optimistic theory of progress tended to help assuage qualms about what industrialism actually entailed. Liberal intellectuals, politicians, and laissez-faire capitalists were often able to convince themselves that what Friedrich Engels (1987) described as the ‘dark satanic mills’ of the English factories he visited, from 1842 to 1844, were but a necessary (and temporary) circumstance of progress and improvement. The slave-labour conditions, the exploitation of children and the terrible death and injury rates could thus almost be justified as the birth pangs of a new world.
Not everyone thought so at the time. Notwithstanding the fact that the division of labour and the factory system more generally were certainly more productive and efficient, they were also perceived by some as alienating and dehumanizing processes. So-called ‘radicals’ such as Marx and Engels, and ‘reformers’ such as Robert Owen, sought to better the conditions of the emerging working class through, on the one hand, catalysing worker militancy, and, on the other, through paternalistic reform by ‘enlightened’ capitalists and politicians.
Radicals and reformers may have disliked the terrible conditions of the early factory system, but the materialism of Enlightenment political economy, that is to say, the stress on those same productive forces of society, as being the engine of growth and progress, was viewed as a tremendously important and necessary aspect of it. Karl Marx, of course, has been the most influential proponent of what has been termed ‘historical materialism’. Marx accepted, and further developed, the idea of a materialist conception of history and that capitalism represented the latest stage in human social development. The difference, as Marx saw it, was not that capitalist commerce was the final stage in history, where the role of the individual pursuing his or her own inclinations would (by default) contribute to the common good; capitalism and its productive base represented for him the penultimate stage that created the material means for capitalism itself to be transcended and for communism to be realized. Marx argued that capitalism’s own contradictions of exploitation and the creation of antagonistic classes would create, ultimately, the basis for a socialist revolution where the working class would come to power and accomplish its historic mission (Marx, 1975).
Setting aside questions of socialism and ‘stages’, in the perspective of a general political economy analysis, capitalist industrialism was indisputably the dominant mode of production in the modernizing societies of the eighteenth and nineteenth centuries. It was a process dominated by machines and factories and was geared constantly towards speed, flexibility and efficiency. It was also a process motivated (to a greater or lesser extent throughout history) by competition in the context of a free-market system. According to this theory, the ‘productive forces’ of capitalism, i.e., the machines and forms of industrial organization, are bound to continually ‘progress’ and ‘improve’ because of the pressure of competition. New techniques, new equipment and new ways of producing things need to be constantly introduced, or the system begins to stagnate and economic and social crises ensue.
Capitalism and the ‘need for speed’
It is necessary at this point in the discussion of the dynamics of capitalism to slightly interrupt the loose chronology of the narrative so as to introduce the idea that, as a mode of production, capitalism has what might be termed a ‘need for speed’ at its core. This is important if we are to understand the nature of the information society that emerged from industrial society. This ‘need’, as we shall see in later chapters, has been delimited by technological capacities, and by political/ideological imperatives, since the Industrial Revolution began. With the rise of the information society, however, it is my argument that computerization has lifted the technological restraints, and neoliberal globalization has dissolved the political and ideological boundaries, that surrounded a previously more ‘organized’ form of capitalism (Lash and Urry, 1987).
The ‘need for speed’ is not an innate human propensity. Some of us may be addicted to cars, or to motorcycles or speedboats, but this obsession may represent a reflection of elements of our modern, and now postmodern, cultures. Partly, it may be a more deep-seated thrill at the thought of us mere humans somehow defying the laws of physics. However, most of us, I venture, would prefer not to drive at 170 mph down a highway even if it were legal. And most of us, I would further suggest, would not volunteer to become expert multitaskers – that is to say, people who dash from one thing to the next in perpetual motion, working faster all the time, and in more concentrated bouts – if presented with a genuine choice in the matter. As I see it, the ‘need for speed’ comes not from any entrenched psychological need, but from the social system of capitalism as it has evolved since the eighteenth century. Jack Welch, who is quoted at the head of this chapter, certainly knows business. He may or may not have read Marx, but he does understand the motive forces of capitalism: speed and competition. Speed enables a company to compete, and effective competition is predicated upon the company’s ability to move quickly to innovate and produce things faster.
In his few and scattered writings on the temporal factors in the production process, Marx tended to underplay the extent to which speed (the rate of production) was implicated in the creation of value. In ‘Notebook 1’ of his Grundrisse, however, he makes the connection explicit:
Every commodity . . . is equal to the objectification of a given amount of labour time. Their value, the relation in which they are exchanged against other commodities, or other commodities against them, is equal to the quantity of labour time realized in them. (Marx, 1973: 168)
This is the famous ‘time is money’ connection that had already been identified by Benjamin Franklin as early as 1736 when he stated in his ‘Necessary Hints to Those that would be Rich’ that: ‘Remember, that time is money. He that can earn ten shillings a day by his labour, and goes abroad, or sits idle, one half of that day, though he spends but sixpence during his diversion or idleness, ought not to reckon that the only expense; he has really spent, or rather thrown away, five shillings besides’ (cited in Adam, 2004: 42).
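Franklin’s arithmetic can be spelled out step by step; a small sketch in pre-decimal currency (twelve pence to the shilling), using only the figures in his own example:

```python
PENCE_PER_SHILLING = 12

daily_wage = 10 * PENCE_PER_SHILLING  # ten shillings a day, in pence
idle_fraction = 0.5                   # half the day spent 'abroad, or idle'
spent = 6                             # sixpence laid out during the diversion

foregone = daily_wage * idle_fraction  # wages never earned: five shillings
true_cost = foregone + spent           # the real expense of the idle half-day
print(f"{true_cost / PENCE_PER_SHILLING} shillings")
```

The ‘five shillings besides’ that Franklin insists on is the foregone wage: the idler reckons only the sixpence spent, but the true cost of the half-day is five shillings and sixpence.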
What Marx does is to theorize and explain what every business person intuitively knew from at least the beginnings of the Industrial Revolution: time is of the essence in business, and so the faster the production processes, the better. From this it follows that in the general ‘circulation of capital’ idleness and waiting literally cost money. The faster the circulation of capital, the more profitable it is. David Harvey emphasized this point in his Limits to Capital, where he wrote:
There is . . . considerable pressure to accelerate the velocity of circulation of capital, because to do so is to increase the sum of values produced and rate of profit. The barriers to realization are minimized when the ‘transition of capital from one phase to the next’ occurs ‘at the speed of thought’. (Harvey, 1983: 86)
Increases in the ‘velocity of circulation’ or ‘speed of capital’ needed in order to compete have always been achieved through technological innovation, the invention of faster machines, the ‘forcing of the pace’ of work through ‘speed-ups’, or through more efficient and productive forms of work organization – the latter tactic having evolved into a whole science and academic discipline called ‘organizational management’.
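The force of the ‘velocity of circulation’ argument can be illustrated with a toy model (my own illustration, not Marx’s or Harvey’s formalism): if the same capital is re-advanced each cycle at a fixed markup, the annual return grows with the number of turnovers alone.

```python
def annual_return(markup: float, turnovers_per_year: int) -> float:
    """Compound return when capital is re-advanced after every turnover."""
    return (1 + markup) ** turnovers_per_year - 1

# Identical 5% markup per cycle; only the speed of circulation differs.
quarterly = annual_return(0.05, 4)    # capital turns over four times a year
monthly = annual_return(0.05, 12)     # the same capital, circulating faster
print(f"{quarterly:.1%} vs {monthly:.1%}")
```

Tripling the speed of circulation here nearly quadruples the annual return, without any change in the profit extracted per cycle – which is why, in this analysis, idle or slow-moving capital is itself a cost.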
Technology and the systematization of capitalism
As capitalist industrialism develops, and becomes more complex and faster, then the role of technology inevitably moves to centre-stage. Constant innovation in technology allows capitalism to do what Marx argued that it must do – to constantly expand. Profits, or at least part of them, must always be ploughed back into technological solutions that allow the individual business to grow. The alternative is to stagnate and be the victim of competition. As a generalized system, capital accumulation (profit) must be invested into physical space (geographic expansion) in order to seek new markets, new sources of raw material, and new sources of labour, and to create more accumulated capital. This logic develops its own cycle that continues endlessly (Harvey, 1983). Throughout modern history, new technologies have been developed to allow this to occur. Investments in the train network and in the telegraph system, for example, along with macadam-surfaced roads, had the effect of shrinking time and space, making the ‘sphere of influence’ of the industrial way of life paradoxically larger, more manageable and potentially more profitable.
For much of the eighteenth and nineteenth centuries, this growing into space and the effective use of time through the efficiencies of speed was conducted in the context of a Smithian free market that was loosely regulated, and where the push and pull of market forces was relatively unrestrained – and therefore unpredictable. Uneven economic development between countries, and within countries too, contributed to political tensions. Imperialist expansion exacerbated these tensions yet further, and was itself connected to the capitalist imperative to constantly grow. Technological development and concomitant spatial expansion was one thing, but late nineteenth-century capitalism had begun to move out of its classical liberal phase of free markets and free trade. Political and economic tensions began to move to the fore. As an official of the US State Department noted in 1900, at the apogee of imperialist expansion: ‘Territorial expansion [and the political tensions it creates] is but a by-product of the expansion of commerce’ (cited in Hobsbawm, 1996: 45). The last quarter of the nineteenth century, then, was a time when capitalism was beset by a lack of confidence and what Eric Hobsbawm termed ‘the breakdown of its old intellectual certainties’ that had sustained the classical model and the liberal market that it advocated (1996: 308).
By the turn of the twentieth century, the volatile nature of the world economy and the political and social tensions generated by the boom-and-bust swings led to growing working-class disenchantment in the industrial centres. A burgeoning socialist and communist movement meant that, even at this late stage of its development, capitalism and the liberal democracy of free markets that sustained it were still on trial and by no means an inevitability. More immediately, such swings were, obviously, bad for business and blunted the impacts of technological innovation and efficiency. Industrial capitalism was badly in need of productive processes that would be more systematic and predictable.
It was in this context, in 1911, that Frederick Taylor published his book The Principles of Scientific Management. It contained a set of ideas that were to lead to a revolution in the ‘mode of production’. In its essence, Taylor’s was a systematic attempt to infuse the work process, in the factory, on the building site or in the office, with a logic of information, based on numbers (i.e., the time it took to perform a particular job). The objective was to align the human worker more closely to the rhythms of the machine – machines that were themselves constantly being developed to run faster and more efficiently. In this, it was a proto-computer way of looking at life and work, by seeing them as J. C. R. Licklider would half a century later, in terms of human–computer interaction (HCI) (Licklider, 1960). For Taylor, however, workers had to adapt to the machine, not interact with it or control it. He argued that by studying the movement of the worker performing his or her work task, detailed information would be gathered and analysed, and the modification of the work practice – usually the modification of the worker – would automatically and logically suggest itself. Speed, flexibility and efficiency were the driving force behind Taylor’s thinking, and, by linking the worker to these machinic tropes, he had hit upon something profound at the core of the capitalist economy.
Henry Ford, the automobile magnate and towering figure in American capitalism during the first half of the twentieth century, took up the ideas of Taylor with enthusiasm. Two years after Taylor’s book hit the shelves, Ford was busy applying ‘scientific management’ techniques to initiate a revolution in manufacturing called the assembly line. The production processes whereby workers stand at specific points along a conveyor belt have become the standard across the world and are today taken for granted by those who work within this logic. However, the gains made in speed, flexibility and efficiency by the Ford Motor Company, which was the first to systematize this technique, were tremendous. Its success gave birth to the term ‘Fordism’. The original object of the assembly-line technique was the famous Model T car that symbolized the dawn of the age of true mass production. By 1918, 50 per cent of all cars in the USA were Model Ts. Ford’s genius was to see the connections between economies of scale – which could dramatically drive down the price of a finished product – and mass consumption, whereby workers should be able to afford to buy mass-produced objects. This was not a simple matter of supply and demand as the classical liberal economist imagined, but was a process that needed the help of conscious intervention and planning. And so in 1914 Ford began paying his assembly-line workers $5 a day, which at the time was more than double the average wage. Through such a device his workers could for the first time buy the things they produced, thereby creating more demand, spurring increased production, creating the need to hire more workers who would themselves be the source of yet more demand, and so on. In theory, then, the volatile ‘business cycle’ of boom and bust could be erased through technological innovation and social planning.
All this was a major step forward, and it marked the beginning of a greater role for the organization of capitalist production and consumption on a more systematic basis. It meant that government and its bureaucracies would become more involved, an involvement that would, in the post-Second World War era, lead to the development of a new political articulation in the shape of a social democracy to replace liberal democracy. But Fordism as the dominant means of production in capitalist society would not last. To understand the rise of the information society, we need first to look in more detail at the nature of the Fordist industrial society that preceded it – a society, polity and culture that was oriented around Fordism as a ‘total way of life’.
Fordism as a ‘total way of life’: 1950–1973
The essence of Fordism is mass production, mass consumption and, importantly, the insertion of organization, planning and predictability into the historically volatile business cycle. Ford’s pioneering techniques became the basis for a new paradigm in capitalist production – Fordism was even enthusiastically taken up by the Soviet Union in the 1920s and 1930s as it tried to industrialize and modernize as quickly as possible. It was during the Second World War that Fordism gradually became dominant across whole economies and across the world as the standard mode of industrial production; in the USA in particular, mass-production Fordism made that country the foremost industrial nation. But it was in the decades after the war that Fordism really came into its own. The success of planning and the organization of production and consumption went way beyond the factory assembly line. The philosophy of planning and organization, of the linking of mass production and mass consumption, and of the partnership between organized labour, government and big business created more than the so-called ‘managed economy’ (Olson, 1984). Its effect was felt in every sphere of life in the industrial economies in the decades of boom that followed the end of the Second World War. In the wider process of organizing and planning society along Fordist lines, the new social democratic tide introduced comprehensive welfare programmes, where education, social security and health services would keep workers (as producers and consumers) healthy and secure. Indeed, so deeply infused was the economic and cultural ethos of this ‘high Fordism’ as a social system that David Harvey, in his The Condition of Postmodernity, wrote:
Post-war Fordism has to be seen, therefore, less as a mere system of mass production and more as a total way of life. Mass production meant standardization of product as well as mass consumption; and meant a whole new aesthetic and a commodification of culture. (1989: 135)
Why was Fordism, especially in its post-war ‘high Fordism’ variant, so successful? Why was it able to produce what was, at the time, the longest uninterrupted boom in history (Brenner, 1998)? And why did it eventually become crisis-prone? In France in the mid-1970s the neo-Marxist ‘Regulation School’, comprising thinkers such as Robert Boyer, Alain Lipietz and Michel Aglietta, emerged in the attempt to make sense of the crisis of Fordism – which was by then well into its demise in the Anglo-American economies. What they termed ‘regulation theory’ argued that the process of capital accumulation might be given a degree of stability (avoiding violent swings between boom and depression) through what they called ‘modes of regulation’. For Aglietta (1979), the continued ‘concentration and centralization’ of capital accumulation was made possible by the growing tendency of Fordism to ‘fix’ capital in space through a reliance on fixed assets such as factories, plant and machinery – as well as a relatively ‘fixed’ and stable workforce who had grown accustomed to the Fordist notion of ‘a job for life’. These were processes and cultural assumptions that were given the backing of governmental policy in all the post-war social democracies.
To understand why this regulated system broke down almost completely, and led to the inauguration of a post-Fordist information society, we need to go back briefly to what I noted on the nature of capital accumulation in space and time. We saw that accumulated capital must expand into fresh territories, create new markets and so on if it is to avoid the problem of ‘overaccumulation’ where there is too much production and not enough consumption. A precondition for successful accumulation is the relative mobility and flexibility of capital (Harvey, 1989: 187). For much of the post-war period this was not a problem, because in the wake of the war’s destruction, the national economies of Western Europe and Japan needed to be rebuilt – and so there was enough flexibility (and space) in the system itself for this to occur. However, as the Fordist system grew, evolved and matured, with its regulation and planning ethos pervading a ‘total way of life’, the need for the mobility of capital became more pronounced. Whole areas of the economy in the social democracies had become off-limits to private investment – however much it was needed. And so nationalized or heavily subsidized industries in the core sections of the economy, such as steel, coal, airlines, shipping and so on, had become common – and had become inefficient and costly. Lack of investment and growing union power meant that research and development into new technologies – such as the computer technologies that could increase speed, flexibility and efficiency – was minimal and/or heavily regulated.
In the Marxist analysis, regulation tended to be a problem over the long term. However, as the 1970s progressed, business leaders, revisionist economists and influential politicians began to view regulation per se as the core problem. Red tape, tariff walls, industry subsidies, overly powerful unions and ‘socialist’ governments were argued to be choking the life out of rigidifying economies. David Harvey, a Marxist, but not one of the Regulation School, saw the over-accumulation problem primarily in terms of there being insufficient space for capital to expand into, within a crowded global system. He wrote that the crises of Fordism in the late 1960s and early 1970s:
can be to some degree interpreted . . . as a running out of those options to handle the overaccumulation problem . . . As these Fordist production systems came to maturity, they became new . . . centres of overaccumulation. Spatial competition intensified between geographically distinct Fordist systems, with the most efficient regimes (such as the Japanese) and the lower labour-cost regimes (such as those found in the third world) driving other centres into paroxysms of devaluation through deindustrialization. Spatial competition intensified, particularly after 1973, as the capacity to resolve the overaccumulation problem through geographical displacement ran out. (Harvey, 1989: 185)
The world was on the brink of a new revolution, a paradigm shift regarding how society would be organized. Indeed, according to Scott Lash and John Urry this phase signalled the ‘end of organized capitalism’. This was a new era where the market would again come to the ascendancy and where social-interventionist policies would decline (Lash and Urry, 1987). It is a revolution that continues still, through new technological means, and through new cultural attitudes towards work.
Efficiency, efficiency, efficiency . . .
It was with the demise of Fordism that culture, politics and society began also to experience the eclipse of Marxism and socialism, and the decline of the rather more tepid social democracy, as ways of seeing the world and organizing it. What this meant in practice was the withering and marginalization of any group, body of thought or individual that would stand in the way of the freeing of business from the perceived ‘rigidities’ of government regulation. The mobility and flexibility of capital were the new watchwords of the newly revitalized right-wing economic theorists and philosophers who were coming to the fore. Modes of thought that had been subsidiary for decades in the universities and the think tanks were now having their day in the sun as a direct outcome of global economic crises. Prominent among these was Friedrich von Hayek, whose Adam Smith-derived ideas of individualism and economic freedom were best expressed in his dusted-off 1944 book, The Road to Serfdom, which warned that social democracy would inevitably lead to tyranny and crises. Hayek’s old ideas of a return to laissez-faire capitalism were being eagerly received by powerful political and economic figures in the Anglo-American countries who attributed the chronic crisis mode of the period to over-regulated societies where ‘socialism’ had run out of control (Jenkins, 1989).
The rising influence of neoliberalism was aimed at freeing up capital, to make it flexible and mobile and able to promote faster innovation and more rapid change, and then respond to change with yet more innovation. The mantra of ‘efficiency, efficiency, efficiency’ was to return as the solution to stagnation, to loss of productivity and (of course) to the loss of profits that derived from these. Computerization was seen by some visionary engineers and business people as a way to radically improve speed, flexibility and efficiency. With computers automating human tasks and taking fatigue and error out of much of the production process – and injecting potentially limitless and untiring speed into it – productivity levels could, in theory, go off the charts. As Michel Aglietta put it: ‘neo-Fordism, like Fordism itself, is based on an organizing principle of the forces of production dictated by the needs of capitalist management . . . The new complex of productive forces is automatic production control or automation; the principle of work organization now in embryo is known as the recomposition of tasks’ (1979: 122; my italics).
This embryonic ‘recomposition of tasks’ was a central part of what has been described as the ‘remaking of the world’ (Harvey, 2005: 1). And, as the political and ideological barriers to this began rapidly to fall, computerization would rush into that vacuum to make it all possible.
The ‘closed world’ of computing opens up
In his 1956 classic, The Power Elite, sociologist C. Wright Mills described the power that had concentrated in the USA within the nexus of the political, military and economic establishments. He observed that:
As each of these domains becomes enlarged and centralized, the consequences of its activities become greater, and its traffic with the others increases. The decisions of a handful of corporations bear upon the military and political as well as upon economic developments around the world. The decisions of the military rest upon and grievously affect political life as well as the very level of economic activity. The decisions made within the political domain determine economic activities and military programs. (1956: 7)
The prose is somewhat convoluted but the point is clear enough: there existed, in the USA, a powerful elite that made the decisions that had tremendous consequences for ordinary people. President Eisenhower, in his 1961 farewell address, referred to this relationship as the ‘military-industrial complex’ that democratic forces should be vigilant against. Emphasizing the role of computerization in this, Eisenhower elaborated:
In this revolution, research has become central (. . .) Today, the solitary inventor, tinkering in his shop, has been overshadowed by task forces of scientists in laboratories and testing fields. In the same fashion, the free university, historically the fountainhead of free ideas and scientific discovery, has experienced a revolution in the conduct of research. (. . .) For every old blackboard there are now hundreds of new electronic computers. Yet, in holding scientific research and discovery in respect, as we should, we must also be alert to the equal and opposite danger that public policy could itself become the captive of a scientific technological elite. (Eisenhower, 1961)
The 1950s, 1960s and 1970s were the decades when the Cold War between the USA and the USSR was liable to become ‘hot’ at any time. Both sides were paranoid about the other’s nuclear capabilities. It was this mind-set that led the USA to launch the ‘technological revolution’ in computing that Eisenhower referred to. The determination of the US military-industrial complex to have the most up-to-date research into computers – and the bomb-making knowledge that this could provide – was central to the decision to allocate huge project funding to it. It was realized during the early theoretical work that these new kinds of armaments, as well as the rocket technology needed to make them deliverable, were impossible to develop without the number-crunching capabilities of increasingly powerful computers.
In his book The Closed World: Computers and the Politics of Discourse in Cold War America, Paul N. Edwards argues that ‘from the early 1940s until the early 1960s, the armed forces of the United States were the single most important driver of computer development’ (1996: 43).
However, the intense research effort into uncovering the fundamentals of computing as a scientific discipline was much more than an exercise in gaining the ability to crunch the numbers on the trajectory of a rocket or the explosive power of nuclear fission. Computers were seen as a way to eliminate the human element from the nuclear equation – the all-too-human propensity for error, and the disaster this might bring in a world balanced on a nuclear precipice. Planners and strategists recognized that appropriate ‘command and control’ systems in the US military’s structures were of central importance. Their faith in the capabilities of computer systems led them to believe that, to be able to respond to an outside attack, they had to institute command and control systems that were ‘pre-programmed because their execution must be virtually automatic’ (Edwards, 1996: 131). To emphasize the importance of speed within the automated command and control systems of the 1940s and 1950s, Edwards cites MIT engineer Jay Forrester, who played a ‘major role’ in military research and development into computing:
the speed of military operations increased until it became clear that, regardless of the assumed advantages of human judgement decisions, the internal communication speed of the human organization simply was not able to cope with the pace of modern air warfare . . . In the early 1950s experimental demonstrations showed that enough of the decision making process was understood so that machines could process raw data into final weapons guidance instruction and achieve results superior to those being accomplished by the manual systems. (1996: 65)
The lessons of ‘superior’ automated forms of control of complex processes through computing had not been lost on a business community that was already, if sporadically, automating many of its own administrative and production processes in the USA and elsewhere. In a field research essay in the early 1960s, called ‘Automation and the Employee’, sociologists Faunce, Hardin and Jacobson argued, in a possibly overly optimistic tone, that ‘Automation may affect the significance of work in our society by changing job content, redistributing employment opportunities, or decreased working hours. Its effects will probably be a decrease in the importance of work and a continuation of the trend toward a leisure-oriented society’ (1962: 60; my italics). Workers who participated in the study were not so sanguine. Office workers were worried about the ‘disruptions’ and the perceived tendency for automation to eliminate jobs, but then again ‘they often welcome change and rarely reject mechanization as such’ (60). Workers in mass-production factories, such as automobile plants, were rather less ambivalent, with the study finding the unions against almost any form of automation (73).
The essay was part of a journal edition devoted to workplace computerization, with viewpoints expressed in dedicated articles by academics, unions, management and government. Only the essay that put the case for management, by Malcolm Denise, a Vice-President of the Ford Motor Company, was unambiguously keen on it. His fundamental argument, somewhat predictably given his position, was that ‘The relationship between automation and unemployment is widely misunderstood’ (90). For his part, the Secretary of Labor, a certain Mr Arthur J. Goldberg, stressed the need to ‘minimize’ through government regulation ‘the complex problems that technological change and automation have been causing’ (110). These essentially ideological positions were typical of the post-war Fordist economies: business may not have been happy with the situation, but went along with a relative lack of automation because this was a context of high profits that were the short-lived fruit of the ‘golden age’ boom.
However, by the mid-1970s the system of Fordism had broken down. In the Anglo-American economies, especially, profits were plummeting, unemployment was soaring and levels of productivity and efficiency were going through the floor. This represented an economic crisis, of course, but it was also a political one. The time, as we discussed previously, was quickly becoming ripe for a political-economic solution and the rising neoliberal elite began to provide and implement one. Much effort went into pressing for the automation of as much of industry as possible to drive up efficiency rates. Accordingly, the immense progress in computer science that had been made in the ‘closed world’ of the military-industrial complex began to filter through into commercial applications. Of course, big corporations such as IBM had been supplying the private sector with computer applications since the 1940s, but with the end of the post-war compact between labour, business and government, the way was being cleared (through defeats inflicted upon organized labour by a newly militant business sector aided and abetted by neoliberal governments) for the unrestricted and widespread introduction of automation. In the 1970s, the government ‘Internetworking system’ that had been developed to share information and research between scientists began to spill out into the private sector, inaugurating what would become the Internet. Neoliberal pioneers such as the elite policy circles of the Reagan administration in the USA, and the Thatcher government in Britain, fully supported the restructuring of the old Fordist economy and, as much as they were able, created the political and economic conditions to automate and make more flexible the processes of production. Business, in other words, was increasingly calling the shots. 
The perceived need to focus on cost-cutting, labour-saving and competitiveness, which innumerable management theorists, politicians and neoclassical or neoliberal economists began to stress from the 1980s onwards, meant that investment in the domestic market was increasingly channelled into capital-intensive, high-technology production – flexible automation which cut out, as far as possible, the human factor in the production processes. Significantly, during the early part of the 1980s, more of the investment dollar in the USA went into computer and related high-technology equipment than into traditional labour-intensive machinery (Kolko, 1988: 66).
The radical shift from Fordism to ‘flexible accumulation’ – a production system based upon the centrality of the flexibility of machines and workers – was fully under way by the mid-1980s (Sabel, 1989; Kumar, 1995). The 1960s dreams of the Ford Motor Company Vice-President were at last becoming a reality, and fully automated production processes, processes that had already been implemented by Japanese companies, notably by Toyota, were being introduced – and not only at Ford. The ‘just-in-time’ system developed by Toyota in the 1950s became a widespread production technique by the 1980s. It was the very antithesis of the mass-production systems that had dominated in the Western economies for the previous thirty-five years. In Fordist production methods, in auto manufacture and elsewhere, mass production usually meant mass inventories of parts that were stored and waiting to go into the assembly line at some point. This was a cost in terms of both space and the capital tied up in unused stocks. However, automated systems and the timely flow of information throughout the system meant that production could be streamlined, made more flexible and, importantly, accelerated. Instead of the assembly line drawing required parts from large in-house inventories, it simply ordered them from a networked supplier, with the parts arriving ‘just in time’ to be used immediately. The flow of information to all parts of the process is of course central here – from supplier to customer to use in production – and results in much greater efficiency. Just-in-time has the additional attraction of being extremely flexible. Electronic data is collected to keep track of where the production flow runs smoothly or where it develops bottlenecks, so that ‘timely’ information on system problems means they can be resolved quickly. Moreover, warehousing and transportation (and the costs associated with these) can be dramatically minimized.
For example, General Electric in the USA was able to close twenty-six of its thirty-four warehouses between 1987 and 1997 (Rifkin, 2000: 34). And on a more general level, through such flexibility, businesses were able to respond to fluctuations in demand in an ever more volatile and uncertain market economy that was rapidly sloughing off the relative rigidity, inflexibility and predictability that had been the markers of Fordism.
Of course, systems such as these in auto plants and across the manufacturing sector more generally were never wholly automated – people (workers) had to be part of the process. Flexible systems thus necessitated ‘flexible workers’ who had to synchronize with the new rhythms and higher speeds. Unions initially resisted the introduction of ‘flexibility’ into the workplace. However, neoliberalized governments and employers who were more powerful than they had been for decades were able to wear down worker resistance and weaken industrial militancy. A consequence of political defeat, then, was that workers in many instances were compelled to become more flexible in the ‘recomposition of tasks’ that Aglietta had observed. Just as important was the fact that for new workers entering into economic life from the 1980s onwards, ‘flexible systems’ and ‘flexible working’ seemed as natural as the ubiquitous computer systems that were by now integral to the job, be it in the restructured manufacturing sector or in the burgeoning service industries.
Some intellectual responses to the revolution in information
Not quite Utopia: Daniel Bell
It was into this emergent milieu of confidence in the transformative powers of computing for the economy, and for social life more generally (observed in the perceived ‘trend toward the leisure society’, for example), that Daniel Bell published The Coming of the Post-Industrial Society in 1973. It was an influential book in policy, in academia and (with some interpretive licence) in business circles too. Indeed, it was Bell who was given credit for coining the term ‘information society’ that he used as a substitute expression for ‘post-industrial society’ in his subsequent works. In concrete terms, Bell noted that this society would be characterized by a shift from manufacturing to a more service-oriented economy. That is to say, the heavy- and mass-production techniques that had marked Fordism would be replaced by service industries that are based more on the application of information and knowledge. Indeed, when Bell wrote his book, he was reporting structural changes in economy and society that were already well under way. In 1973 some 65 per cent of the US workforce was in the service sector, and nearly 48 per cent of the Western European workforce was similarly employed (Rifkin, 2000: 84). In 2006 the percentage in the USA had reached 80 per cent (Reuters, 2006). The pattern is broadly similar in the developed economies. It is clear that in terms of a shift to a services-based economy, this part of his thesis was unarguable.
More abstractly and more controversially, Bell’s thesis argues that through what he terms ‘knowledge technologies’ or ‘intellectual technologies’, the main constitutive axis of this new society will be theoretical knowledge where new sources of innovation ‘are increasingly derived from a new relation between science and technology’ (1973: 212). This is a transformation, he cautions, that will not lead ipso facto to a utopian knowledge-based society where wisdom and happiness would prevail. Rather, he envisages the rise of a technocratic elite and with it the ‘primacy of theoretical knowledge’ that organizes society for the purpose of social control and the directing of innovation and change. For Bell, what is central to the successful emergence and development of this knowledge society is that it needs to be ‘managed politically’ (1973: 18–19). He ends the book with the hope that by seeing Utopia as an ideal, and by keeping it as one, men and women can go about the ‘sober construction’ of an always-better ‘social reality’ through democratic and controlled use of the new knowledge technologies (489).
A problem that has been identified with Bell’s theory is that he conflates information with knowledge (see Webster, 2002: 8–29 for a fuller discussion). In this Bell builds upon, and at the same time tries to distinguish himself from, a pioneer in this realm, Fritz Machlup. In the early 1960s, Machlup was beginning to think about the role of communications, computers and knowledge in the service of the US economy. In respect of knowledge (‘knowledge as a product’ and the ‘value’ factor in his equations) (1962: 5), Machlup distinguished between five ‘types’ of knowledge. These are: practical knowledge, which is knowledge related to work, etc.; intellectual knowledge, which is related to intellectual curiosity in all manner of realms; pastime knowledge, which is the satisfaction of non-intellectual pursuits (entertainment and so on); spiritual knowledge, which is related to the study and profession of religion; and unwanted knowledge, which is ‘incidental’ and ‘aimless’ knowledge (1962: 21–2). Importantly, Machlup saw computers primarily as ‘machines of knowledge production’ and as vectors for its transmission in the new information-based economy (295).
For Bell also, it is the computer-driven ‘knowledge technologies’ of post-industrialism that were creating the information economy and society. However, Bell rejected Machlup’s broader view of knowledge, containing both subjective and objective definitions, for what he terms a ‘narrower’ definition that is more ‘objectively known’ and more applicable to the material world (Bell, 1973: 175–6). This is an important distinction when we consider the logic underpinning the information society – especially the role of computers. When Bell speaks of the ‘information’ that is constitutive of the information society, he refers to that which is contained in and produced by computerization. It is, in other words, information based on binary logic: on/off, yes/no, stop/go or 0/1. From the perspective of its industrial application, the beauty of computer (binary) logic is that information relates to a means–end instrumentalism. That is to say, it is a process that is narrowly defined in terms of possibilities, is able to be strictly planned out, and whose success is measured by the simple criterion of whether it works for a particular task. Such instrumental logic fits easily with productive processes in the capitalist economy, such as the assembly line or the processing of data into quantifiable information (‘practical knowledge’). Its chief attraction, for those who plan the production process, is that it is highly efficient and rational because the computerized process does what it is programmed to do – and no more. There is no space for the interpretation of meaning, no grey areas where context or inflection or error can affect what kind of information or knowledge we are dealing with and the various ways it could be represented as knowledge. There is no uncertainty, in other words.
As Theodore Roszak has put it: ‘Information [had] come to denote whatever can be coded for transmission through a channel that connects a source with a receiver, regardless of semantic content’ (cited in Webster, 2002: 24). In a system where time is money, and where speed of operation is the key factor in competitiveness, a black-and-white instrumentalism is able to achieve a level of predictability and certainty that knowledge based upon trial and error, and constant experimentation, cannot.
Knowledge, we can begin to see, is of a different conceptual order from information. Knowledge emerges through the open and experiential and diverse (and often intuitive) working and interpreting of raw data and information. If codified and computerized into digital form, data, information, and even knowledge become frozen – formalized and oriented towards the purpose the programmer has set – into sequences of instructions that follow a predetermined path. Roszak goes on to argue that ‘For the information theorist, it does not matter whether we are transmitting a fact, a judgment, a shallow cliché, a deep teaching, a sublime truth or nasty obscenity’ (cited in Webster, 2002: 22). It can thus be argued that, in the ‘knowledge society’ that Bell tentatively welcomed, ‘knowledge’ is reduced in its neoliberal context to that which has a measurable and commodifiable outcome.
Writing in the 1970s, French theorist Jean-François Lyotard called this kind of computer-driven knowledge ‘performative’ – formalized knowledge that has been designed to ‘perform’, to act in a certain way and produce seemingly effective and efficient results. Knowledge creation and knowledge production under the auspices of computer logic undergoes a particular process of change. As Lyotard puts it, ‘technological transformations’ in computerization, information storage in databanks, etc., ‘can be expected to have a considerable impact on knowledge’ (1979: 3–4). He continues: ‘the miniaturization and commercialization of machines is already changing the way in which learning is acquired, classified, made available, and exploited’ (4). Anticipating the commercialization of the university, Lyotard notes that the old notion that knowledge and pedagogy are inextricably linked has been replaced by a new view of knowledge as a commodity, and, as a result, teaching and learning have become part of an alienated and alienating process: ‘Knowledge is now produced in order to be sold; it is and will be consumed in order to be valorized in a new process of production: in both cases, the goal is exchange’ (1979: 4).
Computerization put to such specific use hollows out what is human in the production of knowledge and reduces it to abstract information. What we lose through the formalization of knowledge into means–end information is beautifully described by the photographer Peter Gullers (cited in Rochlin, 1997: 67–8), who writes on the subject of expert knowledge of light in photography:
When faced with a concrete situation that I have to assess, I observe a number of different factors that affect the quality of light and thus the results of my photography. Is it summer or winter, is it morning or evening? Is the sun breaking through a screen of cloud or am I in semi-shadow under a leafy tree? Are parts of the subject in deep shadow and the rest in bright sunlight . . . In the same way I gather impressions from other situations and other environments. In a new situation, I recall similar situations and environments that I have encountered earlier. They act as comparisons and as association material and my previous perceptions, mistakes and experiences provide the basis for my judgment.
It is not only the memories of the actual practice of photography that play a part. The hours spent in the darkroom developing the film, my curiosity about the results, the arduous work of re-creating the reality and graphic worlds of the picture are also among my memories . . . All of the memories and experiences that are stored away over the years only partly penetrate my consciousness when I make a judgment on the light conditions. The thumb and index finger of my right hand turn the camera’s exposure knob to a setting that ‘feels right’ while my left hand adjusts the filter ring. This process is almost automatic.
What Gullers describes is the creation and inculcation of knowledge through doing and through experience. It is contextual knowledge in that it may not work for everyone. But it is also a form of universal knowledge in that it contains patterns of ‘common sense’ that may be applicable to many situations with more or less success. The key point that Lyotard and Roszak make is that this kind of knowledge does not sit easily with codification and pre-programming. It does not fit the efficiency criteria of ‘performativity’, nor is it easily made into exchangeable or commodifiable forms of knowledge. It is a form of knowledge, in other words, that may be viewed as marginal and not really ‘useful’ in an efficiency-seeking society.
Pushbutton fantasies and cybernetic capitalism: Mosco, Robins and Webster
As the spread of information technologies deepened and became ever more ubiquitous over the 1980s and 1990s, other scholars, coming from differing intellectual backgrounds, began to apply their own methodological training to the analysis of this growing phenomenon. A relatively early – and subsequently influential – collection of essays edited by Vincent Mosco and Janet Wasko (1988) brought together a range of self-consciously critical perspectives on the effects of information technologies. The book shares the approach that Bell adopted in that it views information technology as more than simply a technology, and argues that its effects reach across culture, politics and, of course, the economy itself. Where these perspectives differ from Bell is in their use of what Mosco calls in the title of the volume a ‘political economy of information’ that overtly critiques the pre-eminent role of capitalism in the process (18–27).
In his own contribution, Mosco is scathing about the sloganeering and hyperbole that accompanied the coming of the information society. He views it as an ideological process used primarily to glorify and justify radical technological change and the social upheaval that was its corollary. For Mosco, it is akin to a form of fantasy – ‘pushbutton fantasies’ he calls them – that serves to conceal a very different underlying reality. His essay is titled ‘Information in the Pay-per Society’, which can be read as a witty play on the ideology of the ‘paperless office’ that was supposed to accompany the computerization of office processes in the late 1970s.
For him, the revolution in computers is not simply a technological one, but a revolution in the way the capitalist system organizes production. Mosco thus proceeds from the basic premise that:
A fundamental source of power in capitalist society is profit from the sale of commodities in the marketplace. In fact, a basic driving force in the development of capitalism has been the incorporation of things and people into the commodity form. (Mosco, 1988: 3)
He goes on to argue that the collection of essays he has brought together shows that the information technology revolution is in fact ‘the process of incorporating information into the commodity form’ (3). In other words, the fruit of the research in science and technology that we discussed as germinating within the ‘military-industrial complex’ of secret laboratories and restricted-access work has been taken over, developed and globalized by the imperatives of the capitalist marketplace. Computerization, based as it is on a binary system of numbers, is well suited (indeed was initially conceived) to measure and monitor and instrumentalize, and thus accelerate, almost any process. This provides the technical ability to conduct production and consumption on a tightly calculated cost-benefit analysis where everything is quantified and everything has its price. Spread this logic across economy and culture and we have the basis of our neoliberal ‘user-pays’ society, where the market (augmented by information technologies) decides what is worthwhile based upon the simple criterion of whether it can be made profitable or not.
In Mosco’s analysis the tropes of computer-based speed, flexibility and efficiency are once more seen as the drivers of capitalist momentum. Culture and society, in turn, are colonized by this logic as computerization pervades socio-economic life. What this means is that the individual has to be able to pay to take part in what the information society has to offer. To be unable to pay is to become marginalized and gradually more invisible as a member of society. It was from this political economy perspective that Mosco was able to identify, at a relatively early stage, the outlines of what would come to be known as the digital divide. As he observes: ‘We have been so caught up in the . . . Pushbutton Fantasies of the computer society that we have lost sight of a growing class of people who cannot afford the prices of admission to the information age’ (10). Being barred from certain technologies is of course more than a simple lack of access. It is a question of politics and social justice. The existence of a technological divide in a highly technologized society is deeply bound up with processes of social exclusion, of deprived areas and people, of the breakdown of social capital and the fracturing of those forms of community relations that emphasized social inclusion (Warschauer, 2003). This was a theme Manuel Castells would later expand upon when he argued that those who have no place in the information society comprise a ‘fourth world’:
composed of people and territories that have lost value for the dominant interests in informational capitalism . . . because they offer little contribution as either producers or consumers. (. . .) Thus while valuable people and places have been globally connected, devalued locales become disconnected and people from all countries and cultures are socially excluded by the tens of millions. (Castells, 1999: 10)
And yet the picture is more complex than this critique would suggest, notwithstanding the ‘tens of millions’ who are cut off from the alleged benefits of ‘informational capitalism’. Research by the United Nations Research Institute for Social Development (UNRISD) suggests, for example, that access to all kinds of digital technologies had ‘jumped markedly’ during the mid- to late-1990s (Hewitt de Alcántara, 2001: 18). Moreover, China today is the fastest growing market for mobile telephony. And even in the rich developed countries where there are substantial levels of unemployment and relative poverty, the falling cost of computer equipment, together with the availability of mobile phones to all but the most destitute, and the sheer ubiquity of networked computers in public libraries, in universities, schools and colleges, mean that access is available. But whether or not this offers the kinds of opportunities that ‘being connected’ may provide is another question. Brian Loader has noted the experience of Britain’s UK Online centres, a government digital divide ‘solution’ that aimed to provide Internet access to all. He writes:
It is scarcely surprising perhaps that the anecdotal picture which is emerging in the UK is of large numbers of grossly underused Online centres packed with state of the art digital equipment and providing formal training which is regarded as irrelevant to the needs of their intended users. (2002)
Clearly, there is scope for deeper and more fine-grained research into why some regard access to such equipment as ‘irrelevant’. Could it be that, through a simple lack of information, many people are ignorant of such facilities? The evidence of declining standards of education in countries such as the UK, where functional illiteracy is perceived as a major problem, may mean that such individuals are intimidated by high-tech equipment (TUC, 2005). Or it could be that being on the wrong side of the digital divide shapes one’s outlook and day-to-day life in such a way that high-speed Internet access may indeed be considered ‘irrelevant’ when a job has to be got or kept, when rent has to be found, when kids have to be fed, and where the struggles of everyday existence block out almost everything else.
The critical approach of theorists such as Mosco is useful in the context of a social, political, economic and technological revolution that seemed both unstoppable and held out as desirable. And so a lesson we might draw from it is that some hidden realities of the information society are not so hard to find if we use our critical faculties and proceed from the relatively simple criterion of asking whether the processes of ‘informational capitalism’ are just and fair in relation to a logic that inserts the market and technology into ever more aspects of human relationships, whether we want it to or not.
In the same volume, authors Kevin Robins and Frank Webster took a similar perspective to Mosco, initiating in the process a distinctive style of critique that they would develop over the next twenty years in response to the expansion of the information society. They too are concerned to deliver a critique of capitalism – but they push it further than Mosco to describe the information technology revolution as an unremitting nightmare. And again, like Mosco, the analysis ‘goes beyond the purely economic’ and is applied to the realms of media and culture. First, they take a particular view on the role of information. ‘Information’, they insist, ‘is not a thing, an entity; it is a social relation’ which under modern capitalism ‘expresses the characteristic and prevailing relations of power’ (70). In other words, the technologies of computerization and automation are being used by capitalism to expand and deepen its rule over society. They use Michel Foucault’s conception of ‘systems of micro-power’ to describe how societies are ordered and controlled. Foucault argued that power is based upon knowledge (power-knowledge) and is used to extend the field of power. In appropriating and reproducing knowledge, power thereby reproduces and strengthens itself (see, for example, Foucault, 1980).
Robins and Webster maintain that the forms of domination and control of society that capitalism (and government) extended through the system of Fordism have been immeasurably strengthened in this post-Fordist age through the use and development of information technologies. The centralizing tendencies of Fordism, which, as we saw previously, were to emerge as a central weakness in the quest for efficiency, have become radically decentralized, flexible and accelerated through economic restructuring and the computerization of productive systems. At first glance, it may seem that the notions of power-knowledge and ‘decentralization’ are somewhat at odds. However, they argue that the major player in this transformation, the multinational corporation, ‘can now use its communication network to coordinate the activities of decentralized units’, which means that ‘Decentralized activities can be coordinated as if they were centralized’ (56). And, of course, the extent of this transformation goes well beyond the economic. As information technologies become the norm in every process and activity, the power of capitalism ‘invades the very cracks and pores of social life’ (54). In this they again reflect Foucault and his insistence that power is a decentralized, ubiquitous and systemic phenomenon that is highly dynamic.
In this view of the all-pervasiveness of information technologies and the logic of capitalism that they enhance, the ancient boundaries between work and leisure dissolve into a vast and growing cycle of production and consumption. Here, everything is commodified and placed under the criterion of profit, from education and entertainment, to sport, health provision and social relationships more generally in a steadily individualizing society. Information technologies use the social relation of information, they argue, to exploit social relations across time and space. In the late 1980s, when their essay was published, neoliberal globalization was fast becoming a concrete reality, and, projecting what the power of networks would achieve for capitalism, Robins and Webster contended that:
Increasingly, leisure will become amenable to arrangement by capital, which can now access the consumer via electronic/information consoles capable of penetrating the deepest recesses of the home, the most private and inaccessible spheres to date, offering entertainment, purchases, news, education, and much more round the clock – and priced, metered, and monitored by corporate suppliers. In these ways ‘free’ time becomes increasingly subordinated to the ‘labor’ of consumption. (1988: 55; my emphasis)
Apart from the somewhat antiquated expression ‘electronic/information consoles’, the point is clear and fresh, and what they describe (and much more) is now commonplace. Indeed, we take the ubiquity of information technologies so much for granted that we tend to forget that they are there. We ‘move through’ communication networks without a thought, emailing, doing our banking, shopping, using mobile phones and BlackBerrys with little concern that every keystroke, every number we dial, every transaction we make in our daily life is being recorded, and that we leave a ‘data trail’ that can be as clear as footprints in the snow for those (people and automated systems) that take the trouble to look. The authors point to a fundamental dimension of the information society, one that allows, in theory at least, for the power of capitalism, and the power of political systems over individuals, to be augmented even further.
‘The real power and political implications of information technologies’, Robins and Webster maintain, is their ‘intelligence and surveillance capacities’ (1988: 57). The fact that information technologies can measure and control processes in society means also that they can track and scrutinize these too. They build on the work of Jeremy Bentham, an eighteenth-century social philosopher, who drew up plans for a new kind of prison (never built) based on what he called the Panopticon, which was a series of cells built around a circular observation tower. The point of this, in Bentham’s words, was to gain the ‘unbounded facility of seeing without being seen’. The intended effect would be that prisoners would not know precisely when they were being observed and that this would regulate their behaviour, ensuring that they ‘should always feel themselves as if under inspection’ (Robins and Webster, 1988: 57). Pervasive and largely unseen (or unnoticed) information technologies, Robins and Webster predict, will have the same panoptical effect in the information society. Invisible computer databases within anonymous buildings in parts of the world that you may never have been near will be able to store information about you. Expressing their darkest fears, they write that they:
consider the loops and circuits and grids of what has been called the ‘wired society’ or ‘wired city’, and we can see that a technological system is being constituted to ensure the centralized, and furtive, inspection, observation, surveillance, and documentation of activities on the circumference of society as a whole. (1988: 59)
In such a closely surveilled 1984-type society, not only can the individual or group be tracked and shaped as a consumer, as a profit centre, but they can be politically monitored as well, to judge whether they should be classified as subversive or dangerous. We see the ‘technological system’ put to its logical use today in many ways that would have been unimaginable only a generation ago (see Lyon, 2001, for a fuller discussion). Prisoners can now be subjected to what is a direct application of Bentham’s idea. Inmates selected for conditional release can be ‘tagged’ to enable them to be electronically monitored, to make sure that they stick to a regime of ‘home detention’ or curfew (Dodgson et al., 2001). We see the same logic of monitoring and tracking on the open Internet through cookies, for example: a cookie is a small piece of text that a web server passes to your browser, and that your browser returns to the server with each subsequent request, along with details of your computer. In effect, a cookie can track where you browse and for how long, compiling patterns of web use that could be useful to commercial interests – or to the authorities. In the political realm, governments such as those of China and Iran monitor ‘suspect’ websites to try to find expressions of dissident opinion and to locate where they originate. Moreover, these governments use software applications to ‘block’ sites such as the BBC and CNN, or any site they feel threatens their political prerogatives.
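The cookie round-trip just described – a server hands the browser a line of text, the browser returns it with every later request, and the server links those requests into a browsing profile – can be sketched in a few lines of Python. The `Server` and `Browser` classes here are illustrative stand-ins for real web software, not any actual browser or server implementation:

```python
# A minimal sketch of the cookie mechanism: the server issues an
# identifying string on first contact, the browser stores it and sends
# it back with every later request, and the server uses it to compile
# a record of the visitor's movements.

import uuid

class Server:
    def __init__(self):
        self.visits = {}   # cookie id -> list of pages requested with it

    def handle(self, page, cookie=None):
        if cookie is None:                  # first visit: issue a new cookie
            cookie = str(uuid.uuid4())
        self.visits.setdefault(cookie, []).append(page)
        return cookie                       # returned as a 'Set-Cookie' would be

class Browser:
    def __init__(self):
        self.cookie = None                  # the stored line of text

    def get(self, server, page):
        # Send any stored cookie with the request; keep whatever comes back.
        self.cookie = server.handle(page, self.cookie)

server, browser = Server(), Browser()
for page in ["/home", "/news", "/shop"]:
    browser.get(server, page)

# From one cookie id, the server reconstructs the visitor's data trail.
trail = server.visits[browser.cookie]
print(trail)   # ['/home', '/news', '/shop']
```

The point of the sketch is that no personal identification is needed: a single opaque string, silently exchanged, is enough to stitch separate requests into the ‘data trail’ the authors describe.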
So far so grim, it would seem. And, to make matters worse, significant elements of Robins and Webster’s dark vistas appear to have come true. World production and consumption are now heavily (if not completely) dependent upon automated computer systems and networks. Only a decade after it was popularized, the Internet became the backbone of the global economy, serving as the network through which all kinds of monitoring and tracking take place. And, no matter how we look at it, the information society seems to have been created for, and developed in, the interests of capital first. In many ways we do seem to be simply ‘nodes’ of production and consumption in a vast, interconnected marketplace, where the individual is considered only if he or she can pay – and then becomes a consumer – or falls foul of the law, in which case surveillance may be in order. Finally, these gloomy theorists cannot even end on a bright note of optimism, or point to a chink of light that may offer some hope:
Our response to the information society is sombre. We are not talking of what information technologies might do; of how cable could further democracy if it was run by the right people; of the possibilities for satellite television or viewdata systems or word processors. We are talking about actually existing technologies – technologies that threaten to constitute a mega-machine, a systematic and integrated mechanism . . . We must confront the reality of existing technologies in the present tense. And we must confront the reality of an ‘information age’ . . . We can expect no utopia . . . If we want one we will have to invent it ourselves, and the new technologies do not provide a short cut. (Robins and Webster, 1988: 72)
Has the Internet become the pitiless and exploitative and all-seeing ‘mega-machine’? The authors come close to saying that we need to bypass information technologies to create our own Utopia, and to create a new kind of politics not based upon digital networks. But how is it possible to uninvent computers, or to make them less pervasive? I would say that it is not. Information technologies, with the Internet and its growing complexity, are here to stay. The note of Luddism that can be read into Robins and Webster’s work is no solution. It may be more useful to consider who controls the mega-machine. When the authors wrote their provocative article, economic restructuring and social transformation were just beginning to be left to the abstract forces of the market. Today that process is well developed, and, to paraphrase the futurist Alvin Toffler, in the age of neoliberal globalization, no one is in control (Toffler, 1970: 290). As we shall see in chapter 7, however, the ideas of democracy and reason and justice are simply too ingrained in Western society for them to evaporate in the space of a couple of decades. New currents, new political languages, and, crucially, new political spaces and times are emerging that utilize information technologies in ways that the market logic did not intend. It is just possible that deep currents of justice and fairness are powerful and ingenious enough in Western societies to turn information technologies and networks of communication into socially useful platforms that go beyond the instrumental uses of a neoliberal economy.
Ideology wars in the information society
The restructuring of the world economy during the 1970s and 1980s – from a post-war Fordist mode to one dominated by ‘flexible accumulation’ and driven by information technologies – was by no means a smooth transition. It was in fact distinguished by a profound political and economic upheaval that disrupted the lives of millions upon millions of people, and continues to do so. Above all, restructuring was a war of ideas: ideas over how economies should be run, who should run them, and the technological basis upon which they would run. In such a battle, people can only be pushed so far. To force people to be more ‘efficient’ and ‘flexible’, to require them to synchronize with an increasingly accelerated way of life, and to develop an obsession for computers, is clearly not going to work in ostensibly democratic societies. What was needed, as David Harvey has noted in his analysis of the period, was, following Noam Chomsky, the ‘construction of consent’; that is to say, the engineering of a shift in the kinds of ideas that dominate society, making these ideas so deep-seated that they appear as ‘common sense’ (Harvey, 2005: 39–63). In short, the fundamental battle was (and still is) an ideological one.
Neoliberalism became dominant because its ideas were able to capture and motivate powerful people in government, in elite business circles and in the global media. Speed, flexibility and efficiency – and the primacy of market mechanisms vis-à-vis the work process – were the watchwords repeated over and over again, in all these forums, until they became embedded principles. If these terms had a slightly harsh and onerous ring to them, then there were other central tenets such as ‘empowerment’ and ‘freedom’, and ‘entrepreneurialism’ and ‘individuality’, to act as emollient. These more palatable-sounding ideas were revived from the ‘classic’ liberalism of the eighteenth century. This was a semi-mythic time when the ‘state’ and ‘state interference’ were minimal and men were freer. The state’s primary role in this perspective was merely to conduct wars and to levy the absolute minimum of taxes. Moreover, it was supposedly a time when individuals were responsible for themselves and the conduct of their lives and did not require the state to get in the way. As Friedrich von Hayek, that very eminent hero of neoliberal intellectual circles – and someone lauded by early information society theorists such as Fritz Machlup – put it, ‘A society that does not recognize that each individual has values of his own which he is entitled to follow can have no respect for the dignity of the individual and cannot really know freedom’ (1978: 344).
The ideology of neoliberal individualism filled the space left by the ebb of 1960s counter-culture, but it was inflected with a selfish tone that the 1960s hippies would have blanched at. Margaret Thatcher expressed this more hard-edged Hayekian individualism in an interview in 1987 when she asserted that ‘there’s no such thing as society. There are [only] individual men and women and there are families . . . and people must look after themselves first.’ It was a growing perspective that had been powerfully critiqued in 1979 by Christopher Lasch in his Culture of Narcissism, which described the weakening of social bonds as, in his view, many people had ‘retreated to purely personal preoccupations’ (1979: 4). This process of social individualization was echoed a generation later by Robert Putnam in his Bowling Alone, which similarly lamented what he perceived as the diminishing fabric of society and the depletion of the ‘social capital’ that kept communities vibrant and able to form the basis of a democracy (2000). Both these books (and titles with a similar concern, such as Richard Sennett’s 1998 The Corrosion of Character) were best-sellers, and generated much debate about the nature of post-Fordist society.
However, this is ‘theory’ compared to the reality that confronts many people every day. They encounter it in their jobs, in the media, in their relationships with other people, in the pressure to consume, to have a bigger house, an even bigger SUV or a private education for their kids. It is a pressure to be flexible and to be oriented towards the self and to consumption and to the immediate gratifications of the present. It is a life that reflects, as Lasch put it, ‘the waning of the sense of historical time’ (1979: 3). And, as life seems to speed up, our sense-making becomes attuned to the here and now, and to the imperatives of multitasking. It is a life, as some writers see it, where the individual has less sense of being part of a larger social history, or much of a feeling for what has gone before or what is to come. It is a situation where, as Donald Barthelme puts it, the individual ‘is constantly surprised. He cannot predict his own reaction to events. He is constantly being overtaken by events. A condition of breathlessness and dazzlement surrounds him’ (cited in Lasch, 1979: 4).
In the grip of such cognitive helplessness, where the time to reflect and to consider becomes subsumed by a constant present, the individual is exposed to the ‘dazzlement’ of powerful and pervasive ideology. He or she is susceptible in particular to the ideology of information technologies as offering the way to function in such a life. Without ICTs there would be no neoliberal globalization, and so what Theodore Roszak called the ‘cult of information’ (1986) became a sub-set of ideology that was able to grow and pervade with the globalization process. New media technologies were promoted not only as a way to make us more productive, but also to engender forms of community, albeit ‘virtual’ ones (Rheingold, 1993). Neoliberalism claims that information technologies are necessary for the new world that is being created. They are also necessary for you. And in the rising culture of individualism, ICTs are engineered towards use by the individual: personal computers, personal digital assistants, mobile phones dedicated only to you, personal email addresses, iPods, peer-to-peer file-sharing where you share with a ‘virtual community’ in cyberspace, the MySpace website and its clones, or YouTube where you can promote yourself to the virtual world, and of course online gaming where the virtual world is a place where the only flesh and blood around is your own. And, just as in the real-world economy free-market ideology will put the failure of a business down to its not being flexible enough, or not responsive enough to market conditions, so too, if you feel the existential pain of digital anomie, what you need are more digital connections. The direct corollary of this is that computers or neoliberal society aren’t to blame in an individualist society – you are. If you feel lonely, buy a laptop. It is a condition of purported freedom, where, according to Michael Heim: ‘When online we break free’.
We no longer need the physical proximity of people in the information society, Heim goes on to insist, because:
Telecommunication offers an unrestricted freedom of expression and personal contact, with far less hierarchy and formality than are found in the primary social world. Isolation persists as a major problem of contemporary urban society, and I mean spiritual isolation, the kind that plagues individuals even on crowded city streets. With the telephone and television, the computer network can function as a countermeasure. The computer network appears as a godsend in providing forums for people to gather in surprisingly personal proximity. (1993: 84)
Ideology made flesh
Finally, in any ideological framework it is always useful to elevate people as symbols to positions that the rest of us should aspire to. Accordingly, the coming of the information society saw the rise to prominence of an array of individuals who came to be seen as the embodiment of the ‘indomitable force’ that Thomas Hughes refers to in the quotation at the beginning of this chapter. Again, the ideology is of the ‘dreams’ that information technologies can make possible – notwithstanding the fact that these ‘dreams’ were themselves first suggested to us by neoliberals and by techno-utopians. Journal of choice for this latter group, and barometer of the technological zeitgeist of the 1990s, was Wired magazine, which was first published in 1993. It was co-founded and part-funded by the MIT Media Lab director, Nicholas Negroponte, who also wrote influential utopian books such as Being Digital (1995). Partner in the Wired venture was Louis Rossetto, who penned an article in a 1998 edition of the magazine in which he reflected upon the early years of the publication, and from where the founders drew their inspiration:
What we were dreaming about was profound global transformation. We wanted to tell the story of the companies, the ideas and especially the people making the Digital Revolution. Our heroes weren’t politicians and generals or priests and pundits, but those creating and using technology and networks in their professional and private lives . . . you. (Rossetto, 1998)
Rossetto speaks directly to ‘you’ as the individual, a ‘hero’ who is making the ‘Digital Revolution’ through creative and innovative uses of information technologies. This was the dawning of a new age that needed new role models, people who ‘bucked the trend’ and were resolutely non-conformist. Primary among these was Bill Gates, habitually described as the ‘ex-Harvard dropout’, who rose from geeky obscurity to be the richest man in the world because, it is heavily implied in the popular perception, he understood and believed in the power of computation to transform the world. Gates himself forcefully promoted the notion of the individualistic entrepreneur, placing himself, through example, firmly in the role of leader in leading-edge software. By the mid-1990s, the needs of business and the making of lots of money through the application of software and computers were areas in which Gates had plenty of authority and ideological legitimacy – as well as chutzpah. For example, his first book, The Road Ahead (1995), proclaimed no less than the solution to capitalism’s problems through computerization and automation. He called it ‘friction-free capitalism’ and saw it as the fullest expression of Adam Smith’s concept of the ideal market, where ‘would-be buyers and would-be sellers’ are able to share ‘complete information’ through which they could make fully informed decisions. The Internet, he proclaimed, would be the seamless electronic middleman that would provide this all-seeing, all-knowing perspective on all possible market conditions (1995: 180–208).
We see a similar trajectory in the stellar career of Steve Jobs, founder, with Steve Wozniak, of Apple Computer. Another university dropout, Jobs fits the ideological typology of the individualistic hero who pursues his own way, using his own skills and driven by his own entrepreneurial impulses. ‘Think different’ was the famous Apple advertising slogan during the 1990s. And, along with the burgeoning success of Microsoft, Gates and Jobs were the prominent heroes behind the immense dot-com boom of the late 1990s, when the NASDAQ high-tech stock market ballooned and many multimillionaires were made overnight – and then went broke again when the dot-com bubble burst in 2001.
In retrospect, it seems that the thousands of would-be dot-com heroes were not thinking so differently after all. They were, in classical fashion, merely responding, herd-like, to market signals. ‘Complete information’ was conspicuous by its absence for many of the buyers of ‘red hot’ dot-com shares (Cassidy, 2002). Moreover, both Apple and Microsoft, as pre-eminent revolutionaries in the ideological fostering of the information society, have been searching unsuccessfully for ‘friction-free capitalism’ ever since the heady days of the 1980s and 1990s. Microsoft has had a running battle over a series of anti-trust suits that claimed it was using its powerful position to destroy competition and monopolize the global software market. In 2001 Microsoft was found to have used its market power in a way that Adam Smith, to whom Gates looks for inspiration, would no doubt have disapproved of. Apple, for its part, was accused of infringing a patent held by Creative Technology covering the iPod user interface that Creative said was theirs. In 2006 Apple agreed to pay Creative $100 million for a licence to use the patent (Bangeman, 2006).
Notwithstanding these hurdles, the powerful legitimizing ideology of digital dream-makers continues to pervade the mediascape today. Jaws still collectively drop in amazement and admiration when it is announced that the latest generation of digital heroes, the individuals behind the creation of websites and applications such as YouTube and Skype, for example, have become overnight billionaires by selling the digital fruit of their ideas to the highest bidder.
And the old guard still fights the good fight. Bill Gates, for example, despite his corporation being found guilty of monopolization and anti-competitive practices, can still confidently write of the benefits of ‘friction-free capitalism’. In a 2006 essay, titled ‘The Unified Communications Revolution’ (the term ‘revolution’, it seems, has not gone out of fashion), Gates sets out his vision of the ‘coming communication convergence’, where communication will become seamless across telephony, mobile phone, desktop, laptop and handheld computers, streaming video, web conferencing, indeed anything that is networkable. This new digital ecology will create ‘people-ready business’ in the ‘new world of work’. And, again, as in the visions contained in The Road Ahead, software is key to the delivery of this new phase of the revolution. It is not known if such talk of the ‘Unified Communications Revolution’ just happened to coincide with the 2007 launch of Windows Vista, the major new Microsoft operating system, ‘the breakthrough computing experience’ that would enable all this to come true (Microsoft, 2006).
Gates was specifically addressing the business community in his essay. But he views with some admiration the new generation of people who have grown up during the age of information and who have no real conception of a pre-Internet life or a pre-Internet tempo. These are:
the young people in your organization – particularly the ones who are fresh out of college. They’ve lived their entire lives in the digital age, communicating in real-time via text messaging and instant messages. For some of them, even email lacks the immediate gratification they expect when they want to communicate with someone. To this generation, the desktop phone has about as much relevance as an electric typewriter does for those of us a generation or two older. (Microsoft, 2006)
The Microsoft chairman proceeds to confidently sketch a picture of the inhabitants of a new world, a world where the information society is no longer ‘coming’, as Bell put it, but is here, firmly entrenched and creating and sustaining a new way of relating to economy, culture and society. It is a world where information is so central that most of us now take it for granted. But there are questions we still need to ask and reflect upon. For example, how deep does the ‘cult of information’ go, and what does it mean to have digital logic at the very basis of how we see the world and how we see ourselves?