Sales effort: from the automobile to the microchip
In a modern economy, more and more of what looks like productive activity is in fact sales effort. This, plus manufacturers’ fear of reliance on skilled workers, has shaped first the automobile as we know it, and then aircraft. Microchips and microprocessors are shaped by the most extreme sales pressure yet, which spills over into almost every other industry as chip-makers scramble to find new markets for their latest devices before they become obsolete. A whole additional level of sales effort has emerged, aimed at shaping future consumer demand, in the form of the high-tech visionary movement.
In 1966, two Marxist economists, Paul A Baran and Paul M Sweezy, presented an analysis of capitalism’s evolution into an era of giant corporations, in which competition had been eclipsed by oligopoly and monopoly, and the problem of ‘diminishing returns’ by a surprising tendency of surplus to rise. Their book Monopoly Capital presents a picture of capitalism caught, like the Sorcerer’s Apprentice, between the need to extract bigger and yet bigger profits, and then find ways of investing those profits, so that they generate yet more profits, for which yet more profitable investments must be found, ad infinitum.
The surplus, for example, must be increased in order to support an ever-rising share-price and dividend, so wages must be suppressed, but that imperils demand, which threatens a slump – which is averted, traditionally, by spending on luxury goods and weapons, wars and their aftermaths, and prestige items (cathedrals, corporate headquarters buildings and salaries etc). Baran and Sweezy observed that two further escape-routes had presented themselves since Marx’s day: ‘epoch-making innovations’ and ‘the sales effort’.
Epoch-making innovations are ones that cause a wholesale rearrangement of the fabric of life so that everything has to be built all over again, absorbing the problematic surplus. Railways, electrification and automobilization were the examples Baran and Sweezy had in mind. Electronics and computers soon proved to be an even bigger case in point.
Baran and Sweezy’s second ‘escape route’, the sales effort, had already become much bigger than mere advertising. Branding had become a powerful economic force well before the end of the 19th century and some of the world’s biggest fortunes were made by the owners of well-known brands attached to essentially minor products: Pears Soap, Coca-Cola, Pepsi-Cola, Colman’s mustard-powder, Goddard’s silver polish, Oxo stock cubes, Lea & Perrins sauce. France’s wealthiest individual, Liliane Bettencourt, owes her fortune to the L’Oréal brand of inexpensive cosmetics, founded in 1907.
Brands that are well supported by advertising and well defended legally can command far greater, more reliable and more durable premiums (essentially rent) than those attainable by technical improvement. Consequently, branding ‘moved up the food chain’ in the 20th century, to the extent that (like Nike, mentioned in the last chapter) many manufacturing companies have in practice become brand owners first and foremost, farming out the less lucrative work of production, and even of innovation, to others.
Product design itself became increasingly an extension of the sales effort, rather than an exploration of technical possibilities and human needs. By the mid-20th century this was making it almost (but not quite) impossible to work out what the proper cost of anything would be, if it were produced simply for convenient and comfortable use. At the same time (and reinforcing the tendency) the greater earnings offered by the sales sector were making it a magnet for creative talent that in other circumstances might have sought work in engineering, medicine or the arts.
Recent writers have shown that this complex, self-reinforcing system grows as inequality grows. Thomas Piketty has shown that rent production is more lucrative, and so absorbs more economic effort, in countries and times when extremes of wealth are tolerated. Kate Pickett and Richard Wilkinson have shown that in more unequal countries advertising consumes a higher proportion of GDP than it does in less unequal ones; for example, the US and New Zealand now spend twice as much per head on advertising as Norway and Denmark.1 Baran and Sweezy focused on the automobile industry, and found mind-boggling discrepancies between what the industry was capable of achieving, in terms of affordable products, and what it was actually producing.
Styling had become such a focus of corporate research effort that vehicle safety was compromised (a situation that erupted into public scandal with the publication of Vance Packard’s 1960 bestseller, The Waste Makers,2 and, in 1965, of Ralph Nader’s Unsafe at Any Speed3).
Using a study by the economists Franklin Fisher, Zvi Griliches and Carl Kaysen of the costs of styling changes to automobiles between 1949 and 1960,4 Baran and Sweezy estimated that an average car (then costing around $2,500) could, under a purely function-oriented system, be produced for around $700, and be more reliable, convenient and durable.
THE ALL-STEEL AUTOMOBILE AS AN ENERGY SUMP
Monopoly Capital was not taken seriously by the economic mainstream, but a few years after it appeared, a fuel crisis triggered a surge of official interest in the impacts of automobile manufacture. In 1973 the State of Illinois commissioned two physicists, R Stephen Berry and Margaret F Fels, to carry out a thermodynamic analysis of automobile production – an analysis of all the energy costs involved, from mining the deposits of iron ore to the showroom.
First, using government manufacturing statistics, they calculated the amount of energy in kilowatt-hours (kWh) used in manufacturing a typical 1967 automobile, weighing 1.6 tons (most of it steel). This came to 37,275 kWh per vehicle – mainly the cost of turning metal ores into the necessary steel and other metals.
Next, they worked out the ‘ideal’ energy cost for the same amounts of metals. They looked at the actual chemical changes involved in transforming ores into auto-grade metals, and calculated the known amounts of energy consumed or released by each change (the ‘absolute thermodynamic potential change’). This came to just 1,035 kWh per automobile: about three per cent of the amount actually used.
This rather striking difference arises from the fact that industrial metal production is still, and may always remain, an imperfect art. However, there is clearly plenty of room for improvement, and Berry and Fels suggested that it would make a lot of sense to make improving these technologies an ‘institutional social goal’. But even with 1973 technology and resources, they reckoned, making optimal use of recycled metal (which needs much less energy to smelt) could cut the energy cost by 12,640 kWh – more than a third. The biggest savings, though, could be made simply by increasing a typical car’s lifetime: extending it from 10 to 30 years would save a further 23,000 kWh, for a total saving of 96 per cent.
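The arithmetic behind these percentages is simple enough to restate. The sketch below (in Python) is only a back-of-envelope check of the figures quoted above, and it assumes – as the presentation here seems to – that both savings are measured against the 37,275 kWh baseline; the original study amortizes lifetime savings in its own way:

```python
# Back-of-envelope restatement of the Berry and Fels figures quoted above.
# Assumption (for illustration only): both savings are measured against the
# 37,275 kWh baseline.

actual_kwh = 37_275            # energy actually used per 1967 automobile
ideal_kwh = 1_035              # thermodynamic minimum for the same metals
recycling_saving_kwh = 12_640  # optimal use of recycled metal, 1973 technology
lifetime_saving_kwh = 23_000   # extending vehicle life from 10 to 30 years

print(f"ideal vs actual:  {ideal_kwh / actual_kwh:.1%}")             # ~2.8% ('about three per cent')
print(f"recycling saving: {recycling_saving_kwh / actual_kwh:.1%}")  # ~33.9% ('more than a third')
combined = recycling_saving_kwh + lifetime_saving_kwh
print(f"combined saving:  {combined / actual_kwh:.1%}")              # ~95.6% (the quoted 96 per cent)
```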
Since Berry and Fels made their calculations, little has changed in the auto industry apart from the replacement of some mechanical and electro-mechanical components by electronic ones (which cannot be repaired easily or cheaply). Despite improved recycling methods, less recycled metal is used in cars than before; and less of the metal that is used in them can be recycled, because of new alloying and laser-welding techniques whose main purpose is to assist the design of eye-catching shapes.5
There is one final twist to this extraordinary tale: how cars came to be made of steel at all. Most automobiles were originally made largely of wood, using well-established and highly refined coach-building techniques. Wood is strong, resilient, easy to repair and modify; it does not rust (a major problem for owners until the 1980s, when the electrophoretic method of painting was introduced); and it is much lighter than steel. In 1920, according to the UK’s Motor Insurance Repair Research Centre (Thatcham), ‘85 per cent of the vehicle body was constructed of wood but within six years the wood content had been reduced to 30 per cent, being replaced with steel’.6 Why? Because paint on metal can be dried in hours, by baking; on wood the process could take days, and hastening it by heating was a dangerous option. The article continues:
This problem was so prominent that it is suggested that Henry Ford’s legendary quote ‘you can have any color as long as it is black’ was stated in the knowledge that black paint was the quickest drying and would result in an increase in output.
It might be argued that making car bodies from wood must have been more expensive, but apparently this was not so. Set-up costs would have been lower, and there was a huge reservoir of skilled labor from the coach-building and wheelwrighting industries, complete with enormous bodies of knowledge, systems of apprenticeship and so on. We get a sense of the scale and nature of this other world, and how very adaptable it was, from George Sturt’s 1923 classic, The Wheelwright’s Shop.7 And to get a sense of the ‘look and feel’ of the kind of world it could create, go to any stately home and focus just on the visual and tactile qualities of what you find there.
The shift to steel was almost certainly also motivated to some extent by the capitalist’s ancient preference for unskilled, unorganized labor, and fear of being reliant on employees’ personal skills.
As soon as Ford adopted all-steel, production-line manufacturing, the full force of positional competition came into play. All other manufacturers had to follow suit or lose their markets. As Ford increased the speed of its production lines, so did everyone else; and everyone, including Ford, had to fall in line again when General Motors introduced annual model changes – a development that had only become possible thanks to wood’s displacement by steel, and steel’s ability to be formed, cheaply, into almost any shape, just by changing dies.
This shift was above all else a positional phenomenon, starting with a bold decision by Ford to take control of all the options and decisions involved in manufacture, adopting new methods and materials so that this could be done. He gambled that the inferior paint finishes would be of less concern to customers than the price and availability he could offer (if it all worked). From this flowed the familiar process of labor-alienation described by just about every book on auto-work since then (of which Paul Stewart and his colleagues’ 2009 book We Sell Our Time No More is a particularly rich example8). Needless to say, there is an utter lack of overlap between that literature and the world of auto magazines, ads and TV programs.
As Berry and Fels’s calculations show, the same power shift coerced nature into the process on an unprecedented scale; and we can now recognize this as a positional phenomenon.
Finally, a powerful cultural assumption had been created and then brought into play: that cars are made of steel; those that aren’t are odd or old-fashioned (and therefore presumably don’t work as well). Practical (and highly positional) considerations also come into play as investment flees technologies based on the older material, insurers demand higher premiums for insuring it, and so on, so that it becomes impossible for an individual or even a substantial group to buck the trend.
CULTURAL ‘MATERIALISM’
The assumption that steel is the obvious choice for automobiles could be called a kind of ‘materialism’: a discriminatory ‘ism’ of the same family as ‘racism’ and ‘sexism’. It runs deeper than we generally recognize. For example, during the Second World War and the immediate post-War decades, enormous progress was made in the development of new composite materials. One of the pioneers of the new science, JE Gordon, continually referred his students and readers to the properties of the original composite material, wood, even for aircraft, but felt he was swimming against the cultural tide: ‘metals are considered “more important” than wood’, he wrote. ‘[It] is hardly considered worthy of serious attention at all’.9 Yet size for size, wooden aircraft
‘weighed less than a 10th of modern hard-skinned machines… In its better forms… the wooden biplane is almost everlasting… they are much longer-lived than motor cars… only the other day (1975) I saw a de Havilland Rapide… flying around very happily; it had probably passed its 40th birthday’.10
War is the ultimate positional game, and it played a big part in the transition from wood to metal. The need for speed tells only part of the story – some wood-and-canvas and plywood aircraft were just as fast as aluminum monocoque ones, could fly higher, and their construction required far less energy (aluminum smelting is even now one of the most energy-intensive processes on earth). Wooden-framed aircraft were also much more resilient and simpler to repair. The larger reason for their supersession was that their construction and maintenance needed more skilled labor, with a much greater understanding of materials: resources that the anxious or ambitious planner or manufacturer could never hold in his own two hands.
Whether or not aluminum monocoque technology is necessary for modern airliners, it might not even have been under consideration had it not been for the Cold War (and, as I mentioned in Chapter 1, the Boeing 707 – developed at public expense as a military transport while Cold War hysteria was at its peak). A further intriguing fact mentioned by Gordon is that airliners’ payloads could have been increased threefold, even in the 1970s, by building them from modern composites, but this has not happened – further evidence that when power inequalities are at work, technological development is neither inevitable, nor guided by straightforward goals and rational means. Speed might seem to be an overriding consideration, but aircraft speeds have not increased much since the 1960s. The Anglo-French Concorde demonstrated that supersonic airliners were possible, and it operated for 27 years, but it turned out not to represent a future that the aviation industry as a whole wanted, and it was withdrawn from service in 2003, three years after its one and only disaster.
Composites have only found 21st-century niches in their more exotic and high-profit forms: Kevlar for soldiers’ helmets and body armor, a few prestige military aircraft, high-end sports equipment, and a very small number of genuinely transformative products, such as improved artificial limbs, which are not made by large firms, or even by firms at all.
Science historian Eric Schatzberg has written about the way metal came to replace wood in aircraft and concludes that ‘proponents of metal used rhetoric to link metal with progress and wood with stasis’ so that ‘a far greater proportion of resources went into improving metal airplanes’.11 We will find the same ‘materialism’ at work when we look at how computers took their present form, in Chapter 11.
HOW THE SALES EFFORT SHAPED THE CHIP
The basic building block of almost all the digital electronic devices we use is ‘the chip’: a tiny, solid-state device built from layers of the common element silicon, selectively ‘doped’ so that they can be made to conduct, or not to conduct, electricity a few electrons at a time (hence it is an ‘electronic’ device, rather than an electrical one, which carries vastly heavier currents and can give you a shock). The ability of this semiconductor, as it is called, to switch between conducting and not conducting is the current basis of electronic digital computing: zillions of little switches going on and off, passing tiny electrical charges around very fast.
We tend to assume that the silicon chip, and the form of computing that follows from it, were necessary and obvious developments. But this particular technology might not exist, or would not have become anything like as dominant as it has become, without Cold War anxiety to build nuclear missiles to counter a supposed threat to the US from Soviet long-range bombers. The historian Paul Edwards records that, between the 1950s and 1970s, nearly half of the cost of developing integrated-circuit technology was paid for by a single missile project, the ‘Minuteman’. Up to 1990, as much as 80 per cent of all the research carried out on ‘artificial intelligence’ was funded by the defense research agency ARPA. ‘The computerization of society,’ says Edwards, ‘has essentially been a side-effect of the computerization of war’.12
The silicon chip’s evolution since then has been shaped to a large degree by a self-reinforcing need to maintain sales. This has produced extraordinary increases in performance, but at considerable cost, and the benefits of those increases are not as clear as we might think. The fundamental design principles of chips and the computers they are used in have not changed since John von Neumann wrote out his specification in 1946 (see Chapter 1). Fast though they now are, most computing devices can still only do one tiny thing at a time, so the speed increases need to be considerable to overcome the architecture’s limitations. Better alternatives have never been able to make headway because the frantic competition between manufacturers does not permit them to deviate from the path they are on – as we will see in Chapter 11.
MOORE’S SELF-FULFILLING PROPHECY: CHIPS WITH EVERYTHING
The microchip owes a great deal of its success to its role as a commodity in the traditional sense described in Chapter 7: something that can theoretically be turned out in any quantity one wants, as if by turning a handle. If the ideal of commodity production is printing banknotes, microchip production comes surprisingly close to it – not just in the abstract sense that each chip is identical, has a vouched-for provenance, and commands an internationally recognized price that declines over time, but also in the literal sense that chips are produced by processes based on ones that were originally developed for the printing industries: photo-lithography and etching.
The microchip is the first machine that can in principle be produced in the same way as copies of a best-selling novel are produced, in potentially limitless quantities, on demand, for as long as its price holds up in the marketplace. The aim of the game is to cash in while demand lasts, and the industry is shaped by that requirement. As with printing and publishing, the process depends heavily on highly skilled and inherently hard-to-control labor in the design and setting-up stages but, in principle, it is almost entirely free of labor constraints when it comes to production. In principle, all that’s needed is able bodies to deliver the goods and haul in the proceeds. But it is also true, as with printing and commodity production generally, that ‘in practice’ is tantalizingly different from ‘in principle’. The quest to solve that discrepancy has turned the industry into a global phenomenon.
Computer-chip manufacture is a high-stakes game: hugely profitable, but with the catch that serious profits are only made on the fastest and latest chips while they are still ‘leading edge’.
The industry is said to be driven by ‘Moore’s Law’, which began life in 1965 as an observation by Gordon Moore (later a co-founder of Intel) that transistor densities were doubling every year – a rate he later revised to roughly every two years, and a trend that then continued with impressive consistency. An Intel microprocessor of 1972, the 8008, had 3,500 transistors; the 80286 of 1982 had 134,000; in 2000, the last of that particular line of processors, the Pentium 4, had 42 million13… and so it continues.
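The consistency of the pattern can be checked from the transistor counts just quoted. The short sketch below (in Python) simply computes the doubling time implied by each pair of figures; the launch years are taken from the text and treated as exact, which is a simplification:

```python
from math import log2

# Doubling times implied by the transistor counts quoted above.
# Launch years are taken from the text and treated as exact - a simplification.
chips = [
    ("8008", 1972, 3_500),
    ("80286", 1982, 134_000),
    ("Pentium 4", 2000, 42_000_000),
]

for (name_a, year_a, count_a), (name_b, year_b, count_b) in zip(chips, chips[1:]):
    doublings = log2(count_b / count_a)   # how many times the count doubled
    years = year_b - year_a
    print(f"{name_a} -> {name_b}: {doublings:.1f} doublings in {years} years, "
          f"i.e. one doubling every {years / doublings:.1f} years")
# 8008 -> 80286:      ~5.3 doublings in 10 years, one every ~1.9 years
# 80286 -> Pentium 4: ~8.3 doublings in 18 years, one every ~2.2 years
```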
This trend is usually spoken of as if it were a natural law but, as sociologist Donald MacKenzie pointed out in 1996: ‘in all the many economic and sociological studies of information technology there is scarcely a single piece of published research… on the determinants of the Moore’s Law pattern.’14 It is, he contends, a ‘self-fulfilling prophecy’. The small number of firms with the resources to manufacture chips, but no means of co-ordinating their actions, have been drawn into a kind of bidding war, committing ever more astronomical sums to stay in the game – with relatively little positive effect on the wider world: the same kind of phenomenon as a traffic wave, a stampede or a stock-market bubble.
In 2013 the semiconductor industry was worth $213 billion15 to the 25 biggest manufacturers (Intel being the biggest, more than twice the size of its nearest rival, Samsung). It is extraordinarily capital intensive and becoming more so at a remarkable rate: a fabrication plant (known in the industry as a ‘fab’) cost around $3 billion to build in 2003, and that cost was said to be doubling every four years.16 By 2005, the most expensive new ‘megafabs’ were said to be costing around $10 billion.17 As Nathan Rosenberg has shown in his book Inside the Black Box, this makes it ever more important for the manufacturer to make each new device applicable to as many functions as possible. Merely serving a niche market would be financial suicide: ‘A firm with a nonstandard component would be forgoing the bulk of the market and would be sacrificing attendant economies of scale’.18 This means that any and every industry, however apparently non-technical, is likely to be a target for computerization, as we see in the conversion of so many industries from old established technologies to digital.
Hence, devices are integrated into standards-compliant packages – eventually, onto a single chip – that can interface with everything they might be required to work with. The process, known as ‘Very Large Scale Integration’ (VLSI), has come to dominate computer hardware development. Because of the escalating investment cost, and the very brief time window for recouping those costs before a device becomes ‘generic’ or obsolete, it is absolutely vital to sell its benefits and possible uses to as many other manufacturers as possible.
In consequence, microprocessors turn up in everything that can conceivably accommodate one. Just two per cent of all microprocessors sold in 2009 went into personal computers, and a similar but growing proportion into mobile phones.19 From the early 2000s, a new generation of phones, the ‘smartphones’ (like the iPhone and BlackBerry), and then tablet computers, ‘came to the rescue’ of the chip industry because they needed large numbers of the high-profit, leading-edge microprocessor chips: each of these devices used around four microprocessors in 2008, against just one or two lower-specification ones for basic or ‘feature’ phones.20 The vast majority of microchips are used for ‘embedded control’ – in washing machines, automobile engine-management systems, avionics, industrial process management, distribution systems, telephone switching systems and the telephones themselves, audio systems and so on.
These devices and systems are steadily being closed off to user intervention or even repair – as when the various trades involved in coachwork were locked out of the auto industry by the introduction of steel in the 1920s.
The peculiar economics of chip manufacture mean that there is very little profit to be made from older chips (I’ll explain more about this in Chapter 9). Firms cannot hope just to make a living; they have to make a killing, consistently, or perish. There is just a brief window of opportunity in which to produce and find markets for the latest high-profit, high-specification chips that go into ‘leading edge’ (in other words, premium-priced) consumer products. This, as much as public demand for new alternatives to things like 35mm film-based cameras and vinyl long-playing records, is the reason for the spectacular growth of digital photography, digital audio and TV, smartphones and games devices.
To take a specific type of processor, the digital signal-processor (DSP): DSP chips are typically used to convert an analog signal – such as the tiny, fluctuating electric current that a microphone generates from real-world sounds – into a stream of digits that can be manipulated by a computer program (or, conversely, to turn digital data back into analog form for playback through a loudspeaker). But they can also be used to filter graphic and video images in various ways, not to mention statistical data, seismic data and data from ultrasound scanners; they are even found in the anti-lock braking systems of automobiles, in fax machines, disk drives and DVD players. When DSP chips first appeared, an important market turned out to be ‘speak and spell’ toys, and their current largest application, the mobile phone, was by no means at the front of anyone’s mind. Clearly, a successful DSP device had to be one that could slot into a number of these disparate markets as they appeared, with minimal fuss – and that is what Texas Instruments produced, thereby capturing the market for DSPs.
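To give a flavor of what such a chip actually does, here is a minimal sketch in Python of the multiply-and-accumulate filtering that is a DSP’s stock-in-trade (a real DSP would run hand-tuned, often fixed-point, code). The tone, noise level and filter coefficients below are invented purely for illustration:

```python
import math
import random

# A minimal illustration of the kind of arithmetic a DSP performs: the same
# multiply-and-accumulate loop filters audio, seismic traces or sensor data.
# The signal and the filter below are invented for illustration only.

SAMPLE_RATE = 8_000   # samples per second (telephone-quality audio)
TONE_HZ = 440         # a pure tone standing in for a microphone signal

# 'Sample' one second of a noisy tone - an idealized stand-in for the
# analog-to-digital conversion step.
samples = [
    math.sin(2 * math.pi * TONE_HZ * n / SAMPLE_RATE) + random.uniform(-0.3, 0.3)
    for n in range(SAMPLE_RATE)
]

def fir_filter(signal, coeffs):
    """Finite-impulse-response filter: each output sample is a weighted sum
    of the most recent input samples - the multiply-accumulate operation
    that DSP chips are built to perform billions of times a second."""
    out = []
    for n in range(len(signal)):
        acc = 0.0
        for k, c in enumerate(coeffs):
            if n - k >= 0:
                acc += c * signal[n - k]
        out.append(acc)
    return out

smoothed = fir_filter(samples, [0.2] * 5)  # a 5-tap moving average smooths the noise
print(f"peak amplitude before: {max(abs(s) for s in samples):.2f}, "
      f"after: {max(abs(s) for s in smoothed):.2f}")
```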
This tendency or pressure to standardize created ‘unprecedented and unanticipated opportunities to use and recombine devices in new ways’, says Rosenberg. But it has also meant that devices now have to be recombinable to justify the ever-increasing cost and risk of developing them. Development teams need to be able to ‘sell’ their ideas to management and to the marketing department, showing that they can be incorporated into a great many product-types. They should also have uses within as many different industries as possible, to avoid over-reliance on any particular market sector, which might suffer a downturn or a slump.
Constant, serious sales effort is required at every level. Some of this goes under the name of ‘market research’ but its aim is not so much to find out what people need, in order to go back to the lab and make something that will meet that need, as to find ways of meeting the manufacturer’s need for new markets before their present ones vanish.
More than 20 years ago, an Oxford sociologist, Steve Woolgar, wrote an amusing article called ‘Configuring the User’, which showed just how intensely computer-users’ wishes were already being stage-managed by experts, supposedly in the interests of finding out and giving them what they want. After spending some weeks with a ‘usability’ team working on a new microcomputer, he observed that:
Since the company tends to have better access to the future than the users, it is the company’s view which defines users’ future requirements.21
Woolgar noticed that, as if to add insult to injury, an imaginary entity called ‘the user’, disempowered in the flesh, had become a handy positional asset in corporate power play. He described ‘Horror stories about what “users” do to the machines, and conversely a sort of adoption of “the user” as a holy icon or weapon in arguments’.22 Similar distortions have been working their way up and down – and shaping – the electronics food chain for decades.
Chip manufacturers must sell their products to the ‘OEMs’ (Original Equipment Manufacturers), to the OEMs’ own customers – equipment firms and government departments – and to the computer press and opinion-formers that all these people read.
Because of the complexity of the product, the sheer range of possible uses and the tiny time window of profitability, simple diffusion of knowledge through traditional scholarly and professional networks is hopelessly insufficient, so a whole new sector of ‘educational/sales’ activity has evolved in parallel with chip development itself. This runs from the provision of Software Development Kits (SDKs) and tools and courses for clients’ own developers, through expensive launch events, seminars and ‘webinars’, to a new category of what Rosenberg calls ‘technological gatekeepers’: consultants, journalists, industry analysts and forecasting firms – all of whom ‘are crucial to the diffusion of electronic technologies’. They ‘may know the domain of ostensible application, but not necessarily the electronic/digital technology that might be applied to it’.
Training has become crucial to winning and keeping customers, and all software and hardware companies devote energy and resources to it. In the professional literature, one reads (and has read for years) how such-and-such a new technology (ActiveX, ASP, C#…) is flawed in certain unfortunate respects, and bloated, but that it will probably succeed because the ‘hard-pressed IT manager’ knows he can fall back on the lavish support and training offered by Microsoft (or Oracle, or Cisco, as the case may be).
Ordinary computer users are only aware of this to the extent that they depend for technical support on other members of staff who disappear from time to time to attend Microsoft, Oracle or perhaps Cisco-accredited courses.
All of these ancillary activities, which are certainly training in some sense, are also essential parts of the sales process. The information covered is highly specific to the particular manufacturer (which is part of its appeal). Less and less training of this kind takes place in the traditional way, in Further Education or Community colleges, let alone under apprenticeship schemes – which in any case struggle to keep up with the rapid rate of change in job requirements. Computer work becomes more and more a matter of learning how to use particular software and hardware packages, and less and less a matter of mastering basic principles, giving rise to the ‘learning the same thing over and over’ phenomenon that social geographer Chris Benner noted in 1999, in connection with ‘lifelong learning’ (see Chapter 6).
In 2002, Benner published a detailed study of the way this new, constantly changing, learning and employment regime was playing out among computer workers and their employers in Silicon Valley. A major unexpected outcome was the massive expansion of recruitment firms such as Adecco and Manpower: skills were now seen as ephemeral, so employers were less and less willing to engage staff directly (with the expensive long-term commitments that entailed), hiring them instead project by project, and disposing of them again as soon as possible. Benner’s study suggests that the expansion of precarious employment throughout the economy in the 1990s, and the rise of outsourcing firms, started in Silicon Valley – to a large extent as a side-effect of Moore’s Law, and the logic of sales pressure, as the chief determinant of the course of the electronics industry. Prior to that time, recruitment agencies had been a small sector, providing mainly secretarial ‘temping’ services.23
Sales effort in the high-tech age has become a culture-wide force that embraces areas of the media, politics and academia previously assumed to be quite separate from the advertising industry. As citizens, we have never had the time, information or space we need to discuss and articulate our possible needs as technology advances; instead, visions of possible futures are thrust at us, all apparently worked out in impressive detail and ready to go (and likely to go with us or without us).
High-tech visions, and the people known as ‘visionaries’ (and even ‘evangelists’24) who produce them, have become such basic ingredients of the consumer and business media that we do not really think of them as sales effort – yet great effort and expense have been lavished on them by the electronics and computer firms and those around them.
The ‘visionary turn’ became institutionalized in 1985, with the creation of the MIT Media Lab by Nicholas Negroponte (a wealthy and well-connected professor of architecture, brother of President George W Bush’s Director of National Intelligence, and co-founder in 1993 of the influential high-tech style magazine Wired) as ‘the pre-eminent computer science laboratory for new media and a high-tech playground for investigating the human-computer interface’.25 Negroponte was able to engage major companies, eminent academics, famous artists and the military in lavishly funded, futuristic projects, ostensibly ‘inventing the future’ (the subtitle of a book about the Lab26 written in 1987 by Negroponte’s friend Stewart Brand, the founder of the ‘hippy bible’, The Whole Earth Catalog). Fred Turner’s 2006 book From Counterculture to Cyberculture27 describes how the Catalog’s carefully modulated, cool but unchallenging rebelliousness became a supremely effective sales script for big business – ultimately epitomized by Apple’s ability to present itself as an anti-corporate, ‘rebel’ organization, long after it had become one of the world’s biggest and most ruthlessly monopolistic corporations (strapline: ‘Think Different’).
The Lab made it its business to produce high-profile, eye-catching ‘demos’ of its projects and to coin exciting new terms such as ‘virtual reality’ (3D graphics), ‘telepresence’ and ‘personal digital assistant’, and it established the genre of ‘scientific visionary as showman’ now familiar from the TED conferences, which were founded by another wealthy and influential architect, Richard Saul Wurman.28 The subtext is: there is no need to wonder about the future because brilliant minds are working on it, and it will be nice. Everything you could wish for is being taken care of.
The visionary promise is an individualistic, competitive, optimistic one, which comes with an implied warning that nasty things will happen if you don’t buy into it: ‘think different, or else’. It is hard to accept that ‘the future’ is not quite the bed of roses that was promised without questioning the massive societal consensus that the visionaries appear to represent. Crazier ideas emerge from the effort to square this circle.
EMBRACING CARNAGE: FAITH IN DISRUPTION
In 1997, a professor at Harvard Business School, Clayton M Christensen, looked at the new precariousness, and the carnage taking place in the name of progress, and persuaded himself that, far from being a disaster, it was a sign of much, much better things to come. A completely new economic era had dawned. Present discomforts merely indicated that people did not yet understand the new world we had entered. The key to this world was to jettison history (because old systems are defunct in the new environment and their rules are bound to mislead us), and become more individualistic and reckless.
The pioneers of this new world, he believed, were the ‘small, aggressive startups’ that were already devouring the large, complacent organizations of yesteryear by seizing the latest technologies and rushing them to market, in a botched-together state if necessary, rather than waste precious time getting things right. Christensen coined a new name for the phenomenon in his book The Innovator’s Dilemma – ‘disruptive innovation’ – and the idea became an instant, wildfire success, turning up in magazines, books, university courses and company reports throughout the economy – and shaping business policy. Wired magazine’s editor Kevin Kelly rushed out his own bestseller New Rules for the New Economy the following year, and many others followed. The idea is still going from strength to strength. This is from a leaked New York Times management report, quoted by historian Jill Lepore in an article analyzing the Christensen phenomenon for The New Yorker in June 2014:
Disruption is a predictable pattern across many industries in which fledgling companies use new technology to offer cheaper and inferior alternatives to products sold by established players (think Toyota taking on Detroit decades ago).29
‘Disruption’ fitted perfectly with the ‘everything has changed’, ‘don’t get left behind!’ rhetoric of the high-tech industries and was avidly adopted in Silicon Valley – as George Packer found in 2013 (in his New Yorker article on Silicon Valley culture ‘Change the World’, mentioned in Chapter 1). Facebook’s HQ sported posters bearing the motto ‘Move fast and break things’.30
Lepore’s article, ‘The Disruption Machine’, sketches out the scale of Christensen’s success: ‘disruption consultants’ and ‘disruption conferences’ and even a ‘degree in disruption’ at the University of Southern California. She then subjected all of the examples of ‘disruptive innovation’ in Christensen’s book to historical analysis. None of them emerged with a shred of credibility, not even ones from the industry he claimed to know best: computer disk-drive manufacture:
Christensen argues that the incumbents in the disk-drive industry were regularly destroyed by newcomers. But today, after much consolidation, the divisions that dominate the industry are divisions that led the market in the 1980s.
To take just one example, Christensen had claimed that Seagate, the long-established manufacturer of drives, had fallen prey in the 1980s to disruptive smaller rivals who introduced 3.5-inch hard disks while Seagate was still, complacently, focusing on 5.25-inch models. Yet even as Christensen went to press, in 1997, Seagate was unshaken as the world’s largest maker of disk drives and in 2016 remains one of the two largest. Two of the ‘disruptive upstarts’ that he claimed had toppled Seagate, Micropolis and MiniScribe, had both collapsed and vanished by 1990.
Lepore found similar gaps between claim and reality in all the industries Christensen had covered, and found that he had ignored many major examples that ran counter to his thesis. In essence, his theory boiled down to ‘a set of handpicked case studies’ that supported his convictions – and proved to be just what an uneasy business elite wanted to hear. ‘Disruption by small aggressive startups’ is an example of an easily falsifiable idea taking root in a well-insulated elite and then driving policy.
‘Disruption theory’ has served as a distraction from the wholesale disruption of lives, jobs, workplaces and communities that happens when any new technology comes under the control of a self-interested elite. It is nothing new, but a more advanced version of the phenomenon Baran and Sweezy described in 1966, whereby capitalism fends off implosion by appropriating a new technology, at ever higher human and environmental cost, just as it did with steam power, electricity and the automobile in their day.
The next two chapters describe two aspects of the ‘new economy’ in some detail, to give a stronger idea of the scale of their impact and of how it arises in the capitalist environment. First (Chapter 9) we’ll look at the microchip industries that underpin the whole sector, then (Chapter 10) at the unexpectedly large impact of the data whose transmission and storage they support. In both cases, I hope it will be clear that the impact is not caused by the technologies themselves, but by their deployment in the cause of capitalist competition.
1 Richard Wilkinson & Kate Pickett, The Spirit Level, Penguin, p 223.
2 Vance Packard, The Waste Makers, D McKay Co, 1960.
3 Ralph Nader, Unsafe at Any Speed, Grossman, 1965.
4 Franklin M Fisher, Zvi Griliches & Carl Kaysen, ‘The Costs of Automobile Model Changes since 1949’, Journal of Political Economy 70, no 5, Oct 1962, pp 433-451.
5 Eugene Incerti, Andy Walker & John Purton, ‘Trends in vehicle body construction and the potential implications for the motor insurance and repair industries’, The Motor Insurance Repair Research Centre, Thatcham, June 2005; presented at the International Bodyshop Industry Symposium, Montreux, Switzerland, 2005.
6 Ibid.
7 G Sturt, The Wheelwright’s Shop, Cambridge University Press, 1930.
8 Paul Stewart et al, We Sell Our Time No More, Pluto Press, 2009.
9 JE Gordon, The New Science of Strong Materials, Princeton University Press, 1976, p 112.
10 Ibid, pp 164-5.
11 E Schatzberg, Wings of Wood, Wings of Metal, Princeton University Press, 1999; quoted by JS Small, The Analogue Alternative, Taylor & Francis, 2013, p 13.
12 PN Edwards, The Closed World, MIT Press, 1996, pp 64-66.
13 Wikipedia contributors, ‘Transistor count’, Wikipedia, nin.tl/transistorcount Accessed 9 March 2016.
14 Donald MacKenzie, Knowing Machines, MIT Press, 1996.
15 IHS, ‘Global Semiconductor Market Set for Strongest Growth in Four Years in 2014’, Dec 2014, nin.tl/semiconductorgrowth Accessed 9 March 2016.
16 This is according to venture capitalist Arthur Rock in 2003; see ‘Rock’s Law’ – nin.tl/Rockslaw
17 B Lüthje, ‘Making Moore’s Law Affordable’, Bringing Technology Back In, Max Planck Institut, Köln, 2006.
18 N Rosenberg, Inside the Black Box, Cambridge University Press, 1983, p 183.
19 Michael Barr, ‘Real men program in C’, Embedded.com, 8 Jan 2009 (retrieved 18 June 2010); see also Jim Turley, ‘Embedded Processors by the Numbers’, Embedded.com, 1999, and ‘The Two Percent Solution’, Embedded.com, 2002.
20 ‘Mobile: increasing value per handset’, ARM annual report 2008, retrieved 29 Aug 2011, nin.tl/ARMmobile
21 Steve Woolgar, ‘Configuring the user: the case of usability trials’, The Sociological Review, 38, S1, pp 58-99.
22 Woolgar, op cit.
23 Chris Benner, Work in the New Economy, ed M Castells, Blackwell, 2002.
24 The term ‘evangelist’ was deployed for marketing purposes at Apple in 1984; Guy Kawasaki, a member of the original Macintosh team, was the first to bear the title. See Robert X Cringely, Accidental Empires, Penguin, 1996, pp 217-8.
25 ‘Nicholas Negroponte’, Wikipedia, nin.tl/negroponte Accessed 1 Feb 2016.
26 Stewart Brand, The Media Lab: Inventing the Future at MIT, Viking, 1987.
27 F Turner, From Counterculture to Cyberculture, University of Chicago Press, 2006, p 99.
28 ‘TED (conference)’, Wikipedia, nin.tl/TEDWiki Accessed 9 March 2016.
29 Jill Lepore, ‘The Disruption Machine: what the gospel of innovation gets wrong’, The New Yorker, 23 June 2014.
30 G Packer, May 2013, ‘Change the World’, The New Yorker, nin.tl/packerNY Accessed 12 Dec 2013.