14 Please do not adjust your set

‘The everyday is what we never see for a first time, but
only see again’

Maurice Blanchot 1

Of all our daily habits, watching TV is the one we are most confused about. Since entering mainstream culture in the 1950s, it has been the most popular way of spending our free time. But synonyms for the television – goggle box, idiot box, plug-in drug, chewing gum for the eyes – are rarely flattering. Is watching television a harmless pleasure or a mindless addiction? The confusion arises from the dual nature of telly-watching as both a personal choice and an unthinking habit – something that we have actively decided to do, but that consists of things we all do and see over and over again. A night in with the television is taken up not just with watching programmes but with routine activities, like flipping through channels or viewing those repetitive in-between bits – trailers, adverts, announcements, theme tunes and closing credits – that take up more time in the schedules than we imagine.

Ever since broadcasters invented the art of scheduling in the 1950s, they have exploited the habit-forming nature of watching television. In those early days, watching TV was an evening ritual with its own familiar longueurs. Much time would be spent waiting for the set to warm up and hum into life, and fiddling with the aerial to get a good reception, which could be affected by anything from bad weather to car ignitions. The first onscreen continuity announcers – women in evening gowns, men in dinner jackets – would guide the viewer gently through the night’s entertainment. Since programmes often finished early, viewers also had to sit through a stock series of visual fillers, such as a turning windmill, waves crashing on rocks and a clay pot being fashioned on a potter’s wheel.2 Until 1957, there was a ‘toddler’s truce’ between 6 and 7 p.m. to allow parents to put their young children to bed, on the pretence that TV had finished for the night – a collective white lie as zealously maintained as the existence of Santa Claus.3 Television even told the grownups when to go to bed. Programming stopped between 10.30 and 11 p.m. with evening prayers, the National Anthem and the instruction to remember to turn off your set. Perhaps it was the novelty of these new rituals that kept people watching; with only one channel, the range of programmes was rather limited. In the week leading up to the first night of ITV in September 1955, primetime BBC highlights included a programme about the presenter Wilfred Pickles’ silver wedding anniversary, the world speedway championships, the searchlight tattoo from Woolwich Stadium, ‘Puzzle Corner’ and a documentary about Hawick knitwear.4

The arrival of commercial television created a new ritual: sitting through the ad breaks. Opponents of the new commercials saw them as a virulent foreign invasion, specifically an American one. The BBC’s founding director-general, Lord Reith, compared the arrival into Britain of ‘sponsored television’ with previous influxes of smallpox and the bubonic plague.5 The Postmaster General had to reassure the public that Hamlet would not ‘interrupt his soliloquy to tell us the brand of toothpaste popular at Elsinore’;6 British television commercials would be ‘spot’ ads, slotted between the programmes so they could not influence them. The ad breaks could not exceed an average of six minutes an hour over the course of the day, with no more than eight minutes at peak time.7 When the first commercials were shown on ITV on 22 September 1955, the News Chronicle commented that these subdued ads for mundane items like toothpaste, drinking chocolate and margarine were ‘muffled, as if making their entrance like well-mannered tradesmen at the side door’.8

The 1954 Television Act had ruled that adverts could be inserted not only at the beginning and end of programmes but also in ‘natural breaks’ in the middle of them. Of course, the ads in the middle of programmes were the hardest to miss and the most coveted by advertisers, but no one could agree what constituted a ‘natural’ break. The Labour MP Christopher Mayhew thought that adverts were being inserted willy-nilly, and introduced a bill to ban breaks that interrupted programmes. The critic Milton Shulman disagreed, arguing that the normal viewer could only concentrate for about fifteen minutes, after which ‘he wants to put the cat out, brew some tea, go to the bathroom – without missing anything.’ Shulman pointed out that ending programmes on the hour or half-hour was just as arbitrary: ‘Has Mr Mayhew never heard the BBC Brains Trust arbitrarily interrupted in the middle of a fascinating discussion to make way for something like Sooty?’9 This argument won the day and Mayhew’s bill failed.

In fact, there has never been any serious opposition to adverts from viewers. Some find them irritating, others find them entertaining – but a large majority has always considered them a good way of funding programmes. A 1967 survey discovered that more than half of viewers continued to watch during the ad breaks – although, among the ‘best educated groups’, a small majority did not.10 Researchers soon found that viewers had outstanding recall of the commercials.11 This was the age of advertising jingles, those short, simple tunes designed to drill themselves into the collective unconscious like grown-up nursery rhymes. As one advertising executive said at the time, ‘If you’ve nothing to say, then sing it.’12 One tune, composed by the classic jingle-writer Howard ‘Boogie’ Barnes (‘Murray mints, Murray mints, too good to hurry mints’) became a sort of unofficial national anthem, chanted in school playgrounds and whistled on building sites.

The advent of commercial television produced another habitual accompaniment to watching the box: the channel ‘identification’. ITV was organised regionally, and its separate companies had to identify their own ‘channel’ (a word rarely used in Britain until the mid-1950s, except by engineers in its original sense of a band of frequencies used for a radio or television signal). Almost all of them came up with a musical and visual fanfare similar to those used by Hollywood studios at the start of films, like the seminal one for 20th Century Fox, with searchlights shining on the company logo, set in stone, accompanied by a crescendo of drums, horns and fortissimo strings. The fanfare for Anglia Television, for example, showed a revolving silver hunting trophy of a knight on horseback (seen by the company chairman in a New Bond Street jeweller’s shop window the day after Anglia had won the franchise13) over a soundtrack of Handel’s Water Music. Non-Anglia viewers mainly saw the revolving knight before the syndicated quiz show Sale of the Century, when the fanfare ended with the anti-climactic declaration, ‘And now … from Norwich.’

These Hollywood-style fanfares were slightly jarring because waiting for a programme to start was now a banal routine, not at all like the light-dimming, audience-hushing start to a film in the cinema. When few people had TV sets, watching them had been a social event. The Manchester Evening Chronicle claimed in 1949 that putting an aerial outside your house was a guarantee of a social life. It found one man who had installed eight tip-up seats in his drawing room to create a mini-cinema for his neighbours.14 Where houses had both a living room and a front ‘parlour’ reserved for ‘best’, the TV was often put in the parlour, on the assumption that it would be watched on special occasions.15 Early press adverts for television sets show families dressed up in their smartest clothes to watch TV, with amazed expressions on their faces.16 But it did not take long for the television to become just another item of living-room furniture. TV sets were sold in domesticated wooden or Bakelite cabinets, and people put knick-knacks and mementos on top of them. A television set was the most complex piece of technology in people’s homes, but it was also one of the easiest to use, and its operation soon became routine.

According to one social historian, television was the ‘social solvent’ of the 1950s in the way that the war had been in the 1940s. It ‘put the nation into a sort of trance, soft and shallow, from which it presently emerged to find itself living its dream’.17 Television-watching became normal, perhaps rather ‘common’. Before about 1950, having a television aerial signalled your social superiority to the rest of the street. But by 1957, the working classes watched telly most of all, 45 per cent of them for an average of four hours every night, compared with an average of 1 hour 41 minutes for the population as a whole.18 As the TV insinuated itself into people’s daily routines, some critics began to evoke a prelapsarian era before its arrival. The novelist Doris Lessing wrote later in her autobiography that ‘television had arrived and sat like a toad in the corner of the kitchen. Soon the big kitchen table had been pushed along the wall, chairs were installed in a semi-circle and, on their chair arms, the swivelling supper trays. It was the end of an exuberant verbal culture.’19 Doctors identified new ailments like ‘TV Neck’, ‘TV Crouch’, ‘TV Dyspepsia’ and ‘TV Stutter’. Dentists warned that children could get buck teeth from watching TV lying on the floor with their chins in their hands.20

On his return visit to Bolton in 1960 (Mass-Observation’s large-scale survey of the town having ended twenty years previously), Tom Harrisson refused to join in the moral panic. He wrote that, for the man who had just returned to Britain from several years in Borneo – which, as a matter of fact, he had – one of the most visible changes was the number of television antennae on roofs, and the growth of shops selling televisions, with signs in the window saying ‘Slot TV, 96 hours for 1 shilling’ and ‘Generous allowance on your old set against advance payments’. But he did not believe that television had destroyed a rich communal life:

What did people do on a winter’s day or a Sunday afternoon of rain before TV – especially if they hadn’t much money? The short answer for millions of people then: ‘NOTHING MUCH’. They ‘messed about’ or did odd things (e.g. read pulp mags and Westerns, argued, titivated each other), which can hardly be said to be ‘worse’ than looking into polypact television.21

What the telly-watcher did seem to like was habit and routine. Kenneth Adam, then the BBC’s director of television, spoke in 1964 about the most common cause of outrage among viewers: not sex, violence or indeed anything to do with programme content, but late running and rescheduling. Adam claimed that a sizeable minority of TV viewers were ‘addicts’. These were the people who wrote ‘somewhat illiterately’ to complain that TV closed down too early for the night. They made up the several hundred thousand viewers who, during a recent television strike, had turned on to watch the test card.22

Test cards were there to test flicker, focus and proportions when there were no programmes – a frequent occurrence, since the government limited the number of hours that channels could broadcast. The number had gone up throughout the 1950s, but even in 1960 it was only sixty hours a week.23 When colour TV arrived in Britain in 1967, the BBC needed a new test card that would incorporate flesh tones. This was ‘Test Card F’, the famous image of a BBC technician’s daughter, Carole Hersee, playing noughts and crosses on a blackboard easel (so that one of the crosses could mark the centre of the card). Even after the government ended all restrictions on programme hours in 1972, there were still many gaps between the programmes, leaving ample time for Test Card F, accompanied by cheesy lounge music.

The arrival of colour TV also revived two themes associated with television ever since ITV arrived to challenge the BBC’s middle-class hegemony: anti-Americanism and class distinction. American television had used colour since 1953, and throughout the 1960s there were long discussions, in Parliament and elsewhere, about whether to use the American NTSC (National Television System Committee) colour system or to wait for more advanced European technology offering subtler tones and a wider range.24 When the British government eventually adopted the European PAL (Phase Alternating Line) system, there was much patriotic celebration about the quality of the colour on British TV when compared with the primitive hues of American TV, which according to the Spectator made ‘human complexions look like accidents in the embalmer’s workshop’.25 But in 1969 another critic, George Melly, attacked this ‘self-induced hysteria’ about colour. He prophesied a ‘small resistance movement of middle-class intellectuals, the children of those who in the early fifties wouldn’t have the telly at all’ who would ‘equate black and white with high seriousness’.26 This guerrilla organisation never quite materialised, but the monochrome set did become something of an unofficial symbol of middle-class asceticism. By 1972, despite the much greater cost of the colour sets and the colour TV licence, there were more of them in working-class households (52 per cent) than in middle-class ones.27

As well as the all-colour Test Card F, one of the familiar sights on television in the 1970s, the great era of strikes and shutdowns, was a screen apologising for the interruption in programming. Sometimes this was due to events outside of television’s control. In the winter of 1973–4, during the last months of Edward Heath’s government, the shortage of power created by the miners’ strike and a worldwide oil crisis conspired to end television at 10.30 p.m. every night, one of many events at the time that must have seemed, to the middle-aged and older, like a return to wartime privations. More usually, the loss of television resulted from union strikes over extended programming hours or extra pay for transmitting colour signals. Like the test card, the standard apology caption had a certain hypnotic allure, and was rumoured to get good ratings.28

Test cards and apology captions were seen much less in the 1980s. Teletext services began to fill in more of the gaps between the programmes, and there were fewer of these gaps anyway, as daytime TV arrived in 1986 and all-night television a year later. This was the period when academics began to study systematically what people actually did when they watched TV, either by placing cameras on top of televisions or conducting questionnaires and surveys. What they discovered, perhaps because TV-watching had by now become so routine, was that viewers rarely gave the set their undivided attention. While the telly was on (which was most of the time), people also ate dinner, knitted, argued, listened to music, did their homework or hoovered the carpet.29

In 1988, the British Film Institute added to this ongoing television research by asking volunteers to help them document ‘the impact of television on the life and culture of the nation on a single day: November 1’. The BFI distributed 620,000 free diary leaflets through bookshops, newsagents and supermarkets, with volunteers asked to record what they watched and how they felt about it.30 Some of the 22,000 diary entries submitted were collected together in a book, One Day in the Life of Television. It was clear from the entries that the diarists constantly discriminated between what they saw as high-quality and tacky television, and felt guilty about watching the latter.31 In a sense, they had internalised the historical anxiety that watching television was a time-wasting activity – a view that has also influenced the unfairly disdainful response to ‘media studies’, which is supposed to waste academic time investigating it. It was also clear that, although the diarists’ attention to television was peripheral and their attitudes ambivalent, they felt that it formed part of the shared history and daily conversation of the nation. As the TV critic Sean Day-Lewis recognised in an elegiac conclusion to the book, the expansion of cheap channels threatened this sense of national community.32 Thatcherism had always been suspicious of the BBC–ITV duopoly, and especially the BBC’s licence-fee monopoly, and the 1990 Broadcasting Act consolidated the arrival of cable and satellite TV by making outside commercial intervention into the industry much easier.

The gadget that cemented these changes was the remote control. Primitive remotes had been around in America since 1950. The first model, the Zenith Corporation’s hand-grenade-shaped ‘Lazy Bones Station Selector’, was a motor-driven tuner attached to the set via a cable. It was envisaged less as a channel-changer than as a device to switch off the set as viewers fell asleep, or to mute the adverts, which the Zenith company president loathed.33 Later models, marketed like boys’ toys as ‘Flash-Guns’ and ‘Space-Commanders’, used light beams or ultrasonic waves. But it was not until the introduction of infrared remotes in the mid-1970s that they became more common in the UK. From the late 1980s onwards, the rising number of channels helped to universalise the remote: when there were only three or four channels, no one realised they needed a zapper. Meanwhile, academic surveys reinforced the popular folklore that men were remote-control control freaks. Many men used the remote ‘obsessively, channel flicking across programmes when their wives were trying to watch something else’.34 Later studies qualified these early findings, suggesting that men rarely exercised a veto on what to watch. The remote might be the ‘symbolic possession of the father’, but the power he gained was ‘relatively illusory’.35

The remote is a semi-invisible technology. It only works in tandem with other appliances like televisions and DVD players, and we don’t give it much thought unless we are looking for it down the back of the sofa or moaning about other members of the family hogging it. Although mainly designed for flipping through channels, its most significant effect has been to change the nature of scheduling. In the schedule in today’s newspaper, for example, I read that the UKTV History channel is showing seventeen hours of continuous programming featuring the late broadcaster and ex-steeplejack Fred Dibnah, from Fred Dibnah’s World of Steam, Steel and Stone to Fred Dibnah’s The Building of Britain. If this conjures up the image of an army of Fred Dibnah fanatics settling themselves down on the sofa for an all-day viewing marathon, the truth is more prosaic: this channel has simply given up the traditional role of structuring a varied evening’s entertainment for the viewer. The schedule is contained not in the TV listings but in the zapper, which allows you to select your evening’s entertainment by scrolling down menus. Digital recording devices like Sky Plus allow you to watch programmes when you like, draw up wish lists and even zap through the ads. Viewers live in constant anticipation of some other programme that is better than the one they are watching.

Broadcasters and advertisers, of course, want to stop you zapping. In a classic diversionary tactic, like distracting a baby with a squeaky toy, the noughties innovation of interactive TV is a way of deterring viewers from flipping channels by keeping them preoccupied with the remote (‘press your red button now’). Commercial channels now often leave out the ad break at the end of a programme, carrying you straight into the next one before you can reach for the channel-changer. In order to dissuade you from zapping out of credit sequences, broadcasters employ the ‘squeeze and tease’, a method pioneered by the American channel NBC, whose audience research found that a quarter of viewers switched channels during the credits. This method squashes the closing credits into part of the frame (which is bad news for the vision mixer, whose name is now too small to read) and uses the rest of it for ‘promo-tainment’, previewing next week’s episode or advertising the accompanying DVD.36

As the transitions between programmes have become more important, the notion of the continuity announcer as master of ceremonies has all but disappeared, replaced by a new preoccupation with ‘television brand identity’. Announcers play second fiddle to the visual and sound interludes known as ‘idents’ which fill in the gaps between programmes. The inventor of the modern ident is Martin Lambie-Nairn, whose design company devised the famous ‘flying bricks’ logo for Channel 4 in 1982. In 1987, despite a spirited ‘Save Our Knight’ campaign in a Norwich newspaper, Lambie-Nairn replaced the Anglia Knight with a computerised billowing pennant.37 Then, in 1991, the BBC asked him to help them alter the focus-group view of the typical BBC2 viewer as ‘a Volvo-driving schoolteacher with leather patches on the sleeves of his tweed jacket’.38 Lambie-Nairn’s new idents were variations on the number ‘2’: a metallic 2 splashed by paint, a 2 dropping into flour (or some other unspecified white powder) and the outline of a 2 underneath shimmering silk. Eventually, the 2 acquired an animated personality: it became a fluffy, somersaulting toy, a remote-control car, and a plane gliding across a wooden floor. The 2 began to receive fan mail, and focus groups now identified the channel not as snobbishly highbrow but as ‘sophisticated, witty and stylish’.39

Soon all the channels were using idents, many of them Lambie-Nairn’s work. BBC1 had basketball players in wheelchairs, Haka-dancing rugby players and acrobats suspended from ceilings; Channel 4 had hedges, skyscrapers and pylons magically aligning to form the number 4; ITV1 had, for more obscure reasons, people hugging trees, examining their beer bellies in front of mirrors, and falling asleep on trains. Modern idents were like visual poems, accompanied by ambient music and sound effects rather than old-fashioned fanfares. Their Zen-like calm was a creative response to channel multiplication, founded on the hope that people would tire of this dizzying choice and simply stick with one channel, with its familiar ident and the reassuring voice of the announcer saying, ‘You’re watching BBC1.’ Some people complain about how much money they cost, particularly to BBC licence-payers, but idents do their job: like the jingles and fanfares of the 1950s, they have become part of the everyday, repetitive experience of watching telly.

In George Orwell’s 1984 (1949), written just as the BBC was resuming its post-war broadcasting, Winston Smith’s life is ruled by a two-way television called the telescreen, which both broadcasts programmes and provides a live feed to the authorities. Everyone has to ‘live, from habit that became instinct – in the assumption that every sound you made was overheard, and, except in darkness, every movement scrutinised’.40 In the early days of television, critics attributed this kind of Orwellian influence to the box in the corner. Some people thought that the people on TV could see into their living rooms. One woman wrote to the newsreader Robert Dougall, ‘I wonder what you think of me in my cosy chair by the set?’ Another wrote to him to ask if she was sitting too near to the television.41 After a while, people got the hang of the technology, and complaints about TV’s pernicious effects began to receive the stock answer: ‘If you don’t like it, there is always the off switch’ – a switch soon joined by buttons to change channels, mute and fast-forward. But Orwell’s vision still transfixed us. Viewers watching Channel 4 in the early 2000s would come across a brief, almost subliminal ident (called, appropriately enough, a blink) of a large eye: a silent reminder that the next series of Big Brother was imminent. Big Brother is not watching you. You are watching Big Brother, and he is terrified you are going to change channels.

Big Brother could learn a lot from the economist Albert Hirschman’s classic book, Exit, Voice, and Loyalty, which outlines two contrasting ways of responding to dissatisfaction in companies and organisations. The first, exit, involves a customer switching to another brand or an employee leaving an organisation. This strategy, traditionally favoured by classical economists, is straightforward and impersonal, like the anonymous choices you make in the aisles of a supermarket. But while exit helps individuals in the short term, it may not alter the underlying reasons for their dissatisfaction. The second strategy, voice, allows employees or customers to express their views and try to produce change from within – a messier and sometimes painful process but one that may address these deeper reasons.42

In broadcasting, the contrast between exit and voice is stark. BBC Radio 4 listeners, for example, rely on voice. They are loyal whingers: if they do not like an announcer’s accent or a new theme tune, they respond with righteous indignation as dedicated listeners to the station, and exercise real influence as a result. The television-watcher, by contrast, relies on the threat of exit – more so than ever before, thanks to the endless multiplication of channels and the remote control. We all have a choice, but few TV-watchers use their voice. Broadcasters worry all the time about a hypothetical viewer, who is young, advertiser-friendly and in tune with the zeitgeist. The real, flesh-and-blood viewers are ignored. And so the viewers activate the ultimate exit strategy: they turn over.