In 1906 a group of Egyptians, headed by Prince Ahmad Fuad, issued a manifesto to campaign for the establishment by public subscription of an Egyptian university ‘to create a body of teaching similar to that of the universities of Europe and adapted to the needs of the country.’ The appeal was successful, and the university, or in the first phase an evening school, was opened two years later with a faculty of two Egyptian and three European professors. This plan was necessary because the college-mosque of al-Azhar at Cairo, once the principal school in the Muslim world, had sunk in reputation as it refused to update and adapt its mediaeval approach. One effect of this was that in Egypt and Syria there had been no university, in the modern sense, throughout the nineteenth century.1
China had just four universities in 1900; Japan had two – a third would be founded in 1909; Iran had only a series of specialist colleges (the Teheran School of Political Science was founded in 1900); there was one college in Beirut; and in Turkey – still a major power until World War I – the University of Istanbul had been founded in 1871 as the Dar-al-funoun (House of Learning), only to be closed soon afterwards and not reopened until 1900. In Africa south of the Sahara there were four: one in the Cape, the Grey University College at Bloemfontein, the Rhodes University College at Grahamstown, and the Natal University College. Australia also had four, New Zealand one. In India, the universities of Calcutta, Bombay, and Madras were founded in 1857, and those of Allahabad and Punjab between 1857 and 1887. But no more were created until 1919.2 In Russia there were ten state-funded universities at the beginning of the century, plus one in Finland (Finland was technically autonomous), and one private university in Moscow.
If the paucity of universities characterised intellectual life outside the West, the chief feature in the United States was the tussle between those who preferred British-style universities and those for whom the German style offered more. To begin with, most American colleges had been founded on British lines. Harvard, the first institution of higher learning within the United States, began as a Puritan college in 1636. More than thirty members of the Massachusetts Bay Colony were graduates of Emmanuel College, Cambridge, and so the college they established near Boston naturally followed the Emmanuel pattern. Equally influential was the Scottish model, in particular Aberdeen.3 Scottish universities were nonresidential, democratic rather than religious, and governed by local dignitaries – a forerunner of boards of trustees. Until the twentieth century, however, America’s institutions of higher learning were really colleges – devoted to teaching – rather than universities proper, concerned with the advancement of knowledge. Only Johns Hopkins in Baltimore (founded in 1876) and Clark (1888) came into this category, and both were soon forced to add undergraduate schools.4
The man who first conceived the modern university as we know it was Charles Eliot, a chemistry professor at Massachusetts Institute of Technology who in 1869, at the age of only thirty-five, was appointed president of Harvard, where he had been an undergraduate. When Eliot arrived, Harvard had 1,050 students and fifty-nine members of the faculty. In 1909, when he retired, there were four times as many students and the faculty had grown tenfold. But Eliot was concerned with more than size: ‘He killed and buried the limited arts college curriculum which he had inherited. He built up the professional schools and made them an integral part of the university. Finally, he promoted graduate education and thus established a model which practically all other American universities with graduate ambitions have followed.’5
Above all, Eliot followed the system of higher education in the German-speaking lands, the system that gave the world Max Planck, Max Weber, Richard Strauss, Sigmund Freud, and Albert Einstein. The preeminence of German universities in the late nineteenth century dated back to the Battle of Jena in 1806, after which Napoleon finally reached Berlin. His arrival there forced the inflexible Prussians to change. Intellectually, Johann Fichte, Christian Wolff, and Immanuel Kant were the significant figures, freeing German scholarship from its stultifying reliance on theology. As a result, German scholars acquired a clear advantage over their European counterparts in philosophy, philology, and the physical sciences. It was in Germany, for example, that physics, chemistry, and geology were first regarded in universities as equal to the humanities. Countless Americans, and distinguished Britons such as Matthew Arnold and Thomas Huxley, all visited Germany and praised what was happening in its universities.6
From Eliot’s time onward, the American universities set out to emulate the German system, particularly in the area of research. This German example, though impressive in advancing knowledge and in producing new technological processes for industry, nevertheless sabotaged the ‘collegiate way of living’ and the close personal relations between undergraduates and faculty that had been a major feature of American higher education until the adoption of the German approach. The German system was chiefly responsible for what William James called ‘the Ph.D. octopus’: Yale awarded the first Ph.D. west of the Atlantic in 1861; by 1900 well over three hundred were being granted every year.7
The price for following Germany’s lead was a total break with the British collegiate system. At many universities, housing for students disappeared entirely, as did communal eating. At Harvard in the 1880s the German system was followed so slavishly that attendance at classes was no longer required – all that counted was performance in the examinations. Then a reaction set in. Chicago was first, building seven dormitories by 1900 ‘in spite of the prejudice against them at the time in the [mid-] West on the ground that they were medieval, British and autocratic.’ Yale and Princeton soon adopted a similar approach. Harvard reorganised after the English housing model in the 1920s.8
Since American universities have been the forcing ground of so much of what will be considered later in this book, their history is relevant in itself. But the battle for the soul of Harvard, Chicago, Yale, and the other great institutions of learning in America is relevant in another way, too. The amalgamation of German and British best practices was a sensible move, a pragmatic response to the situation in which American universities found themselves at the beginning of the century. And pragmatism was a particularly strong strain of thought in America. The United States was not hung up on European dogma or ideology. It had its own ‘frontier mentality’; it had – and exploited – the opportunity to cherry-pick what was best in the old world, and eschew the rest. Partly as a result of that, it is noticeable that the matters considered in this chapter – skyscrapers, the Ashcan school of painting, flight and film – were all, in marked contrast with aestheticism, psychoanalysis, the élan vital or abstraction, fiercely practical developments, immediately and hardheadedly useful responses to the evolving world at the beginning of the century.
The founder of America’s pragmatic school of thought was Charles Sanders Peirce, a philosopher of the 1870s, but his ideas were updated and made popular by William James in 1906–7. William and his younger brother Henry, the novelist, came from a wealthy New York family; their father, Henry James Sr., was a writer of ‘mystical and amorphous philosophic tracts.’9 William James’s debt to Peirce was made plain in the title he gave to a series of lectures delivered in Boston in 1906 and published the following year: Pragmatism: A New Name for Some Old Ways of Thinking. The idea behind pragmatism was to develop a philosophy shorn of idealistic dogma and subject to the rigorous empirical standards being developed in the physical sciences. What James added to Peirce’s ideas was the notion that philosophy should be accessible to everyone; it was a fact of life, he thought, that everyone liked to have what they called a philosophy, a way of seeing and understanding the world, and his lectures (eight of them) were intended to help.
James’s approach signalled another great divide in twentieth-century philosophy, in addition to the rift between the continental school of Franz Brentano, Edmund Husserl, and Henri Bergson, and the analytic school of Bertrand Russell, Ludwig Wittgenstein, and what would become the Vienna Circle. Throughout the century, there were those philosophers who drew their concepts from ideal situations: they tried to fashion a worldview and a code of conduct in thought and behaviour that derived from a theoretical, ‘clear’ or ‘pure’ situation where equality, say, or freedom was assumed as a given, and a system constructed hypothetically around that. In the opposite camp were those philosophers who started from the world as it was, with all its untidiness, inequalities, and injustices. James was firmly in the latter camp.
He began by trying to explain this divide, proposing that there are two very different basic forms of ‘intellectual temperament,’ what he called the ‘tough-’ and ‘tender-minded.’ He did not actually say that he thought these temperaments were genetically endowed – 1907 was a bit early for anyone to use such a term – but his choice of the word temperament clearly hints at such a view. He thought that the people of one temperament invariably had a low opinion of the other and that a clash between the two was inevitable. In his first lecture he characterised them as follows:
Tender-minded | Tough-minded
Rationalistic (going by principle) | Empiricist (going by facts)
Idealistic | Materialistic
Optimistic | Pessimistic
Religious | Irreligious
Free-willist | Fatalistic
Monistic | Pluralistic
Dogmatic | Sceptical
One of his main reasons for highlighting this division was to draw attention to how the world was changing: ‘Never were as many men of a decidedly empiricist proclivity in existence as there are at the present day. Our children, one may say, are almost born scientific.’10
Nevertheless, this did not make James a scientific atheist; in fact it led him to pragmatism (he, after all, had published an important book, The Varieties of Religious Experience, in 1902).11 He thought that philosophy should above all be practical, and here he acknowledged his debt to Peirce. Beliefs, Peirce had said, ‘are really rules for action.’ James elaborated on this theme, concluding that ‘the whole function of philosophy ought to be to find out what definite difference it will make to you and me, at definite instants of our life, if this world-formula or that world-formula be the true one…. A pragmatist turns his back resolutely and once for all upon a lot of inveterate habits dear to professional philosophers. He turns away from abstraction and insufficiency, from verbal solutions, from bad a priori reasons, from fixed principles, closed systems, and pretended absolutes and origins. He turns towards concreteness and adequacy, towards facts, towards action, and towards power.’12 Metaphysics, which James regarded as primitive, was too attached to the big words – ‘God,’ ‘Matter,’ ‘the Absolute.’ But these, he said, were only worth dwelling on insofar as they had what he called ‘practical cash value.’ What difference did they make to the conduct of life? Whatever it is that makes a practical difference to the way we lead our lives, James was prepared to call ‘truth.’ Truth, he said, is not absolute: there are many truths, and they are true only so long as they are practically useful. That truth is beautiful does not make it eternal; truth is good because, by definition, it makes a practical difference. James used his approach to confront a number of metaphysical problems, of which we need consider only one to show how his arguments worked: Is there such a thing as the soul, and what is its relationship to consciousness? Philosophers in the past had proposed a ‘soul-substance’ to account for certain kinds of intuitive experience, James wrote, such as the feeling that one has lived before within a different identity. But if you take away consciousness, is it practical to hang on to ‘soul’? Can a soul be said to exist without consciousness? No, he said. Therefore, why bother to concern oneself with it? James was a convinced Darwinist; evolution, he thought, was essentially a pragmatic approach to the universe – that is what adaptations, and species, are.13
America’s third pragmatic philosopher, after Peirce and James, was John Dewey. A professor in Chicago, Dewey boasted a Vermont drawl, rimless eyeglasses, and a complete lack of fashion sense. In some ways he was the most successful pragmatist of all. Like James he believed that everyone has his own philosophy, his own set of beliefs, and that such philosophy should help people to lead happier and more productive lives. His own life was particularly productive: through newspaper articles, popular books, and a number of debates conducted with other philosophers, such as Bertrand Russell or Arthur Lovejoy, author of The Great Chain of Being, Dewey became known to the general public as few philosophers are.14 Like James, Dewey was a convinced Darwinist, someone who believed that science and the scientific approach needed to be incorporated into other areas of life. In particular, he believed that the discoveries of science should be adapted to the education of children. For Dewey, the start of the twentieth century was an age of ‘democracy, science and industrialism,’ and this, he argued, had profound consequences for education. At that time, attitudes to children were changing fast. In 1909 the Swedish feminist Ellen Key published her book The Century of the Child, which reflected the general view that the child had been rediscovered – rediscovered in the sense that there was a new joy in the possibilities of childhood and in the realisation that children were different from adults and from one another.15 This seems no more than common sense to us, but in the nineteenth century, before the victory over a heavy rate of child mortality, when families were much larger and many children died, there was not – there could not be – the same investment in children, in time, in education, in emotion, as there was later. Dewey saw that this had significant consequences for teaching. Hitherto schooling, even in America, which was in general more indulgent to children than Europe, had been dominated by the rigid authority of the teacher, who had a concept of what an educated person should be and whose main aim was to convey to his or her pupils the idea that knowledge was the ‘contemplation of fixed verities.’16
Dewey was one of the leaders of a movement that changed such thinking, in two directions. The traditional idea of education, he saw, stemmed from a leisured and aristocratic society, the type of society that was disappearing fast in the European democracies and had never existed in America. Education now had to meet the needs of democracy. Second, and no less important, education had to reflect the fact that children were very different from one another in abilities and interests. For children to make the best contribution to society they were capable of, education should be less about ‘drumming in’ hard facts that the teacher thought necessary and more about drawing out what the individual child was capable of. In other words, pragmatism applied to education.
Dewey’s enthusiasm for science was reflected in the name he gave to the ‘Laboratory School’ that he set up in 1896.17 Motivated partly by the ideas of Johann Pestalozzi, a pious Swiss educator, the German philosopher Friedrich Fröbel, and the American child psychologist G. Stanley Hall, the institution operated on the principle that individuality had both negative and positive consequences for each child. In the first place, the child’s natural abilities set limits to what it was capable of. More positively, the interests and qualities within the child had to be discovered in order to see where ‘growth’ was possible. Growth was an important concept for the ‘child-centred’ apostles of the ‘new education’ at the beginning of the century. Dewey believed that since antiquity society had been divided into leisured and aristocratic classes, the custodians of knowledge, and the working classes, engaged in work and practical knowledge. This separation, he believed, was fatal, especially in a democracy. Education along class lines must be rejected, and inherited notions of learning discarded as unsuited to democracy, industrialism, and the age of science.18
The ideas of Dewey, along with those of Freud, were undoubtedly influential in attaching far more importance to childhood than before. The notion of personal growth and the drawing back of traditional, authoritarian conceptions of what knowledge is and what education should seek to do were liberating ideas for many people. In America, with its many immigrant groups and wide geographical spread, the new education helped to create many individualists. At the same time, the ideas of the ‘growth movement’ always risked being taken too far, with children left to their own devices too much. In some schools where teachers believed that ‘no child should ever know failure’ examinations and grades were abolished.19 This lack of structure ultimately backfired, producing children who were more conformist precisely because they lacked hard knowledge or the independent judgement that the occasional failure helped to teach them. Liberating children from parental ‘domination’ was, without question, a form of freedom. But later in the century it would bring its own set of problems.
It is a cliché to describe the university as an ivory tower, a retreat from the hurly-burly of what many people like to call the ‘real world,’ where professors (James at Harvard, Dewey at Chicago, or Bergson at the Collège de France) can spend their hours contemplating fundamental philosophical concerns. It therefore makes a nice irony to consider next a very pragmatic idea, which was introduced at Harvard in 1908. This was the Harvard Graduate School of Business Administration. Note that it was a graduate school. Training for a career in business had been provided by other American universities since the 1880s, but always as undergraduate study. The Harvard school actually began as an idea for an administrative college, training diplomats and civil servants. However, the stock market panic of 1907 showed a need for better-trained businessmen.
The Graduate School of Business Administration opened in October 1908 with fifty-nine candidates for the new degree of Master of Business Administration (M.B.A.).20 At the time there was conflict not only over what was taught but how it was to be taught. Accountancy, transportation, insurance, and banking were covered by other institutions, so Harvard evolved its own definition of business: ‘Business is making things to sell, at a profit, decently.’ Two basic activities were identified by this definition: manufacturing, the act of production; and merchandising or marketing, the act of distribution. Since there were no readily available textbooks on these matters, however, businessmen and their firms were spotlighted by the professors, thus evolving what would become Harvard’s famous system of case studies. In addition to manufacturing and distribution, a course was also offered for the study of Frederick Winslow Taylor’s Principles of Scientific Management.21 Taylor, an engineer by training, embraced the view, typified by a speech that President Theodore Roosevelt had made in the White House, that many aspects of American life were inefficient, a form of waste. For Taylor, the management of companies needed to be put on a more ‘scientific’ basis – he was intent on showing that management was a science, and to illustrate his case he had investigated, and improved, efficiency in a large number of companies. For example, research had discovered, he said, that the average man shifts far more coal or sand (or whatever substance) with a shovel that holds 21 pounds rather than, say, 24 pounds or 18 pounds. With the heavier shovel, the man gets tired more quickly from the weight. With the lighter shovel he gets tired more quickly from having to work faster. With a 21-pound shovel, the man can keep going longer, with fewer breaks. Taylor devised new strategies for many businesses, resulting, he said, in higher wages for the workers and higher profits for the company. In the case of pig-iron handling, for example, workers increased their wages from $1.15 a day to $1.85, an increase of 60 percent, while average production went up from 12.5 tons a day to 47 tons, nearly a fourfold increase. As a result, he said, everyone was satisfied.22 The final elements of the Harvard curriculum were research, by the faculty, shoe retailing being the first business looked into, and employment experience, when the students spent time with firms during the long vacation. Both elements proved successful. Business education at Harvard thus became a mixture of case study, as was practised in the law department, and a ‘clinical’ approach, as was pursued in the medical school, with research thrown in. The approach eventually became famous, with many imitators. The fifty-nine candidates for the M.B.A. in 1908 grew to 872 by the time of the next stock market crash, in 1929, and included graduates from fourteen foreign countries. The school’s publication, the Harvard Business Review, rolled off the presses for the first time in 1922, its editorial aim being to demonstrate the relation between fundamental economic theory and the everyday experience and problems of the executive in business – the ultimate exercise in pragmatism.23
What was happening at Harvard, in other business schools, and in business itself was one aspect of what Richard Hofstadter has identified as ‘the practical culture’ of America. To business, he added farming, the American labor movement (a much more practical, less ideological form of socialism than the labor movements of Europe), the tradition of the self-made man, and even religion.24 Hofstadter wisely points out that Christianity in many parts of the United States is entirely practical in nature. He takes as his text a quotation from the theologian Reinhold Niebuhr, that a strain in American theology ‘tends to define religion in terms of adjustment to divine reality for the sake of gaining power rather than in terms of revelation which subjects the recipient to the criticism of that which is revealed.’25 He also emphasises how many theological movements use ‘spiritual technology’ to achieve their ends: ‘One … writer tells us that … “the body is … a receiving set for the catching of messages from the Broadcasting Station of God” and that “the greatest of Engineers … is your silent partner.” ’26 In the practical culture it is only natural for even God to be a businessman.
The intersection in New York’s Manhattan of Broadway and Twenty-third Street has always been a busy crossroads. Broadway cuts through the cross street at a sharp angle, forming on the north side a small triangle of land quite distinctive from the monumental rectangular ‘blocks’ so typical of New York. In 1902 the architect Daniel Burnham used this unusual sliver of ground to create what became an icon of the city, a building as distinctive and as beautiful now as it was on the day it opened. The narrow wedge structure became known – affectionately – as the Flatiron Building, on account of its shape (its sharp point was rounded). But shape was not the only reason for its fame: the Flatiron was 285 feet – twenty-one storeys – high, and New York’s first skyscraper.27
Buildings are the most candid form of art, and the skyscraper is the most pragmatic response to the huge, crowded cities that were formed in the late nineteenth century, where space was at a premium, particularly in Manhattan, which is built on a narrow slice of an island.28 Completely new, always striking, on occasions beautiful, there is no image that symbolised the early twentieth century like the skyscraper. Some will dispute that the Flatiron was the first such building. In the nineteenth century there were buildings twelve, fifteen, or even nineteen storeys high. George Post’s Pulitzer Building on Park Row, built in 1892, was one of them, but the Flatiron Building was the first to rule the skyline. It immediately became a focus for artists and photographers. Edward Steichen, one of the great early American photographers, who with Alfred Stieglitz ran one of New York’s first modern art galleries (and introduced Cézanne to America), portrayed the Flatiron Building as rising out of the misty haze, almost a part of the natural landscape. His photographs of it showed diminutive, horse-drawn carriages making their way along the streets, with gaslights giving the image the feel almost of an impressionist painting of Paris.29 The Flatiron created downdraughts that lifted the skirts of women going by, so that youths would linger around the building to watch the flapping petticoats.30
The skyscraper, which was to find its full expression in New York, was actually conceived in Chicago.31 The history of this conception is an absorbing story with its own tragic hero, Louis Henry Sullivan (1856–1924). Sullivan was born in Boston, the son of a musically gifted mother of German-Swiss-French stock and a father, Patrick, who taught dance. Louis, who fancied himself as a poet and wrote a lot of bad verse, grew up loathing the chaotic architecture of his home city, but studied the subject not far away, across the Charles River at MIT.32 A round-faced man with brown eyes, Sullivan had acquired an imposing self-confidence even by his student days, revealed in his dapper suits, the pearl studs in his shirts, the silver-topped walking cane that he was never without. He travelled around Europe, listening to Wagner as well as looking at buildings, then worked briefly in Philadelphia and in the Chicago office of William Le Baron Jenney, often cited as the father of the skyscraper for introducing a steel skeleton and elevators in his Home Insurance Building (Chicago, 1883–5).33 Yet it is doubtful whether this building – squat by later standards – really qualifies as a skyscraper. In Sullivan’s view the chief property of a skyscraper was that it ‘must be tall, every inch of it tall. The force and power of altitude must be in it. It must be every inch a proud and soaring thing, rising in sheer exaltation that from top to bottom it is a unit without a single dissenting line.’34
In 1876 Chicago was still in a sense a frontier town. Rudyard Kipling, staying at the Palmer House Hotel on a later visit, found it ‘a gilded rabbit warren … full of people talking about money and spitting,’ but the city offered fantastic architectural possibilities in the years following the great fire of 1871, which had devastated the city core.35 By 1880 Sullivan had joined the office of Dankmar Adler, and a year later he became a full partner. It was this partnership that launched his reputation, and soon he was a leading figure in the Chicago school of architecture.
Though Chicago became known as the birthplace of the skyscraper, the notion of building very high structures is of indeterminable antiquity. The intellectual breakthrough was the realisation that a tall building need not rely on masonry for its support.*
The metal-frame building was the answer: the frame – iron in the earlier examples, steel later on – is bolted (later riveted, for speedier construction) to steel plates, like shelves, which constitute the floors of each storey. On this structure curtain walls could be, as it were, hung. The wall is thus a cladding of the building, rather than truly weight bearing. Most of the structural problems regarding skyscrapers were solved very early on. The debate at the turn of the century was therefore as much about the aesthetics of design as about engineering. Sullivan passionately joined the debate in favour of a modern architecture, rather than pastiches and sentimental memorials to the old orders. His famous dictum, ‘Form ever follows function,’ became a rallying cry for modernism, already mentioned in connection with the work of Adolf Loos in Vienna.36
Sullivan’s early masterpiece was the Wainwright Building in Saint Louis. This, again, was not a really high structure, only ten storeys of brick and terracotta, but Sullivan grasped that intervention by the architect could ‘add’ to a building’s height.37 As one architectural historian wrote, the Wainwright is ‘not merely tall; it is about being tall – it is tall architecturally even more than it is physically.’38 If the Wainwright Building was where Sullivan found his voice, where he tamed verticality and showed how it could be controlled, his finest building is generally thought to be the Carson Pirie Scott department store in Chicago, finished in 1903–4. Once again this is not a skyscraper as such – it is twelve storeys high, and there is more emphasis on the horizontal lines than the vertical. But it was in this building above all others that Sullivan displayed his great originality in creating a new kind of decoration for buildings, with its ‘streamlined majesty,’ ‘curvilinear ornament’ and ‘sensuous webbing.’39 The ground floor of Carson Pirie Scott shows the Americanisation of the art nouveau designs Sullivan had seen in Paris: a Metro station turned into a department store.40
Frank Lloyd Wright was also experimenting with urban structures. Judging by the photographs – which are all that remain, since the edifice was torn down in 1950 – his Larkin Building in Buffalo, on the Canadian border, completed in 1904, was at once exhilarating, menacing, and ominous.41 (The building was the headquarters of John Larkin’s mail-order soap company.) An immense office space enclosed by ‘a simple cliff of brick,’ its furnishings symmetrical down to the last detail and filled with clerks at work on their long desks, it looks more like a setting for automatons than, as Wright himself said, ‘one great official family at work in day-lit, clean and airy quarters, day-lit and officered from a central court.’42 It was a work with many ‘firsts’ that are now found worldwide. It was air-conditioned and fully fireproofed; the furniture – including desks and chairs and filing cabinets – was made of steel and magnesite; its doors were glass, the windows double-glazed. Wright was fascinated by materials and the machines that made them in a way that Sullivan was not. He built for the ‘machine age,’ for standardisation. He also became very interested in the properties of ferro-concrete, a new building material that revolutionised design. Iron-and-glass construction had been pioneered in Britain as early as 1851 in the Crystal Palace, a precursor of the steel-and-glass building, and reinforced concrete (béton armé) was developed in France in the second half of the century – François Hennebique patented his influential system in 1892. But it was only in the United States, with the building of skyscrapers, that these materials were exploited to the full. In 1956 Wright proposed a mile-high skyscraper for Chicago.43
Further down the eastern seaboard of the United States, 685 miles away to be exact, lies Kill Devil Hill, on the Outer Banks of North Carolina. In 1903 it was as desolate as Manhattan was crowded. A blustery place, with strong winds gusting in from the sea, it lacked the umbrella pine trees that populate so much of the state. This was why it had been chosen for an experiment that was to be carried out on 17 December that year – one of the most exciting ventures of the century, destined to have an enormous impact on the lives of many people. The skyscraper was one way of leaving the ground; this was another, and far more radical.
At about half past ten that morning, four men from the nearby lifesaving station and a boy of seventeen stood on the hill, gazed down to the field which lay alongside, and waited. A pre-arranged signal, a yellow flag, had been hoisted nearby, at the village of Kitty Hawk, to alert the local coastguards and others that something unusual might be about to happen. If what was supposed to occur did occur, the men and the boy were there to serve as witnesses. To say that the sea wind was fresh was putting it mildly. Every so often the Wright brothers – Wilbur and Orville, the object of the observers’ attention – would disappear into their shed so they could cup their freezing fingers over the stove and get some feeling back into them.44
Earlier that morning, Orville and Wilbur had tossed a coin to see who would be the first to try the experiment, and Orville had won. Like his brother, he was dressed in a three-piece suit, right down to a starched white collar and tie. To the observers, Orville appeared reluctant to start the experiment. At last he shook hands with his brother, and then, according to one bystander, ‘We couldn’t help notice how they held on to each other’s hand, sort o’ like they hated to let go; like two folks parting who weren’t sure they’d ever see each other again.’45 Just after the half-hour, Orville finally let go of Wilbur, walked across to the machine, stepped on to the bottom wing, and lay flat, wedging himself into a hip cradle. Immediately he grasped the controls of a weird contraption that, to observers in the field, seemed to consist of wires, wooden struts, and huge, linen-covered wings. This entire mechanism was mounted on to a fragile-looking wooden monorail, pointing into the wind. A little trolley, with a cross-beam nailed to it, was affixed to the monorail, and the elaborate construction of wood, wires and linen squatted on that. The trolley travelled on two specially adapted bicycle hubs.
Orville studied his instruments. There was an anemometer fixed to the strut nearest him. This was connected to a rotating cylinder that recorded the distance the contraption would travel. A second instrument was a stopwatch, so they would be able to calculate the speed of travel. Third was an engine revolution counter, giving a record of propeller turns. That would show how efficient the contraption was and how much fuel it used, and also help calculate the distance travelled through the air.46 While the contraption was held back by a wire, its engine – a four-cylinder, eight-to-twelve-horsepower gasoline motor, lying on its side – was opened up to full throttle. The engine power was transmitted by chains in tubes and was connected to two airscrews, or propellers, mounted on the wooden struts between the two layers of linen. The wind, gusting at times to thirty miles per hour, howled between the struts and wires. The brothers knew they were taking a risk, having abandoned their safety policy of test-flying all their machines as gliders before they tried powered flight. But it was too late to turn back now. Wilbur stood by the right wingtip and shouted to the witnesses ‘not to look sad, but to laugh and hollo and clap [their] hands and try to cheer Orville up when he started.’47 As best they could, amid the howling of the wind and the distant roar of the ocean, the onlookers cheered and shouted.
With the engine turning over at full throttle, the restraining wire was suddenly slipped, and the contraption, known to her inventors as Flyer, trundled forward. The machine gathered speed along the monorail. Wilbur Wright ran alongside Flyer for part of the way, but could not keep up as it achieved a speed of thirty miles per hour, lifted from the trolley and rose into the air. Wilbur, together with the startled witnesses, watched as the Flyer careered through space for a while before sweeping down and ploughing into the soft sand. Because of the wind speed, Flyer had covered 600 feet of air space, but 120 over the ground. ‘This flight only lasted twelve seconds,’ Orville wrote later, ‘but it was, nevertheless, the first in the history of the world in which a machine carrying a man had raised itself by its own power into the air in full flight, had sailed forward without reduction of speed, and had finally landed at a point as high as that from which it had started.’ Later that day Wilbur, who was a better pilot than Orville, managed a ‘journey’ of 852 feet, lasting 59 seconds. The brothers had made their point: their flights were powered, sustained, and controlled, the three notions that define proper heavier-than-air flight in a powered aircraft.48
Men had dreamed of flying from the earliest times. Persian legends had their kings borne aloft by flocks of birds, and Leonardo da Vinci conceived designs for both a parachute and a helicopter.49 Several times in history ballooning has verged on a mania. In the nineteenth century, however, countless inventors had either killed themselves or made fools of themselves attempting to fly contraptions that, as often as not, refused to budge.50 The Wright brothers were different. Practical to a fault, they flew only four years after becoming interested in the problem.
It was Wilbur who wrote to the Smithsonian Institution in Washington, D.C., on 30 May 1899 to ask for advice on books to read about flying, describing himself as ‘an enthusiast but not a crank.’51 Born in 1867, thus just thirty-two at the time, Wilbur was four years older than Orville. Though they were always a true brother-brother team, Wilbur usually took the lead, especially in the early years. The sons of a United Brethren minister (and later a bishop) in Dayton, Ohio, the Wright brothers were brought up to be resourceful, pertinacious, and methodical. Both had good brains and a mechanical aptitude. They had been printers and bicycle manufacturers and repairers. It was the bicycle business that gave them a living and provided modest funds for their aviation; they were never financed by anyone.52 Their interest in flying was kindled in the 1890s, but it appears that it was not until Otto Lilienthal, the great German pioneer of gliding, was killed in 1896 that they actually did anything about their new passion. (Lilienthal’s last words were, ‘Sacrifices must be made.’)53
The Wrights received a reply from the Smithsonian rather sooner than they would now, just three days after Wilbur had written to them: records show that the reading list was despatched on 2 June 1899. The brothers set about studying the problem of flight in their usual methodical way. They immediately grasped that it wasn’t enough to read books and watch birds – they had to get up into the air themselves. Therefore they started their practical researches by building a glider. It was ready by September 1900, and they took it to Kitty Hawk, North Carolina, the nearest place to their home that had constant and satisfactory winds. In all, they built three gliders between 1900 and 1902, a methodical approach that enabled them to perfect wing shape and to develop the rear rudder, another of their contributions to aeronautical technology.54 In fact, they made such good progress that by the beginning of 1903 they thought they were ready to try powered flight. As a source of power, there was only one option: the internal combustion engine. This had been invented in the late 1880s, yet by 1903 the brothers could find no engine light enough to fit onto an aircraft. They had no choice but to design their own. On 23 September 1903, they set off for Kitty Hawk with their new aircraft in crates. Because of unanticipated delays – broken propeller shafts and repeated weather problems (rain, storms, biting winds) – they were not ready to fly until 11 December. But then the wind wasn’t right until the fourteenth. A coin was tossed to see who was to make the first flight, and Wilbur won. On this first occasion, the Flyer climbed too steeply, stalled, and crashed into the sand. On the seventeenth, after Orville’s triumph, the landings were much gentler, enabling three more flights to be made that day.55 It was a truly historic moment, and given the flying revolution that we now take so much for granted, one might have expected the Wrights’ triumph to be front-page news. Far from it. There had been so many crackpot schemes that newspapers and the public were thoroughly sceptical about flying machines. In 1904, even though the Wrights made 105 flights, they spent only forty-five minutes in the air and made only two five-minute flights. The U.S. government turned down three offers of an aircraft from the Wrights without making any effort to verify the brothers’ claims. In 1906 no airplanes were constructed, and neither Wilbur nor Orville left the ground even once. In 1907 they tried to sell their invention in Britain, France, and Germany. All attempts failed. It was not until 1908 that the U.S. War Department at last accepted a bid from the Wrights; in the same year, a contract was signed for the formation of a French company.56 It had taken four and a half years to sell this revolutionary concept.
The principles of flight could have been discovered in Europe. But the Wright brothers were raised in that practical culture described by Richard Hofstadter, which played a part in their success. A group of painters later called the Ashcan school, on account of their down-to-earth subject matter, brought a similarly pragmatic and reportorial approach to their art. Whereas the cubists, Fauves, and abstractionists concerned themselves with theories of beauty or the fundamentals of reality and matter, the Ashcan school painted the new landscape around them in vivid detail, accurately portraying what was often an ugly world. Their vision (they didn’t really share a style) was laid out at a groundbreaking exhibition at the Macbeth Gallery in New York in 1908.57
The leader of the Ashcan school was Robert Henri (1865–1929), descended from French Huguenots who had escaped to Holland during the Catholic massacres of the late sixteenth century.58 Worldly, a little wild, Henri, who visited Paris in 1888, became a natural magnet for other artists in Philadelphia, many of whom worked for the local press: John Sloan, William Glackens, George Luks.59 Hard-drinking, poker playing, they had the newspaperman’s eye for detail and a sympathy – sometimes a sentimentality – for the underdog. They met so often they called themselves Henri’s Stock Company.60 Henri later moved to the New York School of Art, where he taught George Bellows, Stuart Davis, Edward Hopper, Rockwell Kent, Man Ray, and Leon Trotsky. His influence was huge, and his approach embodied the view that the American people should ‘learn the means of expressing themselves in their own time and in their own land.’61
The most typical Ashcan school art was produced by John Sloan (1871–1951), George Luks (1867–1933), and George Bellows (1882–1925). An illustrator for the Masses, a left-wing periodical of social commentary that included John Reed among its contributors, Sloan sought what he called ‘bits of joy’ in New York life, colour plucked from the grim days of the working class: a few moments of rest on a ferry, a girl stretching at the window of a tenement, another woman smelling the washing on the line – all the myriad ways that ordinary people seek to blunt, or even warm, the sharp, cold life at the bottom of the pile.62
George Luks and George Bellows, an anarchist, were harsher, less sentimental.63 Luks painted New York crowds, the teeming congestion in its streets and neighbourhoods. Both he and Bellows frequently represented the boxing and wrestling matches that were such a feature of working-class life and so typical of the raw, naked struggle among the immigrant communities. Here was life on the edge in every way. Although prize fighting was illegal in New York in the 1900s, it nonetheless continued. Bellows’s painting Both Members of This Club, originally entitled A Nigger and a White Man, reflected the concern that many had at the time about the rise of the blacks within sports: ‘If the Negro could beat the white, what did that say about the Master Race?’64 Bellows, probably the most talented painter of the school, also followed the building of Penn Station, the construction of which, by McKim, Mead and White, meant boring a tunnel halfway under Manhattan and the demolition of four entire city blocks between Thirty-first and Thirty-third Streets. For years there was a huge crater in the centre of New York, occupied by steam shovels and other industrial appliances, flames and smoke and hundreds of workmen. Bellows transformed these grimy details into things of beauty.65
The achievement of the Ashcan School was to pinpoint and report the raw side of New York immigrant life. Although at times these artists fixed on fleeting beauty with a generally uncritical eye, their main aim was to show people at the bottom of the heap, not so much suffering, but making the most of what they had. Henri also taught a number of painters who would, in time, become leading American abstractionists.66
At the end of 1903, in the same week that the Wright brothers made their first flight, and just two blocks from the Flatiron Building, the first celluloid print of The Great Train Robbery was readied in the offices of Edison Kinetograph, on Twenty-third Street. Thomas Edison was one of a handful of people in the United States, France, Germany, and Britain who had developed silent movies in the mid-1890s.
Between then and 1903 there had been hundreds of staged fictional films, though none had been as long as The Great Train Robbery, which lasted for all of six minutes. There had been chase movies before, too, many produced in Britain right at the end of the nineteenth century. But they used one camera to tell a simple story simply. The Great Train Robbery, directed and edited by Edwin Porter, was much more sophisticated and ambitious than anything that had gone before. The main reason for this was the way Porter told the story. Since its inception in France in 1895, when the Lumière brothers had given the first public demonstration of moving pictures, film had explored many different locations, to set itself apart from theatre. Cameras had been mounted on trains, outside the windows of ordinary homes, looking in, even underwater. But in The Great Train Robbery, in itself an ordinary robbery followed by a chase, Porter in fact told two stories, which he intercut. That’s what made it so special. The telegraph operator is attacked and tied up, the robbery takes place, and the bandits escape. At intervals, however, the operator is shown struggling free and summoning law enforcement. Later in the film the two narratives come together as the posse chase after the bandits.67 We take such ‘parallel editing’ – intercutting between related narratives – for granted now. At the time, however, people were fascinated as to whether film could throw light on the stream of consciousness, Bergson’s notions of time, or Husserl’s phenomenology. More practical souls were exercised because parallel editing added immeasurably to the psychological tension in the film, and it couldn’t be done in the theatre.68 In late 1903 the film played in every cinema in New York, all ten of them. It was also responsible for Adolph Zukor and Marcus Loew leaving their fur business and buying small theatres exclusively dedicated to showing movies. Because they generally charged a nickel for entry, they became known as ‘nickelodeons.’ Both William Fox and Sam Warner were fascinated enough by Porter’s Robbery to buy their own movie theatres, though before long they each moved into production, creating the studios that bore their names.69
Porter’s success was built on by another man who instinctively grasped that the intimate nature of film, as compared with the theatre, would change the relationship between audience and actor. It was this insight that gave rise to the idea of the movie star. David Wark (D. W.) Griffith was a lean man with grey eyes and a hooked nose. He appeared taller than he was on account of the high-laced hook shoes he wore, which had loops above their heels for pulling them on – his trouser bottoms invariably rode up on the loops. His collar was too big, his string tie too loose, and he liked to wear a large hat when large hats were no longer the fashion. He looked a mess, but according to many, he ‘was touched by genius.’ He was the son of a Confederate Kentucky colonel, ‘Roaring Jake’ Griffith, the only man in the army who, so it was said, could shout to a soldier five miles away.70 Griffith had begun life as an actor but transferred to movies by selling story synopses (these were silent movies, so no scripts were necessary). When he was thirty-two he joined an early film outfit, the Biograph Company in Manhattan, and had been there about a year when Mary Pickford walked in. Born in Toronto in 1893, she was sixteen. Originally christened Gladys Smith, she was a precocious if delicate child. After her father was killed in a paddle-steamer accident, her mother, in reduced circumstances, had been forced to let the master bedroom of their home to a theatrical couple; the husband was a stage manager at a local theatre. This turned into Gladys’s opportunity, for he persuaded Charlotte Smith to let her two daughters appear as extras. Gladys soon found she had talent and liked the life. By the time she was seven, she had moved to New York where, at $15 a week, the pay was better. She was now the major breadwinner of the family.71
In an age when the movies were as young as she was, theatrical life in New York was far more extensive. In 1901–2, for example, there were no fewer than 314 plays running on or off Broadway, and it was not hard for someone with Gladys’s talent to find work. By the time she was twelve, her earnings were $40 a week. When she was fourteen she went on tour with a comedy, The Warrens of Virginia, and while she was in Chicago she saw her first film. She immediately grasped the possibilities of the new medium, and using her recently created, softer-sounding stage name, Mary Pickford, she applied to several studios. Her first efforts failed, but her mother pushed her into applying for work at the Biograph. At first Griffith thought Mary Pickford was ‘too little and too fat’ for the movies. But he was impressed by her looks and her curls and asked her out for dinner; she refused.72 It was only when he asked her to walk across the studio and chat with actors she hadn’t met that he decided she might have screen appeal. In those days, movies were short and inexpensive to make. There was no such thing as a makeup assistant, and actors wore their own clothes (though by 1909 there had been some experimentation with lighting techniques). A director might make two or three pictures a week, usually on location in New York. In 1909, for example, Griffith made 142 pictures.73
After an initial reluctance, Griffith gave Pickford the lead in The Violin-Maker of Cremona in 1909.74 A buzz went round the studio, and when it was first screened in the Biograph projection room, the entire studio turned up to watch. Pickford went on to play the lead in twenty-six more films before the year was out.
But Mary Pickford’s name was not yet known. Her first review in the New York Dramatic Mirror of 21 August 1909 read, ‘This delicious little comedy introduced again an ingenue whose work in Biograph pictures is attracting attention.’ Mary Pickford was not named because all the actors in Griffith’s movies were, to begin with, anonymous. But Griffith was aware, as this review suggests, that Pickford was attracting a following, and he raised her wages quietly from $40 to $100 a week, an unheard-of figure for a repertory actor at that time.75 She was still only sixteen.
Three of the great innovations in filmmaking occurred in Griffith’s studio. The first change came in the way movies were staged. Griffith began to direct actors to come on camera, not from right or left as they did in the theatre, but from behind the camera and exit toward it. They could therefore be seen in long range, medium range, and even close-up in the same shot. The close-up was vital in shifting the emphasis in movies to the looks of the actor as much as his or her talent. The second revolution occurred when Griffith hired another director. This allowed him to break out of two-day films and plan bigger projects, telling more complex stories. The third revolution built on the first and was arguably the most important.76 Florence Lawrence, who was marketed as the ‘Biograph Girl’ before Mary, left for another company. Her contract with the new studio contained an unprecedented clause: anonymity was out; instead she would be billed under her own name, as the ‘star’ of her pictures. Details about this innovation quickly leaked all over the fledgling movie industry, with the result that it was not Lawrence who took the best advantage of the change she had wrought. Griffith was forced to accept a similar contract with Mary Pickford, and as 1909 gave way to 1910, she prepared to become the world’s first movie star.77
A vast country, teeming with immigrants who did not share a common heritage, America was a natural home for the airplane and the mass-market movie, every bit as much as the skyscraper. The Ashcan school recorded the poverty that most immigrants endured when they arrived in the country, but it also epitomised the optimism with which most of the emigrés regarded their new home. The huge oceans on either side of the Americas helped guarantee that the United States was isolated from many of the irrational and hateful dogmas and idealisms of Europe which these immigrants were escaping. Instead of the grand, all-embracing ideas of Freud, Hofmannsthal, or Brentano, the mystical notions of Kandinsky, or the vague theories of Bergson, Americans preferred more practical, more limited ideas that worked, relishing the difference and isolation from Europe. That pragmatic isolation would never go away entirely. It was, in some ways, America’s most precious asset.