CHAPTER 8
Seven Ways to Stay Curious
1. STAY FOOLISH
The two most influential creative businessmen of the last hundred years had a lot in common. Both were California-based pioneers who successfully imposed their own aesthetic tastes on the everyday lives of millions. Both used what the Harvard Business School professor Clayton Christensen terms ‘disruptive technologies’ to build enormous and enduring business empires. Both were driven and intense characters. Both exhibited a high ‘need for cognition’, and infused this characteristic into the culture of their companies — at least while they were alive.
Walt Disney, Chicago-born, moved to Kansas City as a young man and found a job with the Kansas City Film Ad Company, where he became interested in the new techniques of animation. After reading a book on the subject, Disney decided that ‘cel’ (celluloid) animation would soon supersede cut-out animation. Before long he had started his own business, making short cartoons called ‘Laugh-O-Grams’, which ran at local movie theatres. Soon, Hollywood beckoned. Together with his brother Roy, Walt moved there to set up the first Disney studio, in their Uncle Robert’s garage.
In the 1920s, Disney’s characters, including Oswald the Lucky Rabbit, became famous as movie-going took off around the States. Steamboat Willie, which introduced Mickey Mouse in 1928, was one of the first cartoons with synchronised sound. Disney won his first Oscar — for the Mickey Mouse series — in 1932. Towards the end of that decade he built a campus for the Disney studios at Burbank, where full-colour, feature-length spectaculars like Snow White and the Seven Dwarfs, Fantasia and Dumbo were created.
The rise of television threatened to kill off the movie theatres and, with them, Disney’s business. But Disney embraced this new and fast-spreading technology. He adapted established characters like Mickey, Donald Duck and Goofy to TV, and created new series from scratch, featuring non-animated characters like Davy Crockett. By the mid-1950s, Disney realised that he could take advantage of the trend towards destination tourism, and built his first theme park in Anaheim, California. His company led the way in new technologies like animatronics; visitors to the Illinois Pavilion at the New York World’s Fair in 1964–65 were greeted by a robot Abraham Lincoln, created by Disney engineers.
After Walt Disney died in 1966, the company foundered. It lost its knack for innovation, failing to create new properties to match the success of Mickey Mouse, or to take advantage of new technologies, like computer animation. Under new, aggressive management in the 1980s, it became financially successful again. But although it remained a massive, highly profitable company, and one of the world’s most valuable brands, the Walt Disney corporation never quite recaptured the creative zest that made it a global colossus in the first place.
In 2006, Disney’s board was joined by Steve Jobs of Apple, after Disney agreed to buy Pixar, the computer animation company of which Jobs was CEO and a major stockholder. In the fifteen years following the release of Toy Story in 1995, Pixar had become something like Disney was in the 1930s, combining creative and commercial dynamism. During that time, Disney had been distributing Pixar’s films, while envious of its success and acclaim. Disney’s chief executive Michael Eisner and Jobs were fiercely competitive with each other; it was only after Eisner left that Disney felt ready to accept that, since it couldn’t beat Pixar, it would have to buy it.
Jobs, like Walt Disney, started a business that would change the world from a Californian garage. In his case, it was his parents’ garage; that was where he and the technical genius he had befriended, Steve Wozniak, hacked, fiddled and tweaked their way to a new kind of computer, one that was small enough, simple enough and handsome enough to sit in a person’s home. By 1983, Apple was in the Fortune 500. After Jobs was ejected from the company he had built, in 1985, he became fascinated by the new digital animation techniques being pioneered at a small division of George Lucas’s production company, Lucasfilm. Lucas agreed to sell that unit to him; it became Pixar. For years, Jobs wasn’t sure what to do with Pixar. He just knew that he was curious about what it did.
Steve Jobs was a merely competent technician and, though highly intelligent, not a particularly original thinker. What made him exceptional were a ferocious will to succeed and a burning sense of epistemic curiosity. Jobs was interested in everything: the Bauhaus movement, the poetry of the Beats, Eastern philosophy, the workings of business, the lyrics of Bob Dylan, the biology of the digestive system. A university tutor remembers his ‘very enquiring mind . . . he refused to accept automatically received truths, and he wanted to examine everything himself.’35 Jobs took a course in calligraphy while at university for no other reason than that it interested him.36
Jobs’s curiosity was crucial to his ability to invent and reinvent himself and his businesses. He had significantly more epistemic breadth than most of his peers in the technology business, and when the internet started breaking down the divisions between industries, he was best placed to take advantage. At Apple, he brought together at least four disparate cultures in which he had become deeply immersed: 1960s counter-culture, the culture of American business entrepreneurs, the culture of design and the culture of computer geeks.37 When the invention of MP3s made the spread of digital music inevitable, it was Jobs’s personal interest in music that, as much as anything, enabled him to be the first to launch a successful MP3 player and the first legal music download service. It helped him not just to spot the opportunity, but also to talk to music business executives in terms they understood, and later to persuade rock stars like Bono to help him sell his products.
Jobs’s intellectual fascination with the creative process made him take on Pixar and then stick with it even while it lost money. Throughout his life he retained the interest in novel ideas and techniques that the young Walt Disney showed at the Kansas City Film Ad Company. Disney’s failure to replicate its early successes was partly due to its failure to institutionalise the driving curiosity of its founder. Instead, it focused on making money from its existing assets. Jobs was unimpressed with Michael Eisner’s failure, as he saw it, to investigate what was happening next door:
Pixar had successfully reinvented Disney’s business, turning out great films one after the other while Disney turned out flop after flop. You would think the CEO of Disney would be curious about how Pixar was doing that. But during the twenty-year relationship, he visited Pixar for a total of about two and a half hours . . . He was never curious. I was amazed. Curiosity is amazingly important.
One of the most important and difficult questions for any organisation, especially those whose success depends on staying abreast of technological change, is how to inculcate a spirit of curiosity into its executives and employees — how to create and sustain communities of enquiring minds. There is no formula for creating a curious culture. But we can glean a few clues from the history of nation states.
Up until about 1700, China was, in the words of the historian Ian Morris, ‘the richest, strongest, and most inventive place on earth’. Yet in the succeeding century the West raced ahead economically and intellectually, and continued to do so until the late twentieth century, while China languished. There is more than one reason that Europe, and then America, industrialised more quickly and successfully than China, as well as India and most other Asian countries; legal frameworks, education systems and natural resources all played their part. But one factor was that the West unlocked the power of human curiosity, while the East did not. The great Eastern empires suffered from what another historian, Toby Huff, calls a ‘curiosity deficit’. The Chinese elites weren’t interested in exploring the knowledge and technologies of the West because they were perfectly content as they were.
Although the seventeenth-century Catholic Church did its best to suppress Galileo’s discoveries, it would be wrong to characterise it as intellectually incurious. Many men of the cloth were up to date with the latest thinking in the new sciences, and some were practitioners in their own right. After the publication of Galileo’s The Starry Messenger, Cardinal Robert Bellarmine ordered the best mathematicians and astronomers at the Jesuit College to study it.
That it was Bellarmine who also presided over the Church’s censure of Galileo in 1616 tells us something important. It wasn’t that the Church was incurious about the true nature of the cosmos; it was that it believed such knowledge should remain the exclusive province of those who were able to handle it — that is, people like themselves. What made the authorities furious with Galileo wasn’t just what he published, but that he increasingly published in Italian, the language of the common man, rather than Latin, the language of elites.
In fact, it was Jesuits who took the Galilean telescope to China and Thailand, and translated Galileo’s work into Chinese. Matteo Ricci, a Jesuit who arrived in China in 1583, was a deeply learned man, and far from the stereotype of the arrogant Western missionary. He mastered written and spoken Chinese, and formed a close and enduring partnership with Xu Guangqi, a brilliant Chinese scholar who converted to Christianity. Together, the two men attempted to spark the interest of China’s rulers and intellectuals in the astonishing new discoveries emerging from Europe. They made little headway.
Chinese scholars had been gazing at the heavens for hundreds of years, and are credited with the discovery of sunspots long before the Europeans. But their astronomical beliefs grew out of their spiritual and religious beliefs, rather than being founded on empirical observation. Ricci and Xu set about providing the tools needed to put Chinese astronomy on the same footing as European astronomy: trigonometric mathematics, planetary tables and the telescope.
They predicted eclipses with an accuracy that impressed the Chinese, and even staged competitions with Chinese astronomers to test the predictive power of their theories, competitions the Jesuits invariably won. Other missionaries showed the Chinese innovative military hardware. The reaction of the Chinese establishment to the news from Europe was, for the most part, a massive imperial shrug.
The Chinese authorities recognised that the Westerners had some impressive ideas and technology, but fundamentally, they weren’t interested in it. China, under the Ming dynasty, was enjoying one of the most prosperous eras in its history. Its share of the world’s economic activity far exceeded Europe’s. Why should it care what the upstarts from Europe were doing when it had its own glorious traditions and a thriving economy?
‘It is better to have no good astronomy than to have Westerners in China,’ declared the great seventeenth-century Chinese scholar, Yang Guangxian. During previous periods of glory for Chinese civilisation, like the Han dynasty, he said, astronomers had known even less about the relationship between the sun and the moon than they did in his own day. But still, ‘the Han dynasty enjoyed dignity and prosperity that lasted for four hundred years.’ Eventually, the Chinese told the Christians to go home, and to take their telescopes and cannons with them. China only fully accepted the premises of Western science in the twentieth century, and is only now making up the intellectual and economic ground it lost back then.
In his book Why the West Rules — For Now, Ian Morris argues that China’s decline relative to the West has a lot to do with a simple geographical fact — the respective widths of the Atlantic and Pacific oceans. The Atlantic, 3,000 miles across, was ‘a kind of Goldilocks Ocean’ — it was just the right size. It was big enough that very different kinds of goods, springing from very different kinds of cultures, were produced around its shores, in Africa, Europe and the Americas, yet small enough that it could be traversed by Elizabethan galleons.
The Pacific, by contrast, was much too big to make trade possible or exploration feasible. The 8,000-mile gap between China and California was enough to prevent even the most intrepid Chinese from discovering and colonising the Americas before the Western Europeans. China was relatively safe from invaders and attackers, but it had fewer opportunities to explore the rest of the world and little incentive to do so.
Here was the root cause of China’s curiosity deficit. In the seventeenth century, Europeans created a new market economy around the shores of the Atlantic; the challenge of understanding the movement of winds and tides focused Europe’s greatest minds, and led to the great unlocking of nature’s secrets that came to be known as the scientific revolution. A broader intellectual and political revolution, now known as the Enlightenment, followed; it was Europeans, more familiar with other cultures, who were most likely to ask questions about what kind of society was a good society. Meanwhile, China looked inwards, to its ancient and unmatchably rich traditions, and was largely uninterested in the news brought to it by travelling Christians.
Success isn’t good for curiosity. Like the Chinese in the seventeenth century, the managers of consistently profitable companies tend to look inwards, ceasing to be interested in ideas from beyond their own borders. The balance between exploration and exploitation becomes skewed too far towards the latter. Disney, despite its lack of world-beating new films, remained financially strong throughout its creatively fallow period. But its profit margin formed the equivalent of the Pacific Ocean; there was little incentive for it to explore new ways of making or distributing Disney magic when the current methods of exploiting it were generating so much cash.
Apple, during the long bull run of success it enjoyed after the return of Steve Jobs, maintained its pursuit of innovation partly because of the white-hot intensity of its chief executive’s curiosity, and partly because its flirtations with demise over the previous decade meant it was not stupefied by overconfidence. Whether it can maintain its curiosity now that Jobs is gone and the company floats in a Pacific Ocean of cash is a question yet to be answered.
Another way of framing it is to ask how Apple, or any company, can remain aware of its own unknowns. The great physicist James Clerk Maxwell once remarked that ‘thoroughly conscious ignorance is the prelude to every real advance in science.’ Companies, and rulers, who learn to cultivate their ‘conscious ignorance’ — to be fascinated, even obsessed, by what they don’t know — are the ones that are least likely to be caught unaware by change.
When he turned thirty, Steve Jobs was already musing about why it was that people his age and older began to develop rigid habits of thought: ‘People get stuck in these patterns, like grooves in a record.’ When Jobs was fifty, and had already come close to death from cancer, he told Stanford University’s graduating students about Stewart Brand, a luminary of Californian counter-culture, and a technological visionary. Jobs finished his speech by repeating Brand’s mantra: ‘Stay Hungry, Stay Foolish.’ ‘I have always wished that for myself,’ he said. He didn’t leave instructions on how to instil the same attitude into a whole organisation.
2. BUILD THE DATABASE
My first real job after leaving university was at the London office of the advertising agency J. Walter Thompson. At that time, all new employees were handed two slim books, both authored by an agency alumnus, already long dead. His name was James Webb Young. The first book was called How to Become an Advertising Man.38 Although we sniggered at the dated style (‘Salesmanship is the art of influencing any kind of human behaviour by putting the proposition in terms appealing to the other fellow’) we all read it from cover to cover. Its advice was shrewd, cogently expressed and still pertinent.
Young worked on Madison Avenue just prior to its Mad Men heyday. He was of an older generation than Don Draper, having already attained a reputation as a master of persuasion by the time America entered the Second World War. When the American government approached him to ask if he would design a propaganda programme aimed at depressing German morale, Young drafted a memo that displayed a characteristic blend of chutzpah, hard logic and no-nonsense practicality. He defined the challenge as if it were an exercise in soap powder marketing: the government needed ‘an idea . . . that will bring us the most profit from the market in which we are to sell it — that is, the one that will secure the greatest lowering of morale in the shortest time, and meet with the least resistance . . . from prospective customers. In my opinion this idea is now The Inevitability of Defeat.’
The second book we were handed was called A Technique for Producing Ideas. Written in 1960, when Young was in semi-retirement, the book was intended for advertising people, but its lessons are widely applicable. In its modest way, it negates the need for any other books about the process of creative thinking. It is very short — pamphlet-sized — and immensely practical. There are no reflections on the ineffable mysteries of creative genius, no jargon and few digressions. It gives pride of place to curiosity.
Young’s technique consists of five steps. The first is to ‘Gather raw material’. By this Young meant knowledge about the product and its consumers. You might think there is nothing new to say about your product or the people who buy it, he says, but persist: look harder, and you will see. Young quotes Guy de Maupassant, who was told by an older writer: ‘Go out into the streets of Paris and pick out a cab driver. He will look to you very much like every other cab driver. But study him until you can describe him so that he is seen in your description to be an individual, different from every other cab driver in the world.’ This is the gathering of knowledge specific to the product and its consumers. Of equal importance, says Young, is ‘the continual process of gathering of general materials’:
Every really good creative person in advertising whom I have ever known has always had two noticeable characteristics. First, there was no subject under the sun in which he could not easily get interested — from, say, Egyptian burial customs to modern art. Every facet of life had fascination for him. Second, he was an extensive browser in all sorts of fields of information . . . In advertising, an idea results from a new combination of specific knowledge about products and people with general knowledge about life and events.
Young’s formulation is simple but powerful. Any task or project that requires creative thought will be better addressed by someone who has deep knowledge of the task at hand, and general background knowledge of the culture in which it and its users (or readers, or viewers) live. A mind well-stocked with these two types of knowledge is much more likely to be a fertile source of the serendipitous collisions that lead to brilliant ideas. Leo Burnett, founder of the global ad agency network that still bears his name, and a near-contemporary of Young’s, said, ‘Curiosity about life in all its aspects, I think, is still the secret of great creative people.’
Great ideas don’t spring solely from the moment of mental effort involved in trying to come up with them. Their roots extend back months, years, decades into their author’s life; they are products of long-formed habits of mind as much as they are of flashes of brilliance. As Young puts it, ‘To some minds each fact is a separate bit of knowledge. To others it is a link in a chain of knowledge.’ It’s clear that he intuitively understood the principle we examined earlier — that new knowledge is assimilated better, and has more creative possibility, the bigger the store of existing knowledge it is joining. Knowledge loves knowledge.
Highly curious people, who have carefully cultivated their long-term memories, live in a kind of augmented reality; everything they see is overlaid with additional layers of meaning and possibility, unavailable to ordinary observers. The fashion designer Paul Smith says: ‘I’ve got eyes that see. A lot of people have eyes that look but don’t see. I’ll see something light next to something dark, or something smooth next to something rough, or Harris tweed next to silk, and that means something to me. I can look at architecture and the proportions of doors and windows and see pockets and the openings of a jacket. Or I listen to music that is very calm but has a very bright bit and that can be a navy blue suit with a flowery shirt to me.’
The rest of Young’s steps build on his first. The second step is ‘the working over’. This involves taking the facts you have gathered and looking at them again from different angles, bringing them into unusual combinations with other facts, constantly seeking interesting new relationships, new syntheses. This won’t necessarily yield any good ideas; in fact Young predicts that you will hit a wall of hopelessness, when nothing fits, no insights present themselves, and everything you’ve learned is a meaningless jumble in your mind. Your arrival at this hopeless point, says Young, is actually good news; it means this stage is over and the next one can begin.
This one involves, reassuringly, ‘absolutely no effort of a direct nature’. It is the stage at which the unconscious is allowed to go to work, assisted only by the stimulation of something completely irrelevant to the task at hand. Young reminds the reader that Sherlock Holmes often drags Watson off to a concert in the middle of a case, overruling the objections of his literal-minded partner, knowing that, having done the hard work of thinking, insight is now more likely to be discovered while the conscious mind is occupied by something else entirely.
The fourth and most magical stage takes place in the mind’s subterranean chambers. After the concert, advises Young, retire to bed, and ‘turn the problem over to your unconscious mind and let it work while you sleep.’ Now that the conscious mind has prepared the ground, insight will take you unawares: ‘While shaving, or bathing, or most often when you are half awake in the morning.’ In the fifth and final stage, the idea is prodded, tested, tweaked and massaged into reality.
*
We all know about ‘eureka moments’, when ideas seem to drop unbidden into their creator’s head. In fact, as Young knew, there is little that is accidental about such insights. They arise from the gathering and the working-over — the slow, deliberate, patient accumulation of knowledge.
As a young man, the great French mathematician Henri Poincaré worked as an engineer, and was sometimes asked to investigate mining disasters. He had been struggling with a problem in pure mathematics when he was summoned to the site of a mine to perform an inspection. He later recalled that the excursion allowed him to forget the problem altogether for the first time in several months. His unconscious, however, was just getting to work:
Having reached Coutances, we entered an omnibus to go some place or other. At the moment I put my foot on the step the idea came to me, without anything in my former thoughts seeming to have paved the way for it, that the transformations I had used to define the Fuchsian functions were identical with those of non-Euclidean geometry. I did not verify the idea; I should not have had time, as, upon taking my seat in the omnibus, I went on with a conversation already commenced, but I felt a perfect certainty. On my return to Caen, for conscience’s sake, I verified the result at my leisure.
Poincaré reflected that what had seemed, at the time, to be a fruitless accumulation of mathematical ideas was, in fact, essential preparation for his epiphany. In his unconscious, the ideas had become ‘mobilized atoms’ which collided into each other, arranging and rearranging themselves into ever more complex combinations until finally the ‘most beautiful’ of them made it into consciousness, just as he was boarding a bus.
In recent years, scientists have been examining the neural mechanics of the semi-conscious or unconscious creativity that artists and inventors from Kafka to Edison have relied on for inspiration. They have found that REM (rapid eye movement) sleep, when our dreams are most vivid, does indeed boost our creativity. The reason seems to be that this is when the brain feels most free to make connections between different associative networks of knowledge.
This is something else that Rousseau and his followers got wrong. When we learn facts, they don’t just sit in our unconscious, inert and isolated, useless until recalled. They make themselves available for all sorts of tasks the conscious mind would never think of using them for. Sleep seems to work on our long-term memories like alcohol at a party. As the conscious mind releases its grip on thinking, the facts stored in our memory feel more free to talk to each other — to strike up relationships with bits of knowledge from outside their neighbourhood. When, during the day, the mind’s resources are mobilised in the service of a particular problem, it’s this after-hours mingling that often summons the final breakthrough.
Human memory is inefficient and unreliable in comparison to machine memory, but it’s this very unpredictability that’s the source of our creativity. It makes connections we’d never consciously think of making, smashing together atoms that our conscious minds keep separate. Digital databases cannot yet replicate the kind of serendipity that enables the unconscious human mind to make novel patterns and see powerful new analogies, of the kind that lead to our most creative breakthroughs. The more we outsource our memories to Google, the less we are nourishing the wonderfully accidental creativity of our unconscious.
Although creative people often find insight in dreams, it’s a mistake to think that dreaming is necessarily a creative act in itself. The education writer and teacher Daisy Christodoulou cites the example of a school in which pupils were asked to ‘think like designers’ and encouraged to daydream. As she points out, there’s a big difference between an expert’s daydream and a novice’s daydream. Expert designers have a huge store of background knowledge and learned processes, which feed into their dreams.
In the concluding section of his book, James Webb Young returns to where he started — the importance of lifelong curiosity. ‘There is one [step] on which I would place greater emphasis — the store of general materials in the idea-producer’s reservoir . . . the principle of constantly expanding your experience, both personally and vicariously, does matter tremendously in any idea-producing job.’ Building the database is the surest route to producing ideas that will some day become part of someone else’s database.
3. FORAGE LIKE A FOXHOG
If you have a knowledge-based career, you need a learning strategy. Is it best to be a specialist or a generalist — to know a lot about a little or a little about a lot?
The story of the last century has been one of increasing rewards to specialists. It’s not enough for an historian to be expert on the civil war; she must be an expert on civil war marching songs. Today, Don Draper couldn’t just introduce himself to clients as an ad man, but as a specialist in social media or branded content. Silicon Valley firms aren’t just competing to hire the most brilliant software engineers but the ones most expert in coding for iOS or Android apps.
But the digital revolution — or rather the series of revolutions wrought by digital, wired technologies — has created a counter-trend. Paola Antonelli is senior curator in the department of architecture and design at the Museum of Modern Art in New York. Curators, she told me, come in two types: conservers and hunter-gatherers. She plants herself firmly in the latter category.39 Antonelli is a self-proclaimed generalist, interested in gathering and synthesising different materials from disparate fields, from design and architecture to science, technology and philosophy. She describes herself as ‘a curious octopus. I am always reaching out and taking in things from everywhere.’
Antonelli told me that designers increasingly find themselves working in groups, and have to quickly adapt to other forms of knowledge. There are few design specialisms left, she says — some book designers design only books, but they are exceptions to the rule. Today’s designers might find themselves working with engineers, marketers and accountants. ‘For instance,’ says Antonelli, ‘if you are a branding designer and you are hired by a Texan oil company to create their corporate identity, you’ll need to put a team together that includes an expert on oil production — and you’ll need to be curious about the process of getting oil out of the ground. Unless you do that, you probably won’t come up with the right answer.’
The spread of digital technologies into more and more areas of our lives is blurring the boundaries between domains: ‘Increasingly, designers are having to think in terms not just of material objects but experiences and interactions,’ says Antonelli. Designers now have to be more versatile than ever, and that means being curious about the knowledge of other people. If you want to succeed in today’s music industry, you need to understand social networks; if you want to make a name for yourself in linguistics, you need to get to grips with data analytics.
Even sport, often thought of as a purely physical domain, is increasingly knowledge-rich, and requires multi-disciplinary competence. For instance, to be a successful football manager today, you need to have accumulated a deep knowledge of tactical formations and some knowledge of statistical techniques, psychology — even economics. It used to be thought that the only real requirement for a manager was to have been a successful player. But increasingly the coaches of Europe’s biggest clubs are men whose playing careers were truncated through injury, or simply because they weren’t good enough. When the Real Madrid and Chelsea manager José Mourinho, himself an example of this trend, was asked why this was, he replied: ‘More time to study’.40
We know that new ideas often come from the cross-fertilisation of different fields, occurring in the mind of a widely knowledgeable person. Francis Crick, co-discoverer of the structure of DNA, trained as a physicist and later claimed that it was this background that gave him the confidence to solve a problem biologists regarded as fundamentally insoluble. Picasso combined African sculpture with Western painting to create a new kind of art.
In the marketplace for talent, the people most in demand will always be those who offer an expertise few others possess. But having a breadth of knowledge is increasingly valuable too. These two trends exist in tension with each other. So should you focus on learning more about your own niche, or on widening your knowledge base?
This question recalls the story of the hedgehog and the fox. It’s been told in many forms through the ages, but the essence of it is always the same. The fox evades his attackers in a variety of inventive but exhausting ways, while the hedgehog adopts one tried and trusted strategy — hunkering down and letting its spikes do the work. In the words of the Greek poet Archilochus: ‘The fox knows many things, but the hedgehog knows one big thing.’
The philosopher Isaiah Berlin proposed that all thinkers could be divided into one category or the other. There are thinkers who look at the world through the lens of one particular idea, and those who revel in a variety of perspectives. Plato was a hedgehog, Montaigne a fox. Tolstoy thought he was a hedgehog, but couldn’t help writing like a fox. You can apply this distinction to people in politics or business. Ronald Reagan was a hedgehog, Bill Clinton a fox. Steve Wozniak was a hedgehog, Steve Jobs a fox, which may explain why they worked so well together.
The thinkers best positioned to thrive today and in the future will be a hybrid of these two animals. In a high-information, highly competitive world, it’s crucial to know one or two big things and to know them in more depth and detail than most of your contemporaries. But to really ignite that knowledge, you need the ability to think about it from a variety of eclectic perspectives and to be able to collaborate fruitfully with people who have different specialisms.
For example, Charles Darwin knew as much about the life cycle of earthworms and the beaks of finches as anyone alive. But it was his reading of the economist Thomas Malthus that enabled him to rise above other naturalists and construct an overarching theory of life. If Darwin had read widely but hadn’t built deep expertise in biology, he would never have arrived at his big idea (and even if he had, he wouldn’t have persuaded anyone of it). If he hadn’t been such a voracious consumer of knowledge from other fields, he might not have come upon the insight that enabled him to see the underlying logic of evolution. Darwin was the archetype of a species he wouldn’t have recognised — the foxhog.
Charlie Munger, Warren Buffett’s business partner and vice-chairman of their legendary investment company Berkshire Hathaway, is one of the most successful investors of all time. He knows as much about picking stocks as anyone in the world and has an unbeatable depth of experience in buying and selling them. But that doesn’t explain his pre-eminence — there are others, albeit not many, with similar expertise. What has elevated Munger above his peers is that he hunts for knowledge like a foxhog. He is constantly reading beyond his own field, in an effort to frame and reframe the information he receives. Munger is a passionate believer in the importance of working with what he calls ‘multiple models’. When Munger looks at a business, he does so through lenses from maths, economics, engineering, psychology and other disciplines. Using multiple models is crucial, says Munger, because they give you different answers to everyone else, even when you are all looking at the same data. They turn facts into stories and information into insight. Breadth is as important as depth: ‘Before you’re going to be a great stock picker,’ says Munger, ‘you need some general education’.
The foxhog possesses what IBM calls ‘T-shaped’ knowledge. The most valuable twenty-first-century workers combine deep skills in a specialty (the vertical axis of the T) with a broad understanding of other disciplines (the horizontal axis). The former allows them to execute projects that require particular expertise; the latter enables them to see contextual linkages to other disciplines. Having a core competency differentiates the foxhog in the marketplace — it gives her a USP within her organisation and beyond it — while the top line of the T enables her to link up constructively with colleagues from different fields, and to adapt to different challenges throughout her career.
For a contemporary example of a successful foxhog, look no further than Nate Silver, the statistician and writer. Silver first gained recognition for developing a system for forecasting the performance of major league baseball players. But his interests always ranged far beyond sport, and when the 2008 presidential election came into view he started a blog on it, called FiveThirtyEight (five hundred and thirty-eight being the number of votes in the US electoral college). There he applied his statistical know-how to the business of analysing and predicting results in the party primaries and, later, the general election, with impressive accuracy. After being hired by the New York Times he did the same thing in 2012, and famously bested more traditional pundits by calling the final results with near-perfect precision. In 2013 he was poached by the ESPN network, where he applies statistical techniques to a variety of fields, from sports to politics to the movies.
There may be more sophisticated statisticians than Silver in the world. But what makes him stand out is his ability to combine his statistical expertise with an interest in, and knowledge of, different fields. That means he can offer a distinctive and often more valuable kind of analysis than those already available. Silver told the Harvard Business Review that he’s an advocate for the kind of education that produces foxhogs: ‘The thing that’s toughest to teach is the intuition for what are big questions to ask. That intellectual curiosity . . . if you’re going to have an education, then have it be a pretty diverse education so you’re flexing lots of different muscles . . . You can learn the technical skills later on, and you’ll be more motivated to learn more of the technical skills when you have some problem you’re trying to solve or some financial incentive to do so. So not specializing too early is important.’
Western policy-makers, spooked by the success of Asian education systems in producing scientists and engineers, and worried about their economic competitiveness, have been insisting that our schools and universities focus on producing hedgehogs — specialists who can slot neatly into the job market when they graduate. But this is to see only one half of the equation. Educators in the most advanced Asian economies know that the kind of broad, cross-disciplinary education in which the best Western universities have traditionally excelled will be as valuable in the twenty-first century as it was in the twentieth. Here is Professor Tan Chorh Chuan, president of the National University of Singapore:
One thing I’ve been increasingly convinced about is the importance of intellectual breadth. There are two reasons why. First, many of the problems we face in our work and lives are complex. They cut across different disciplines and domains of knowledge. If you don’t have a broad intellectual base, you will not be able to see the potential cross-disciplinary implications. Second, where we expected to do three or four jobs in a lifetime, the average graduate today might do 10 or 12. These jobs can cross many different sectors so you must have the intellectual base from which you can retool yourself more easily to do different types of work.
Discussions of the hedgehog and the fox often come down to whether it’s better to be one or the other. But in a world that rewards expertise and also the groundbreaking insights that come from the clash of domains, we need to be both. We need to be foxhogs.
4. ASK THE BIG WHY
On 26 March 2007 two men sat down side by side and read out prepared statements as millions of people around the world watched on TV. Ian Paisley, Northern Ireland’s hard-line and outspoken Protestant leader, and Gerry Adams, a Catholic, reputed to be the former commander of the Irish Republican Army, were sworn enemies. Each embodied the most uncompromising element of two sides that had been locked into a conflict stretching back decades, claiming thousands of lives, and ripping apart countless families. Yet here they were, almost unbelievably, not only sharing a table, but pledging mutual cooperation.
Somewhere in the room, out of range of the cameras, was a man who can claim to be one of the handful of people most responsible for bringing this ancient dispute, once thought intractable, to an end. Jonathan Powell was Tony Blair’s chief of staff throughout Blair’s time as prime minister, and the UK government’s principal negotiator in Northern Ireland. In his book about the peace process, Great Hatred, Little Room, he recounts a seemingly endless series of meetings with the key players over a period of ten years, some of them taking place in great rooms of state in London or Ireland, others, more clandestine and more dangerous, in suburban houses, or in churches deep in the heart of fiercely Republican communities — not the kind of places to be seen carrying a briefcase with the royal crest on it.
Powell is tall and thin and retains boyishly curly hair in his middle age. He is likeable — his eyes crinkle when he smiles — but brisk, talking and thinking at a disconcertingly fast pace. He is also direct; despite being a diplomat by trade (and author of an admiring book on Machiavelli) he comes across as someone who is far more likely to offend you than lie to you. This is probably a useful quality in his current job, because to succeed in it, he needs to be trusted by people who trust nobody.
Powell heads an NGO whose purpose is to mediate between governments and terrorist organisations. For reasons of politics and security, politicians and diplomats are usually unwilling to meet terrorists, yet most realise that they need to communicate with them if they are to find a permanent end to a long-running conflict. Powell offers governments a secret channel to these underground organisations, shuttling back and forth between the two sides until they are prepared to meet directly.
When I spoke to him he told me he was currently involved in eight different conflicts around the world.41 He had just returned from a holiday in Cornwall with his family, which he’d been forced to interrupt to fly to South America.42 Powell couldn’t tell me about the details of his work. But he was happy to discuss what I wanted to talk to him about — the role of curiosity in negotiation.
In their book Negotiation Genius, the Harvard Business School professors Deepak Malhotra and Max H. Bazerman tell the story of an American businessman they call ‘Chris’, whose firm was negotiating with a European company to purchase an ingredient for a new healthcare product. The two firms had agreed a price but become deadlocked over the question of exclusivity; the Europeans would not accept the Americans’ demand that the ingredient not be sold to any of their competitors. The American negotiators offered more money, but their counterparts would not budge. As a last resort, the Americans called Chris and asked him to fly over to Europe and join the meeting. Malhotra and Bazerman describe what happened next:
When Chris arrived and took a seat at the bargaining table, the argument over exclusivity continued. After listening briefly to the two sides, he interjected one simple word that changed the outcome of the negotiation . . . The word was ‘why’.
Chris simply asked the supplier why he would not provide exclusivity to a major corporation that was offering to buy as much of the ingredient as he could produce. The supplier’s answer was unexpected: exclusivity would require him to violate an agreement with his cousin, who currently purchased 250 pounds of the ingredient each year to make a locally sold product. With this information in hand, Chris proposed a solution that helped the two firms quickly wrap up an agreement: the supplier would provide exclusivity with the exception of a few hundred pounds annually for the supplier’s cousin.
Chris’s colleagues hadn’t asked the question, probably because they assumed that they already knew the answer; they hadn’t thought hard enough about the possibility of unknown unknowns. As the negotiation expert Diane Levin has pointed out in a commentary on this story, they may also have been inhibited by social pressures. Asking penetrating questions can be construed as bad manners. It can make us feel exposed to accusations of stupidity. But according to texts on negotiation, asking ‘why’ is crucial to the unravelling of knotty conflicts. Richard Shell, author of Bargaining for Advantage, lauds the ‘relentless curiosity’ of experienced negotiators. In a classic work, The Making of a Mediator, Michael Lang and Alison Taylor recommend ‘a commitment to curiosity and exploration’.
When it comes to negotiation (and mediation), Jonathan Powell isn’t a fan of curiosity for its own sake: he cautions against the asking of endless, purposeless questions. But he points out that if each party accepts the other’s negotiating position on its own terms, then the most likely result is deadlock. The key, he told me, is to ask what lies beneath the demand. ‘The fundamental question,’ Powell told me, ‘isn’t “what”, it’s “why”.’
If the parties negotiate on their pre-agreed positions, the negotiation becomes a trade-off in which one side loses while another one gains. ‘But,’ Powell said, ‘if you ask what people’s underlying interests are — what do they need — then you’re more likely to get to find an imaginative solution.’ That means asking probing, penetrating questions which force the other party off their prepared script, and encourage them to open up about the pressures on them from their own side. It also means listening intently to their answers.
It sounds simple. But over and again, Powell said, negotiators make the same mistake. ‘What always amazes me is that people go into these meetings without really attempting to understand the mindset of the people they’re negotiating with. Good negotiators are intelligent listeners — they don’t just hear out the other side and then present their positions. They listen carefully and try to understand where the other guys are coming from.’
The attitude Powell describes here has been termed, in the context of doctor-patient relationships, ‘empathic curiosity’. Dr Jodi Halpern, a bioethicist at the University of California, was once a practising psychiatrist.43 She noticed that patients seemed to respond better to doctors who were genuinely interested in them than to doctors who — following professional convention — remained emotionally detached in an attempt to be objective. She also noticed that even doctors who expressed genuine sympathy for their patients sometimes had trouble understanding or responding to their real needs. In 2001 she wrote an influential book arguing that empathy is more important than sympathy, because empathy involves making an effort to be consciously curious about the patient’s perspective. ‘Most people have the human capacity for empathic curiosity, for genuine interest in and emotional responsiveness to another person’s perspective, but they can turn it on and off,’ said Halpern. Doctors too often turn it off.
So, according to Powell, do negotiators. Decision-makers in business or government tend to assume that the goal of a negotiation is a result in which everyone’s costs and benefits become roughly equal. But long-running disputes are often rooted in an underlying moral and emotional conflict that isn’t susceptible to material negotiations. Only by applying ‘conscious curiosity’ can a negotiator or mediator identify the contours of these deeper motivations, and thus search for ways to address them.
From 2004 to 2008, the social psychologist Jeremy Ginges and the anthropologist Scott Atran surveyed nearly 4,000 Palestinians and Israelis across the political and social spectrum, including refugees, Hamas supporters and Israeli settlers on the West Bank. They asked the participants to react to a series of hypothetical but realistic peace deals. Almost everyone on both sides rejected the deals outright. Asked to explain why, they would say that the values involved were sacred to them. Many Israeli settlers said that they would never consider trading any land on the West Bank, which they considered to have been granted by God. Palestinians considered the right of return to be sacred rather than something that might be traded for a concession from the other side.
The psychologist Philip Tetlock has called this effect ‘the taboo trade-off’. When parties in a negotiation are asked to trade something they consider sacred for something secular or material, they become angry, inflexible, and deaf to dry cost-benefit reasoning. Indeed, material offers can backfire. Contrary to classical economic theory, financial incentives can make people even less likely to make concessions than they would be if no money were on the table.
When Atran and Ginges added monetary incentives to their hypothetical deals — for instance, an offer of $10 billion a year to the Palestinians — the respondents being offered the incentive reacted with even greater outrage than before. The researchers also spoke to leaders on both sides of the dispute, and found that financial proposals elicited a similar response even from these experienced negotiators: ‘No, we do not sell ourselves for any amount,’ a Hamas leader angrily told the researchers after they suggested adding in American aid to the deal.
Atran and Ginges found that the only proposals appearing to break the deadlock were symbolic ones, which carried great emotional freight. Palestinian hard-liners were more willing to consider recognising the right of Israel to exist if the Israelis offered an official apology for the displacement of Palestinians in the 1948 war. Israeli respondents were prepared to consider borders very close to those that existed before the 1967 war if Palestinian groups explicitly recognised Israel’s right to exist.
Western mediators, operating on the rational actor principle, find such attitudes hard to come to terms with. Politicians sometimes talk about taking a ‘businesslike’ approach to such disputes, or argue that peace will inevitably follow progress on material issues, like jobs or access to the electricity supply. Making progress on such issues can certainly help, but it can also throw into even sharper relief conflicts of values — values rooted in fierce, heartfelt beliefs about identity and moral purpose. If answers are to be found, they lie buried deep in the ‘why’ of the dispute rather than the ‘what’. Only negotiators curious enough about the other side’s fundamental beliefs and feelings will discover them.
In Northern Ireland, said Powell, ‘we spent a long time knocking our heads against the issue of decommissioning [the disarming of the IRA]. It became a zero-sum game. The IRA said that giving up their weapons before being invited to share power with the Unionists meant giving up their trump card. The Unionists said they weren’t going into government with people who had a private army. Both were reasonable positions, but the result was deadlock. So what we had to do was ask questions. “What is it that really matters to you?”’
It eventually became apparent that what the Unionists needed wasn’t really decommissioning; after all, the IRA could always get new weapons if they wanted them. They needed the IRA to take the symbolic step of publicly renouncing violence forever. As for the IRA themselves, they had no intention of going back to violence. But they didn’t want to feel as if they were being forced into a surrender. This conflict was at least as much about the intangibles of face, pride and respect as it was about material goals.
Powell and Blair needed to find something that wasn’t quite decommissioning but that carried the necessary symbolic weight. ‘The idea of weapons dumps came from Kosovo and Bosnia. British generals told me that because they couldn’t get people to give up their weapons, they made both sides’ weapons available for inspection. So I went to a house in West Belfast and met Gerry Adams and put this to him. He said, “There’s absolutely no way we can accept this.”’ A month later, Adams returned to Powell with the same proposal.
The symbolic step of burying weapons opened the way to a lasting peace deal. ‘Terrorist groups don’t want to be thought of as criminals, but as legitimate political movements,’ Powell told me. Legitimacy, of course, being another word for respect.
One of America’s most effective generals, Stanley McChrystal, played key roles in the Iraq and Afghanistan wars, combining a reputation for ruthlessness in combat with a scholarly intellect. During an interview given after his retirement he summarised the difficult process of adaptation the US military went through in the years following the invasion of Iraq — a process which led, eventually and belatedly, to a lasting reduction in violence:
When we first started, the question was, ‘Where is the enemy?’ That was the intelligence question. As we got smarter, we started to ask, ‘Who is the enemy?’ And we thought we were pretty clever. And then we realized that wasn’t the right question, and we asked, ‘What’s the enemy doing or trying to do?’ And it wasn’t until we got further along that we said, ‘Why are they the enemy?’
It has often been observed that the American military is obsessed with short-term results, and as a consequence can lose sight of longer-term goals. This is not a problem unique to the military, however. As a culture, we have a persistent tendency to pretend that asking ‘what’ can be substituted for ‘why’. When we can, we avoid the murky waters of emotion and causation, and focus only on the measurable. Most economists work with a model of human behaviour that treats individuals as ‘rational actors’ who respond to incentives and disincentives but possess no deeper motivational complexity. Investors value companies according to their quarterly results rather than evaluating their long-term strategy.
For a good part of the twentieth century, even psychologists stopped asking why human beings behave in certain ways, and focused exclusively on what they do. Behaviourists, who dominated the field in the 1930s and 1940s, argued that it was futile to try and fathom people’s inner feelings, thoughts or desires, and that the only proper object of study was the interplay between behaviour and environment, stimulus and response. Only with the arrival of the ‘cognitive revolution’ in the 1950s did it once again become acceptable to ask questions about motivation.
The familiar desire to do away with ‘why’ can be discerned underneath the contemporary enthusiasm for Big Data. The exponential increase in the processing power of computers, and the ubiquity of connected digital devices, mean there is more available information on human activity than ever before. Companies crunch data derived from our web and mobile phone use to work out what we’re going to buy next. Social scientists, journalists and activists use similar techniques to map and predict the spread of disease, crime and famine.
The Failed States Index, for instance, is designed to be a scientific measure of which states around the world are close to collapse. Each country is rated on twelve indicators of ‘pressure on the state’ during the year in question — including refugee flows, poverty and security threats. The data is drawn from some 130,000 publicly available sources. Its aim is to give ‘an early warning of conflict . . . to policy-makers and the public at large.’
Chris Anderson, the former editor of Wired magazine, has made the extreme case for the potential of such techniques:
Out with every theory of human behavior, from linguistics to sociology. Forget taxonomy, ontology, and psychology. Who knows why people do what they do? The point is they do it, and we can track and measure it with unprecedented fidelity. With enough data, the numbers speak for themselves.
Anderson thinks that when you amass Big Data, you no longer have to bother with the Big Why. Every question should be treated as a puzzle rather than a mystery. But not everything is susceptible to such analysis. The Failed States Index failed utterly to predict the wave of uprisings in Middle Eastern and North African states in early 2011 that became known as the Arab Spring. Only someone with a deep understanding of the relevant countries’ politics and history would stand a chance of anticipating such events, analysing their causes or formulating a response.
As the authors of a more balanced assessment of Big Data put it, even as we make the most of its potential, ‘There will be a special need to carve out a place for the human: to reserve space for intuition, common sense, and serendipity.’ Knowing the ‘what’ is crucial to making good decisions and discoveries, but it will always be important to ask ‘why’.
It’s part of what makes us human, after all. When we stop asking why, we become like Kanzi — intelligent apes who can monitor their environment, make requests and follow instructions, but who remain blind to deeper truths, such as what our adversary is really asking of us.
5. BE A THINKERER
In 1773, on a wet and blustery October day, Benjamin Franklin led a group of men down to Portsmouth harbour on the south coast of England, where they boarded two boats and set out to sea. Franklin’s boat put down anchor about a quarter of a mile from the shore. The other, at Franklin’s instruction, pushed out a little further, and began to cross back and forth over the same stretch. As it did so, one of its passengers poured olive oil onto the waves out of a large stone bottle, through a hole in the cork ‘somewhat bigger than a goose-quill’. The expedition’s leader watched keenly from his small boat, as it pitched and tossed and the sea threw icy spray into his face.
Franklin was looking to see if the oil was flattening the surf. In his account of the expedition, he explained that his interest in the apparent capacity of oil to calm turbulent waters had first been aroused sixteen years before, when he was on his way to England on his first diplomatic mission as an emissary of the American colonies. Standing on deck, he noticed that the wake of his ship appeared remarkably smooth compared to that of the others in the fleet. He asked the ship’s captain why this might be. The captain, with a hint of contempt for his question, replied that it was because the cooks had just emptied their greasy water through the scuppers.
In 1762, on a voyage from England to America, Franklin made himself a reading lamp by floating oil and a wick on some water in a glass which he hung from the ceiling of his cabin. He couldn’t help but become more interested in what was happening inside his lamp than in his book. He noticed that when the ship rocked, the water in the lamp was ‘in great commotion’ but the oil remained in place, and that when the oil burned away to a thin film overnight the water stopped moving too.
Franklin asked a fellow passenger, a retired sea captain, for his thoughts. The old man said that the Bermudans used oil to smooth choppy waters, and that he had witnessed a similar practice in Lisbon. Another passenger with whom Franklin consulted recalled that divers in the Mediterranean would keep a small quantity of oil in their mouths as they descended into the deep and release it in order to smooth the waters above, thus allowing more light to reach down below.
Once ashore, Franklin described what he had observed to various of his learned friends, all of whom agreed it was interesting and, having promised to consider the matter, promptly forgot about it. Not Franklin. He couldn’t forget about anything he couldn’t explain. As one of his biographers puts it, Franklin ‘could not drink a cup of tea without wondering why tea leaves gathered in one configuration rather than another at the bottom.’
A few years later Franklin, now back in England, could be found crouched at the edge of the large pond at Clapham Common, south of London. It was a windy day, and when he poured a little oil on the waves that spread from the side of the pond he saw that an instant calm was produced on the water, which spread until a quarter of the pond was ‘as smooth as a looking glass.’ After this, Franklin carried a vial of oil in the hollow of his bamboo walking stick everywhere he went, so that he could make similar experiments at every stream, pond or lake he passed on his walks.
Eventually, Franklin wondered if the wave-stilling effect would work reliably, not just on the gentle waves of a wind-disturbed pond, but on the high waves of an ocean swell, and if so whether it might be harnessed to help sailors land ashore in stormy waters. On the pond in London’s Green Park, Franklin demonstrated his experiment to Count Bentinck of Holland and his son, a naval captain and amateur inventor. Captain Bentinck promptly invited Franklin to Portsmouth, where he provided him with the boats required for the larger-scale experiment Franklin had in mind, and accompanied him on the expedition.
Few questions escaped Franklin’s penetrating curiosity. When he learned that the ocean voyage between England and America took two weeks longer going west than it did going east, he wondered if it was something to do with the rotation of the earth. But after a conversation with a Nantucket whaler, he discovered that a warm current of water was slowing down ships that travelled westward and speeding up those heading east. Franklin named it the Gulf Stream, and was the first to chart its course, by keeping track of the ocean’s temperature during his Atlantic crossings. Every day he could be found on deck, taking temperature readings from the water.
The verb ‘to thinker’, a portmanteau of ‘think’ and ‘tinker’, is of unknown origin. I was introduced to it by Paola Antonelli of MoMA, who traced it to a presentation given in 2007 by John Seely Brown, a Silicon Valley veteran and until 2000 a director of the legendary Xerox Palo Alto Research Centre (Xerox PARC). Brown and Antonelli use the term to describe a social, collaborative way of working. But I’m using it to name a style of cognitive investigation that mixes the concrete and the abstract, toggling between the details and the big picture, zooming out to see the wood and back in again to examine the bark on the tree. Here is Peter Thiel, venture capitalist and co-founder of PayPal, introducing a series of lectures he gave to students at Stanford University on the theme of entrepreneurialism:
A fundamental challenge — in business as in life — is to integrate the micro and macro such that all things make sense. Humanities majors may well learn a great deal about the world. But they don’t really learn career skills through their studies. Engineering majors, conversely, learn in great technical detail. But they might not learn why, how, or where they should apply their skills in the workforce. The best students, workers, and thinkers will integrate these questions into a cohesive narrative.
What Thiel describes here is what I’m calling thinkering. Benjamin Franklin was the archetypal thinkerer. Although undoubtedly an intellectual, he didn’t fit the popular image of the philosopher, as captured by Auguste Rodin — a sedentary, solitary cogitator in repose, shuttered from the world’s distractions. Franklin was a man of action, an implausibly productive doer who built better versions of things that already existed, like printing presses, and things that hadn’t yet been born, like fire services and democratic republics. He was physically active (he once swam down the Thames from Chelsea to Blackfriars) and socially hyperactive; he loved to sit around a table with friends and new acquaintances, drinking coffee, telling stories, and making plans for a better world. He was as at home discussing airy abstractions like freedom and virtue as he was the best way to count a vote or to organise one’s day, and he approached the former by way of the latter. Franklin loved life, with all of its surprises, kinks and uncertainties. His epistemic curiosity was fed by an accumulation of experiments, like what happens when you spread oil upon a pond. In the eighteenth century there were plenty of people who tinkered with Leyden jars but there were fewer with the intellectual toolkit to consider how the crackle in the jar related to the flashes in the sky.
In the 1990s the economist Robert Reich coined the term ‘symbolic analysts’ to describe the growing class of workers who use technology to shape, manipulate and sell units of significance rather than producing or moving physical goods. Symbolic analysts include those who work in marketing, software development and investing. They are PowerPoint wizards, who apply the same conceptual tools to every area of human endeavour; a management consultancy thinks about a company that makes TV programmes the same way it does a hospital that saves lives.
Reich was commenting on the shift in global economic activity that saw developing countries, most notably China, take over the bulk of the world’s manufacturing, while Western countries became ‘knowledge economies’, exporting thoughts rather than goods. But the knowledge economy often seems to have little room for the kind of knowledge Franklin gained when he made himself a reading lamp. It is more interested in big ideas than physical processes, and values conceptual breakthroughs over incremental progress. Meanwhile, the technical knowledge of the world is becoming increasingly specialised, the province of a few experts who find it hard to communicate what they know to those outside their own field and who, as Thiel suggests, find it hard to integrate their micro knowledge with the macro needs of the workplace and world.
Although he was writing over two hundred and fifty years ago, David Hume saw that an economy needs a balance of thinkers and doers, and that each is improved by the other: ‘The same age, which produces great philosophers and poets, usually abounds with skilful weavers, and ship-carpenters. We cannot reasonably expect, that a piece of woollen-cloth will be wrought to perfection in a nation, which is ignorant of astronomy, or where ethics are neglected.’ Nor can we expect great ideas to flourish in a society, or a company, in which the details of craft are neglected. Here is Steve Jobs again, one of the heroes of the symbolic analyst class, and yet also a manufacturer:
You know, one of the things that really hurt Apple was after I left, John Sculley got a very serious disease. It’s the disease of thinking that a really great idea is 90 per cent of the work. And if you just tell all these other people ‘Here’s this great idea’, then of course they can go off and make it happen. And the problem with that is that there’s just a tremendous amount of craftsmanship in between a great idea and a great product . . . Designing a product is keeping five thousand things in your brain and fitting them all together in new and different ways to get what you want. And every day you discover something new that is a new problem or a new opportunity to fit these things together a little different. And it’s that process that is the magic.
Steve Jobs deserved the overused term ‘visionary’. But he also, famously, obsessed over details; 323 Apple product patents list Steven P. Jobs as one of their inventors (they include one for the glass steps used in Apple stores). These two aspects of his outlook weren’t contradictory; they depended on each other. Jobs was able to think differently about the future of personal computers, but only after he had spent time tinkering with the Apple Mac’s forerunner, the Alto, at PARC. Jobs was a thinkerer.
The people responsible for our biggest ideas are usually detail freaks too. Open the pages of The Origin of Species for the first time and you discover a rather different book from the one you may have been led to expect. You will not find ringing declarations of intellectual revolution. What you will find are pages upon pages on the breeding of dogs and horses. Darwin’s world-changing idea grows organically out of his empirical observation. Similarly, if you read Adam Smith’s The Wealth of Nations, you find, before any mention of the invisible hand of the market, a closely observed account of the operations of a pin factory.
The experiment in Portsmouth harbour was a failure. While Franklin and his fellow investigators observed some smoothing of the waters around the boat, it had little effect on the height and force of the white-capped waves as they crested towards the coast and broke on the shore. But that was no matter to Franklin. He was careful to record the details ‘even of an Experiment that does not succeed, since they may give Hints of Amendment in future Trials.’ Indeed, Franklin’s wave-stilling experiments took on a life beyond that of their instigator. As a recent paper in the Biophysical Journal attests, they inspired future scientists to investigate the action of molecule-thick layers (‘monolayers’) on water, leading eventually to a better understanding of the properties of cell membranes, the semi-permeable wrappers that surround the basic building blocks of all living things.
We live in a very different world from Benjamin Franklin’s, one of far greater technological complexity, and as a consequence, greater abstraction. Most of us can’t even begin to understand how the engine of a modern car functions, or how our smartphone works. Abstraction is the very principle on which the digital revolution is built — the world copied into zeroes and ones. The web allows us to skim and skip along the top line of everything, scooping out the gist without delving into details. Unless we make an effort to be thinkerers — to sweat the small stuff while thinking big, to get interested in processes and outcomes, tiny details and grand visions — we’ll never recapture the spirit of the age of Franklin.
6. QUESTION YOUR TEASPOONS
On a freezing Sunday morning in the East End of London I took my place in a long line of people which disappeared around a distant corner. It was populated by the youthful inhabitants of one of London’s hippest neighbourhoods, people not known for their eagerness to be up and about the morning after a Saturday night. They were wrapped in thick coats and wore woolly hats with ear-muffs. Gloved fingers slid across the screens of smartphones. The queue wasn’t budging. I overheard someone say, ‘We can hardly complain, can we? This is the Boring Conference.’
Eventually things got going, and we filed into a draughty Victorian building called York Hall, formerly renowned for its staging of boxing matches. Five hundred chairs in neat rows faced a stage, on which there was a podium and a big screen showing, in slow rotation, photographs of dull suburban streets. Overlaid on the pictures was a message, written in a stolid font: ‘Welcome to Boring 2012. It won’t be as good as last year.’
Despite this warning, excited chatter filled the hall. Now and again a cheer was heard; in one corner of the hall there was a competition to see who could make the most self-propelled rotations in an office chair from a single push. The people in the row behind me discussed the most efficient technique: ‘Once you’ve achieved lift-off, you need to tuck your feet in and stick out your arms. It’s the principle of conservation of momentum.’
James Ward, prime mover of the Boring conference, was first to speak. After a brief welcome consisting mainly of apologies he introduced his own Boring 2012 topic of choice — the history of supermarket self-checkout machines. Ward’s presentation, entitled ‘Unexpected Item in Bagging Area’44, was followed by a talk about letterboxes from someone who had spent time working as a postman. The challenge of ‘protective bristles’ was addressed.
Next, a stylishly dressed young woman called Leila Johnston told the audience about her obsession with IBM cash registers, as used in Starbucks and other retail chains. IBM’s machines are, she asserted, superior to the Sharp or Toshiba alternatives. She showed photographs of different IBM models she had spotted while out shopping (‘Now, here is something special: the EPOS 5600, in white — my Moby Dick.’) together with a plot of their locations on a Google map. A man in a tight-fitting shirt talked about how to make perfectly browned ‘hotel toast’ at home. He started by explaining that he generally prefers to place the bread lengthways into the toaster, ‘though obviously that depends on the aspect ratio of the bread.’ The hall was cold, but it was warmed by the audience’s evident pleasure in the presentations. After a break for some ostentatiously boring refreshments (cucumber sandwiches), a music journalist gave a talk about double yellow lines.
Ward is a marketing manager by day, and co-founder of the Stationery Club, whose members convene to discuss pens, paper and paper clips. In 2010 he noted that a conference called ‘Interesting’ had just been cancelled. He took to Twitter to propose, half-jokingly, that it ought to be replaced with a Boring conference. To his surprise, the tweet provoked a wave of enthusiastic replies, including offers of contributions, ideas and assistance. So he asked a few people to prepare short presentations on a boring subject of their choice, booked a venue, and hoped that enough people would buy tickets to cover the deposit. The first fifty tickets sold in seven minutes, the rest soon followed.
The inaugural Boring conference was held in a room above the London theatre which houses a long-running musical based on the songs of the band Queen (the conference motto that year was ‘We Will Not Rock You’). Ward kicked off proceedings with a discussion of his tie collection, complete with PowerPoint charts (he noted, for example, that the proportion of single-colour ties in his collection fell from 45.5 per cent to 1.5 per cent between June and December that year). Ward’s whimsical tweet had turned into a real world event that garnered national and international media coverage, including an article in the Wall Street Journal, which anointed Ward the ‘envoy of ennui’.
The conference has been repeated every year since, in a succession of larger venues. It is dedicated to ‘the mundane, the ordinary and the overlooked.’ Over the years, topics have included electric hand dryers, paint catalogues, sneezing (a talk given by a man who had kept a diary of his sneezes for three years), car park roofs and bus routes. Beneath the dry irony and self-deprecating humour pervading the conference lies a serious purpose — to demonstrate that anything can be interesting.
The title of Ward’s blog is borrowed from a saying of Andy Warhol’s: ‘I like boring things’. Warhol took the most boring and ubiquitous object he could think of — a tin of soup — and made millions of people see it anew. Ward says that when he refers to boring things he is thinking of things that only seem boring, because we’re not paying attention to them. He quotes another avant-garde artist, the composer John Cage: ‘If something is boring after two minutes, try it for four. If still boring, then eight. Then sixteen. Then thirty-two. Eventually one discovers that it is not boring at all.’
Ward calls this ‘the transformative power of attention’. Car park roofs, hand dryers, milk: you can take anything, he says, and, by paying attention to it, reveal hidden interest, significance, beauty. Leila Johnston told the people in York Hall about how her childhood years were spent in a small town in Scotland, close to an IBM plant. The factory was the lifeblood of the town; the train station was named IBM Halt. Everyone’s parents worked there, and children were used to playing with bags of IBM components as toys. Johnston explained that growing up like this not only made her forever interested in electronics, but left her with an abiding affection for Big Blue. Her audience was captivated. An apparently tedious topic had been transformed into a meaningful story about how we cherish our connections to childhood.
Ward is an admirer of the French writer Georges Perec, who was interested in the ‘infra-ordinary’, by which he meant the opposite of extraordinary: the background noise of life, the things we see or do every day. Our utensils, our habitual turns of phrase, are things so obvious and commonplace that we forget to see their inherent fascination. In an essay called An Attempt at Exhausting a Place in Paris, Perec takes a seat in the window of a café and describes everything he can see. Then he goes back the next day and does the same thing — and the next day, and the next. He wanted to find out ‘what happens when nothing happens’. Perec urged his readers to ‘question your teaspoons’.
Henry James was once accused by H.G. Wells of having sacrificed his life to art. He replied, ‘I live, live intensely, and am fed by life, and my value, whatever it might be, is my own kind of expression of that. Art makes life, makes interest, makes importance.’ When someone gets interested or bored we tend to praise or blame the object which interests or bores them. But some people are just better than others at ‘making interest’ in the world. It is a talent, or rather, an art. Henry James was fed by a life that was itself no more interesting than the lives most of us live — indeed, as Wells was suggesting, it was less interesting than many. But he took his unpromising raw materials — observations made while walking in the park, gossip overheard at a dinner party — and transformed them into vividly imagined fictions.
James didn’t feel a need to go chasing after experience, preferring to discover what was interesting in the experiences he had. His biographer Hazel Hutchison told me that most of James’s novels grew out of anecdotes told to him by friends, ‘which he took away and chewed over, working out the reasons why the people involved behaved in the way that they did.’ His advice to young writers was, ‘Try to be one of the people upon whom nothing is lost!’
It is by studying little things that we attain the great art of having as little misery and as much happiness as possible
Samuel Johnson
Laura McInerney is a former teacher, now pursuing a PhD in education on a Fulbright scholarship. When she was an undergraduate she had a job at McDonald’s. During the daily breakfast shift she would break and cook over 400 eggs: ‘Smash, crack, sizzle, remove. Repeat!’ It was soul-destroying work, or at least, it might have been but for her capacity to get interested in what she was doing. She began by getting interested in how eggs cook: coagulation, a process in which protein becomes so overwhelmed by heat that it ceases to be soluble and sets into a solid.
Suddenly, McInerney came to see each egg as a mini-battlefield, where proteins fought a losing battle against the heat. She started to watch each egg carefully to see which proteins gave up the ghost first — those in the middle or those at the edge. On other days, the eggs would remind her of a history class she took on Weimar Germany, in which she learned that an egg went from costing a quarter of a mark to four billion marks. Or she would reflect on eggs and morality — was it ethically right to steal eggs from a chicken? For McInerney, an egg wasn’t just an egg.
When Carol Sansone was at university she found herself bored by the courses that she was taking, so she started to attend courses in things she was interested in, even though they didn’t count towards her degree. In her mandated courses, she was the epitome of what educationalists call a ‘surface learner’; she targeted her efforts efficiently, doing just what she needed to do to succeed. In courses she took for their own sake — an eclectic mix of art history, literature and creative writing — she was a ‘deep learner’, absorbed and enthralled by the material, seeking understanding for its own sake. Her tutors on these courses were pleased to have such an enthusiastic student in their class, but somewhat puzzled when she told them that she was gaining no credits from their course.
Sansone was puzzled too. She wondered why there seemed to be two distinct categories, not just in her life, but in the world — things you do because they’re important, and things you do for pleasure. Or to put it another way, things you do to hit a target that others have set — a great degree, a top job — and things you do just because you are fascinated by the doing of them. Now, as a psychology professor at the University of Utah, she studies the strategies people use to make boring things interesting.
We have all found ourselves in situations where we’re compelled — by our parents, our teachers, our bosses, our own conscience — to spend time on tasks that we find painfully dull. We can motivate ourselves to complete them by thinking about the money we’ll get for doing it, or the approval of our teacher, or just the stuff we’ll have to deal with if we don’t do it. But we can also find ways to turn this mundane activity into something that stimulates our curiosity, knowing that once we get interested in it, we’re more likely to spend time on it.
It’s often assumed that motivating people involves getting them to think about the future — about what they can achieve or become. When head teachers, life coaches or personal trainers talk about motivation, they usually focus on the importance of goals. Work hard at this job, and you’ll get promoted; to get through another set of bench presses, think about the kind of biceps you want. This makes sense — we all use the prospect of future benefits to drag ourselves through some tedious or unpleasant but necessary activity. But the goal-focused approach to motivation has its problems, because when we set our sights on the future we are less likely to enjoy the present, which can make what we’re doing feel less interesting, and thus make us less likely to persevere with it.
Researchers from the University of Chicago and the Korea Business School collaborated on a study to investigate this phenomenon. They recruited a hundred students about to embark on a session at the gym. They asked half of them to describe their goals — ‘I want to lose weight’ — and then to continue focusing on those goals as they worked out. They asked the other half to describe their experience — what it was like stretching and exercising at the gym — and to continue thinking about that as they undertook their session.
Before the session, the goal-driven students tended to say they were planning to run on the treadmill for longer than the experience-focused students. But it was the experience-focused students who actually ended up running for longer, and who also reported enjoying their exercise more afterwards.45 When all our interest is directed at the future, we get easily bored with the present. This puts a subtle twist on the classic distinction between intrinsic and extrinsic motivation. Extrinsic rewards aren’t always offered or imposed upon us by a third party. We can set them ourselves, and in the process accidentally corrode our inner motivation.
If you are a manager or a teacher, then, is it best to encourage employees or students to explore their curiosity? Not if your overriding concern is that they perform a task on time. As we’ve seen, getting interested in something means that future goals recede in importance. When Sansone and her colleagues gave volunteers a repetitive word-copying task, the volunteers spontaneously started to make it more interesting for themselves by varying the way they copied the letters and by reading the incidental text. When they employed these interest-enhancing tricks, they copied fewer letters within the time allowed than they had before. However, when the time spent on the task was left up to the volunteers, the ones who employed interest-enhancing strategies copied more letters, because they persisted for longer. Curiosity is likely to lead to better work, but only if it’s allowed time to breathe.
It can lead to broader satisfactions too. If diversive curiosity is the flash and splash of novel stimuli, epistemic curiosity is a path you want to keep travelling down, even when the road is bumpy. Making such a journey can bring an important ancillary benefit. The English philosopher J.S. Mill argued that happiness is something that happens to us while we are pursuing some other purpose — that it approaches us sideways, ‘like a crab’. His insight foreshadowed the research of Mihaly Csikszentmihalyi, the psychologist who coined the term ‘flow’ to describe the happiness that comes from being completely and unreflectively immersed in an activity, whether it’s guitar playing, rock climbing, or studying molecular genetics. People who share something of Henry James’s talent for getting interested in things — even things that might seem mundane — tend to be happier than those who don’t.
This is true of couples as well as individuals. Arthur Aron is a psychologist at Stony Brook University in New York who specialises in the study of long-term romantic relationships. When he started getting interested in this area, he noted that there was a gap in the research. Most of it focused on conflict — why do couples argue? There were plenty of studies of jealousy, resentment and anxiety. A less dramatic and more common problem had been neglected — what happens when couples get bored?
Aron and his colleagues run a long-term study of married couples. Over a hundred couples from Michigan are interviewed about their relationships, individually and at home, at yearly intervals. Aron has gathered data on three specific questions, from the seventh and sixteenth years of marriage. First, ‘During the past month, how often did you feel that your marriage was in a rut (or getting into a rut)?’ Second, ‘All in all, how satisfied are you with your marriage?’ Third, he looked at a visual measure of closeness in which participants were asked to select, from a series of circles overlapping to different degrees, ‘the picture which best describes your marriage’.
Aron found that the couples who felt that their relationship had become a little boring after seven years of marriage experienced much lower satisfaction nine years later, irrespective of their levels of conflict and argument. Couples who felt unexcited by each other were also likely to choose pairs of circles that overlapped less with each other, to sum up their closeness. Boredom wasn’t a neutral quality, a mere absence of excitement. It acted as a malign agent, quietly prising couples apart. In some ways it was more dangerous than open conflict. As Aron puts it, ‘At least couples who argue with each other are still doing something together.’
Studies have shown that marital satisfaction tends to take a precipitous decline in the early years of a marriage. Part of the reason that couples drift apart from each other, Aron reasons, is that the novelty of mutual self-transformation wears off. It’s thrilling to see the world through someone else’s eyes; to feel your own being remoulded in response to another’s. But when you’re no longer getting to know each other’s enthusiasms and quirks and unexpected strengths, when you’ve agreed on your favourite restaurants and holiday destinations, and got to know each other’s friends, then it’s time, according to Aron, to actively replenish the stock of novelty. The couples who do so are more likely to remain happy.
It’s not enough to just order a new DVD box set, however; couples need to pursue activities which involve learning or achieving something together. In another study, Aron recruited twenty-eight couples from his university campus. Some were married, some had been dating for at least two months. They were taken to a gym hall and invited to take part in one of two different tasks. Some couples were given the ‘mundane’ task, which involved one partner rolling a ball to the centre of the room while the other watched, then retrieving the ball and rolling it back the other way. Others were given the ‘novel and arousing’ task, which involved being bound to each other with Velcro straps while negotiating an obstacle course. (Despite the faintly sadomasochistic flavour of the activity, the term ‘arousing’ here refers only to physiological and mental stimulation.)
The couples were then asked to fill out a questionnaire about their relationship, and some were filmed talking to each other, their conversation observed and coded according to established protocols for measuring the quality of marital interaction. The couples that engaged in the ‘novel and arousing’ activities were significantly more likely to express satisfaction with their relationship afterwards, and to feel romantic about each other. As Aron pointed out to me, couples who take part in such potentially awkward, embarrassing and frustrating experiences are more likely to fall out. But better that than being bored.
Aron doesn’t prescribe a frantic search for utterly new experiences — ‘We went paragliding last year, let’s learn the Peruvian flute this summer’ — so much as taking a gentle pleasure in variation, even if the themes are familiar. He told me that he and his wife (also a psychologist, and a collaborator) go hiking in the mountains of Slovenia, but always try different routes. Another researcher, James Graham of Western Washington University, found that happy couples were simply better at finding interest in the mundane joint activities of everyday life — cooking, childcare, DIY.
Henry James used curiosity to turn the ordinary stuff of life into great art. But then, he was a genius. The rest of us can, at the very least, use it to make our lives more interesting. It’s a choice: we can interrogate our cutlery, or allow the familiar to be boring. Laura McInerney expresses the principle beautifully:
When you live somewhere boring — and we all live somewhere boring — then we have a choice about the way we will see that place. We can spend our days thinking like everyone else, seeing the same things over and over, and never once wondering about how they got that way, or why they stayed that way, or how they could be better. Or, we can learn. And if we make the choice to learn, and to be curious about the things around us, then we are essentially making the choice never to be bored again.
7. TURN PUZZLES INTO MYSTERIES
In 1955, a bow-tied man with silver hair and a fastidiously trimmed moustache cleared his belongings from a government office in Washington. William Friedman was retiring from his post at the US National Security Agency, bringing an end to a world-changing career very few people knew about.
He took with him a photograph he had kept under the glass plate that covered his desk. It heads this section. In it, seventy-one uniformed officers stand in two lines behind a seated row of five men and women in civilian clothes. At first glance the image is unremarkable, but a second look reveals something odd about the way the subjects are posed. Some face the front but have their heads turned to the right, some are square to the camera, others have their whole bodies facing one way or the other.
For more than thirty years, Friedman was the US government’s chief code-breaker. He was the leader of the team that broke Japan’s ‘Purple’ cipher — equivalent to the German Enigma machine — thus providing the US government with vital intelligence on Japanese activities during the war, including information that, had it been acted on, might have averted Pearl Harbor. He was also co-inventor of the US army’s best cipher machine, and is regarded as one of the founders of the modern study of cryptography. In this book I’ve talked about the difference between curiosity about puzzles and curiosity about mysteries. Friedman’s expertise was literally in puzzles, but he exemplifies the ability of the curious mind to find fulfilment in pursuing the insoluble.
Friedman’s hero was the great Renaissance scholar and statesman Sir Francis Bacon, and it was from Bacon that Friedman took his fascination with codes. Bacon invented a code called the ‘bilateral cipher’, in which a combination of only two symbols is used, in groups of five, to represent every letter in the alphabet. So, for instance, if the two symbols are a and b, A = aaaaa, B = aaaab, C = aaaba, and so on. The crucial point in Bacon’s system is that rather than using a’s and b’s (which would make the code relatively easy to decipher) he proposed using anything that can be divided into two classes or types: two types of font, bells and trumpets, apples and oranges. As Bacon put it, such a system enabled people to say ‘anything by means of anything’.
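Bacon’s table is in effect a five-bit binary code, which is easy to make concrete. What follows is a minimal sketch in Python, my own illustration rather than anything Bacon or Friedman wrote: the a/b groups follow Bacon’s scheme, and the case of the letters in an innocent cover text stands in for his two classes of ‘anything’ (the function names encode and hide are my own):

```python
# A sketch of Bacon's bilateral cipher. Each letter of a 24-letter
# Elizabethan alphabet (I/J and U/V were not yet distinguished) maps
# to a group of five a/b symbols -- in effect, a five-bit binary code.
ALPHABET = "ABCDEFGHIKLMNOPQRSTUWXYZ"

def encode(letter):
    """Return the five-symbol a/b group for a single letter."""
    i = ALPHABET.index(letter.upper().replace("J", "I").replace("V", "U"))
    return format(i, "05b").replace("0", "a").replace("1", "b")

def hide(secret, cover):
    """Hide a secret in a cover text: a lower-case letter plays 'a',
    an upper-case letter plays 'b'; non-letters pass through as-is."""
    stream = "".join(encode(c) for c in secret if c.isalpha())
    if sum(ch.isalpha() for ch in cover) < len(stream):
        raise ValueError("cover text has too few letters")
    out, i = [], 0
    for ch in cover:
        if ch.isalpha() and i < len(stream):
            out.append(ch.lower() if stream[i] == "a" else ch.upper())
            i += 1
        else:
            out.append(ch)
    return "".join(out)

print(encode("A"), encode("B"), encode("C"))   # aaaaa aaaab aaaba
print(hide("FLEE", "come at once to the mills"))
# -> coMe At OnCe to The miLls
```

Because any two distinguishable classes can play a and b, the same routine could be driven by two fonts, by bells and trumpets, or by apples and oranges; only the classification rule changes.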
Bacon’s system was never used, as he had thought it would be, to transmit military secrets, but it became a tool used by literary sleuths. From the latter half of the nineteenth century onwards, many learned people became convinced that Shakespeare was not the author of his plays, and that the answer to the question of who did write them could be found in puzzles buried within the texts.46 Sir Francis Bacon was a leading candidate, and ‘Baconians’ believed his bilateral cipher offered the key to the answer. One of the men most interested in this question was the eccentric American millionaire George Fabyan, heir to the country’s largest cotton goods firm. He lured Elizabeth Gallup, author of a highly successful book of Baconian investigation, to come and work on ‘the Greatest of Literary problems’ at his estate at Riverbank, on the Fox river, just west of Chicago. Gallup was joining an avant-garde faculty of scientists at Riverbank, dedicated, as Fabyan put it, to ‘wresting from Nature, her secrets’.
Fabyan had the main building redesigned by Frank Lloyd Wright and added a Japanese garden, a lighthouse, a zoo (with a gorilla named Hamlet) and a Dutch windmill, moved brick by brick from Holland. In these bizarre surroundings, a diverse group of scientists were paid to pursue their own obsessions. In 1915, William Friedman joined them, after Fabyan persuaded him to leave his PhD in plant biology at Cornell to work at Riverbank on the propagation of wheat. Friedman’s fascination with codes and bibliography meant that he soon joined Elizabeth Gallup’s Department of Ciphers.
Before long, Friedman lost faith in the methods and purpose of Gallup’s project (much later, he co-authored a book that demolished the arguments of Gallup and other Baconians). But it was at Riverbank that he developed his curiosity about codes into a fascination by which he would be happily consumed for the rest of his life. He created ever more elaborate cryptological designs, including a card he used for correspondence featuring a botanical plant, in which, to the initiated, everything — the roots, the leaves, the veins in the leaves — is a cipher (the roots spell out ‘Bacon’, the flower spells ‘Shakespeare’, while the leaves contain the names of other Elizabethan authors).
Another of Friedman’s designs was a page of sheet music for Stephen Foster’s popular nineteenth-century song ‘My Old Kentucky Home, Good Night’, which on much closer examination (some of the notes have small gaps in them and some are whole, making them a-types and b-types) reveals a secret message: ‘Enemy advancing right/we march at daybreak.’ At the bottom Friedman writes (in plain text): ‘An example of making anything signify anything.’
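Decoding runs Bacon’s table in reverse: classify each mark into one of its two types, read the stream off in groups of five, and look each group up. Here is a companion sketch to the one above, again my own illustration, using letter case for the two types; Friedman’s whole and gapped notes would work identically:

```python
# Recover a message hidden in a two-class sequence: here, a lower-case
# letter counts as 'a' and an upper-case letter as 'b'.
ALPHABET = "ABCDEFGHIKLMNOPQRSTUWXYZ"

def reveal(cover):
    """Read the hidden message back out of a marked-up cover text."""
    stream = "".join("a" if ch.islower() else "b"
                     for ch in cover if ch.isalpha())
    message = []
    for i in range(0, len(stream) - len(stream) % 5, 5):
        value = int(stream[i:i + 5].replace("a", "0").replace("b", "1"), 2)
        if value < len(ALPHABET):   # five bits give 32 codes; only 24 are used
            message.append(ALPHABET[value])
    return "".join(message)

print(reveal("coMe At OnCe to The miLls"))   # -> FLEE
```

Applied to the cover text produced above, it recovers FLEE; applied to the sheet music, with whole and gapped notes as the two classes, the same procedure would surrender Friedman’s warning about the enemy advancing.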
A puzzle is something that commands our curiosity until we have solved it. A mystery, by contrast, never stops inviting enquiry. When we first meet a new problem, our instinct is to treat it as a puzzle: What’s the answer? Then, after gathering the knowledge we need to solve it, we sometimes start to think of the same problem as a mystery, one which will sustain our curiosity forever. A passing interest can be transformed into a lifelong passion.
When we come across a puzzle of any kind, we should always be alert to the mystery that lies behind it, because it might be a mystery that will occupy and entertain us long after the puzzle is solved. William Friedman loved puzzles, in the most literal sense of that word. But his curiosity about them went far beyond any single example. At Riverbank, he came to think of the most fundamental principle of cryptology — that ‘anything can signify anything’ — as an inexhaustible mystery, in which he took a profound and enduring delight. Puzzles are stepping stones to mysteries. The more mysteries we pursue, the more knowledge we gather, the greater our intellectual and cultural range.
During the First World War, the US government, hearing of Friedman’s work at Riverbank, called on him to train army units in cryptography. After doing so, Friedman joined the army himself. On returning from France, he went back to work at Riverbank for a few more years before moving to Washington with his wife (one of Elizabeth Gallup’s assistants). Both put their cryptographic skills to work for the American government. Following his heroic efforts in cracking Japanese codes, Friedman rose to become chief cryptologist for the National Security Agency (NSA). He left behind a body of work that defined his field of study for decades afterwards.
Friedman liked to remind people that meaning can reside in the most unlikely places. The photograph on his desk was taken in Aurora, Illinois, on a winter’s day in 1918, at the training school where he and his wife taught cryptography to army officers about to be sent to France. What did Friedman see when he looked at it? He saw his younger self seated at one end of the front row, facing inwards. He saw his wife in the middle, and at the other end, the imposing figure of George Fabyan. He also saw a message, hiding in plain sight. The image is a cryptogram in which people stand in for letters. Thanks to Friedman’s careful positioning, the soldiers spell out Sir Francis Bacon’s most famous axiom: ‘knowledge is power.’
35 Jeff Bezos, founder of Amazon, also has an exceptionally powerful sense of epistemic curiosity. When he was a child, his mother discovered him trying to take his crib apart with a screwdriver. As a teenager he started a summer camp for intellectually inquisitive children called ‘the Dream Institute’. A Washington Post profile of Bezos describes its activities: ‘The children read selections from books such as “Gulliver’s Travels,” “Dune” and “Watership Down.” They studied black holes. They wrote simple programs on an Apple computer that made their names scroll down the screen.’
36 It later prompted Jobs to pay close attention to the fonts used on the first Macs, which in turn ensured the presence of classical fonts on every home computer made since.
37 A recurring pattern in the history of innovation is the combination of something with its inverse to form a single invention: the clawhammer joined nail removal with nail driving; the pencil was joined with the eraser. By combining the hitherto opposed roles of businessman and hippie, Jobs provided a walking example of the same pattern.
38 On each of our copies the last word was crossed out with a marker pen and replaced by a scrawled ‘person’.
39 Antonelli told me that she borrowed the distinction from one of her predecessors at MoMA, Emilio Ambasz.
40 Sir Alex Ferguson, probably the greatest manager British football has ever known, also started young (he became manager of St Mirren aged thirty-two). A former shipbuilder who never went to university, Ferguson has an exceptionally hungry mind. In the course of his managerial career he became an expert in wine, horse-racing, the life of Abraham Lincoln and the American Civil War. He is a film enthusiast and a voracious reader, who has completed all four volumes so far published of Robert Caro’s monumental biography of Lyndon Johnson. To put it mildly, this breadth of interest is not the norm in football circles. While Ferguson is rightly lauded for his will to win and motivational ability, his epistemic curiosity surely contributed to his success. During his decades at the top, the game changed radically, but Ferguson always adapted. When innovations like statistical analysis were introduced to football, many managers of his generation ignored them, preferring to stick to what they knew. Presumably Ferguson treated them as something else to learn about.
41 Powell’s work means spending a lot of time with people who specialise in killing civilians. ‘It must be quite . . . edgy?’ I suggested. ‘It can be very dangerous, yes,’ he said.
42 It may not have been coincidence that, shortly after our conversation took place, news emerged that the FARC terrorist organisation and the Colombian government had launched their first direct peace talks in a decade.
43 Halpern uses the term empathic curiosity in reference to clinical practice. In this book I use it in a broader sense, as interest in the thoughts and feelings of others, though obviously the two uses are congruent.
44 Ward told us about how he once deliberately miscategorised portobello mushrooms as ordinary mushrooms, thus defrauding a supermarket of several pence. (‘A security guard was standing right next to me as I carried out my mushroom hustle. I’ve never felt so alive.’)
45 For good measure, the researchers performed a similar experiment with people about to embark on classes or programmes of other activities, including origami, yoga and dental flossing. Similar results obtained (apparently it is possible to find even flossing interesting, if you try hard enough).
46 The Shakespeare conspiracy theorists included Henry James, Mark Twain and Sigmund Freud, proving that very smart people can believe very silly things.