1
Knowledge Overload
Triangular Knowledge
In his 1988 presidential address to the International Society for General Systems Research, Russell Ackoff, a leading organizational theorist, sketched a pyramid that has probably been redrawn on a white board somewhere in the world every hour since.[1] The largest layer at the bottom of the triangle represents data, followed by successively narrower layers of information, knowledge, understanding, and wisdom. The drawing makes perfect visual sense: There’s obviously plenty of data in the world, but not a lot of wisdom. Starting from mere ones and zeroes, up through what they stand for, what they mean, what sense they make, and what insight they provide, each layer acquires value from the one below it.
Ackoff was not the first person to propose a data-information-knowledge-wisdom (DIKW) hierarchy. Milan Zeleny had discussed a similar idea in an article published the year before, and Michael Cooley had come up with roughly the same concept in an article written shortly before that. In fact, in 1982, Harlan Cleveland not only described the hierarchy in an article in The Futurist, he pointed to its earliest known version:[2]
Where is the Life we have lost in living?
Where is the wisdom we have lost in knowledge?
Where is the knowledge we have lost in information?
T. S. Eliot wrote these lines in 1934 in a poem called “The Rock.” The next reference, preceding all of the business articles and books on this topic, appeared in 1979, in a song called “Packard Goose” by Frank Zappa.[3]
Now, no one believes any of these thinkers actually plagiarized T. S. Eliot, much less Frank Zappa. The idea seemed to have a certain inevitability. Imagine you’re in charge of your company’s data processing center in 1955—well after Eliot’s poem and well before Zappa’s lyrics—and you’re watching the experts from IBM install the most popular corporate computer of the 1950s, the IBM 650.[4] With only seventy-five of these machines installed anywhere, you’re on the leading edge. The 650 has the latest in punchcard-reading technology and can calculate 78,000 additions or subtractions per minute. Your 2011 home PC probably does about 300 billion per minute, but back then the 650’s computing power didn’t come cheap: The 650 cost your company $500,000, equivalent to $4 million in 2011 dollars.[5] It’s got its own room, its own fleet of maintenance folks, its own dress code: white lab coats, please. But it’s a workhorse, and you’ll get good value from it—processing payrolls, calculating projected sales figures, managing the human resources database.
You’ve gotten used to a particular drill when corporate executives visit your data center and marvel at the boxes of thousands, hundreds of thousands, millions of punchcards. Ah, you patiently explain, that’s just data. By itself it has no value. But process the data and you get information. Information is to data what wine is to a vineyard: the delicious extract and distillate. In 1955, information was the value of the seemingly senseless mounds of data that were quickly accumulating.
Thirty years go by. You and the rest of the world have been refining data into information by the boxcar-full. Now you are as overloaded with information as you once were with data, and you have the same question: You’ve spent a lot of money gathering it, but what’s its value? Information has become a problem, not a solution. So, how do you justify your investment in producing all that information out of all that data? The same way you justified your investment in data. You’ve refined the data to produce information, and you’ve refined the information to generate something of greater value. You’ve got to call it something. How about “knowledge”? Thus did the knowledge management industry take off in the early and mid-1990s, based on the promise that it would help enterprises discern and share the highest-value information they were generating.
Of course, to get knowledge to look like it’s an outcome of information, you have to radically redefine knowledge. For Ackoff, knowledge was know-how that transforms “information into instructions,”[6] such as “knowing how a system works or how to make it work in a desired way.”[7] Skip Walter, who was mentored by Ackoff, said that while information is structured data, knowledge “is actionable information.” For example, information becomes knowledge when you decide whether to wear a sweater.[8] Milan Zeleny, who beat Ackoff to the punch by a couple of years, said knowledge is like the recipe that turns information into bread, while data are like the atoms that make up the flour and the yeast.[9]
Fine. But when T. S. Eliot wrote “Where is the knowledge we have lost in information?” he was not thinking of knowledge as “actionable information.” The knowledge discovered by scientists and researchers isn’t a recipe. In fact, it wasn’t even information in the current sense of a mass of unrelated facts. Back before Ackoff’s pyramid, back when the idea of knowledge first occurred to us, the ability to know our world was the essential difference between us and the other animals. It was our fulfillment as humans, our destiny. Knowledge itself fit together into a perfectly ordered whole. Knowledge therefore was considered for thousands of years in the West to be an object of the most perfect beauty. Indeed, in knowing the world, we were striving to understand God’s creation as He himself understands it, given our mortal limitations; to know the world was to read it like a book that God had written explaining how He had put it together. Darwin spent five years sailing on a small boat, Galileo defied a Pope, and Madame Curie handled radioactive materials, all in pursuit of knowledge as the most profound of human goals. That is what knowledge has meant in our culture, and it has little to do with the middle layer of a made-up pyramid that shears knowledge of all but its most prosaic, get-’er-done utility.
Despite this, the DIKW pyramid gets one thing very right about how we’ve thought about knowledge. Our most basic strategy for understanding a world that far outruns our brain’s capacity has been to filter, winnow, and otherwise reduce it to something more manageable. We’ve managed the fire hose by reducing the flow. We’ve done this through an elaborate system of editorial filters that have prevented most of what’s written from being published, through an elaborate system of curatorial filters that has kept most of what’s been published from being shelved in our local libraries and bookstores, and through an elaborate system of professional filters that have kept many of us from being responsible for knowing most of what’s made it through the other filters. Knowledge has been about reducing what we need to know.
The Information Age took this strategy and ran with it. We built computers that ran databases that took in a bare minimum of useful information, in predefined categories: first name, last name, social security number, date of birth.... Whether they included a dozen fields or a thousand, our information systems worked only because they so rigorously excluded just about everything.
Even the very idea of knowledge began as a way of winnowing claims. In ancient Athens, matters of state were debated in public by any citizen—so long as he was one of the 40,000 free males who had completed military training.[10] Many opinions were expressed—go to war or not, find someone guilty or innocent of a political crime—but only some were worth believing. Philosophers noticed the difference and raised beliefs worth believing into their own class. Plato gave us the abiding formulation: Among all the opinions spouted, the subset that counts as knowledge consists of the ones that not only are true but also are believed for justifiable reasons. That second qualification was necessary because some people hold opinions that are true but only by accident: If you think Socrates is innocent of corrupting the young because you like the way he drapes his toga, your opinion is true but does not constitute knowledge. Hunches and guesses that turn out to be right also aren’t knowledge. Knowledge is so important for deciding matters of state—and for understanding who we are and how our world works—that the bar needs to be set high. What makes it over are beliefs we can rely on, beliefs we can build on, beliefs worth preserving and cherishing. That’s more or less what Plato, T. S. Eliot, and we today still mean by “knowledge” in its ordinary usage.
We’ve become the dominant species on our planet because the elaborate filtering systems we’ve created have worked so well. But we’ve paid a hidden price: We have raised the bar so high that we have sometimes excluded ideas that were nevertheless worth considering, and false beliefs—once accepted—can be hard to dislodge even after they have been found out. And there is so much that could be known that we just don’t have room for it all; our science magazines can publish only so many articles and our libraries can hold only so many books. The real limitation isn’t the capacity of our individual brains but that of the media we have used to get past our brains’ limitations. Paper-based tools allowed us to write things down, but paper is expensive and bulky. Even once we had computers with millions of times more capacity—your home computer may well have over 300,000 times more memory than the IBM 650[11]—knowledge was hard to transfer from paper, and hard to access on a desktop machine.
Now if you want to know something, you go online. If you want to make what you’ve learned widely accessible, you go online. Paper will be with us for a long time, but the momentum is clearly with the new, connected digital medium. But this is not merely a shift from displaying rectangles of text on a book page to displaying those rectangles on a screen. It’s the connecting of knowledge—the networking—that is changing our oldest, most basic strategy of knowing. Rather than knowing-by-reducing to what fits in a library or a scientific journal, we are now knowing-by-including every draft of every idea in vast, loosely connected webs. And that means knowledge is not the same as it was. Not for science, not for business, not for education, not for government, not for any of us.
Info Overload as a Way of Life
Information overload isn’t what it used to be.
Alvin Toffler introduced the idea of information overload to the general public in 1970 in his book Future Shock.[12] He positioned information overload as a follow-on to sensory overload:[13] When our environment throws too many sensations at us—say, at a Grateful Dead concert with a light show and the mixed scents of a thousand sticks of incense—our brains can get confused, causing a “blurring of the line between illusion and reality.”[14] But what happens when we go up a level from mere sensation, and our poor little brains are buffeted by information?
Toffler pointed to research indicating that too much information can hurt our ability to think. If too many bits of information are transferred into our wetware, we can exceed our “channel capacity”—a term straight out of Information Science. Wrote Toffler: “When the individual is plunged into a fast and irregularly changing situation, or a novelty-loaded context . . . his predictive accuracy plummets. He can no longer make the reasonably correct assessments on which rational behavior is dependent.”[15] “Sanity, itself, thus hinges” on avoiding information overload.[16] A term, a fear, and a bestseller were born.
The marketers quickly picked up on the idea, worrying that consumers would get confused if given too much information. But what was too much information? In a study performed in 1974, 192 “housewives” were given information about sixteen different attributes of sixteen different brands. And that information was itself simplified into binary pairs; for example, rather than being told the number of calories, the women were simply told that items were high or low in calories.[17] Yet even this was believed to so overload them with information that they made poor buying decisions. Marketers thus told themselves they were preserving the rational faculties of consumers by strictly limiting how much information vendors provided.
This study strikes us as an artifact of a simpler time. Comparing the calories on sixteen nutritional labels constitutes information overload? We must have been in a very delicate informational state.
The psychological syndrome caused by information overload subsequently got renamed, driven often by nothing more than the desire to market a book. We heard about information anxiety, information fatigue syndrome, analysis paralysis. These debilitating diseases were brought on by data smog, infoglut, and the information tsunami. We were about to drown. Richard Saul Wurman’s book Information Anxiety, published in 1989, made its case by aggregating startling facts such as “About 1,000 books are published internationally every day”[18] and “Approximately 9,600 different periodicals are published in the United States each year.”[19]
We now laugh in the face of such danger.
Technorati.com in 2009 tracked over 133 million blogs. Of those, more than 9,600 were abandoned every day. With a trillion pages, the Web is far, far, far larger than anyone predicted. In fact, according to two researchers at the University of California–San Diego, Americans consumed about 3.6 zettabytes of information in 2008.[20]
Zettabytes?
This is a number so large that we have to do research just to understand it. Fortunately, these days we have the Internet to answer all our questions, so we can just type “zettabyte” into our favorite search engine and discover that it means 1 sextillion bytes.[21]
Sextillion?
Back to Google. A sextillion is 1,000,000,000,000,000,000,000 bytes. That’s 10²¹ bytes. A billion gigabytes times a thousand. Clear?
No? How about this, then: The electronic version of War and Peace takes just over 2 megabytes of space on the Kindle. A zettabyte is therefore the equivalent of 5 × 10¹⁴ copies of War and Peace. Of course, now we have to figure out what 5 × 10¹⁴ copies of War and Peace look like. Assume each is six inches thick, and they’d stack up to over 47 billion miles. And to understand that number, we could point out that it would take light 2.9 days to travel from the front cover of the first volume to the back cover of the last—ignoring the relativistic effects the gravity of this new 250 billion–ton object would create (assuming each volume weighs a pound). Or, put differently, if we divided the novel into two equal parts, War would stretch the length of eight trips from the sun to Pluto and Peace would stretch eight trips back.
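For readers who want to check the arithmetic, here is a rough back-of-the-envelope sketch in Python. It relies only on the assumptions already stated above (roughly 2 megabytes per electronic copy, six inches and one pound per printed volume), so treat it as an illustration of the conversions rather than a precise measurement.

```python
# Back-of-the-envelope check of the zettabyte comparisons above.
# Assumptions (all from the text): ~2 MB per electronic copy of
# War and Peace, ~6 inches thick and ~1 pound per printed volume.

ZETTABYTE_BYTES = 10**21
COPY_BYTES = 2 * 10**6                       # ~2 MB per copy

copies = ZETTABYTE_BYTES / COPY_BYTES        # 5.0e14 copies

INCHES_PER_MILE = 63_360
stack_miles = copies * 6 / INCHES_PER_MILE   # ~4.7e10 miles (over 47 billion)

LIGHT_MILES_PER_SECOND = 186_282
light_days = stack_miles / LIGHT_MILES_PER_SECOND / 86_400   # ~2.9 days

stack_tons = copies * 1 / 2_000              # ~2.5e11 tons (250 billion)

print(f"copies:      {copies:.1e}")
print(f"stack:       {stack_miles:.1e} miles")
print(f"light takes: {light_days:.1f} days")
print(f"weight:      {stack_tons:.1e} tons")
```

Change any of the assumptions by a factor of two and the stack is still incomprehensibly long; the point of the exercise is the scale, not the decimals.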
Our little simian brains just can’t make sense of numbers like these. But we don’t have to have a firm sense of how long a zettabyte would stretch or how much it would weigh, or even how much it could earn just by saving a penny a day, in order to suspect that the change we’re seeing in knowledge is not primarily due to the massive increase in the amount of information. Something else is at work.
After all, we’ve been complaining about what we now call “information overload” for a long time. In 1685, French scholar Adrien Baillet wrote: “We have reason to fear that the multitude of books which grows every day in a prodigious fashion will make the following centuries fall into a state as barbarous as that of the centuries that followed the fall of the Roman Empire.”[22] It’s comforting to know that the idea that information will cause civilization to come crashing down has survived several crashed civilizations.
Baillet was not some isolated crank. In 1755, none less than the creator of the first modern encyclopedia, Denis Diderot, reasoned: “As long as the centuries continue to unfold, the number of books will grow continually,” so “one can predict a time will come when it will be almost as difficult to learn anything from books as from the direct study of the whole universe.”[23] And it wasn’t just the French who feared they would drown in a sea of leather-bound volumes. In 1680, the German philosopher Gottfried Leibniz wrote of his fear of the “horrible mass of books which keeps on growing”[24] that would someday make it impossible to find anything. This did not prevent him from adding his own dense works to that horrible mass. It never does.
We can trace it back further, if we want. The Roman philosopher Seneca, born in 4 BCE, wrote: “What is the point of having countless books and libraries whose titles the owner could scarcely read through in his whole lifetime? That mass of books burdens the student without instructing.”[25] In 1642, Jan Amos Comenius complained that “bookes [sic] are grown so common . . . that even common country people, and women themselves are familiarly acquainted with them.”[26]
Of course, those voices sound like whiners to us now. They were drowning facedown in puddles of information. In our age, information overload has blown past every dire prediction, with the zettabyte study from UCSD zooming past the previous estimate of 0.3 zettabytes from a study just two years earlier.[27] The difference between 0.3 and 3.6 zettabytes is ten times the total number of grains of sand on the earth—although the gap probably reflects differences in how the two studies measured information more than any real change.
It doesn’t matter. No matter how long the line of copies of War and Peace gets, or how good your intentions, you’re still probably not going to get through a single one of them this summer. It doesn’t really matter if that bookshelf continues to Pluto and then loops back another fifteen times. An overload of an overload is still just an overload. Does it matter whether you drown in water that’s 10 feet deep or stretches for 10²¹ more feet (189 quadrillion miles)?
Yet, something odd has happened. As the amount of information has overloaded the overload, we have not proportionately suffered from information anxiety, information tremors, or information butterflies-in-the-stomach. Information overload has become a different sort of problem. According to Toffler, and for three decades following Future Shock’s publication, it was a psychological syndrome experienced by individuals, rendering them confused, irrational, and demotivated. When we talk about information overload these days, however, we’re usually not thinking of it as a psychological syndrome but as a cultural condition. And the fear that keeps us awake at night is not that all this information will cause us to have a mental breakdown but that we are not getting enough of the information we need.
So, we have rapidly evolved a set of technologies to help us. They fall into two categories, algorithmic and social, although most of the tools available to us actually combine both. Algorithmic techniques use the vast memories and processing power of computers to manipulate swirling nebulae of data to find answers. The social tools help us find what’s interesting by using our friends’ choices as guides.
The technologies will continue to advance. This book does not focus on the technological side. Instead, we will pursue a different, and more fundamental, question: How has the new overload affected our basic strategy of knowing-by-reducing?
Filtering to the Front
If we’ve always had information overload, how have we managed? Internet scholar Clay Shirky says: “It’s not information overload. It’s filter failure.”[28] If we feel that we’re overwhelmed with information, that means our filters aren’t working. The solution is to fix our filters, and Shirky points us to the sophisticated tools we’ve developed, especially social filters that rely upon the aggregated judgments of those in our social networks.
Shirky’s talk of filter failure intends to draw our attention to the continuity between the old and the new, bringing a calming voice to an overheated discussion: We shouldn’t freak out about information overload because we’ve always been overloaded, in one way or another. But when I asked him about it, Shirky agreed without hesitation that the new filtering techniques are disruptive, especially when it comes to the authority of knowledge. Old knowledge institutions like newspapers, encyclopedias, and textbooks got much of their authority from the fact that they filtered information for the rest of us. If our social networks are our new filters, then authority is shifting from experts in faraway offices to the network of people we know, like, and respect.
While I agree with Shirky, I think there’s another—and crucial—difference in the old and new filters.
If you are on the Acquisitions Committee of your town library, you are responsible for choosing the trickle of books to buy from the torrent published each year. Thanks to you, and to the expert sources to whom you look, such as journals that preview forthcoming titles, library patrons don’t see the weird cookbooks and the badly written personal reminiscences that didn’t make the cut, just as the readers of newspapers don’t see the crazy letters to the editor written in crayon. But many of the decisions are harder. There just isn’t room for every worthwhile book, even if you had the budget. That’s the way traditional physical filters have worked: They separate a pile into two or more piles, each physically distinct.
The new filters of the online world, on the other hand, remove clicks, not content. The chaff that doesn’t make it through the digital filter is still the same number of clicks away from you, but what makes it through is now a single click away. For example, when Mary Spiro of the Baltimore Science News Examiner posts the “eight podcasts you shouldn’t miss,”[29] she brings each of those eight within one click on her blog. But the tens of thousands of science podcasts that didn’t make it through her filter are still available on the Net. It may take you a dozen clicks to find Selmer Bringsjord’s podcast “Can Cognitive Science Survive Hypercomputation?,” which didn’t make it through Spiro’s filter, but it’s still available to you in a way that a manuscript rejected by your librarian or print-book publishers is not. Even if Bringsjord’s paper is the millionth result in a Google search, a different search might pop it to the top, and you may well find it through an email from a friend or on someone else’s top-ten list.
Filters no longer filter out. They filter forward, bringing their results to the front. What doesn’t make it through a filter is still visible and available in the background.
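The distinction can be put in a few lines of code. What follows is only a toy illustration (the item names and relevance scores are invented, and "score" stands in for whatever judgment a real filter applies), but it captures the difference between a filter that discards and a filter that merely reorders.

```python
# Toy illustration of "filtering out" versus "filtering forward."
# The item names and scores below are invented for illustration.

podcasts = {
    "Lab Notes Weekly": 0.92,
    "Deep Sky Diary": 0.88,
    "Molecule of the Week": 0.35,
    "My Cousin's Dream Journal": 0.05,
}

def filter_out(items, threshold=0.5):
    """Old-style filter: whatever fails the cut simply disappears."""
    return {name: score for name, score in items.items() if score >= threshold}

def filter_forward(items):
    """New-style filter: nothing is removed; everything is reordered,
    so what doesn't make the cut is still there, just farther back."""
    return sorted(items, key=items.get, reverse=True)

print(filter_out(podcasts))      # two survivors; the rest are gone
print(filter_forward(podcasts))  # all four titles, best first
```

A ranked search-results page works the same way: the millionth result has not been removed, it is simply a very long way down the list.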
Compare that to your local library’s strategy. In the United States, 275,232 books were published in 2008, a thirty-fold increase in volume from 1900.[30] But it’s highly unlikely that your local library got hundreds of times bigger during those past 110 years to accommodate that growth curve. Instead, your library adopted the only realistic tactic, each year ignoring a higher and higher percentage of the available volumes. The filters your town used kept the enormous growth in book-based knowledge out of sight. As a result, library users’ experience of the amount of available knowledge didn’t keep up with its actual growth. But on the Net, search engines answer even our simplest questions with more results than the total number of books in our local library. Every link we see now leads to another set of links in a multi-exponential cascade that fans out from wherever we happen to be standing. Google lists over 3 million hits on the phrase “information overload.”[31]
There was always too much to know, but now that fact is thrown in our faces at every turn. Now we know that there’s too much for us to know. And that has consequences.
First, it’s unavoidably obvious that our old institutions are not up to the task because the task is just too large: How many people would you have to put on your library’s Acquisitions Committee to filter the Web’s trillion pages? We need new filtering techniques that don’t rely on forcing the ocean of information through one little kitchen strainer. The most successful so far use some form of social filtering, relying upon the explicit or implicit choices our social networks make as a guide to what will be most useful and interesting for us (a toy sketch of the idea appears after these consequences are spelled out). These range from Facebook’s simple “Like” button (or Google’s “+1” button) that enables your friends to alert you to items they recommend, to personalized searches performed by Bing based on information about you on Facebook, to Amazon’s complex algorithms for recommending books based on how your behavior on its site matches the patterns created by everyone else’s behavior.
Second, the abundance revealed to us by our every encounter with the Net tells us that no filter, no matter how social and newfangled, is going to reveal the complete set of knowledge that we need. There’s just too much good stuff.
Third, there’s also way too much bad stuff. We can now see every idiotic idea put forward seriously and every serious idea treated idiotically. What we make of this is, of course, up to us, but it’s hard to avoid at least some level of despair as the traditional authorities lose their grip and before new tools and types of authority have fully settled in. The Internet may not be making me and you stupid, but it sure looks like it’s making a whole bunch of other people stupid.
Fourth, we can see—or at least are led to suspect—that every idea is contradicted somewhere on the Web. We are never all going to agree, even when agreement is widespread, except perhaps on some of the least interesting facts. Just as information overload has become a fact of our environment, so is the fact of perpetual disagreement. We may also conclude that even the ideas we ourselves hold most firmly are subject to debate, although there’s evidence (which we will consider later) that the Net may be driving us to hold to our positions more tightly.
Fifth, there is an odd consequence of the Net’s filtering to the front. The old library Acquisitions Committee did its work behind closed doors. The results were visible to the public only in terms of the books on the shelves, except when an occasional controversy forced the filters themselves into the public eye: Why aren’t there more books in Spanish, or why are so many of the biographies about men? On the Net, the new filters are themselves part of the content. At their most basic, the new filters are links. Links are not just visible on the Net, they are crucial pieces of information. Google ranks results based largely on who’s linking to what. What a blogger links to helps define her. Filters are content.
Sixth, filters are particularly crucial content. The information that the filters add—“These are the important pages if you’re studying hypercomputation and cognitive science”—is itself publicly available and may get linked up with other pages and other filters. The result of the new filtering to the front is an increasingly smart network, with more and more hooks and ties by which we can find our way through it and make sense of what we find.
So, filters have been turned inside out. Instead of reducing information and hiding what does not make it through, filters now increase information and reveal the whole deep sea. Even our techniques for managing knowledge overload show us just how much there is to know that escapes our best attempts. There is no hiding from knowledge overload any more.
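Here, as promised above, is a toy sketch of the social-filtering idea. It is nothing like the production systems named earlier (Facebook, Bing, and Amazon weigh far more signals than a raw count), and the friends and items are invented; it simply ranks items by how many of your friends have endorsed them.

```python
# Minimal sketch of a social filter: rank items by how many of your
# friends have endorsed them. Friends, items, and likes are invented;
# real recommendation systems weigh far more signals than a raw count.

from collections import Counter

friends_likes = {
    "alice": {"article-17", "podcast-3", "book-42"},
    "bob":   {"podcast-3", "book-42"},
    "chen":  {"book-42", "article-9"},
}

def social_filter(likes_by_friend, already_seen=frozenset()):
    """Return items ordered by the number of friends who liked them."""
    counts = Counter()
    for liked in likes_by_friend.values():
        counts.update(liked)
    for item in already_seen:
        counts.pop(item, None)      # don't resurface what you've already read
    return [item for item, _ in counts.most_common()]

print(social_filter(friends_likes, already_seen={"article-9"}))
# -> ['book-42', 'podcast-3', 'article-17']
```

True to the filtering-forward pattern, nothing is thrown away: the item only one friend liked still appears, just at the end of the list.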
The New Institution of Knowledge
We are inescapably facing the fact that the world is too big to know. And as a species we are adapting. Our traditional knowledge-based institutions are taking their first hesitant steps on land, and knowledge is beginning to show its new shape:
Wide. When British media needed to pore through tens of thousands of pages of Parliamentarians’ expense reports, they “crowd-sourced” it, engaging thousands of people rather than relying on a handful of experts. It turns out that, with a big enough population engaged, sufficient width can be its own type of depth. (Note that this was not particularly good news for the Parliamentarians.)
Boundary-free. Evaluating patent applications can’t be crowd-sourced because it would require expertise that crowds don’t have. So, when the US Patent Office was frustrated by how long its beleaguered staff was taking to research patent applications, it started a pilot project that enlists “citizen-experts” to find prior instances of the claimed inventions, across disciplinary and professional lines. That pilot is now becoming a standard part of the patent process.[32]
Populist. IBM has pioneered the use of “jams” to engage the entire corporation, at every level and pay grade, in discussing core business challenges over the course of a few days. From this have come new lines of business—created out of a stew in which the beef, peas, and carrots all have the same rank.
“Other”-credentialed. At the tech-geek site Slashdot.com (its motto is “News for Nerds”), you’ll find rapid bursts of argumentation on the geeky news of the day. To cite your credentials generally would count against you, and if you don’t know what you’re talking about, a credential would do you no good. At Slashdot, a slash-and-burn sense of humor counts for more than a degree from Carnegie Mellon.
Unsettled. We used to rely on experts to have decisive answers. It is thus surprising that in some branches of biology, rather than arguing to a conclusion about how to classify organisms, a new strategy has emerged to enable scientists to make progress together even while in fundamental disagreement.
Together, these attributes constitute a thorough change in the shape of our knowledge-based institutions. To see such changes at work, let’s look at two examples, one in business and one in government.
When Jack Hidary built his company, Primary Insight, he looked at the leader in his industry—The Gartner Group—and asked himself what it wouldn’t do. Gartner, a consultancy specializing in the business use of technology, hires experts out of information technology industries, gives them a staff, and charges companies in those industries to hear what the specialists have to say. They often become so expert and authoritative that they not only report on industries, they shape them. By operating within the traditional framework of knowledge and expertise, Gartner has built itself into a 1.3-billion-dollar company.[33]
Hidary left his career as a scientist at the National Institutes of Health in part because putting scientific papers through the traditional peer-review process had begun to seem frustratingly outdated. There had to be more efficient ways to gather and vet information. So, when Hidary started a company to advise financial fund managers, rather than build a Gartner-esque stable of full-time analysts, with a single lead in each practice area, he made a network of thousands of part-time experts available to every client. This arrangement not only provides a wider range of advice but also means that each financial fund manager is talking with a unique, customizable set of people—a network within the network. That’s crucial, explains Hidary, because fund managers’ competitive edge is knowing what the other managers do not.
Further, unlike at Gartner, Hidary’s expert networks consist of part-time experts who maintain their jobs in their given field. Hidary considers this to be a strength, because removing them from their field “removes their real-world edge.” Hidary won’t talk about what his clients pay for this service, but it is in six figures. And, he claims his subscription renewal rates are “off the charts.”[34] His success comes precisely by contravening much of what we have taken for granted about knowledge. For example:
Rather than hiring a handful of full-time experts who could be marketed as representing the unique pinnacle of industry knowledge, Hidary has built a network that has strength because of the variety of people in it.
Rather than amassing content as if it could all be deposited in one human library, he has built a network of people and resources that can be enlarged and deployed at will.
Rather than treating being an expert as a full-time job or a profession unto itself, Hidary insists that his experts have their sleeves rolled up, working in their fields.
Rather than publishing a newsletter, the same for each recipient, Hidary’s network is always deployed in uniquely personal ways.
Rather than looking for credentials from authenticating institutions, Hidary’s experts earn their credentials from their peers.
If Hidary’s Primary Insight represents a new type of institutionalization of expertise, Beth Noveck’s experiences reflect a very different sort of institution: the White House.
Noveck, who for the first two years of the Obama administration was the leader of its open government efforts, has a fascinating history as an innovator building communities of experts. In June 2009, Noveck convened fifteen people in the DC boardroom of the American Association for the Advancement of Science, so that the AAAS could combine its expertise with that of the public to provide better advice on issues facing executive-branch agencies. Her goal was to shape a new kind of institution, the project that became Expert Labs, one that would enable professionals and amateurs to engage on questions—sometimes driving to consensus but at other times providing a range of options and opinions.
Expert Labs is still young as I write this, but it is already quite remarkable. Two of the most staid and prestigious institutions in America—the White House and the American Association for the Advancement of Science—recognize that traditional ways of channeling and deploying expertise are insufficient to meet today’s challenges. Both agree that the old systems of credentialing authorities are too slow and leave too much talent outside the conversation. Both see that there are times when the rapid development of ideas is preferable to careful and certain development. Both acknowledge that there is value in disagreement and in explorations that may not result in consensus. Both agree that there can be value in building a loose network that iterates on the problem, and from which ideas emerge. In short, Expert Labs is a conscious response to the fact that knowledge has rapidly gotten too big for its old containers. . . .[35]
Especially containers that are shaped like pyramids. The idea that you could gather data and information and then extract value from them by reducing them with every step upward now seems overly controlled and wasteful. Primary Insight and Expert Labs respond to knowledge overload, a product of the visibility of the network of people and ideas, by creating knowledge in a new shape: not a pyramid but a network.
And not just any network. Knowledge is taking on the shape of the Net—that is, the Internet. Of all the different communication networks we’ve built for ourselves, with all their many shapes—the history of communication networks includes rings, hubs-and-spokes, stars, and more—the Net is the messiest. That gives it a crucial feature: It works at every scale. It worked back when an online index of the Net fit on a hard drive with half the capacity of a typical laptop today, and it works now that there are a trillion Web pages. There’s no practical limit to how much content the Net can hold, and no practical limit to how many links we all can make to filter forward the relationships among that content. For the first time, we can deal with the overload of knowledge without squinting our eyes and wishing we were back in the days when sixteen products with sixteen categories of information counted as too scary for “housewives.” At last we have a medium big enough for knowledge.
Of course, the Net can scale that large only because it doesn’t have edges within which knowledge has to squeeze. No edges mean no shape. And no shape means that networked knowledge lacks what we have long taken to be essential to the structure of knowledge: a foundation.