2

Creating Valuable Content: The Internet Influence

Fears about new technology shaping and influencing society for the worse are nothing new. Consider this passage from Plato’s Phaedrus, in which Socrates describes the creation of writing in Egyptian theology. A minor god, Theuth, presented this new tool to the god of all Egypt, Thamus, saying, “This will make the Egyptians wiser and give them better memories; it is a specific both for the memory and for the wit.”

But when Thamus reviewed Theuth’s other innovations (including mathematics, astronomy, and dice), he balked at the use of letters, saying, “This discovery of yours will create forgetfulness in the learners’ souls, because they will not use their memories; they will trust to the external written characters and not remember of themselves.”

In other words, people dependent on this daring innovation, writing, would inherently be less aware, less thoughtful, less capable. At the very least, that would disrupt the rhythms of society. Sounds familiar, doesn’t it?

SOURCE OF OR CURE FOR LITERACY WOES?

We’ve become quite casual about the mainstreaming of the Web—we can’t get through a nightly TV news show or morning classical radio program without being told there’s more on a story or a playlist available on the station’s Web site. Pointers in printed marketing collateral, newsletters, magazines, journals, and books—including this one!—to visit Web sites for more and updated information are ubiquitous.

Will the drive toward digital and Web publishing and communications result in changes even more profound than now seems likely, eroding levels of literacy, standards of scholarship, and the viability of professional journalism?

Should print publishers scurry to placate the “lost” audience they envision flocking to spend time and money on the Web? Is it even possible or necessary to make all potential information seekers happy all the time?

Will those rushing their publishing programs online wind up, along with the artist’s rep from Decca Records who turned the Beatles down, squirming in the special circle of hell reserved for embarrassed executives?

Will people, tired of being pushed to maintain a state of always-on, retreat from their arsenals of electronica and trade in their quick-and-dirty user behaviors for more thoughtful, civilized, “high-touch” business and interpersonal transactions?

There was a rash of speculative writing in the 1990s about the possibilities of the Web, defined by Stevan Harnad as the “fourth revolution” in human communication; the others are language, writing, and the printing press. Yet a decade later, the very issue of literacy, particularly in the developing world, is still pressing.

With the provision of universal primary education as one of the [Millennium] Development Goals, UNESCO estimates that 771 million people in the world, two-thirds of whom are women, are illiterate: “This is—for a fifth of the world’s adult population—a serious violation of human rights. It also constitutes a major impediment to the realization of human capabilities and the achievement of equity and of economic and social development, particularly for women.”

The National Institute for Literacy (nifl.gov) tells us that “The Workforce Investment Act of 1998 and the National Literacy Act of 1991 define literacy as ‘an individual’s ability to read, write, speak in English, compute and solve problems at levels of proficiency necessary to function on the job, in the family of the individual and in society.’ This is a broader view of literacy than just an individual’s ability to read, the more traditional concept of literacy.”

Should we pat ourselves on the back, or hang our heads in shame, that in 1999 the Household Education Survey found that 50.2 percent of the US population aged over twenty-five had read a newspaper at least once a week, read one or more magazines regularly, and read one book in the past six months?

It may be too early to tell whether computers and the Web, at least in developed nations and the US in particular, are the source of or cure for literacy woes. What is evident to anyone who has the misfortune to read extensively online is that reading printed material is a vastly superior exercise with regard to ergonomics. However sophisticated a screen you use, pixelation of the font creates eyestrain, and the upright position of the material, at arm’s length and eye level, is not comfortable.

ONLINE READABILITY ISSUES

Michael L. Bernard’s study at the Software Usability Lab at Wichita State University shows that a 14-point, serif font lends itself to thorough reading. Steve Outing, in his report on the 2004 Eyetrack III study (more on this below) observes, “Smaller type encourages focused viewing behavior (that is, reading the words), while larger type promotes scanning. In general, our testing found that people spent more time focused on small type than large type. Larger type resulted in more scanning of the page—fewer words overall were fixated on—as people looked around for words or phrases that captured their attention.”

But, wait—what exactly is a 14-point font on your computer screen? Your browser allows you to change the font size at will, which would imply that simply enlarging the type sends your concentration out the window, a patently ridiculous conclusion.

The 1990s, when educators first began to speculate about the implications of the Internet, spawned much debate over the future of literacy. What’s surprising is that the debate didn’t continue. According to Charles A. Hill, activity dropped as participants realized their speculations were not fulfilled by developing technology. One “given” for the theorists of the 1990s, particularly those in English departments, was that there would be a lot more writing and more reading by students—and that texts online would become interactive, rather in the way video games have developed. Another universal prediction was that phones would become obsolete; no one foresaw the equally prevalent use of cell phones and e-mail, and the interaction between the two media.

Instead, Hill, writing in 1996, regretted the “disappearance of textual boundaries” in these terms: “Texts scroll on and off our screen, and we know that we are always looking at a small part of a work of indeterminable size.”

Online amendments and editing by the author make the size and scope of a piece even more changeable. Further, hyperlinks and the decision of each reader to take them or not means that to a certain extent the reader, not the writer, determines the structure of a piece. Some online publications, notably Wikipedia, which thrives on reader amendment, further blur the distinctions between writer, reader, and editor.

Now, with both print and digital devices sharing the market, users may be frustrated by the limitations of print, but they are equally frustrated at being tethered to a desktop or laptop. The current generation of students, Hill comments, is better at multitasking—we all know teenagers who IM, talk on their cells, watch television, and do homework simultaneously. Has the ever-surprising human brain developed a filter system to deal with so much information from so many sources? Or, is this generation’s approach merely superficial? In another decade we may have answers, or, more likely, further questions.

TRACKING WHAT THE EYE TRACKS

Another issue with reading material online is the way the eye behaves while viewing a computer screen. The established pattern of the eye on scanning a printed page—something Wall Street has known, and capitalized on, for decades—is, as you’d expect, a basic left-to-right pattern, reading from the beginning of one line to the end and then hopping diagonally backward to the beginning of the next, and so on in a series of zigzags. Faced with a page that is a blend of graphics and colors, the eye strives to make sense of it by imposing the familiar “Z” pattern, this time on the whole page.

Onscreen, it’s a different story. The Poynter Institute’s most recent eyetrack test, Eyetrack III (2004), examined how people read mock online news sites. The Z is still there—sort of—as the eye wanders around. The main difference is that instead of ending up safely in the lower right-hand corner (where, if this were an advertisement, a starburst would announce the final hard sell—fifty percent off today only!), the eye returns to the top of the screen. Here’s a simplified version of this pattern, with the circle representing the starting point. In Eyetrack lingo, the angles are points of eye fixation, and the lines between them, saccades (amaze your friends at Scrabble!):

 

[Figure f0014-01: simplified diagram of the Eyetrack gaze pattern, with fixation points connected by saccades]

(This Eyetrack diagram appeared in an article by Steve Outing, “What News Websites Look Like Through Readers’ Eyes,” and is reprinted by permission of the Poynter Institute for Media Studies. See poynteronline.org for more on the Poynter-cosponsored Eyetrack studies.)

So the eye attempts to synthesize information in a different way, drawn in the last stage upward to the top of the screen—and this is a journey that takes fractions of a second. Owners of commercial Web sites know that consumers decide, based on the first page they land on, whether to stay on a site; and, if they stay, what they will buy and whether they will return.

To return to the somewhat redundant information that a serif font leads to greater legibility—one of the standard conventions of typesetting—this is all to the good if the reader decides to print out material. For all the assertions of the last decade—Hill’s ironic definition of the “late age of print” and Harnad’s “post-Gutenberg galaxy”—printed material is still, for many circumstances, the best option. Of course, when an affordable e-reader is on the market, that too may change.

ERODING TEXTUAL BOUNDARIES

With the availability of so much information online—where, incidentally, most of the research for this chapter was done, using browser bookmarks and printouts studded with colored Post-its—how, if at all, has this changed reading and our approach to books? And what about students in high schools and colleges for whom the family computer was friend, companion, and babysitter? Are they frustrated with print material, used to getting the information now—now!—fractious and uncomfortable without the comforting glow of a monitor?

Not so, according to Dr. Leigh Ryan, director of the Writing Center at the University of Maryland College Park. She admits that students are far more comfortable online, and reading online, than older generations—ah, those young eyes. It makes you speculate that some dormant part of the human brain has fortuitously sprung into action, rather as you wonder exactly how Exner’s writing area in the left frontal lobe entertained itself during millennia of rock-banging. Visual rhetoric, according to Dr. Ryan, has replaced rhetoric as a subject, meaning that students’ assignments are now expected to be multimedia, words enhanced by images.

The truth is, books are books, computers are computers, and we have different expectations for both. As for the much-vaunted hyperlinks and their perceived advantage, it is always the reader’s choice to use them and it is the writer’s responsibility to maintain the linear flow of his or her material. It’s not possible to second-guess which links in a text will be clicked, but it is sensible to assume that a reader would like to see a figure, illustration, or other nontext feature near the text where it is mentioned.

Hill’s concept of eroding textual boundaries raises some interesting possibilities for the print publisher. No, we’re not suggesting that every book should become a Wikipedia, but discussion and further findings—particularly for a book like this where new information and stories come to light daily—can be continued on the Web, via a blog or forum.

As an example of new findings, here’s an interesting twist to the digital vs. print issue—about readers becoming listeners. The Associated Press reported that college-level students are using MP3 players to listen to downloaded material, and a year ago Apple launched iTunes U, where teachers can post lectures for students to download. Professor Kathy O’Connor at Tidewater College, with four campuses in Virginia, got an $11,000 grant to provide students with iPods; other schools’ libraries lend iPods to students. McGraw-Hill Cos., the leading textbook publisher, offers more than 800 digital products, an increase of 50 percent over the past four years. Leading digital book seller Audible, Inc. and Pearson Education joined forces in 2006 to launch VangoNotes, textbook chapter summaries and reviews in MP3 form.

Grade schools are also going digital with Playaway, a two-ounce flash player made by Follett Corp. and Findaway World that comes preloaded with an audio book; it has been sold to school districts for about six months. On loan at over 1,000 libraries, 15 percent of which are school libraries, the players are seen as an adjunct to parents reading to their children, a practice long encouraged by educators to foster literacy and a love of books.

ARE WE REVERTING TO A PRE-LITERATE CULTURE?

Of course, it could be argued that increasing reliance on iPods and digital listening devices erodes our full potential as readers—that we’re reverting to a pre-second-revolution stage (by Harnad’s definition, we have speech but not writing). But a technical concept seems appropriate and reassuring in the context of the fate of literacy as we’ve known it up to now. It’s the concept of graceful degradation.

Graceful degradation is an important property of large networks:

One of the original motivations for the development of the Internet by the Advanced Research Projects Agency (ARPA) of the U.S. government was the desire for a large-scale communications network that could resist massive physical as well as electronic attacks including global nuclear war. In graceful degradation, a network or system continues working to some extent even when a large portion of it has been destroyed or rendered inoperative.

(Source: http://searchnetworking.techtarget.com/sDefinition/0,290660,sid7_gci1238360,00.html)

“Fault-tolerance or graceful degradation is the property of a system that continues operating properly in the event of failure of some of its parts,” sums up Wikipedia. We prefer viewing the steady proliferation of applications and platforms with optimism rather than alarm. What’s so bad about having a host of alternative failsafes for disseminating information? It’s not as if we can control the definition of literacy, any more than we can control the outlets from which people take the words and images and sounds they’ve decided are worth their time and attention.
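To make the concept concrete, here is a minimal sketch, in Python, of graceful degradation at work. Everything in it is hypothetical; the channel names and functions are our own, invented purely for illustration, not drawn from any particular system. A request for an article is served from the richest channel available, and when that channel fails the system quietly falls back to simpler ones rather than failing outright.

# A minimal sketch of graceful degradation. The channels and function names
# below are hypothetical, invented purely to illustrate the idea: try the
# richest delivery channel first, then fall back to simpler ones.

def fetch_live_html(article_id: str) -> str:
    # Simulate the preferred channel being knocked out.
    raise ConnectionError("network segment unavailable")

def fetch_cached_html(article_id: str) -> str:
    return f"<html><body>Cached copy of article {article_id}</body></html>"

def fetch_plain_text(article_id: str) -> str:
    return f"Plain-text copy of article {article_id}"

def get_article(article_id: str) -> str:
    """Return the best available version of an article, degrading gracefully."""
    for source in (fetch_live_html, fetch_cached_html, fetch_plain_text):
        try:
            return source(article_id)
        except Exception:
            continue  # this channel is down; try the next, simpler one
    return "Article temporarily unavailable."  # every channel failed, but we still answer

print(get_article("2001"))  # prints the cached copy, because the live fetch fails

The point of the sketch is simply that the system keeps answering, at reduced fidelity, instead of collapsing when one channel disappears; readers, we suspect, do much the same as they shift among print, screen, and audio.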

We probably aren’t rearing a generation of scholars addicted to online sources only, and the thundering migration to myriad versions of online content by no means makes all printed content obsolete or even inferior. We’re becoming a culture with many different sources of information; we have to become smart about the publishing tools and distribution channels available and use the best ones for the job. We have to broaden our definition of audience and of literacy—pluralizing both.

THE AGE OF INFORMATION: RICH BUT SCATTERED

One of the most intelligent analyses we’ve found is an article by Jonathan Follett, “Envisioning the Whole Digital Person” (Feb. 2007). Follett speaks of a rich but “scattered information environment” in which the search for valuable content is an important aspect of literacy—and information retrieval and storage are discretionary acts by individuals:

Our lives are becoming increasingly digitized—from the ways we communicate, to our entertainment media, to our e-commerce transactions, to our online research. As storage becomes cheaper and data pipes become faster, we are doing more and more online—and in the process, saving a record of our digital lives, whether we like it or not.

As a human society, we’re quite possibly looking at the largest surge of recorded information that has ever taken place, and at this point, we have only the most rudimentary tools for managing all this information—in part because we cannot predict what standards will be in place in 10, 50, or 100 years.

Our evolving digital existence has made it difficult to keep track of and control all of our information. People are executing more and more transactions online, but entities other than users govern the terms of many of these transactions. Within the digital world, there are items we might choose to share—like our videos, blogs, and playlists—and other items we would prefer to keep private—like our medical records, our financial transactions, and our personal communications. Our personal knowledge assets are scattered, haphazardly organized, and growing rapidly. As a result, we are struggling to access our data, organize it in a meaningful way, and interact with it....

Today, we can purchase storage media for one dollar a gigabyte or less. However, while we have the capability and capacity to save our data, that doesn’t mean we do so with any set purpose in mind. The full impact of our having unlimited digital storage has not yet become apparent, because its existence as a commodity has so far been relatively short. But there is no doubt that a massive amount of personal information is accumulating on people’s hard disks everywhere. The greatest piece of unmapped territory for the search industry to index may not be the dark data that resides on corporate servers, but rather people’s rapidly growing personal archives.

Follett advises, “We should take a holistic view of the digital person.” In short, we have our work cut out for us: “As designers of user experiences for digital products and services, we can make people’s digital lives more meaningful and less confusing. It is our responsibility to envision not only techniques for sorting, ordering, and navigating these digital information spaces, but also to devise methods of helping people feel comfortable with such interactions. To better understand and ultimately solve this information management problem, we should take a holistic view of the digital person. While our data might be scattered, people need to feel whole.”

Connectivity, community, collaboration: All give us more sources of information than we know what to do with. It can be difficult to decide what or whom to pay attention to—so much so that some people admit they’re reading less than ever, in any medium. Which is why the fate of reading in general and print media in particular—especially books—is considered questionable. We beg to disagree. Yes, the fate of content not worth its salt is sealed, and we should thank the impatience of the Internet culture for that. Print isn’t a magic medium—there’s lots of swill on paper. Valuable content in any medium will find its audience.

No, that’s not quite right. Audiences will find valuable content, as defined by themselves for themselves. Our job is to get out of their way.