3

EDUCATION

Two-Way TV

I Everything Has to Change

ONE AUTUMN AFTERNOON in 1959, Buckminster Fuller told some students at Southern Illinois University (SIU) that the Seagram Building was too heavy. Lecturing to the design department, he explained that strength need not depend on weight because new technology could make more with less. Modernist architecture was outdated, he asserted, and so was the competitive posturing of Cold War politicians, a needless struggle he equated with his old nemesis, Malthusianism. He assured the students that there would be plenty of resources to support the growing world population—and no more cause for war—as long as they listened to him carefully and committed themselves to designing comprehensively.

Fuller wasn’t just addressing the future architects and ad men of southern Illinois that afternoon. There was also a film crew in the classroom, recording the first footage for an eighty-hour-long documentary, with a five-year production schedule, comprehensively showing Fuller’s comprehensive thinking. The epic project was directed by the university’s design department chairman, Harold Cohen, who had recruited Fuller to the Carbondale campus as a research professor. While Cohen envisioned that the documentary would preserve Fuller’s ideas after his death, Fuller himself had grander ambitions for the film. As he told the SIU administration, consulting on a new campus they were planning in 1961, he believed that classrooms would soon be superseded by “an intercontinentally networked documentaries call-up system, operative over any home two-way TV set.” Schools would become obsolete, he foretold, and all the world’s great ideas would be instantaneously accessible to anyone anywhere—from New Delhi to Nairobi—elucidated by the world’s greatest communicators. When his eighty-hour documentary was complete, Fuller’s own ideas, as explained by him, would naturally be the first to go online.

The film was never finished.1 Yet Fuller’s technocentric educational ideas endure after more than half a century. Schools are still allegedly headed toward obsolescence, and the preferred fix is still some sort of telepresence. The rhetoric of MIT computer scientist Anant Agarwal is typical of contemporary educational thought: “Everything has to change,” he said in a 2013 TED Talk. “We need to go from lectures on the blackboard to online exercises, online videos. We have to go to interactive virtual laboratories and gamification. We have to go to completely online grading and peer interaction and discussion boards. Everything really has to change.”

Agarwal’s opinion is based on his own teaching experience. The year before his TED Talk, MIT freely offered his introductory electronics class online, uploading video of his lectures, as well as interactive course materials and tests, for absolutely anyone to audit. More than 150,000 students from 162 countries enrolled, staggering numbers that inspired MIT to partner with Harvard on a much broader free online curriculum, with classes ranging from solid state chemistry to social justice, each taught by a star professor, all under Agarwal’s management. By the time Agarwal appeared on the TED Global stage, hundreds of universities were offering Massive Open Online Courses (MOOCs), either through his nonprofit edX consortium or one of several private MOOC platforms.

MOOCs may be the ultimate fulfillment of Fuller’s two-way TV in terms of technology. Yet he would have been terribly disappointed, because the technology has wrought the opposite of Fuller’s comprehensivist intentions in terms of content. Most MOOCs are as narrowly traditional as the university courses they digitalize, a phenomenon driven by efficiency and reinforced by the quest for big audiences: The courses made into MOOCs tend toward core subjects and vocational training.

“Automation is with us,” Fuller argued in 1961, at the campus planning meeting where he described two-way TV. Embracing efficiency in all forms—and holding that automation would liberate humanity from the need to work—he prophesied that automated education would be “concerned primarily with exploring to discover not only more about the universe and its history but about … how can, and may man best function in universal evolution.” Looking at today’s MOOCs, there are plenty of reasons to believe he was simply deluded, irrationally infatuated with technology as a form of transcendental salvation. But it’s also possible that his pedagogical legacy has been ill served by contemporary technology: that the wrong educational strategies have been automated, motivated by the wrong intentions.

Fuller’s educational career was long and varied. Over a period of thirty-five years, he held teaching positions everywhere from Black Mountain College to MIT and Harvard. In those ever-changing circumstances, he consistently maintained that the innate curiosity of students needed only to connect with worldly experience for them to become comprehensive thinkers, and he experimented constantly to find the optimal way to foster that connection—to have the greatest impact on the most people. The potential and perils of automation converge on this ambition: how to automate education without educating automatons.

II Educational Television

TELEVISION WAS STILL an experimental medium when Buckminster Fuller first appeared on the air. He was a guest of Gilbert Seldes, the cultural critic recruited by CBS in 1937 to direct the network’s programming. During his half-decade tenure, Seldes sent TV cameras out to football games and down to the ocean, reckoning that audiences might enjoy seeing sports and watching tides. He also put great store in the potential for education. “Here is a blackboard for the mathematician, a laboratory for the chemist, a picture gallery for the art critic, and possibly a stage upon which the historian can reenact the events of the past,” he wrote in The Atlantic Monthly.2 Fuller’s dynamic lectures, explaining provocative new ideas with tangible mechanical models, were perfect fodder for the future educational platform.

Yet two decades later, when CBS finally inaugurated educational programming with an early morning show called Sunrise Semester, the material could not have been more conventional. Partnering with New York University, the network filmed a professor of romance languages named Floyd Zulli, Jr., recapitulating his introductory comparative literature course in a TV studio staged to look like a classroom. Seated at a desk or standing at a lectern, Zulli delivered thirty-minute lectures on the novels of Stendhal and Proust in an ersatz British accent. (“It is no mere coincidence that the word time appears in the first and last sentence of Marcel Proust’s great work, À la Recherche du Temps Perdu. It is no coincidence simply because by this time you are as well aware of the fact as I am that time is the leading figure in this magnificent novel. Time and memory: Those are the things that will occupy us this morning.”) Despite the tediously academic tone, or perhaps because of it, approximately 120,000 viewers tuned in daily at 6:30 am, and 177 of them paid NYU seventy-five dollars to receive course credit.3

Following the success of Zulli’s Comparative Literature 10 in the fall of 1957, Sunrise Semester became a staple at CBS, and NBC launched a rival pre-dawn program called Continental Classroom. The NBC show focused on science and math, beginning with a chemistry course taught by UC Berkeley professor Harvey E. White; 275,000 people tuned in, and some 250 colleges allowed students to watch for course credit provided they passed a midterm and final exam.

Five thousand students sought college credit in 1958, watching White standing in front of a studio-lit chalkboard crowded with chemical formulae. By 1961, with Harvard professor Charles F. Mosteller teaching probability, the number of college-enrolled students reached ten thousand. Televised education “is a very substantial and largely unexplored area,” Mosteller enthused to the Harvard Crimson, an opinion shared by White, who told Time that he didn’t miss the interaction of a traditional classroom setting. “Actually, most questions are asked by the dumber students,” he said.

Fuller developed his vision for automated education concurrently with these educational developments in network television, and he shared his views with the SIU administration when the popularity of televised lecturing was reaching its zenith.4 In his three-and-a-half-hour oration to the university’s Edwardsville Campus Planning Committee—published by SIU the following year as the book Education Automation—he never mentioned Mosteller or Zulli or even Gilbert Seldes, but the vast number of people watching NBC and CBS must have stoked his optimism.5 He called education nothing less than “the upcoming major world industry.”

He believed that this glorious future was a natural consequence of technological progress. Advances in transportation meant that “the world is going from a Newtonian static norm to an Einsteinian all-motion norm,” he said. Soon people would consider every place home, and the local political interests that supported local schools would no longer be viable. National politics was also untenable because there was no reliable way for politicians to know the will of ever-expanding constituencies. Two-way TV was initially conceived by Fuller to address the political disconnect by letting constituents respond directly to proposed policies—“a constant referendum on democracy”—but he realized that the interactivity would also allow viewers to call up television programs on demand.6 There could be a vast library of authoritative documentaries on myriad subjects. “Simultaneous curricula are obsolete,” he decreed. Students would no longer need to see the same lecture at the same time, as they had to on NBC or at SIU, because everything there was to know would be recorded as video, available on-screen whenever someone grew curious. “There is no reason why everyone should be interested in the geography of Venezuela on the same day and hour,” Fuller told the SIU planners. “However, most of us are going to be interested in the geography of Venezuela at some time.” He argued that “real education”—possible only through his two-way TV—was something to which people would “discipline themselves spontaneously under the stimulus of their own tickertapes.”

It was a truly comprehensive vision: Fueled by automation, the vast new industry of education would provide lifelong schooling for everyone, and their schooling would in turn prepare them to contribute to the educational industry and to automation more broadly. “Research and development are a part of the educational process itself,” Fuller said at the SIU meeting. He claimed that the system he envisioned was sustainable because the efficiency of automation would always increase, providing an ever greater return on investment. In fact, the predominant form of research and development he foresaw would occur through “regenerative” consumption: People would guide the process of automated industrialization by what they chose to purchase, and they’d become better consumers, making wiser decisions, as a result of their automated education. Like most techno-utopian visions, it was a sort of Ponzi scheme driven by self-deception. Falling for it, Fuller enlisted education to serve technology, rather than the opposite.7

By the time he published Critical Path in 1981, Fuller had replaced two-way TV with a “world-satellite-interrelayed computer” that would provide “controlled video-encyclopedia access” for all. This computer would “make it possible for any child anywhere to obtain lucidly, faithfully, and attractively presented information on any subject,” he claimed, and to ensure the quality of the videos, there would be a fully streamlined production system beyond Floyd Zulli’s wildest dreams. “Those who love to teach and have something valuable to teach can discipline themselves to qualify for membership on the subject-scenario-writing teams or on the video-cassette or disc production teams,” Fuller explained. “Permission to serve on the world’s production teams will be the greatest privilege that humanity can bestow on an individual.” Though Fuller allowed that students would “be able to review the definitions and explanations of several authorities on any given subject,” the technological imperative of efficiency had all but completely obliterated his original intention of inspiring comprehensivism through spontaneous curiosity and individual exploration. Following the automation of education to its industrialized extreme, Fuller presented the perfect plan for training all of society to coalesce into a high-performance machine.8

Fuller could have argued that this total societal efficiency, in which regenerative consumption was programmed into a world-satellite-interrelayed global population, was desirable: that universal dehumanization was the way for man to best function in universal evolution. Instead, he simply chose to ignore the authoritarian implications of his proposed educational system. “I am certain that none of the world’s problems … have [sic] any hope of solution except through total democratic society’s becoming thoroughly and comprehensively self-educated,” he disingenuously proclaimed on the page facing his description of subject-scenario-writing teams.

Fuller wasn’t forced to be consistent because he wasn’t actually building any of the systems he was describing. In his lectures and writings, he didn’t have to decide between autodidactic open-endedness and prepackaged comprehensiveness, or to consider whether comprehensive thinking could be prepackaged. He could be hazily optimistic, dreaming his way to utopia. As so often in his prognostications, he left the difficult decisions to those who would actually attempt to automate education.

III The 160,000-Student Classroom

“IT WAS THIS catalytic moment,” Stanford computer scientist Sebastian Thrun told Fast Company in November 2013. “I was educating more AI students than there were AI students in all the rest of the world combined.” Thrun was describing his first MOOC, an online version of his introductory artificial intelligence course, which he’d launched two years earlier with a brief post to an AI mailing list. Some 160,000 people from 195 countries signed up. His Stanford classroom capacity, in contrast, was just 200 students.

Like Anant Agarwal at MIT, Thrun saw vast potential in those numbers. Unlike Agarwal, he decided to launch a for-profit company that would produce MOOCs outside the classroom as educational products specifically designed for web viewing. He called his company Udacity.

Along with edX, Udacity is one of the three major purveyors of MOOCs. (The third, called Coursera, was founded by Thrun’s Stanford colleagues Andrew Ng and Daphne Koller; Coursera splits the difference between edX and Udacity as a for-profit platform hosting MOOCs independently generated by hundreds of universities.) Given Thrun’s insistence on original content professionally scripted by in-house writers, the Udacity model is probably the closest to what Fuller had in mind with his subject-scenario-writing teams and trained production crews. Yet for all their organizational differences, Udacity, Coursera, and edX are strikingly similar pedagogically. All conform to the same basic teaching parameters because all must contend with the same technological and psychological realities.

To capture and retain student attention, lectures are typically broken into many brief segments, often as short as five minutes apiece. Each segment is edited to convey a self-contained concept, followed by a short multiple-choice quiz to engage students and provide them with immediate feedback. A longer test is administered at the end of the lesson. In the sciences and math, grading is by computer. For the humanities, essays are evaluated by fellow students. Each student is responsible for grading the exams of several others, and the grades are averaged. This compromise is presented as a benefit: The process of reviewing other students’ work is supposed to enhance learning in its own right.

To simulate classroom camaraderie, students can interact through dedicated social networks, and are encouraged to meet in person with others who happen to live nearby. They answer one another’s questions, eliminating the need for teachers to interact directly with the masses. Sheer numbers ensure that any query will swiftly be addressed, often more than once, and search algorithms help to organize and validate answers. Professors can monitor the social networks, and also see aggregate results on quizzes and tests. They can refine future lessons based on these data.

Social networking and peer grading are possible because courses are run on a fixed schedule, with firm due dates for assignments. Each week, all ten thousand or hundred thousand students are expected to be at the same stage in the curriculum. However, what happens within that week is open: when people view the lectures, at what speed, and how often, is a personal decision. MOOCs can thereby accommodate people living in different time zones with different work and sleep patterns and different cognitive skills.

Such is the optimized product of Agarwal’s epiphany and Thrun’s catalytic moment. Yet even with the combined R&D resources of the Ivy League and Silicon Valley, all three MOOC platforms have faced dismal course completion rates. In most cases, only 10 percent of the students make it to the end of the semester. Often it’s 5 percent or fewer. Only half of the people who enroll bother to watch a single lecture. Thrun has been especially vocal in his frustration. “My aspiration isn’t to reach the 1% of the world that is self-motivating,” he told Nature in 2013. “It’s to reach the other 99%.”

The poor numbers have compelled Thrun and his fellow MOOC producers to temper their ambitions. By 2014, Udacity was seeking to motivate people by focusing lessons on practical skills that would help students get better jobs, and all three platforms were experimenting with various forms of certification that explicitly connect course completion with resume enhancement. (Exams could be proctored by webcam to discourage cheating and to add credibility to the certification process.) In other words, as MOOCs have matured, they’ve increasingly become mechanisms for professional development, as specialized as the contemporary job market. If they fail even at this degraded level, MOOCs will likely go the way of Continental Classroom and Sunrise Semester, which were canceled as mass audiences drifted to other interests. And if MOOCs succeed by following their increasingly careerist trajectory? They will only help to narrow the prospects of mass education, reinforcing specialization as a global phenomenon.

IV The Guru

A BUCKMINSTER FULLER lecture could last for three or four hours. Or it might endure for five or six or more. Nobody knew when Fuller might finish, himself included, because he didn’t plan in advance what he was going to say. Standing at the podium, he simply raised his hands and began with the first thought that entered his head.

He called his lectures “thinking-out-loud sessions.” They were the antithesis of five-minute MOOC lessons, and a far cry from anything that would have worked in his own proposed video-encyclopedia, let alone on two-way TV. As Stewart Brand wrote in his introduction to the 1968 Whole Earth Catalog (which he dedicated to Fuller), Fuller’s lectures had “a raga quality of rich nonlinear endless improvisation full of convergent surprises.” More than any specific invention or idea, this endless improvisation elevated Fuller to cult status on college campuses in the 1960s, bringing him his global reputation as a teacher and sage.9

Fuller’s lectures encouraged autodidactic learning by example—his own—showing the convergences that could be found through sheer curiosity about the world, and demonstrating that those convergences were worth discovering. His insights were inevitably woven into his personal myth. For instance, he’d describe the kindergarten class in which he built a tetrahedral house out of toothpicks and peas. He’d explain that he was myopic and ignorant of what housing looked like, so he’d experimentally found the simplest stable structure—a tetrahedron—an architectural unit so ideal that it must be the fundamental building block of the universe. It didn’t really matter that the tetrahedron was used in architecture long before he was born, or that his cosmology had the quality of medieval metaphysics. The substance of his lectures was secondary; they were primarily inspirational. They suggested that comprehensive thinking required nothing more than a curious mind. Anyone who heard him was motivated to emulate him—at least until the buzz wore off the following morning.

But the lectures were not stand-alone products. For all the words he used—and all the terminology he invented—Fuller was skeptical of spoken and written language. “My philosophy is one which has always to be translated into inanimate artifacts,” he asserted in Education Automation. When he actually taught at SIU or elsewhere, his lecturing was merely a prelude to the real work of making things.

Fuller’s first teaching position was at Black Mountain College, a small experimental school in North Carolina that hired him in June 1948 as a “summer substitute for a legitimate architect.” He arrived with an Airstream trailer full of geometric models that he’d fabricated over the previous year while exploring the structural properties of great circles and platonic solids. Carrying them into the dining hall, he introduced himself to one and all with a three-hour after-dinner thinking-out-loud session. (“Bucky whirled off into his talk,” his Black Mountain colleague Elaine de Kooning later recalled, “using bobby pins, clothespins, all sorts of units from the five-and-ten-cent store to make geometric, mobile constructions, collapsing an ingeniously fashioned icosahedron by twisting it and doubling and tripling the modules down to a tetrahedron; talking about the obsolescence of the square, the cube, the numbers two and ten (throwing in a short history of ciphering and why it was punishable by death in the Dark Ages); extolling the numbers nine and three, the circle, the triangle, the tetrahedron, and the sphere; dazzling us with his complex theories of ecology, engineering and technology.”) His lecture utterly seduced his audience.10 Entranced by his ideas, most of the seventy-four students were enticed to build on his geometric studies by helping him to erect the first large-scale geodesic dome. At forty-eight feet high, it would be more than ten times larger than the biggest model in his Airstream.

Over the course of several months, the students punched holes into long strips of venetian blind, the only material he’d been able to afford. The steel slats were then laid down on a field and bolted together, but the metal was too flimsy. Elaine de Kooning dubbed Fuller’s failure the “supine dome.” Fuller swiftly recast it as a lesson (later folded into his personal myth): By starting with a structure that wasn’t strong enough to stand, and then systematically reinforcing it, you could be sure that you weren’t wasting materials. The class was utterly convinced. As Fuller’s student Arthur Penn later said, the dome did not stand “because it was predicted to fall down.”

If the students benefited from the failed experiment, Fuller did so to an even greater extent. By the following February, learning from what went wrong in North Carolina, he’d successfully erected a human-scale dome at the Pentagon, a first step toward a lucrative long-term relationship with the US government. And the Pentagon dome in turn inspired new experiments with students.

At the Chicago Institute of Design, Fuller organized an architecture class around the problem of furnishing a dome with amenities for a family of six—including working bathrooms and kitchen—that would be as portable as the dome itself. They came up with a “standard-of-living package,” a complete home interior that could be folded to fit on a trailer. They built a small-scale model. He brought it back to Black Mountain, together with nine Institute of Design students, to work on a new dome in the summer of 1949. They fit his dome with a clear plastic skin. The structure became a shelter.

Other students worked on other projects with Fuller in the 1950s and 1960s. At Washington University in St. Louis, he challenged his class to develop a dome that would self-assemble, struts extended by compressed gas. (When they succeeded, he dubbed it a “Flying Seedpod,” declaring that it might serve as a rocket-deployed lunar habitat.) At the University of Michigan, his class worked on a skeletal “Dynamic Dome” that repelled rain by spinning really fast. At McGill University in Montreal, he worked with students on a low-cost paperboard dome shielded against the weather with aluminum-foil sheathing. At SIU, as the United States edged toward war in Vietnam, he had his class develop bamboo domes that he said might be a “solution” to the crisis, which he blamed, as always, on the abstract talk of politicians.

None of these class assignments was an exercise. All were active research projects, confronting problems that Fuller deemed important, crucial components of a world built upon his all-encompassing philosophy. Students were motivated not only by the opportunity to work on the cutting edge of architecture and engineering, but also by the chance to practice the sort of comprehensive thinking Fuller evoked in his epic lectures. They were invited to inhabit his mind.

Fuller generally started with a concept that extended previous innovations into new domains. For instance, at North Carolina State College in 1951, he conceived the idea of building an automated cotton mill inside a geodesic dome. The shape of the dome would allow machinery to be arranged radially on platforms suspended around a central elevator. “Thus a true flow pattern, similar to the digestive, shunting, secretive and regenerative pattern of human anatomy, will digest and process the cotton,” he wrote in the course description. The mill he envisioned would take far less material to build and far less energy to run than any factory in existence. The students’ task, under his guidance, was to figure out how to practically carry out his vision. “Within thirty days a general assembly and primary sub-assembly set of drawings must be developed, clearly revealing the fundamental scheme and cogently demonstrating the net economic gains in pertinent industrial logistics in metals, energy, and time investments of original installation and subsequent operation,” he elaborated. “It will be clear, as the problem develops, that this omni-directional, multi-dimensional spherical patterning introduces relationships and energy efficiencies that are not only novel but to be contrasted to the present 1-, 2-, and 3-dimensional geometry limitation of intermittent batch and production lines.” In other words, by working out the details of a cotton mill, the students would discover for themselves the design principles that Fuller deemed universal. In carefully controlled circumstances, working under the influence of his lectures, they’d replicate his autodidactic learning process.

The degree to which his best students were able to absorb his worldview, and to adopt it as their own, is demonstrated by the number of former students who became his professional assistants or associates. (One of them, Shoji Sadao, became his closest architectural collaborator, contributing to nearly every major Fuller project over the final three decades of his career.) Less felicitously for Fuller, the origin of new ideas sometimes came into dispute as students extended his principles independently of his classroom.

The most notorious case arose at Black Mountain, when Fuller’s 1948 summer student Kenneth Snelson returned in 1949 with a sculpture, Early X-Piece, composed of wooden slats held under tension with nylon string. Because the tension was continuous, the triangulated structure remained rigid even if crossed pairs of slats were noncontiguous. The wooden crossbars seemed to levitate. Snelson named his system “floating compression.”

Fuller immediately recognized the significance of Snelson’s achievement: Snelson had succeeded in completely separating compression and tension, a structural ideal that Fuller had been touting since the late 1920s when he conceived of mast-hung Dymaxion dwellings. Fuller sought to separate compression and tension because stretched wires were strong and light. Tension was an essential aspect of doing more with less, avoiding wastefully heavy buildings. In a geodesic dome, tension and compression were integrated and balanced, making domes far stronger for their weight than piles of bricks or poured concrete. Snelson’s floating compression arose from the same engineering problem as the geodesic dome, but represented an entirely different solution.

Fuller seized upon it as his own. He renamed Snelson’s principle tensegrity and used it to build new versions of the geodesic dome—working with University of Minnesota and Princeton students in 1953—eventually leading to a 1962 patent in Fuller’s name. For the rest of his life, he defended floating compression as his unique invention. “For twenty-one years before meeting Kenneth Snelson, I had been ransacking the Tensegrity concepts,” he wrote in a 1961 Portfolio & Art News Annual article.11 And in a letter to Snelson nearly two decades later, as Snelson continued to explore the sculptural potential of floating compression, Fuller condescendingly gave his former student credit for having come up with a “special-case demonstration” of his own principle. (At least he didn’t sue Snelson for patent infringement.)

Fuller’s all-encompassing ego extended beyond issues of intellectual property. He also had no tolerance for dissent. Since he believed that his worldview embodied the fundamental laws of the universe, as uniquely revealed to him through his autodidactic studies, he considered any alternative viewpoint unworthy of discussion. “I am quite confident that I have discovered the coordinate system employed by nature itself,” he said by way of personal introduction in Education Automation. But what if he was mistaken? From his veneration of triangles to his belief that automation was universally beneficial, there was plenty to dispute. In the rare cases that a pupil had the cheek to question his assumptions, Fuller changed the subject.

Fuller the guru was at odds with his own educational ideals, as was Fuller the industrialist. The authoritarian megalomania that dominated his plans for education automation also undermined his performance in the lecture hall and classroom. His rigid concept of comprehensivism paradoxically made students specialists in Fullerism.12 He was oblivious to this paradox, naively believing that if all students thought for themselves, they would think exactly as he thought. There’s no reason to think that Fuller was being hypocritical when he told his former Black Mountain student Ruth Asawa that the key to education was to “create an environment so that learning can take place.” But he was clearly not the right person to create that environment.

V Cultivating Curiosity

IN A 1965 essay on education titled Emergent Humanity, Buckminster Fuller characterized the autodidactic learning process as “first taking apart and then putting together.” He argued that this was the way in which people “learn to coordinate spontaneously. They learn about the way Universe works.” Fuller was describing the enlightened life led by children before getting “degeniused” by counterproductive formal education. He saw the child’s room as a “learning lab” where the materials taken apart and put together could be as simple as cloth and newspaper. Today the learning lab might be more elaborate, supplemented with programmable Arduino microcontrollers and MakerBot 3D printers.

If any educational trend is spreading more rapidly than the MOOC, it’s the makerspace. Unlike MOOCs, which primarily address higher education, making extends from early childhood into college and beyond.13 Making is motivated by a sense of opportunity and feelings of apprehension, both arising from premonitions of an impending “third industrial revolution.” (The first industrial revolution was driven by the steam engine, and the second arrived with the personal computer. This hypothetical new revolution involves a recombination of the two that came before, making everyone a manufacturer.)14 Industrially threatened by developing countries such as China and India, the United States in particular has emphasized making as economically imperative if future generations are to be competitive in the global marketplace. Maker faires have been hosted by the White House and sponsored by companies including Hewlett-Packard and Autodesk. The Defense Advanced Research Projects Agency (DARPA) has spent millions of dollars funding makerspaces in US high schools.

The economic influence of government and industry—which conflate education with global competitiveness—has made making highly pragmatic, less about learning to coordinate spontaneously than about developing new products. And even this pecuniary ideal has been trivialized. Students are presented with machinery and are asked to use it appropriately, rather than being creatively motivated to find the tools and materials that will realize their ideas.15 As a result, they follow paths of least resistance. Some of the most popular projects are custom iPhone cases and garments festooned with blinking lights. (The former is well adapted to the capabilities of a low-end 3D printer, while the latter takes advantage of washable Arduino LilyPads.)16

However, these faults aren’t inherent to making, any more than narrow-minded career enhancement is inherent to MOOCs. On the contrary, 3D printing and Internet connectivity enlarge the potential to make things and communicate ideas. If there is a lesson to be learned from the first generation of makerspaces and MOOCs, it’s that more powerful technologies call for stronger human convictions to guide them. For all that Fuller preached about technology as a panacea—and a replacement for politics—he understood the need to deploy technology responsibly. Competent stewardship of Spaceship Earth was one major reason that comprehensive learning was so important to him. Making can be an extremely effective means of attaining comprehensiveness, but only if undertaken in an environment that encourages discovery. That environment might be fostered by itinerant lecturers of Fuller’s inspirational caliber. More realistically, given the rarity of inspirational teaching relative to the population in need of it, the motivating questions and context for making can be delivered by creatively re-engaging the MOOC.

At least one edX MOOC has already incorporated some hands-on experimentation. Teaching Fundamentals of Neuroscience in the fall of 2013, Harvard professor David Cox encouraged students to build a SpikerBox, an open source bioamplifier kit that would allow them to hear neurons firing in crickets. Though the course was otherwise conventional—and the experiments would be standard fare in any classroom—the addition of “guided interactivity” slightly tilted the balance of power from teacher to student, and the fact that the SpikerBox could readily be modified left open the possibility that unsupervised students would find their own way forward. Fundamentals of Neuroscience suggests how MOOCs can become platforms for cultivating curiosity.
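The SpikerBox simply amplifies neural activity into an audible electrical signal, so a student tinkering beyond the kit’s instructions might, for instance, count spikes in a recorded trace with a few lines of code. The following is a hypothetical sketch of that kind of unsupervised extension—the threshold value and the synthetic trace are illustrative assumptions, not part of the course materials:

```python
# Minimal threshold-based spike counter for a SpikerBox-style recording.
# The trace below is synthetic; a real one would be loaded from an audio file.

def count_spikes(signal, threshold):
    """Count upward threshold crossings, treating each as one spike."""
    spikes = 0
    above = False
    for sample in signal:
        if sample > threshold and not above:
            spikes += 1      # rising edge: a new spike begins
            above = True
        elif sample <= threshold:
            above = False    # signal has fallen back below threshold
    return spikes

# Synthetic trace: quiet baseline interrupted by three brief spikes.
trace = [0.1, 0.2, 3.5, 0.1, 0.0, 4.2, 3.9, 0.2, 0.1, 5.0, 0.3]
print(count_spikes(trace, threshold=1.0))  # → 3
```

Crude as it is, an exercise like this captures what made the course unusual: the student, not the instructor, decides what question the hardware should answer next.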

The cultivation of curiosity is the essential educational bridge from the child’s own room to the wider world of adulthood. It’s what lifts native genius from toying with anything at hand to building a geodesic dome that outperforms the Seagram Building—or finding a differently shaped solution to the problem of architectural wastefulness. Fuller’s great achievement as a teacher was to ask questions demanding answers that were both specific and holistic. His great pedagogical insight was that the process of making required the focused acquisition and integration of far-flung knowledge, an intellectual synthesis of disparate phenomena that literally would or would not hold together. You can’t cheat the laws of nature.

Though the White House and Hewlett-Packard would like to believe that making is vocational—preparing children to succeed in a third industrial revolution—making really has nothing to do with training engineers or entrepreneurs. Given the right conditions, it’s about the intellectual development of a flesh-and-blood species living in the physical universe. That’s what Fuller had in mind when he preached about comprehensive design. Education is the design of a comprehensive mind.


1. By most accounts, Fuller was on campus no more than a few times per year, spending the remainder of his time on tour. Fuller’s arrangement with SIU required only that he deliver a few lectures annually and teach occasional seminars at his convenience. In return he was given an annual salary of $12,000, as well as a large office with a staff to manage his vast archives.

2. Titled “The ‘Errors’ of Television,” the article was actually what motivated CBS to hire Seldes.

3. Evidently Zulli became something of a matinee idol. In 1958, Al Hirschfeld drew his caricature for the women’s magazine Charm.

4. NBC canceled Continental Classroom in 1963, though Sunrise Semester lingered until 1982.

5. ABC also had a program called Meet the Professor. It was canceled the same year as Continental Classroom.

6. Fuller envisioned a system that would beam optical signals back and forth between a local tower and individual television sets, proposing that the signals could be sent with “lassers.” Presumably he meant lasers, which were brand-new technology when he delivered his lecture. (The first working model was built in 1960.) Today the vast majority of two-way data is sent by laser, albeit through fiber-optic cables.

7. Apparently SIU wasn’t deceived. Despite all of his suggestions to the administration—including the advice that they should get “lots of airplanes”—the Edwardsville campus was perfectly conventional.

8. Not that it would have worked. Like many techno-utopians before and since, Fuller assumed that efficiency could be increased indefinitely without compromises or diminishing returns.

9. It helped when his audiences were high, as was frequently the case in the 1960s. When available, acid was the drug of choice.

10. Faculty members in the audience that evening also included Willem de Kooning, John Cage, and Merce Cunningham. Watching Fuller in action, Cunningham was reminded of the Wizard of Oz.

11. Fuller wrote this article in the wake of a Museum of Modern Art exhibition devoted to his work. The curator, Arthur Drexler, had included several of Snelson’s sculptures in a vitrine, and credited Snelson with discovering the tensegrity principle in a wall text. Unable to change the exhibition content, Fuller attacked Snelson in print.

12. And Fuller scorned nobody more than the specialist. In Education Automation, for instance, he referred to specialists as “slaves to the economic system in which they happen to function.”

13. Of course this depends on what gets called a MOOC. For instance, the Khan Academy, which is sometimes categorized as a MOOC platform, offers video instruction in everything from counting to number theory.

14. A key aspect of this latest industrial revolution is additive manufacturing, a technology discussed in the previous chapter.

15. The problem is exacerbated by the popularity of kits. With only one correct way to assemble them, they’re the mechanical equivalent of multiple-choice tests.

16. The educational establishment encourages this laziness. For instance, educational consultant Gary Stager, one of the leading advocates of makerspaces in schools, published an article in the Winter 2014 issue of Scholastic Administrator offering the following words of inspiration: “Imagine a sweatshirt with directional signals on the back, a backpack that detects intruders, or a necklace that lights up when you approach your favorite class.” By these standards of creativity, the third industrial revolution can be expected to channel the world’s resources into trinkets sold on Etsy.