Leipzig, 1771. A handful of nervous but adventurous young men are gathered at the doorway of a second-floor room above a coffeehouse at the corner of Klostergasse and the Barfussgasschen. They are out on the town for the night, taking in a new spectacle that has been much discussed among the pamphleteers and chattering classes of Northern Europe. After a few minutes of waiting, a hooded figure emerges from the darkened room and beckons them inside. Death’s-head masks line the walls; an altar draped in black cloth stands across from them. The smell of sulfur pervades the space, illuminated only by a few flickering candles. Their host stands in a chalk circle at the center of the room and begins to read aloud ancient incantations conjuring the spirit world. At once, a blast of noise erupts in the room, and the flames are extinguished. In the gloom, a spirit appears, hovering over the dark altar. The guests feel a literal shock as the specter confronts them, a current of electricity running through their bodies. The host stabs at the conjured ghost with a sword to demonstrate its ethereal presence; its mouth opens and begins to speak, in “a hoarse and terrible tone.”
This eerie spectacle was the creation of a young German named Johann Georg Schröpfer, a troubled showman who would for a brief period of time become one of the most famous men in all of Europe. In the mid-1760s, Schröpfer had taken a job as a waiter at a Leipzig lodge that was frequently used by local Freemasons for their ritualistic gatherings. He soon assimilated the Freemason dogma and began styling himself as a channel to the spirit world. Like many in the orbit of the Freemasons, Schröpfer lived in the murky middle ground between science and occultism; he was versed in the new technologies of magic-lantern projections and dabbled in chemistry. But he also had a fascination with séances, and seems to have believed that using technology to create the illusion of spectral appearances could actually enable contact with the spirit world.
The combination of showmanship and spiritualism would prove to be lethal. In 1769, Schröpfer purchased the Leipzig coffeehouse, and promptly retrofitted its billiards room into a multimedia, immersive theater of terror. Before long, he was conducting séances for his acolyte customers, using magic lanterns to project ghostly images against curtains of smoke, while creepy sound effects thundered in the small space. Anticipating the theater-seat gimmicks of midcentury Hollywood, like the "Percepto" buzzers William Castle wired into seats for the 1959 Vincent Price camp classic The Tingler, Schröpfer delivered electric shocks to his clientele using the static-electricity machines that had become popular parlor amusements during that period.
Schröpfer had stumbled across a form of entertainment that would eventually mature into the immensely profitable genre of horror films. Before long he had become a legend across Europe: the Gespenstermacher—or “ghost maker”—of Leipzig. Pamphleteers debated whether his conjuring was reality or illusion, but Schröpfer appears to have fallen under his own spell. In addition to his necromantic illusions, Schröpfer crafted an entirely fictitious persona for himself as a man of great means whose fortune had for some obscure reason taken the form of a hidden treasure guarded by bankers in Frankfurt. The mix of occultism and constant deception eventually pushed an already disturbed Schröpfer over the edge. In 1774, he went for a stroll in a Leipzig park with a handful of friends, promising them “something you have never seen before.” At one point in their outing, he walked around a corner out of the view of his companions, who were then startled by a loud explosion. When they caught up with their friend, they discovered him bleeding to death on the ground, the victim of a self-inflicted gunshot wound to the head. Clearly deranged, Schröpfer had vowed to return to life during a future séance. No horror auteur since has shown such dedication to the suspension of disbelief.
The ghost maker’s shocking demise only heightened his reputation. “In dying,” Deac Rossell writes, “Schröpfer became the Lautréamont, the James Dean, the Jimi Hendrix of his generation.” While the debate continued to rage over the legitimacy of Schröpfer’s black arts, dozens of showmen across the continent constructed stage shows that re-created his special effects. The most successful of those disciples was another mysterious German named Paul Philidor, who began staging horror shows in Vienna in the late 1780s. Unlike Schröpfer, Philidor made no pretense that he was actually conjuring up the dead. (He referred to himself as “The Physicist,” presumably to differentiate himself from the ghost making of Schröpfer.) A newspaper report from 1790 described the show: “As soon as the peroration begins, distinct thunder is heard approaching, accompanied by wind, hail and rain; the lights extinguish themselves one by one, and in the impenetrable darkness various ghosts of all shapes flutter about the room; finally after a very furious storm and a rushing of the wind, a living ghost appears out of the Earth, then again slowly sinking into the abyss below.”
Philidor made three essential contributions to the genre. First, he began rear-projecting the spectral images on a thin, semitransparent curtain that was otherwise invisible to the spectators. (Schröpfer’s technique of projecting onto clouds of smoke was undeniably eerie, but also much less reliable as a canvas for the images—and filling a small chamber with thick smoke made the séances physically unpleasant.) Philidor also pioneered the technique of placing the magic lantern on wheels; by slowly moving the projector toward the screen, he created the illusion that the specters were growing larger as they approached the terrified spectators. (A little more than a century later, the technique would be reimagined as the cinematic tracking shot.) Philidor’s final contribution was a linguistic one. He gave his spook show a name, one that would haunt the imagination of Europeans for decades to come: the Phantasmagoria.
The roots of the Phantasmagoria lay in German soil, but the show truly flowered in Paris. Philidor took his exhibition to the Hôtel des Chartres in 1792, at the height of revolutionary turmoil; throngs of amazed Parisians shivered at Philidor’s illusions while Louis XVI stood trial before the National Convention. In April of 1793, while still performing for packed houses, Philidor mysteriously shut down his Phantasmagoria and went into hiding. (Rumors suggested he had somehow run afoul of the Committee of Public Safety.) A few years later, the Belgian showman Étienne-Gaspard Robertson revived the Phantasmagoria to great success, conducting his spectral displays in the vaults of an abandoned Capuchin monastery beneath the streets of Paris. The shows were a kind of fusion of two modern senses of terror: the audiences screamed and recoiled just as audiences at modern horror films do, but the ghosts Robertson conjured belonged to the political Terror, as magic-lantern projections of the death masks of Robespierre, Marat, and Louis XVI loomed in the shadow light.
New technologies or forms of popular entertainment change the world in direct ways: creating new industries, enabling new forms of leisure or escapism, sometimes creating new forms of oppression or physical harm to the environment. But they also change the world in more conceptual ways. Every significant emergent technology inevitably enters the world of language as a new metaphor, a way of framing or illuminating some aspect of reality that was harder to grasp before the metaphor began to circulate. As a show, the Phantasmagoria might seem to have been a folly, the eighteenth-century version of a slasher flick. But as a metaphor, it turned out to have a powerful philosophical allure.
The Phantasmagoria
Hegel invoked Philidor’s creation in his Jena lectures, delivered while writing Phenomenology of Spirit: “This is the night, the inner of nature that exists here—pure self. In phantasmagorical presentations it is night on all sides; here a bloody head suddenly surges forward, there another white form abruptly appears, before vanishing again. One catches sight of this night when looking into the eye of man—into a night that turns dreadful; it is the night of the world that presents itself here.” Schopenhauer described the human sensory apparatus as a “cerebral phantasmagoria.” In his 1833 satiric novel, Sartor Resartus, which contained a thinly veiled caricature of Hegel himself, Thomas Carlyle popularized the use of the term as a metaphor for an individual or society that has lost its grasp of reality: “We sit in a boundless Phantasmagoria and Dream-grotto; boundless, for the faintest star, the remotest century, lies not even nearer the verge thereof; sounds and many-colored visions flit round our sense . . .”
Double-lens magic lantern
That broader definition of phantasmagoria as a kind of mass illusion would play a crucial role in one of the most influential paragraphs of political philosophy ever written, in the section of Das Kapital where Karl Marx first defines his notion of commodity fetishism:
As against this, the commodity-form, and the value-relation of the products of labour within which it appears, have absolutely no connection with the physical nature of the commodity and the material relations arising out of this. It is nothing but the definite social relation between men themselves which assumes here, for them, the fantastic form of a relation between things. In order, therefore, to find an analogy we must take flight into the misty realm of religion. There the products of the human brain appear as autonomous figures endowed with a life of their own, which enter into relations both with each other and with the human race. So it is in the world of commodities with the products of men’s hands. I call this the fetishism which attaches itself to the products of labour as soon as they are produced as commodities, and is therefore inseparable from the production of commodities.
“The fantastic form of a relation between things”—the English phrase derives from an 1887 translation, but the original German makes it clear that Marx was deliberately invoking the tradition of Schröpfer and Philidor: “the fantastic form” is in the original “die phantasmagorische Form,” a “misty realm” with “autonomous figures endowed with a life of their own.” Capitalism, to Marx, was not just a form of economic oppression but, crucially, an economic system in which the objects produced were wrapped in a kind of ghostly illusion, an unreality that kept its participants from recognizing the truth of their oppression. Marx was never reluctant to lean on arcane philosophical arguments or advanced economic theory to explain his vision of history: Hegel, Adam Smith, and David Ricardo are referenced dozens of times over the course of Das Kapital’s three volumes. But at a crucial point in that argument, introducing a concept that would prove to be one of his most influential, he drew upon popular entertainment, not phenomenology, to convey his meaning. New ideas need new metaphors, and in Marx’s case the new metaphor came from a spook show.
—
In 1801, a man calling himself Paul de Philipsthal launched a new version of the Phantasmagoria at the Lyceum Theatre in the Strand. Many historians believe that Philipsthal was in fact Philidor himself, returning to the entertainment business with a new stage name, but Philipsthal’s true identity has remained something of a mystery. The Phantasmagoria became an anchor tenant of the booming marketplace of illusion that flourished in the West End during the first decades of the century. At some point in this general time frame—the exact dates have been lost to history—the Scottish scientist David Brewster began frequenting the Phantasmagoria during his stays in London. Brewster is one of those nineteenth-century characters who have no real equivalent today. An ordained minister in the Church of Scotland, he took an early interest in astronomy and became for a time one of the world’s leading experts on the science of optics. But he also harbored a great fondness for popular amusements. That obsession led him to invent the kaleidoscope, which was for a few years the PlayStation of the late Georgian era. (Either through incompetence or indifference, Brewster barely made a penny from the device, as imitators quickly flooded the market with clones of his original idea.) But his obsession also led Brewster to the spectacles of illusion and terror in the West End, to the Phantasmagoria and its brethren. He was there in part as a debunker, a skeptic discerning the secret craft behind the spectacle. But he also sensed that something profound was lurking in the trickery: the showmen were exploiting quirks in the human sensory system, and in exploiting those quirks, they made them all the more visible to the scientist.
Of the Phantasmagoria’s spectral images, Brewster compiled this analysis, accompanied by diagrams:
[The] phenomena were produced by varying the distance of the magic lantern AB, Fig. 5, from the screen PQ, which remained fixed, and at the same time keeping the image upon the screen distinct by increasing the distance of the lens D from the slides in EF. When the lantern approached to PQ, the circle of light PQ, or the section of the cone of rays PDQ, gradually diminished and resembled a small bright cloud, when D was close to the screen. At this time a new figure was put in, so that when the lantern receded from the screen, the old figure seemed to have been transformed into the new one.
Brewster’s analysis didn’t provide much ripe material for a promotional pull quote for the show. But then again, he was not reviewing the spectacles in the mode of a journalist or a critic. He was there at the encouragement of Walter Scott, taking notes for a book that he would come to call Letters on Natural Magic. Brewster had realized that, just as Enlightenment science had unlocked many doors for creating magical distortions of reality, it had also unlocked doors for detecting the laws behind that reality. The ability to understand the world advanced at roughly the same pace as the ability to deceive. That new deceptive power—natural magic, as opposed to the supernatural kind—was at its most vivid in the West End of London. Brewster was there, at Scott’s bidding, to explain away its illusions—and celebrate them at the same time.
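The trick Brewster diagrams can be restated with the thin-lens equation, 1/f = 1/u + 1/v, where u is the lens-to-slide distance and v the lens-to-screen distance. The following sketch is illustrative only; the focal length and distances are assumed, not Brewster's figures.

```python
# A minimal model of the Phantasmagoria "approach" trick: as the lantern is
# wheeled toward the screen (v shrinking), keeping the image in focus forces
# the lens away from the slide (u growing), and the projected figure shrinks.
# Played in reverse, the specter appears to surge toward the audience.

FOCAL_LENGTH = 0.1  # meters (an assumed value for illustration)

def slide_distance(v, f=FOCAL_LENGTH):
    """Lens-to-slide distance u that keeps the image focused on a screen at v."""
    return 1.0 / (1.0 / f - 1.0 / v)

def magnification(v, f=FOCAL_LENGTH):
    """Linear magnification of the projected image: m = v / u."""
    return v / slide_distance(v, f)

for v in (3.0, 1.0, 0.3):
    print(f"screen at {v:.1f} m: slide at {slide_distance(v):.3f} m, "
          f"image {magnification(v):.1f}x life size")
```

At three meters the image is nearly thirty times life size; with the lantern almost touching the screen it dwindles to the "small bright cloud" Brewster describes, the moment at which a new slide could be swapped in unseen.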
David Brewster
The fact that a man like Brewster would seek out such popular entertainments is itself noteworthy. The “exhibitions” that began proliferating across Europe were among the first places where a shared culture began to take shape that compelled highbrow philosophers, common laborers, and landed gentry to take in the same amusements on more or less equal terms. Today we take it for granted that movie stars and politicians and factory workers will all happily turn out for the latest Pixar film or gather in a football stadium to cheer on their favorite team. But three hundred years ago, the different classes had few points of intersection. “At all times, curiosity was a great leveler,” the historian Richard Altick writes in his magisterial work, The Shows of London. “Exhibitions that engaged the attention of the ‘lower ranks’ also attracted the cultivated and, perhaps to a lesser extent, vice versa. The spectrum of available shows was not divided into one category that was exclusively for the poor and unschooled and another, entirely separate, for the well-to-do and educated. As more than a few foreign visitors noted, no English trait was more widespread throughout the entire social structure than the relish for exhibitions, and, one might add, no trait was more effective in lowering, however briefly, the conventional barriers that kept class and class at a distance.” What brought them together was the strange, unpredictable pleasure of being fooled.
Early in Letters on Natural Magic, Brewster observes that a disproportionate amount of popular illusion is directed at the human visual system. “The eye,” he wrote, “is the most fertile source of mental illusions . . . the principal seat of the supernatural.” Since Brewster’s time, entire books—some targeted at seven-year-olds, others at neuroscientists—have cataloged a vast menagerie of optical illusions. Consider two famous visual tricks: the Kanizsa triangle and the Necker cube.
Left: Kanizsa triangle; Right: Necker cube
In each case, the eye detects something that is quite literally not there: a white triangle and a three-dimensional box. In each case, it is almost impossible to un-see the illusion. The Necker cube can be visually flipped between two different three-dimensional orientations, but most of us can’t perceive it as it actually is: twelve intersecting lines lying on a two-dimensional surface. The mind’s eye conjures up a perception of depth that empirically does not exist. The smallest tweak to the image can eliminate the 3-D effect, as in this drawing, known as a Kopfermann cube, where the image appears to alternate between a 3-D cube and a 2-D pinwheel shape.
Kopfermann cube
Hundreds of similar illusions have been discovered by artists, showmen, and scientists over the centuries. Strangely, the human brain doesn’t seem to be nearly as vulnerable to being fooled by the other senses. A few comparable auditory illusions exist, most famously the illusion created by stereophonic sound, which tricks the ears into perceiving a sound emanating from a point between the two speakers. A number of automaton designers in the eighteenth century attempted to create a “speaking machine”—a robotic human head that could utter words and sentences through artificial means, following on the principles of Vaucanson’s automated flute player. But the human ear is not easily fooled by speech simulations: even today, with all of our computational power, a child can tell the difference between Siri and a human voice. And the other senses—touch, smell, taste—are even less prone to being tricked the way our eyes are tricked by the Necker cube. A handful of tactile illusions exist; with taste, the closest equivalent might well be the way chili peppers trick our brains into perceiving heat. But if you want to deceive the senses of another human being, your best bet is to do it through their eyes.
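The stereo "phantom source" mentioned above can be sketched in a few lines. The standard trick is equal-power panning: one signal is split between two loudspeakers, and the listener hears a single source floating between them. The constant-power pan law below is a common textbook formulation, not tied to any particular historical system.

```python
import math

def equal_power_gains(pan):
    """Speaker gains for a constant-power pan.

    pan in [0, 1]: 0 = hard left, 0.5 = phantom center, 1 = hard right.
    """
    angle = pan * math.pi / 2
    return math.cos(angle), math.sin(angle)

# At center pan, both speakers carry the signal at ~0.707 of full level, so
# total acoustic power (left^2 + right^2) stays constant at 1 -- and the ear,
# fooled by the matched levels, hears one source midway between the speakers.
left, right = equal_power_gains(0.5)
print(f"center pan: L={left:.3f}, R={right:.3f}, power={left**2 + right**2:.3f}")
```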
There is something paradoxical about this vulnerability. The human sense of sight is generally considered to be the most developed of our senses. Some estimates suggest that 85 percent of the information we take in arrives through our visual system. Why, then, should our strongest sensory tool be the most vulnerable to being tricked? Over the past few decades, researchers specializing in visual intelligence have come to understand this apparent paradox. The power of our visual system can’t be measured in simple resolution, like the megapixels of digital cameras. In fact, the percentage of your visual field that you actually perceive in focused “high definition” is shockingly small. What makes our eyes so perceptive lies in the way the brain interprets the information it receives through the optic nerve. In a sense, the brain has evolved a series of cheats that enable it to detect things like edges or motion or three-dimensional relationships between objects, filling in missing information on the fly. You can think of these as the rules of thumb that govern our sense of sight. For instance, when our eyes perceive two lines coinciding in a flat image, our brain assumes those lines intersect in three-dimensional space. (The Necker cube relies on this rule to create the sense of depth in the image.)
Millions of years of evolution created rules for interpreting visual information, helping the eye evaluate and predict the physical arrangement and motion of objects that it perceives. But through hundreds of years of cultural evolution, we began discovering unusual configurations that would confound those predictions, forcing the eye to see something that wasn’t, technically speaking, there. Natural selection created a kind of neural technology to interpret the information transmitted to our eyes, and then human beings deliberately set out to invent technology that short-circuited evolution’s inventions. This turned out to be surprisingly fun.
Despite his scholarly expertise in the science of optics, Brewster himself was happy to defy the rules of perception for mass amusement. Late in his life he invented the stereoscope, the handheld technology that fools the eye into perceiving two distinct flat images as a single 3-D scene. Unlike his earlier kaleidoscope invention, Brewster managed to build a successful business selling his contraption, properly branded as a “Brewster Stereoscope.” Queen Victoria famously marveled at one during the Great Exhibition of 1851. The stereoscope lives on to this day in the form of the popular View-Master toy, and the fundamental illusion the stereoscope relies on is also central to virtual reality goggles like the Oculus Rift.
Optical illusions can be employed for more serious pursuits. Until the late nineteenth century, the most famous and influential “trick of the eye” was the invention of linear perspective, generally credited to the architect Filippo Brunelleschi, though the fundamental rules that governed the technique were first outlined in the book On Painting by Leon Battista Alberti. Like the Necker cube, it is almost impossible not to perceive the depth relationships in a painting that successfully executes the principles Brunelleschi and Alberti devised. Technically speaking, linear perspective is nothing more than an optical illusion, but it is rightfully considered one of the most transformative innovations of the Renaissance.
For a brief period at the end of the eighteenth century, it seemed as though a Scottish painter named Robert Barker had stumbled across an innovation of comparable significance. At some point in the mid-1780s, Barker took a stroll to the top of Calton Hill in Edinburgh. Standing near the current site of the Nelson Monument and gazing out over the city, Barker hit upon the idea of painting the entire 360-degree view by rotating a sequence of square frames around a fixed spot, sketching each part of the vista and then uniting them as a single wraparound image. With the assistance of his twelve-year-old son, Barker completed the epic project, but when he unfurled the final immense canvas and wrapped it around the viewer, he discovered that the concave surface distorted the image, making horizontal lines appear curved unless they were perfectly aligned with the viewer’s eyes. In a sense, it was the opposite of the problem Brunelleschi and Alberti had solved: instead of creating the illusion of 3-D on a flat surface, Barker had to eliminate the distortions that came from painting on a 3-D surface. He ultimately devised a technique whereby straight lines would be artificially curved to compensate for the distortion, not unlike the way vanishing points bring parallel lines closer together in linear perspective. Barker also imagined an entire built structure to house his illusion, with concealed overhead lighting and an entrance through stairs below the viewing platform. (A doorway stuck in the middle of the painting would break the spell.) He was granted a patent in 1787 for “an entire new Contrivance of Apparatus . . . for the Purpose of displaying Views of Nature at large.”
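The geometry behind Barker's correction can be sketched directly. Assume a viewer at the center of a cylindrical canvas of radius R, looking at a straight horizontal edge in the scene (a rooftop, say) at height h above eye level and perpendicular distance d; the numbers below are illustrative, not Barker's. A point on that edge seen at azimuth theta lies at horizontal distance d / cos(theta), so the ray to it strikes the canvas at height R * h * cos(theta) / d: a cosine arc, not a straight band.

```python
import math

def painted_height(theta, h, d, R):
    """Canvas height at which a point on a straight horizontal edge must be
    painted, for a viewer at the cylinder's center looking at azimuth theta
    (radians, |theta| < pi/2)."""
    return R * h * math.cos(theta) / d

# Canvas radius, distance to the edge, and its height above eye level (assumed).
R, d, h = 5.0, 20.0, 3.0
for deg in (0, 30, 60):
    y = painted_height(math.radians(deg), h, d, R)
    print(f"azimuth {deg:2d} deg: paint the edge {y:.2f} m above eye level")
```

The painted line is highest where the viewer faces the edge head-on and dips toward the sides; only at eye level (h = 0) does a straight line stay straight on the curved canvas, which is exactly the exception Barker observed.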
After successful prototype exhibitions in Edinburgh, Barker relocated to London, where he formed a joint-stock company, backed by a handful of wealthy investors, and began scouting for a site in the West End where he could produce his immersive spectacle to full effect. He sent his son to the roof of the Albion mills near Blackfriars Bridge to sketch the skyline of London, the way the two of them had captured Edinburgh from Calton Hill. At the suggestion of a “classical friend,” Barker hit upon a name for his creation, drawing on the Greek phrase for “all-encompassing view.” He called it the Panorama.
Cross section of the Rotunda in London where Robert Barker displayed two of his panoramas
By 1793, Barker had constructed a six-story building near Leicester Square, designed for the exclusive purpose of displaying two separate Panoramas to crowds of paying spectators. The lead attraction was an immense vista of London, based on the Albion mill sketches, which encompassed 1,479 square feet. (A smaller Panorama simulated the British fleet sailing at Spithead, the anchorage off Portsmouth.) Barker ran advertisements that modestly suggested his technique was “the greatest IMPROVEMENT to the ART of PAINTING that has ever yet been discovered.” For a time, the bombast seemed warranted. The show itself was a runaway success. The king and queen requested an advance viewing, though Queen Charlotte later reported that the illusion made her dizzy. The critical reception was equally enthusiastic: “No device . . . has approached so nearly to the power of placing the scene itself in the presence of the spectator,” one observer wrote. “It is not magic; but magic cannot more effectually delude the eye, or induce a belief of the actual existence of the objects seen.” Before long, imitations appeared across Europe, many re-creating famous military conflicts, some of them straight out of the news. The Panorama was the first of the temples of illusion to find a big audience in the newly formed United States. Coined specifically to refer to Barker’s showcase, the word panorama entered the common lexicon of at least a dozen languages, signifying any kind of broad view, artistic or otherwise.
Appropriately enough, given the Panorama’s strange brew of highbrow and lowbrow appeal, the best words describing its impact were penned by Charles Dickens decades after Barker opened the Leicester Square exhibit:
It is a delightful characteristic of these times, that new and cheap means are continuously being devised, for conveying the results of actual experience to those who are unable to obtain such experiences for themselves; and to bring them within the reach of the people—emphatically of the people; for it is they at large who are addressed in these endeavours, and not exclusive audiences . . . Some of the best results of actual travel are suggested by such means to those whose lot it is to stay at home. New worlds open out to them, beyond their little worlds, and widen their range of reflection, information, sympathy, and interest. The more man knows of man, the better for the common brotherhood among us all.
Barker’s Panorama suggested a higher purpose for the illusionists. Beyond the chills of the spook show or the amusements of the digesting duck, the illusion shows could also serve an almost journalistic function, reporting on current events. Journalists had been sending back dispatches from the front lines for at least a century before Barker staged his exhibits, but in an age without telegraphy and photography, those reports were slow in arriving and limited for the most part to textual accounts. (George III oversaw Great Britain’s war with the American colonies with a four-week lag—the length of time it took news of the battles to travel by ship across the Atlantic.) The illusionists couldn’t help speed up the transmission times, but they could marshal their powers to re-create the experience of being in battle, as Barker had done with the Panorama.
In September of 1812, the Bavarian musician and inventor Johann Nepomuk Maelzel found himself in Russia, just in time to witness the legendary burning of Moscow that greeted Napoleon’s arrival in that city, and that would soon lead to his epic defeat there. The fire and the subsequent battle for Moscow would inspire many great works of art in the years that followed: Tolstoy’s War and Peace, Tchaikovsky’s 1812 Overture. But one of the very first—and arguably most original—attempts to represent this world-historic event was engineered by Maelzel within a year of Napoleon’s defeat, in the form of an animated diorama called The Conflagration of Moscow. Maelzel’s creation premiered in Vienna, but he would ultimately take it across Europe and North America, dazzling audiences for decades with his mesmerizing reconstruction of the great city on fire.
A detailed inventory of the show that toured the United States gives some sense of the scale of the production. Movable frames representing the buildings of Moscow—the Kremlin, church spires, castles—were designed to collapse or explode on cue. Behind the skyline, Maelzel hung a transparent painting that suggested a haze of smoke and fire; behind it another painting depicted other buildings in the distance ablaze, with a moon glowing in the night sky above the carnage. At the front of the stage, two bridges and a causeway carried more than two hundred miniature Russian and French soldiers, featuring “musicians, snipers, cavalry, infantry mines and cannons”; the troops were pulled across the stage on hidden grooves, controlled by a hand crank. Fire screens enabled actual flames to creep across the urban landscape without actually damaging any of the equipment; Maelzel deployed small fireworks and burning pans of charcoal to enhance the effect. Lighting the tableau required “sixteen lanterns, twenty-five Argand lamps, six candlesticks with springs, snuffers, and trays, forty half-circular patent lamps with reflectors, nine square and six oblong lamps, and thirteen common japanned lamps with stands.”
The experience of watching The Conflagration of Moscow was not, strictly speaking, narrative in its form. Yes, events followed a preordained sequence on the stage: Napoleon’s army advancing; the Russians retreating; the flames surging across the skyline. But the true appeal of the spectacle came from the sense of immersion, just as it had with Barker’s Panorama. “The city was before us, closely built up and the houses all aflame,” one spectator recalled decades later. “We quivered at the sight; saw men, women and children making their escape from the burning buildings, with packs of clothing on their backs. The scene was terrible, and so realistic that when we went to bed after returning from the spectacle, we hugged each other and rejoiced that our house was not on fire.” Through these elaborate illusions, the viewers lived through the sensory overload of one of recent history’s most chaotic and terrifying events. In this, Barker and Maelzel had stumbled across an appetite in their audiences that would later be slaked by modern media’s endless recycling of disaster footage, from the Hindenburg to 9/11. (Nonnarrative immersive simulations of famous catastrophes may well have a renaissance if virtual reality becomes a mainstream pastime.)
While the age of illusion was dominated by optical effects, thanks in large part to the evolution of the human visual system, Maelzel did more than any illusionist of his generation to explore the aural dimension of this mechanical theater, using a variety of bespoke instruments to simulate the sounds of a city under siege:
A musket machine had twelve springs to force striking hammers. Cannon drums were struck with the fist in a sparring glove . . . An “iron explosion machine” contained about a peck of stones, and on being cranked created a noise like the crash of falling buildings and the explosion of gunpowder. There were table bells, glass bells, and Chinese gongs to represent church bells and various other city sounds. Hand organs with bellows and a collection of cylinders provided the two types of martial music used . . . A trumpet machine, with twelve trumpets in it, was capable of playing a dozen tunes. A hand organ supplied the sounds of the cymbals and bass drums.
Maelzel’s attempt to re-create the dissonant music of war dovetailed with another of his projects, a jack-of-all-trades musical instrument called the panharmonicon that could simulate the sounds of an entire military band, all the way down to gunfire and the boom of the cannons. The contraption, which was roughly the size of a large closet, controlled its instruments via an immense rotating barrel, a scaled-up version of the pinned cylinders that controlled the music boxes and automata of the preceding centuries. In keeping with the military theme of his illusions, Maelzel collaborated with a friend to compose a score explicitly for the panharmonicon, celebrating Wellington’s victory over the French at the Battle of Vitoria. The friend happened to be none other than Ludwig van Beethoven. Alas, the panharmonicon never found a home in the classical mainstream. While Beethoven’s piece—now known as “Wellingtons Sieg”—was performed by Maelzel’s panharmonicon at a number of traveling exhibitions, the two men eventually had a falling-out, with Beethoven ultimately suing Maelzel and rewriting his composition for a traditional orchestra.
—
Maelzel’s collaboration with Beethoven suggests just how difficult it is today for us to place the illusionists on the spectrum of high art and low amusements. From a modern vantage point, they seem like carnival showmen and hucksters. No doubt some of them were. Yet the cast of characters involved in this strange new culture—from Brewster to Beethoven—and the seriousness with which many of the most advanced spectacles were taken suggests that something more profound was under way. But wherever you place them on the spectrum of artistic expression, one thing is clear: by the first decades of the nineteenth century, the success of Barker’s Panorama and Philipsthal’s Phantasmagoria had set off a kind of entertainment version of the Cambrian explosion. Bizarre new species of illusion proliferated across the West End. (Smaller versions of this craze occurred in New York, Paris, and other cities as well.) The names themselves—with their strange Greek neologisms—suggest just how far the language was straining to represent the novelty of the experiences. Along with the Panorama and the Phantasmagoria, a visitor to London in the early 1800s could enjoy a “Novel Mechanical and Pictorial Exhibition” called the Akolouthorama; a predecessor to Philipsthal’s spook show called the Phantascopia; an exhibition called the Spectrographia, which promised “TRADITIONARY GHOST WORK!”; an influential mechanical exhibition dubbed the Eidophusikon; the Panstereomachia, “a picto-mechanical representation,” according to the Times. A virtual orchestra created by a painter and musician named J. J. Gurk entertained audiences with performances of “Rule, Britannia.” (Confusingly, it was also called the Panharmonicon, though it had nothing to do with Maelzel’s device.) Dozens of derivations of Barker’s immersive paintings sprouted as well: the Diorama, the Cosmorama, the Poecilorama, the Physiorama, the Naturorama. 
An American showman named John Banvard popularized the “Moving Panorama,” which simulated a ride down the Mississippi River by slowly unfurling a painting that was over a thousand yards long.
Advertisement for an early slide projector, or “magic lantern”
Some of these creations, like Banvard’s scrolling landscape, were genuine innovations; others were cheap knockoffs. (Of the Naturorama, the Literary Gazette sneered, “You are allowed to look through glasses at miserable models of places, persons, and landscapes; while two or three nasty people sit eating onions and oranges.”) But what made the whole collection so remarkable was the sheer diversity of these wonderlands and magic shows. If you blur your eyes, the West End of London circa 1820 doesn’t seem all that different from the West End of today, with its marquees promoting the latest hit comedy or Andrew Lloyd Webber revival. But the entertainment variety today lives within the formal conventions of theatrical plays: an audience gathers in an auditorium and watches actors perform scripted material on a lighted stage, sometimes accompanied by music. The variety of today’s West End belongs to the content, not the form. Two centuries ago, there were plays and musicals, but there were also panoramas and magic-lantern ghost shows, and animated paintings populated by small robots—and dozens of other permutations. The West End functioned as a grand carnival of illusion, with each attraction dependent on its own unique technology to pull off its tricks.
Perhaps the single most significant fact about that carnival is this: almost every species in this genus of illusion died off by the dawn of the twentieth century. Many forms of entertainment from the end of the eighteenth century continue on in recognizable form. People still go to see musicals, attend operas, read novels, and visit art galleries. But other than the occasional diorama at a natural history museum, the marvelous diversity of West End illusion has been entirely extinguished. All those moving panoramas and magic-lantern shows were wiped off the map by a single new technology: the cinema.
In a sense, the temples of illusion helped create the technology that ultimately destroyed them. Most of their innovations turned out to be dead ends: no one bothers to paint a thousand-foot canvas anymore, for obvious reasons. But the Phantasmagoria and the Panorama and their many peers did help solidify a new convention: that human beings would pay money to crowd together in a room and lose themselves in immersive, illuminated images. In 1820, this practice was limited to a tiny portion of the planet’s population, clustered in a few affluent cities, but it would soon become a worldwide phenomenon with the rise of motion pictures.
The pattern that played out on the streets of the West End would recur many times over the subsequent decades. A cluster of innovations emerges, all experimenting with different variations on a single theme, until one specific solution arises that reaches critical mass and kills off its rivals. Think of the ecosystem of computer networks in the early 1990s: proprietary services like AOL and CompuServe; file-sharing protocols like Fetch or Gopher; private bulletin-board communities like The WELL or ECHO; hypertext experiments like Storyspace or HyperCard. Behind all these marginal new platforms, a shared consensus was visible: people were going to start consuming and sharing news, documents, personal information, and other media through hypertextual networks. But it was unclear whether a single platform would unite all these disparate activities, until the World Wide Web became the de facto standard in the mid-1990s. The process happened faster than it did in the days of West End illusion, but the underlying pattern was the same: early experiments, followed by explosive diversity, followed by radical consolidation.
The innovation that triumphs at the end of this sequence is often inferior in many ways to its rivals: remember that cinema, for all its advantages, lacked color for its first fifty years, and even in the age of 3-D IMAX, movies lack the 360-degree vista of the Panorama. But cinema was not a classic Clayton Christensen–style disruption where an inferior but cheap new product wipes out a more fully featured but expensive rival. As alluring as the mechanical dancers were at Merlin’s, no one mistook them for genuine human beings. Once you could project images of actual people onto the screen—dancing and gesticulating and emoting, even without color or sound—the appeal of magic-lantern specters dissolved into thin air. To be sure, some of the most popular early films took their cues from the West End illusionists, most spectacularly in the special-effects-laden shorts of Georges Méliès. But one could argue that the key innovation that secured the dominance of cinema was not the camera or projector, nor the trick shots that Méliès pioneered, but rather the invention of the close-up. Early or “primitive” cinema—as the film scholars refer to it—was effectively a continuation of the West End spectacles: either immersing the viewer in the experience of a distant place, the way Banvard’s moving panorama took the audience down the Mississippi, or subjecting the viewer to a series of mesmerizing special effects. Most of these films were shot in a way that mimicked the audience’s perspective in a traditional theater: a single long shot with the actors framed by a stage set. But, starting in the 1910s, directors like D. W. Griffith began tinkering with the close-up, a technique that brought the spectator into a kind of intimate relationship with the actors that no stage production could achieve. That was the moment when cinema left the world of amusement and became art.
All of these elements—the proto-cinemas of the West End, the transformative power of moving images, the formal seduction of the close-up—make it clear just how hard it is to pinpoint exactly when the cinema itself was invented. Like most important technologies, cinema was an amalgam of very different innovations, which themselves drew upon varied forms of expertise and were developed on widely divergent time scales. The practice of projecting an image onto a screen by shining a stable light behind a semitransparent plate became commonplace with the rise of magic lanterns in the 1600s; the glass lenses used by both film cameras and projectors predate the magic lantern by a few centuries. Capturing images directly to a photosensitive material became possible in the early 1800s. The modern cinema that began to emerge in the second decade of the twentieth century drew upon centuries of innovations—in chemistry, optics, glassmaking, and mechanics—not to mention creative innovations like the tracking shot or the close-up. On top of all these breakthroughs, the cinema relied on a business model that had developed among the West End showmen: charging a fixed price for tickets to immerse oneself in a darkened chamber of illusion. That, too, was a kind of invention.
Motion pictures departed from the techniques of West End illusion in one key respect. In the Phantasmagoria or Banvard’s moving panorama, the viewer’s perception of motion was based on the actual movement of physical objects: the magic lantern rolling back and forth on its tracks, the steady unfurling of Banvard’s thousand-yard painting. But the movement in moving pictures was merely a trick of the eye; to this day, every film or television show is constructed out of a stream of still images that our eye perceives as continuous motion. This phenomenon is commonly called persistence of vision, though there is intense debate in the scientific community over the neural mechanisms that make it possible. Like so many important elements of the human visual system, it was originally discovered by toymakers, in the spinning wheels and rotating drums of early nineteenth-century devices like the thaumatrope and the zoetrope, which spun through a dozen or so static images of a dancer or trotting horse, creating the illusion of movement. (Thaumatrope means wonder turner in Greek, while zoetrope roughly translates to life turner, or wheel of life.) Whatever its biological roots, persistence of vision appears to be a universal property of the human eye: when still images are flashed at more than ten or twelve times a second, our eye stitches them together into a continuous flow.
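The arithmetic behind that threshold can be sketched in a few lines. This is a minimal, illustrative Python sketch of the frame-rate figures cited above; the twelve-frames-per-second fusion threshold comes from the text, while the function names and the sample frame rates are invented for illustration.

```python
# Illustrative sketch of the persistence-of-vision arithmetic described
# in the text. The ~12 fps threshold is the figure cited above; the
# function names and sample rates are invented for this example.

FUSION_THRESHOLD_FPS = 12  # below this, the eye sees discrete still images

def frame_duration_ms(fps):
    """How long each still image stays on screen, in milliseconds."""
    return 1000.0 / fps

def appears_continuous(fps):
    """True if the eye stitches the stills into continuous motion."""
    return fps >= FUSION_THRESHOLD_FPS

for fps in (2, 12, 16, 24):
    label = "continuous motion" if appears_continuous(fps) else "separate stills"
    print(f"{fps:>2} fps -> {frame_duration_ms(fps):6.1f} ms per frame: {label}")
```

At two frames per second, a thaumatrope reads as a slide show; at twenty-four, the standard rate of sound cinema, each still lingers only about forty milliseconds, well past the point where the eye surrenders to the illusion.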
A crowd gathers to watch a magic lantern show
On some basic level, this property of the human eye is a defect. When we watch movies, our eyes are empirically failing to give an accurate report of what is happening in front of them. They are seeing something that isn’t there. Many technological innovations exploit the strengths that evolution has granted us: tools and utensils harness our manual dexterity and opposable thumbs; graphic interfaces draw on our powerful visual memory to navigate information space. But moving pictures take the opposite approach: they succeed precisely because our eyes fail.
Two early thaumatropes, 1826
This flaw was not inevitable. Human eyesight might have just as easily evolved to perceive a succession of still images as exactly that: the world’s fastest slide show. Or the eye might have just perceived them as a confusing blur. There is no evolutionary reason why the eye should create the illusion of movement at twelve frames per second; the ancestral environment where our visual systems evolved had no film projectors or LCD screens or thaumatropes. Persistence of vision is what Stephen Jay Gould famously called a spandrel—an accidental property that emerged as a consequence of other more direct adaptations. It is interesting to contemplate how the past two centuries would have played out had the human eye not possessed this strange defect. We might be living in a world with jet airplanes, atomic bombs, radio, satellites, and cell phones—but without television and movies. (Computers and computer networks would likely exist, but without some of the animated subtleties of modern graphical interfaces.) Imagine the twentieth century without propaganda films, Hollywood, sitcoms, the televised Nixon-Kennedy debate, the footage of civil rights protesters being fire-hosed, Citizen Kane, the Macintosh, James Dean, Happy Days, and The Sopranos. All those defining experiences exist, in part, because natural selection didn’t find it necessary to perceive still images accurately at rates above twelve frames a second—and because hundreds of inventors, tinkering with the prototypes of cinema over the centuries, were smart enough to take that imperfection and turn it into art.
—
Art is the aftershock of technological plates shifting. Sometimes the aftershock is slow in arriving. It took the novel about three hundred years to evolve into its modern form after the invention of the printing press. The television equivalent of the novel—the complex serialized drama of The Wire or Breaking Bad—took as long as seventy years to develop, depending on where you date its origins. Sometimes the aftershocks roll in quickly: rock ’n’ roll emerged almost instantaneously after the invention of the electric guitar. But some new artistic forms are deeply bound up in technological innovation: Brunelleschi using mirrors to trick his own eye into painting with linear perspective; Walter Murch inventing surround sound to capture the swirling chaos of Vietnam in Apocalypse Now. The artist’s vision demands new tools to realize that vision, and every now and then the artist turns out to be a toolmaker as well. When those skills overlap in a single person, things move fast.
A Disney animator works on cels for the film Snow White
You can make the argument that the single most dramatic acceleration point in the history of illusion occurred between 1928 and 1937, the years between the release of Steamboat Willie, Walt Disney’s breakthrough sound cartoon introducing Mickey Mouse, and the completion of Disney’s masterpiece, Snow White, the first long-form animated film in history. It is hard to think of another stretch where the formal possibilities of an artistic medium expanded in such a dramatic fashion, in such a short amount of time. Steamboat Willie is rightly celebrated for what it brought to the animator’s art: synchronized sound and a memorable character with a defined personality. But it’s also worth watching today for what it is conspicuously missing: there’s no color, no spoken dialogue, only the hint of three-dimensionality. The story, only seven minutes long, revolves entirely around simple visual gags. Steamboat Willie was closer to a flip-book animation with a grainy soundtrack attached to liven things up. Viewed next to Snow White, Willie seems like it belongs to another era altogether, like comparing Méliès’s A Trip to the Moon from 1902 with Orson Welles’s Citizen Kane, made almost forty years later. Disney managed to compress a comparable advance in complexity into nine short years.
And even Welles, for all his genius, relied on innovations that had been pioneered by other filmmakers before him: Griffith’s close-up; the sound synchronization introduced in The Jazz Singer; the dolly shot popularized by the Italian director Giovanni Pastrone. The great leap forward that Disney achieved with Snow White was propelled, almost exclusively, by imaginative breakthroughs inside the Disney studios. To produce his masterpiece, Disney and his team had to reinvent almost every tool that animators had hitherto used to create their illusions. The physics of early animation were laughably simplified; gravity played almost no role in the Felix the Cat or Steamboat Willie shorts that amused audiences in the 1920s. For Snow White, Disney wanted the entire animated world to play by the physical laws of the real world. Before Snow White, one animator recalled, “no one thought of clothing following through, sweeping out, and dropping a few frames later, which is what it does naturally.” Disney commissioned thousands of slow-motion photographic studies that the animators could analyze to mimic the micro-behaviors of muscles, hair, smoke, glass breaking, birds flying, and countless other physical movements that had to be re-created with pen and ink. The team also inaugurated a drawing technique they called overlapping motion, in which all the characters were drawn engaging in constant, if subtle, physical activity, rather than just cycling through a series of static poses. These artistic innovations required a new way to test visual experiments before committing them to final print. Disney’s team began sketching out ideas on cheap negative film that could be quickly processed and projected onto a tiny Moviola screen. They called these experimental trials pencil tests.
The sheer length of Snow White required additional tools to map the overarching narrative. For that, the “storymen” on Disney’s team hit upon the idea of taking sketches corresponding to each major scene and pinning them to a large corkboard that let Disney and his collaborators take in the narrative in a single glance, inaugurating the tradition of “storyboarding” that would become a ubiquitous practice in Hollywood, for both live-action and animated films.
Sound and color also forced Disney and his team to conjure up new solutions. Creating the illusion of spoken dialogue emerging from a human character’s mouth required a level of synchronization and anatomical detail that early sound cartoons, like Steamboat Willie, had not required. While Disney had partnered with a new start-up called Technicolor to add a full palette to the final print of Snow White, the actual animation cels had to be painted in-house by the animation team. They ended up concocting a new kind of paint, using a gum arabic base that was “rewettable,” enabling the animators to fix small problems without tossing out the entire cel. Disney even purchased a cutting-edge tool called a spectrophotometer to measure color levels precisely, given the challenge of converting them into the less accurate Technicolor format.
The most impressive technical breakthrough behind Snow White was the multiplane camera that Disney and his team built to create the signature sense of visual depth that Snow White introduced to animation. Before Snow White, animated films lived in a two-dimensional world, with only a hint of depth provided by an occasional linear perspective trick borrowed from Brunelleschi. But mostly they looked like a series of drawings on white paper that had somehow come to life. Most animations did use semitransparent character and background cels, layered on top of each other, so that animators wouldn’t have to redraw the entire mise-en-scène for each frame. For Snow White, Disney hit upon the idea of multiple layers corresponding to different points in the virtual space of the movie, and separating those cels physically from each other while filming them: one layer for the characters in the foreground, one for a cottage behind them, another for the trees behind the cottage, and so on. By moving the position of the camera in tiny increments for each frame, a parallax effect could be simulated, creating an illusion of depth even more profound than the one Brunelleschi had invented five hundred years before. The multiplane camera was such an impressive feat of engineering that it warranted an extensive write-up in Popular Science:
[The device] consists of four vertical steel posts, each carrying a rack along which as many as eight carriages may be shifted both horizontally and vertically. On each carriage rides a frame containing a sheet of celluloid, on which is painted part of the action or background. Resembling a printing press, the camera stands eleven feet tall and is six feet square. Made with almost micrometer precision, it permits the photographing of foreground and background cels accurately, even when the first is held firmly in place two feet from the lens and the lowest rests in its frame nine feet away. Where the script calls for the camera to “truck up” for a close-up, the lens actually remains stationary, while the various cels are moved upward. By this means, houses, trees, the moon, and any other background features, retain their relative sizes.
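The depth effect the multiplane camera produced rests on a simple geometric fact: for a given camera move, a layer’s apparent on-screen shift falls off with its distance from the lens. The following is a minimal Python sketch of that parallax principle; the layer names, distances, and the simple inverse-distance model are invented for illustration, not taken from Disney’s actual rig.

```python
# Illustrative sketch of the parallax principle behind the multiplane
# camera: layers far from the lens shift less per frame than layers
# near it, so a small camera move reads as depth. The layer names,
# distances, and simple inverse-distance model are invented here.

def apparent_shift(camera_move, layer_distance, focal_depth=1.0):
    """On-screen shift of a layer for a given lateral camera move.
    Closer layers (small distance) shift more; distant ones barely move."""
    return camera_move * focal_depth / layer_distance

# Hypothetical layers, in feet from the lens (cf. the two-foot foreground
# and nine-foot background cels described in the Popular Science account).
layers = {"characters": 2.0, "cottage": 4.0, "trees": 9.0}
camera_move = 0.5  # lateral camera move between frames, in feet

for name, distance in layers.items():
    shift = apparent_shift(camera_move, distance)
    print(f"{name:>10}: shifts {shift:.3f} ft on screen")
```

Because the foreground characters slide across the frame faster than the cottage, and the cottage faster than the trees, the eye reads the differential as three-dimensional space, the same cue a train passenger gets watching fence posts race past while distant hills crawl.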
Disney studio cameramen shooting pictures with the multiplane camera as women ink in colors below
All of these technical and procedural breakthroughs summed up to an artistic one: Snow White was the first animated film to feature both visual and emotional depth. It pulled at the heartstrings in a way that even live-action films had failed to do. This, more than anything, is why Snow White marks a milestone in the history of illusion. “No animated cartoon had ever looked like Snow White,” Disney’s biographer Neal Gabler writes, “and certainly none had packed its emotional wallop.” Before the film was shown to an audience, Disney and his team debated whether it might just be powerful enough to provoke tears—an implausible proposition given the shallow physical comedy that had governed every animated film to date. But when Snow White debuted at the Carthay Circle Theatre, near L.A.’s Hancock Park, on December 21, 1937, the celebrity audience was heard audibly sobbing during the final sequences where the dwarfs discover their poisoned princess and lay garlands of flowers on her. It was an experience that would be repeated a billion times over the decades to follow, but it happened there at the Carthay Circle first: a group of human beings gathered in a room and were moved to tears by hand-drawn static images flickering in the light.
In just nine years, Disney and his team had transformed a quaint illusion—the dancing mouse is whistling!—into an expressive form so vivid and realistic that it could bring people to tears. Disney and his team had created the ultimate illusion: fictional characters created by hand, etched onto celluloid, and projected at twenty-four frames per second, that were somehow so believably human that it was almost impossible not to feel empathy for them.
—
Those weeping spectators at the Snow White premiere signaled a fundamental change in the relationship between human beings and the illusions concocted to amuse them. Complexity theorists have a term for this kind of change in physical systems: phase transitions. Alter one property of a system—lowering the temperature of a cloud of steam, for instance—and for a while the changes are linear: the steam gets steadily cooler. But then, at a certain threshold point, a fundamental shift happens: below 212 degrees Fahrenheit, the gas becomes liquid water. That moment marks the phase transition: not just cooler steam, but something altogether different. When you cross the boundary of the phase transition, new possibilities emerge, ones you might have never imagined while contemplating the earlier state. In the case of liquid water, one of those new possibilities was life itself.
Twelve frames per second is the perceptual equivalent of the boundary between gas and liquid. When we crossed that boundary, something fundamentally different emerged: still images came to life. The power of twelve frames per second was so irresistible that it even worked with hand-drawn characters pulled from a storybook. But like the phase transitions of water, passing that threshold—and augmenting it with synchronized sound—unleashed other effects that were almost impossible to predict in advance. The consumers of illusion at the beginning of the nineteenth century wouldn’t be at all surprised to find that people two centuries later were gathering in dark rooms to be startled and surprised by special effects. But they would be surprised by something else in the culture: the enormous emotional investment that people have in the lives of other people they have never met, people who have done almost nothing of interest other than appear on a screen. The phase transition of twelve frames per second introduced a class of people virtually unknown until the twentieth century: celebrities.
Fame, of course, is an old story, as old as history itself. Kings, military heroes, statesmen, clerics, prophets—all possessed lives that reverberated far beyond their circles of immediate acquaintance. As the historian Fred Inglis describes it, “celebrity was inseparable from the public acknowledgement of achievement.” We can trace the origins of modern celebrity culture back to the coffeehouse chatter of early publications like the Tatler that shared moderately suggestive stories of London’s aristocracy in the early eighteenth century. Those voices were amplified by the end of the century with salacious tales about the debauched life of the Prince Regent; shortly thereafter, Lord Byron established a template for artistic genius and renegade sexual adventurism that would be emulated by a thousand rock stars in the postwar years. Stars of the stage like Sarah Siddons or Sarah Bernhardt generated intrigue about their private lives before the first Hollywood gossip columns appeared in the 1940s. But however much the prurient interest of the general public in these figures may remind us of modern celebrity culture, one key difference remains: the princes and poets and actresses that garnered attention in the age before television and cinema were living genuinely extraordinary lives—thanks either to the extreme good fortune of being born into a royal family or to their own achievements as artists or writers or actors. The surplus fixation of fame was still grounded in the use-value of an exceptional life. Today, of course, that pool of celebrity has widened dramatically, a phenomenon famously captured by Andy Warhol in his fifteen-minutes-of-fame witticism, but also by Daniel Boorstin in his 1961 classic, The Image: “We still try to make our celebrities stand in for the heroes we no longer have, or for those who have been pushed out of our view. We forget that celebrities are known primarily for their well-knownness.” Page through the gossip magazines of today and you will be astounded to see how even the Hollywood celebrities have taken a backseat to the stars of reality television. Barrels of ink are spilled each day sharing breathless accounts of date nights and pregnancy bumps in the lives of people who appeared on The Bachelor five years ago. Celebrity was once earned through an extraordinary career; later, it could be achieved by pretending to be extraordinary people onstage or on the screen. But today celebrity is just as likely to belong to people who have no claim to fame other than the fact that their ordinary lives appear on television.
It is hard not to feel that these shows—and the long echo of gossip that trails behind them—are simply a colossal waste of time. But as banal as these new “personalities” are, their existence still suggests an interesting question: Why did this kind of celebrity culture only emerge in recent decades? The answer, I think, comes down yet again to the power of illusion, its ability to distort our perception of reality, making it impossible for us not to see things that are, empirically, not there. At twelve frames per second, with synchronized sound and close-ups, it is almost impossible for human beings not to form emotional connections with the people on-screen. (Disney made it clear that you didn’t even need actual people!) We naturally feel interest in the everyday ups and downs of our close friends and family. Twelve frames a second tricks the brain into feeling that same level of intimacy with people we will never meet in person, what Inglis calls “knowability combined with distance.” When the tinkerers of the 1830s were exploiting persistence of vision to make a horse come to life in the circular motion of the thaumatrope, it never occurred to them that the perceptual error they were exploiting would one day cause people to weep and bristle at the mundane actions of total strangers living thousands of miles from them. But that is the strange cognitive alchemy that twelve frames per second helped stir into being. Persistence of vision is an evolutionary accident that created the conditions of possibility for a cultural accident. The modern celebrity is a spandrel of a spandrel.
It is possible—maybe even likely—that a further twist awaits us. Recall the “irresistible eyes” of the mechanical dancer that so entranced Charles Babbage in Merlin’s attic. Those robotic facial expressions would seem laughable to a modern viewer, but animatronics has made a great deal of progress since then. There may well be a comparable threshold in simulated emotion—via robotics or digital animation—that makes it near impossible for humans not to form emotional bonds with a simulated being. We knew the dwarfs in Snow White were not real, but we couldn’t keep ourselves from weeping for their lost princess in sympathy with them. Imagine a world populated by machines or digital simulations that fill our lives with comparable illusion, only this time the virtual beings are not following a storyboard sketched out in Disney’s studios, but instead responding to the twists and turns and unmet emotional needs of our own lives. (The brilliant Spike Jonze film Her imagined this scenario using only a voice, though admittedly the voice belonged to Scarlett Johansson.) There is likely to be the equivalent of a Turing Test for artificial emotional intelligence: a machine real enough to elicit an emotional attachment. It may well be that the first simulated intelligence to trigger that connection will be some kind of voice-only assistant, a descendant of software like Alexa or Siri—only these assistants will have such fluid conversational skills and growing knowledge of our own individual needs and habits that we will find ourselves compelled to think of them as more than machines, just as we were compelled to think of those first movie stars as more than just flickering lights on a fabric screen. Once we pass that threshold, a bizarre new world may open up, a world where our lives are accompanied by simulated friends. 
In a strange way, these virtual companions might be more authentic than the simulated friends of reality TV; at least the robots and virtual humans would acknowledge your existence and engage directly with your shifting emotional states, unlike the Kardashians. The ghost makers and automaton designers of the eighteenth century first tapped the power of illusion to terrify or amuse us; their descendants in the twenty-first century may draw on the same tools to conjure up other feelings: empathy, companionship, even love.