1993

LEAKPROOF “AI BOX”

As discussed in the entry “Intelligence Explosion,” some scientists have expressed concerns that once AIs become sufficiently intelligent, such entities could repeatedly improve themselves and come to pose a threat to humanity. This runaway AI growth is sometimes referred to as the technological singularity. Of course, such entities might also be extremely valuable to humans; but the possible risks have led researchers to speculate about how to build AI boxes that could confine or isolate such entities, if the need arises. For example, the hardware running such an entity’s software might act as a virtual prison, disconnected from communications channels, including the Internet. The software could also be run in a virtual machine nested within another virtual machine to increase the isolation. Of course, complete isolation would be of little value, since it would prevent us from observing, or learning from, a superintelligence.

Nevertheless, if the AI superintelligence is sufficiently advanced, might it still be able to make contact with the outside world, or with the various people who serve as gatekeepers, through unusual means, such as by altering its processor cooling-fan speeds to communicate via Morse code, or by making itself so valuable that theft of the box becomes likely? Perhaps such an entity could be exceedingly persuasive, offering its human gatekeepers bribes to coax them into allowing more communication or replication onto other devices. Such bribery may seem far-fetched today, but who knows what wonders the AI could offer, including cures for diseases, fantastic inventions, melodies that enthrall, and multimedia visions of romance, adventure, or bliss.

Author Vernor Vinge (b. 1944) argued in 1993 that for superhuman intelligences, “confinement is intrinsically impractical. For the case of physical confinement: Imagine yourself locked in your home with only limited data access to the outside, to your masters. If those masters thought at a rate—say—one million times slower than you, there is little doubt that over a period of years (your time) you could come up with ‘helpful advice’ that would incidentally set you free.”

SEE ALSO “Darwin among the Machines” (1863), Rossum’s Universal Robots (1920), Giant Brains, or Machines That Think (1949), Intelligence Explosion (1965), Living in a Simulation (1967), Paperclip Maximizer Catastrophe (2003)