17 Quantum & DNA computing

Computer processing speeds and storage capacities have grown astonishingly over the past 50 years, but computer evolution is starting to run up against physical limits. Moore’s Law, which dates from 1965, states that computing power (transistor density) doubles every 18 months or so, but how can such growth continue without running into the fundamental limits of physics?

Computers currently operate using an electrical charge to manipulate bits that exist in one of two states: 0 or 1. Quantum computers, on the other hand, are not so restricted. They encode information as a 0 and a 1 simultaneously, using principles of quantum mechanics such as superposition and entanglement. This means that instead of working on one computation after another (albeit at very fast speeds), a quantum computer can work on many computations at the same time. Hey presto, a computer with processing speeds a million or more times faster than anything that’s currently available, but more importantly, a computer able to solve problems that conventional computers cannot, such as pattern recognition or code breaking.
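To make the contrast a little more concrete, here is a minimal sketch in Python (using NumPy) of the bookkeeping involved: n ordinary bits hold exactly one of 2^n values at a time, whereas the state of an n-qubit register is described by 2^n amplitudes at once. The numbers and variable names are illustrative only, and simulating amplitudes on a classical machine is of course not the same thing as having a quantum computer.

```python
# A minimal NumPy sketch (not a real quantum computer) illustrating why
# qubits scale differently from bits: n classical bits hold exactly one
# of 2**n values, while an n-qubit register is described by 2**n
# complex amplitudes all at once.
import numpy as np

n = 3  # number of qubits (illustrative choice)

# One classical 3-bit register stores a single value, e.g. 0b101.
classical_state = 0b101

# The quantum state is a vector of 2**n amplitudes. An equal
# superposition assigns the same amplitude to every basis state.
amplitudes = np.full(2**n, 1 / np.sqrt(2**n), dtype=complex)

# Squared magnitudes give measurement probabilities; they sum to 1.
probabilities = np.abs(amplitudes) ** 2
print(f"{2**n} amplitudes, probabilities sum to {probabilities.sum():.1f}")
```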

Quantum computing also has another major advantage. With conventional, silicon-based computers, overheating and energy use are major problems. With quantum computers they are not. Do such computers currently exist? At the moment the answer is no, at least not in the sense of anything commercially scalable, usable or practical. But it’s a reasonably safe bet to say that they will.


More of everything

Moore’s Law is named after Gordon Moore, one of the cofounders of computer-chip maker Intel. He wrote a briefing note back in 1965 stating that the number of individual elements on a computer chip had doubled each year between 1958 and 1965 and was set to do the same for at least a further decade. This observation (widely quoted as a prediction) has held more or less true ever since, with the number of transistors that can be put onto an integrated circuit doubling every 18–36 months. However, in recent years the pace of development has started to slow owing to the physical limitations of current materials.


Living computers?

And if you think that sounds a bit far-fetched, how about DNA (deoxyribonucleic acid) computing? Again, the current problem is the physical limit on speed and miniaturization imposed by the use of silicon chips to power computers. But what if, instead of relying on silicon refined from silica sand (essentially, silicon chips are made by melting ordinary sand into ingots, slicing the ingots into thin wafers and then treating and engineering those wafers in various ways), we used biochips made from living organisms to create molecular-scale circuitry?

Believe it or not, the potential already exists to build the next generation of computers using DNA molecules. Our own bodies already act like supercomputers in the sense that our DNA permanently stores the information that describes us. If that storage capacity could be harnessed, it may well be possible to develop computers that store, retrieve and process data trillions of times faster than anything with which we’re currently familiar. DNA computing, currently an area of immense interest and intense research, links with other hugely promising fields such as nanobiology and biomolecular engineering, which are broadly characterized as using or manipulating materials at a nano (i.e. 1–100 nanometer) scale. (See Chapter 18 for more about the future potential of nanotechnology and for an explanation of just how tiny a nanometer really is.)
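As a rough illustration of the storage idea, the toy Python sketch below maps binary data onto the four DNA bases, two bits per base. This simple A/C/G/T encoding is an assumption chosen purely for illustration; real DNA data-storage schemes are far more sophisticated, adding error correction and avoiding sequences that are hard to synthesize or read back.

```python
# A toy sketch of how binary data might be mapped onto DNA bases for
# storage: each base (A, C, G, T) represents two bits, so even this
# naive scheme packs a byte into four nucleotides.
BITS_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
BASE_TO_BITS = {base: bits for bits, base in BITS_TO_BASE.items()}

def encode(data: bytes) -> str:
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join(BITS_TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def decode(strand: str) -> bytes:
    bits = "".join(BASE_TO_BITS[base] for base in strand)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

strand = encode(b"DNA")
print(strand)                # CACACATGCAAC
assert decode(strand) == b"DNA"
```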

Just look at computer memory—in the early 1970s one megabyte cost more than a house, now it costs less than a piece of candy.
Gilles Thomas, STMicroelectronics

How might we make use of some of these aforementioned developments in everyday life? One application that’s nanotechnology-based and already on the cusp of making an impact is Magnetoresistive Random-Access Memory (MRAM), which could allow cameras and phones to store pictures almost instantly. In a few years, similar technology will enable your computer to start up or switch off in a thousandth of a second rather than the near-eternity it often feels like today.

Of course, computers already exist in our own bodies on another level. Wetware is a term often used to describe the interaction between the human brain and the central nervous system. It refers partly to the electrical and chemical nature of the brain and partly to the interaction between our neurons (our hardware, if you like) and the impulses, which are like software. Or perhaps the mind is software and everything else is hardware? As we’ll see later, this is controversial.


A test tube that thinks

In 1994 the American scientist Leonard Adleman came up with the idea of using DNA in a test tube to solve a complicated mathematical problem: finding a route through a small network that visits every point exactly once, a task that becomes intractable for conventional computers as the network grows. If this sounds ridiculous, think of your own body, which uses biochemical reactions to operate your brain, which is in turn able to think up ideas such as how to create a biochemical computer. Fast-forward to 2002, and Israeli scientists announced the creation of a DNA or biomolecular computer using chemical reactions in a liquid solution instead of silicon chips and electrons. The result was a machine that ran 100,000 times faster than any comparable PC at the time. Furthermore, such computers can be minuscule: a trillion of them could fit inside a single drop of water.
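For a sense of the kind of problem involved, the hypothetical Python sketch below hunts for a route through a tiny made-up network that visits every node exactly once, checking candidate orderings one by one. The point of the 1994 experiment was that DNA strands could, in effect, explore all such candidates in parallel in a single chemical step; the graph and code here are purely illustrative and have nothing to do with the actual laboratory procedure.

```python
# A brute-force sketch of the kind of problem the test tube tackled:
# finding a path through a directed graph that visits every node exactly
# once. This loop checks candidate routes one by one, whereas the DNA
# experiment explored them all at once as molecules in solution.
from itertools import permutations

graph = {  # hypothetical directed edges: node -> reachable nodes
    0: {1, 2},
    1: {2, 3},
    2: {3},
    3: set(),
}

def hamiltonian_paths(graph):
    nodes = list(graph)
    for order in permutations(nodes):
        if all(b in graph[a] for a, b in zip(order, order[1:])):
            yield order

print(list(hamiltonian_paths(graph)))  # [(0, 1, 2, 3)]
```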


Future potential

What will we use these superfast, supersmall and supercheap computers to do? Hopefully not just to play Angry Birds! We could instantly look at 250,000 emails and extract the two most important messages. Or perhaps we’ll watch movies via our contact lenses? Maybe we’ll just digitally record and store everything that happens in front of our eyes from birth to death (or maybe governments will).

Perhaps these computers will run cities or potentially the whole planet. Maybe they’ll be able to write a sonnet like Shakespeare or paint like Picasso. Maybe we’ll use them to encrypt sensitive data, predict the weather or find a cure for cancer. None of this is likely to be that far away given the speed of some developments in and around computing and artificial intelligence (see Chapter 20, for instance).

The age of computing has not even begun. What we have today are tiny toys not much better than an abacus. The challenge is to approach the fundamental laws of physics as closely as we can.
Stan Williams, Hewlett-Packard

Having said all this, there is one question that won’t go away. If we invent new quantum and DNA computers to generate massive amounts of data, how will the Internet infrastructure, or our old-fashioned biological brains, cope? If we do manage to continue doubling the power of computers every 18 months, which some people maintain is quite possible using quantum and DNA computers, then in a decade computers will be about 100 times more powerful than anything we’ve got today. In 25 years that number becomes roughly 100,000. At this point our data will take on a life of its own. We will have more and more machines and algorithms talking to each other, and our most important concern will be trying to explain to machines what it means to be human.
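A quick back-of-the-envelope check of those figures, sketched in Python and assuming the 18-month doubling period quoted above, looks like this:

```python
# Sanity check of the doubling arithmetic: if computing power doubles
# every 18 months, how much more powerful are computers after 10 and
# 25 years?
for years in (10, 25):
    doublings = years * 12 / 18   # number of 18-month doubling periods
    factor = 2 ** doublings
    print(f"{years} years -> roughly {factor:,.0f} times more powerful")
# 10 years -> roughly 102 times more powerful   (the text's ~100x)
# 25 years -> roughly 104,032 times more powerful   (the text's ~100,000x)
```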

the condensed idea

Next-stage computing

timeline
c. 100 BC Antikythera mechanism (early analog mechanical computer)
1837 Charles Babbage describes analytical engine
2015 Direct brain-to-machine computer interfaces
2020 Computer games beamed to the human brain
2025 Computers injected into the human body
2040 Human beings no longer need to remember anything
2050 Internet-enabled telepathy
2070 People able to record and share dreams