From insight to naches
Samuel Arbesman
Foreword
In the future, machines will make discoveries beyond the ken of humankind. Samuel Arbesman thinks this merits a proud hug
Scientific knowledge has been expanding for some time. No one can be expected to be well-versed in our entire body of knowledge. These days, our insights often come from recombining what we happen to know. And that’s the trouble: imagine a paper in one corner of science says that A implies B, and another paper, elsewhere, says that B implies C. Due to the magnitude of scientific endeavour, there is no longer any guarantee that someone will think to combine these papers.
This is real. Don Swanson, an information scientist active in the 1980s, dubbed this elusive, tip-of-the-tongue not-quite-insight “undiscovered public knowledge” and now, with computational help, many discoveries and relationships have been revealed that previously lay hidden. The balance between the scientific abilities of humans and computers is shifting. What will happen when computers cease to merely assist us with our discoveries, and discover things for themselves – things we cannot understand?
Unassisted human insight seems to be reaching its limits. In the future – and sooner rather than later – we will arrive at a point in science and mathematics where any discovery that is made (by computers, of course) will be only dimly understood by human beings. Steven Strogatz has written about this with a bit of worry. He argues that we are living in a special window of time, stretching from the dawn of the scientific revolution 350 years ago to a point a few decades into the future. Only people living in this window can truly say that they have understood the world.
Strogatz’s window is already closing. In mathematics there is something called the four-colour problem. Draw a map as intricate as you like, full of wiggles, crenellations and complicated frontiers. You will always be able to colour that map, distinguishing each territory from its neighbours, with just four colours. The proof of this was assembled with the help of computers in 1977, and no single person understands it. The proof is inelegant, gargantuan and computationally complex. We know it is true, but we do not know why. More recently, in 2009, Michael Schmidt and Hod Lipson created an AI program that could distil the laws of motion merely by observing data from the swings of a double pendulum. In the process, they created an AI capable of deriving meaning from datasets too large or complex for humans to study.
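The theorem itself can at least be illustrated on a small scale. A minimal sketch, assuming a hypothetical map of territories A–E expressed as an adjacency graph (this brute-force backtracking search bears no resemblance to the 1977 computer-assisted proof, which checked nearly two thousand special configurations):

```python
def four_colour(adjacency):
    """Try to assign one of four colours to each territory so that
    no two bordering territories share a colour.
    Returns a dict mapping territory -> colour, or None on failure."""
    territories = list(adjacency)
    colouring = {}

    def backtrack(i):
        if i == len(territories):
            return True  # every territory has been coloured
        t = territories[i]
        for colour in range(4):
            # a colour is allowed if no already-coloured neighbour uses it
            if all(colouring.get(n) != colour for n in adjacency[t]):
                colouring[t] = colour
                if backtrack(i + 1):
                    return True
                del colouring[t]  # undo and try the next colour
        return False

    return colouring if backtrack(0) else None

# Hypothetical map: A borders everything; B, C and D are mutually
# adjacent, so all four colours are genuinely needed.
example_map = {
    "A": ["B", "C", "D", "E"],
    "B": ["A", "C", "D"],
    "C": ["A", "B", "D"],
    "D": ["A", "B", "C"],
    "E": ["A"],
}

colours = four_colour(example_map)
```

The theorem guarantees that, for any planar map, such a search always succeeds; what no human fully grasps is the proof of that guarantee.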
Soon, we will no longer be able to understand a large fraction of the knowledge we have generated. In Scientific American in 2010, Danny Hillis made a similar point. He was speaking about the world that we ourselves have created – a human-made society of staggering complexity, complete with computer networks, manufacturing systems and transportation structures. Hillis argued that we have moved from the Enlightenment, a period where logic and reason could bring understanding, to the Entanglement, where everything is so unbelievably interconnected that we can no longer understand systems of our own making.
Should this matter? Perhaps we are simply following the same trajectory that we have been tracing for thousands of years, in which fewer and fewer people are able to understand the most complex parts of our world. For a great deal of our history, the vast majority of humanity has understood its surroundings according to the knowledge of the day. From the four elements to the workings of the screw and the pulley, a significant fraction of our world’s knowledge was within the grasp of most individuals. As our world has become more complex and knowledge has increased rapidly, a smaller and smaller fraction of society has felt it has a true-enough understanding of everything.
In order to comprehend any advanced topic, one must learn all the foundational knowledge first, thereby recapitulating society’s creative process. In general, novel contributions to a field only come from those who have a firm grasp of the field’s foundations. As society’s knowledge increases, it takes longer and longer to acquire enough mastery of the basics to say something new. As our knowledge increases, and the amount of time necessary to spend learning foundational knowledge becomes prohibitive, fewer and fewer people will invest the time and effort necessary to make new discoveries – or, indeed, to understand them. We may eventually reach the point where discoveries require quantities of time and understanding beyond the capacity of any single human being.
Distributions of brain power and mental capacity are not changing very much. We can only develop so much cognitive ability, whether in chess or in scientific understanding. As the human population expands, those with exceptional abilities become easier to spot. Sampling the normal distribution of talent, we find athletes who, with the right training, continue to set new world records. However, there are physical limits. It is unlikely that humans will ever break a three-minute mile, or manage a thirty-foot vertical leap.
By the same logic, though we continue to learn more and think harder, we are going to reach certain cognitive limits. We occasionally catch glimpses of the outer boundary of what is possible when we see genius at work. There is George Green, a miller’s son, whose work in mathematical physics was so advanced that Einstein said his achievements were decades ahead of their time. There is Srinivasa Ramanujan, whose intuitive grasp of mathematics simply defied understanding. But such people are rare, and it is unclear how they acquired their abilities.
Should we be concerned? These outliers have advanced mathematics and physics in ways the vast majority of us cannot understand, never mind emulate. Most of us cannot even grasp the advanced topics that are regularly taught in graduate school, such as measure theory or quantum mechanics. There are some concepts understood by one person in a hundred, and other concepts that are clear only to one in a million.
The worry begins when we arrive at ideas that can only be understood by one person in a billion. That’s fewer than ten people on the planet. But is there any practical difference between one or two individuals understanding a concept, and none at all?
If the idea is usable, perhaps that is all that we should care about. This is our current mindset: to exploit new knowledge while at the same time fretting about the nefarious powers of advanced technology, from The Terminator to “grey goo”. It’s time we adopted a more positive viewpoint.
The perspective I am trying to get at here has a name: naches. This is the Yiddish word for joy. To shep naches is to derive joy from the accomplishments of those around you, especially your children. It is one of the purest pleasures, and one that you hear spoken of during bar mitzvahs and weddings, as well as graduations.
Immigrants feel it is important for the next generation to be better off, and for the generation after that to positively thrive. For parents, their offspring must always be more intelligent and more successful than they are. Why should we not shep naches from the accomplishments of our machines?
This vicarious joy sounds somewhat odd, but it shouldn’t be. We get excited when our sports team wins a game; why should it disturb or disappoint us when our creations turn out to be more accomplished than ourselves?
Our intellectual offspring can give us naches. We have valued this feeling for thousands of years, and it brings us great happiness. We just need to transfer our parental pride to the technological realm.