6

RISE AND FALL

THE ROLLERCOASTER OF AI ADVANCEMENTS

Lee Sedol steps back into the game room after a cigarette break. Cameras and crowd watch him as he walks up to the stage, to the game board. He is widely acknowledged as one of the world’s grandmasters in the ancient Chinese game of Go, and he is currently playing against the artificial intelligence system known as AlphaGo. The system did not wait for him to return before playing its next move – partly because it didn’t know he had left, but mostly because it’s not made to sense or comprehend such real-world events. It has only been made to master the patterns and strategies of Go. That is its raison d’être. And it does it immensely well. When Lee sits back down at the Go table he is shocked to see AlphaGo’s Move 37 of this game, which is so unexpected he wonders if it might be a glitch. This move is either nonsensical . . . or beautifully genius.

This was a moment that redefined our understanding of AI. AlphaGo went on to defeat Lee in that March 2016 match – a defeat that will be etched into the history books of major leaps in AI, because Go had been a holy grail of computer capability ever since machines mastered chess in 1997 with the defeat of the world champion of the time, Garry Kasparov. But what’s so special about Go? Well, to put it into perspective, Go allows vastly more possible board configurations than chess does . . . more than there are atoms in the observable universe. So how could a machine have mastered it?

Beyond robots, what else do we think of when we imagine AI? In sci-fi these entities are often envisaged as the masterminds behind a robot uprising or a central system in control of all the advanced technology around it. You might imagine the likes of HAL 9000 from 2001: A Space Odyssey; the invisible intelligence Skynet from The Terminator; the AI controlling humans in The Matrix; or maybe even VIKI (Virtual Interactive Kinetic Intelligence) in the 2004 film I, Robot, a large digital floating head that talks about its undeniable logic. These are sci-fi imaginings that grow in intelligence without necessarily requiring a physical robot form, and are usually portrayed as dark characters that believe they know better than humans, or even feel the need to eradicate us.

AI characters in sci-fi are also often humanised, given some type of human-like appearance, as we have trouble envisaging intelligent systems as anything too far removed from our own form. Perhaps a more positive view of an intelligent agent is Jarvis (or J.A.R.V.I.S. – Just A Rather Very Intelligent System – see, I told you acronym names are common) in the Iron Man movies, who was created as a digital assistant, adviser and companion to Tony Stark. This is the type of useful AI many of us would like helping out day to day – although of course they already exist in some forms as today’s voice assistants.

So what is AI? In short, it is intelligence exhibited by computing systems and machines, as opposed to natural intelligence in humans and animals. It is often nature-inspired in design, converted into mathematical algorithms and implemented in computer programs that can learn and perform tasks we consider intelligent. That doesn’t mean it learns with understanding or performs with comprehension – again, we often anthropomorphise here.

In some ways I liken the term ‘AI’ to the term ‘art’, in that it can come in many forms – a few of the names and types of which you may have heard floating around – machine learning, artificial neural networks, deep learning, genetic algorithms, evolutionary algorithms, swarm intelligence, reinforcement learning, convolutional neural networks, recurrent neural networks, supervised and unsupervised learning, natural language processing and generation, to name a few. No matter what the term, some main characteristics of each type are the data it learns from, how it learns and its ability to adapt to change.

But what is intelligence? The phenomenon has been defined in numerous ways. Generally, it describes the ability to perceive and infer information, retain knowledge and apply it to adaptive behaviour within environments and contexts. General intelligence traverses multiple domains and has come to be regarded not as a single ability, or a certain number or level of abilities, but as an effective drawing together of many abilities. American developmental psychologist Howard Earl Gardner’s theory of multiple intelligences differentiates human intelligence into specific modalities instead of a single general ability. These include logical-mathematical, linguistic, spatial, musical, bodily-kinesthetic, interpersonal, intrapersonal, naturalistic and existential intelligence.

The topic of what constitutes intelligence is itself a very controversial one and has been debated through countless theories, philosophies and measures, because intelligence is inherently complex. Many define intelligence in their own image. Scientists often create definitions that describe good scientists, engineers create definitions that describe good engineers, and likewise with artists, athletes and medical professionals. And if you think about it, this also applies on a species level. Any definition of general intelligence will be something that humans can pass.

Of course, our definitions wouldn’t include the likes of a natural telepathic ability, an ability to deconstruct and reconstruct one’s molecular construction and morph at will, or any other superhero abilities we may have imagined but cannot naturally achieve. Our definitions will update and improve as we learn to better understand ourselves. But knowing that we’re an intelligent species with a general comprehension of what intelligence is, yet we haven’t been able to agree on a universal definition, is a bit of a mind warp. And that’s okay – we need to keep learning and striving for ultimate understanding of many aspects of our existence. That’s an exciting thing.

The late Professor Stephen Hawking was one of the greatest minds of our time, and opened our eyes, minds and hearts to the stars, to the universe, to physics and galaxies and black holes. He once stated: ‘Intelligence is the ability to adapt to change,’ and this sentiment could not be truer than in this day and age. We humans can now further our own evolution in a number of ways, including neurologically, technically, biologically and genetically. Those who thrive are those who continue to adapt to the relentless onslaught of change itself. Similarly, AI will advance the more it can exhibit these qualities. We’re still traversing – albeit quickly – the early days of AI tech, where it is showing improvements in narrow fields of intelligence – in an analogous way to how we define human intelligence quotient (IQ) through a range of narrow skills, whereas human intelligence more generally seems to involve drawing effectively on multiple skills and abilities as needed. These narrow forms of AI will become the basis and building blocks of a general form of machine intelligence.

Beyond intelligence, can AI have cognition, sentience, consciousness? Cognition is the mental action or process of acquiring knowledge and understanding through thought, experience and the senses, and generating new knowledge. Sentience is the ability to perceive, feel or experience subjectively. Consciousness goes a step beyond these, adding in an awareness of internal or external existence – as René Descartes famously wrote in 1637, ‘Je pense, donc je suis’ (‘I think, therefore I am’). Consciousness is another elusive phenomenon for which we haven’t yet filled in all the pieces. We still don’t know exactly what makes consciousness possible, why it evolved, or even what it really is. This is why it constantly stimulates great interdisciplinary research. Some neuroscience theories of consciousness have hypothesised that it could be generated by various parts of the brain interoperating and connecting, one such theory being the neural correlates of consciousness, though as is commonly the case with such theories, not everyone agrees on this perspective. In some ways, AI may traverse these defining human traits of cognition, sentience and consciousness – but often we anthropomorphise its ability too, feeling it has subjective experience, understanding and intention, when the models I know so far do not. Whether AI reaches these milestones or not seems less important than what AI can and will be able to do. Fully formed AI may not need consciousness to achieve transformational realisations – it may just need to simulate it.

No longer confined to the realms of sci-fi, AI seems to have been lifted out of the movies and into our real-life technologies. It’s already used in some way in nearly every type of technology and every sector and industry we can think of. Just some of the broad range of expanding uses of AI already in existence include:

•   smarter consumer technology, ranging through:

–   auto-complete and correct features in several composing applications

–   smart composing features, auto-categorisation, auto-labelling and spam filtering of emails

–   auto-suggest and search in search engines

–   facial recognition in phone unlocking and social media

–   smartphone and smart-home assistants that use voice recognition and natural language processing among a range of AI abilities

•   synthetic text generation – online reports, media and articles are sometimes written by AI and are becoming increasingly difficult to discern from text created by humans

•   medical triage and diagnostics – which can assist doctors and nurses by drawing on a wealth of historical data, help prioritise treatment of patients based on their symptoms and the severity of their condition, and assist in predicting and diagnosing cancer, diabetes and other conditions

•   stock market prediction and management of trading – watching the patterns and movements to make educated predictions, and even automated trading by ‘trading bots’

•   driverless vehicles – drawing on a wide range of AI abilities, including machine vision, image processing, object recognition, and intelligent adaptable path planning and following features.

AI didn’t just appear out of nowhere. No, it has been around for many decades and has seen slow and steady progressions, with larger and broader explosions of use in recent years. The two elements of AI and robots seem to go hand in hand – and they definitely do to some extent – but AI has potential far beyond the confines of a robotic body. For robots to reach their true potential, they need AI. For AI to reach its true potential, it does not need robots. See, AI could very well be the technology to surpass all technologies, possibly even to surpass us humans. But where did it come from?

AI has been around for many generations in some form or another. It was in the 1940s that AI really started to rise as a theoretical possibility, with Asimov’s laws implying a robot could have independent comprehension of what the laws entailed. Having been pivotal in the birth of the computer, Alan Turing published a landmark paper in 1950 about the possibility of creating machines that think, in which he famously devised the Turing test. This was designed to determine whether a machine (or AI) was advanced enough to successfully exhibit intelligent behaviour indistinguishable from that of a human, even for a human subject in a blind interaction. However, computers of the time were too expensive for such ambitious research (only affordable for big technology companies and prestigious universities) and could execute commands but not store them. So AI would need to wait for a bigger spark before it could really take off.

The Logic Theorist, considered by many to be the first AI program, was written in the mid-1950s by Allen Newell, Cliff Shaw and Herbert Simon. It was the first program deliberately engineered to mimic human problem-solving skills and established the field of heuristic programming – in which rules of thumb rank the alternatives at each branching step of a search and decide which branch to follow based on the information available, analogous to the decisions drivers make when trying to figure out which roads will give the quickest possible path to a destination. The Logic Theorist was presented at a 1956 conference, where John McCarthy, an American computer scientist, coined the term ‘artificial intelligence’. From then on, AI began its long history of being the next big thing.
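To make that branching-and-ranking idea concrete, here is a minimal sketch of a heuristic (greedy best-first) search over a toy road map. The place names, distances and straight-line estimates are all hypothetical, and this only illustrates heuristics guiding a search – it is not how the Logic Theorist itself worked.

```python
import heapq

# A toy road map: each intersection lists (neighbour, road_length_km).
# All names and numbers are made up for illustration.
ROADS = {
    "Home":   [("ParkSt", 2), ("HighSt", 4)],
    "ParkSt": [("Bridge", 5), ("HighSt", 1)],
    "HighSt": [("Bridge", 2)],
    "Bridge": [("Office", 3)],
    "Office": [],
}

# The heuristic: an estimated straight-line distance to the destination,
# used to rank the alternatives at each branching step.
STRAIGHT_LINE_TO_OFFICE = {"Home": 7, "ParkSt": 6, "HighSt": 4, "Bridge": 3, "Office": 0}

def greedy_best_first(start, goal):
    """Always explore next the intersection the heuristic says looks closest to the goal."""
    frontier = [(STRAIGHT_LINE_TO_OFFICE[start], start, [start])]
    visited = set()
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for neighbour, _ in ROADS[node]:
            heapq.heappush(
                frontier,
                (STRAIGHT_LINE_TO_OFFICE[neighbour], neighbour, path + [neighbour]),
            )
    return None

print(greedy_best_first("Home", "Office"))  # e.g. ['Home', 'HighSt', 'Bridge', 'Office']
```

Swap in a different heuristic and the search explores different branches first – which is exactly the ranking role heuristics played in those early programs.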

With hope, hype and ambitious promise as its fuel, the golden years of early enthusiasm for AI would rise through the 1950s and 1960s. From 1956 to 1974, computers became faster, cheaper and more accessible, while also increasingly storing more information. Moore’s law was devised in 1965 as an observation by Gordon Moore that transistors (semiconductor devices used to switch or amplify electronic signals and electrical power – the building blocks of the integrated circuits that make digital computing possible) were shrinking so fast that the number that could fit on integrated circuit chips doubled every year (revised in 1975 to doubling every two years, translating to computing performance doubling roughly every eighteen months). Moore went on to co-found Intel Corporation in 1968 with Robert Noyce. Machine learning algorithms started improving and flourishing. Early demonstrations like the General Problem Solver from the Logic Theorist creators showed promise towards problem-solving, and ELIZA in 1966 fuelled visions of AI interpreting natural language.

The first fall in AI, known as an AI winter, came in about 1974–80. The field came under criticism, with obstacles mounting and computer storage and processing speeds just not adequate to meet the challenges, so funding, expectations and acknowledgement of the possibilities all plummeted.

Then 1980–87 saw reignition of interest in AI, with new funding boosting the research and the fledgling personal computing industry. This second AI boom was marked by a form of AI that came with plenty of hype, known as expert systems. The intention was to emulate the decision-making of human experts by mapping out expert responses to many given situations. Once these were learnt, the computer could respond the way an expert would, allowing people to learn from programs. US universities offered courses, while top companies applied the technology in daily business, and massive investment was poured into the field from Japan and Europe. Unfortunately, the implementation of expert systems didn’t meet the big ambitions of revolutionising computer processing and logic programming, nor really launching AI into a new era.

So again, 1987–93 ushered in a second AI winter, at which point I watched my father push on with AI research. Despite the field being more challenging to advance during these times, visionaries like Dad could often see future value in pushing on through the hard times. And it was in these hard times that many landmark goals of AI research were quietly achieved. IBM’s Deep Blue chess-playing computer, started in 1985, continued to develop throughout this period, and in 1997 beat Garry Kasparov, the world chess champion. This huge leap towards a vision of AI decision-making was highly publicised and the upward trend in AI has seemingly continued since. When I met Garry in 2015, something that really stood out to me was his enthusiasm for AI rather than a fear, despite being defeated by it while holding that top world ranking.

Strategic games that challenge the most intellectual humans have proven to be ideal grand challenges for testing the advancement of AI systems, and quickly make headlines around the world when the machines triumph. IBM’s Watson wowed the world in 2011 when the system won the American TV quiz show Jeopardy! against two of the world’s best players at the time. This was incredibly challenging because contestants are presented with wide-ranging general knowledge clues, sometimes including pop-culture references, in the form of answers – and need to provide their response in the form of corresponding questions. This astonishing display of AI capability was, as we have seen, followed up in 2016 by the headline-grabbing AlphaGo defeat of Lee Sedol. More landmarks have been passed since, as AI storms its way through various board and computer games. Advancements are always happening in the background too, aside from these big, widely publicised events.

For now, though, rather than one complete general AI, there are various forms of narrow intelligence, which are all impressive in their own right. Commonly used examples include machine learning (ML) and natural language processing (NLP). In ML, a software system can learn from data, usually becoming proficient in finding patterns and differences within that data. NLP, usually paired with speech recognition, performs tasks like taking audio as its data, separating out the words (itself a difficult problem, as we tend not to pause between words when speaking), matching the individual audio patterns with the words being spoken, understanding nuances and tones in speech, recognising the context behind collections of words and much more. Voice-activated home assistants are a prime example of how this is put into practice, and most include text-to-speech so that the system’s text response is turned into audio for two-way simulated conversation, often in calm female voices. These are a few types of AI that have been taking great strides in recent years and will continue to do so as more forms of AI and alternative approaches arise.

Let’s use the analogy of art here. If we liken the broad term AI to art, then ML could be like painting and NLP like music. What I mean here is that painting and music can be two very different types of art, but to achieve the full breadth of art one must be able to exhibit many different forms of it. Just because you can paint doesn’t mean you can create music, and vice versa. Similarly, being able to achieve one form of application in AI does not necessarily mean you can instantly do another. A smartphone might have deep learning (a type from the ML group of AI) built into an app to recognise people from camera images, but it would also need a type of NLP if we wanted the app to understand speech like a voice assistant. So currently AI is split into a broad range of different groups and types. The reason this is important is that although you will often hear the term AI used in a broad sense – such and such an app or company ‘has AI under the hood’ – that doesn’t mean they can do anything and everything because they have AI. When the relevant types are understood and applied, they can be powerful and add great value to projects, designs, technology, processes and businesses.

Some of the types I find most interesting are genetic algorithms, artificial neural networks (ANNs), and advanced ANN architectures like deep learning neural networks. These approaches to AI mathematics are fascinating because they have been inspired by systems in nature. Genetic algorithms are not among the best-known forms of AI, nor commonly used compared to others like ANNs. But I have drawn on these systems in the past and believe they are a great showcase for how nature can inspire great mathematics. Genetic algorithms are inspired by systems of natural evolution and belong to a class of AI known as evolutionary computation.

Let’s imagine for a moment that you’ve made two different mathematical algorithms, each of which is a candidate for magically solving climate change (we’re definitely making wild exaggerations here, but it’s only a hypothetical). You like both these approaches and decide you want the best of both worlds, almost like breeding a special type of puppy from two very different parents – like a corgi crossed with a German shepherd (I’ve seen this mix, so cute!). You get all anthropomorphic and you name the first algorithm Adam and the second algorithm, oh, I dunno, let’s say Eve. Adam and Eve cross-multiply as naughty algorithms sometimes do, and they produce a bunch of offspring, each with different mathematical traits from the parents (introducing small changes similar to natural mutation). They then all cross-multiply with each other (calm down, they’re only algorithms, remember), and each of the combinations also produces a number of offspring with mathematical traits from their parents. So already by this third generation, the number of different algorithms is growing very quickly.

Throughout the continuing process, each algorithm is tested for how well it stacks up in solving the original problem – climate change. If it does well, it survives. If it doesn’t, it dies out. What we end up with is generations and generations of a sort of Darwinian evolution via natural selection, processed in powerful computers possibly in a matter of minutes. This isn’t to say the output is guaranteed to be useful, but it’s an intriguing model of how algorithms can be inspired by nature. You may even end up with a super solution at the end, an algorithm so amazingly evolved you realise it is The One. So you name it Neo! Unfortunately, the super solution at the end is not how it usually works and it can be very difficult to make the process efficient enough for many applications.
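For the curious, here is a minimal sketch of the evolutionary loop just described – selection, crossover and mutation repeated over generations. The fitness function is a deliberately toy stand-in (evolving a list of numbers towards a hypothetical target), since we obviously can’t score ‘solving climate change’ in a few lines; every name and number in it is made up for illustration.

```python
import random

# Toy goal: evolve a list of numbers towards this (hypothetical) target vector.
TARGET = [0.2, 0.8, 0.5, 0.9, 0.1]

def fitness(candidate):
    # Higher is better: negative squared distance to the target.
    return -sum((c - t) ** 2 for c, t in zip(candidate, TARGET))

def crossover(parent_a, parent_b):
    # Each trait of the child comes from one parent or the other ("Adam" or "Eve").
    return [random.choice(pair) for pair in zip(parent_a, parent_b)]

def mutate(candidate, rate=0.1):
    # Small random changes, analogous to natural mutation.
    return [c + random.gauss(0, 0.05) if random.random() < rate else c
            for c in candidate]

def evolve(generations=200, population_size=30):
    population = [[random.random() for _ in TARGET] for _ in range(population_size)]
    for _ in range(generations):
        # Survival of the fittest: keep the better-scoring half of the population.
        population.sort(key=fitness, reverse=True)
        survivors = population[: population_size // 2]
        # Breed offspring from random pairs of survivors until the population is refilled.
        children = []
        while len(survivors) + len(children) < population_size:
            mum, dad = random.sample(survivors, 2)
            children.append(mutate(crossover(mum, dad)))
        population = survivors + children
    return max(population, key=fitness)

best = evolve()
print([round(x, 2) for x in best])  # usually lands close to TARGET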

ANNs, on the other hand, are much more common and widely applicable. Also inspired by nature, these systems were originally conceived as a rudimentary version of the way the human brain forms natural neural networks and how it can learn through approaches such as repeatedly analysing data to find patterns. Think of how you might go over and over memorising things for an exam or sing along with a song until you remember all the words. Neurons (nerve cells), the fundamental units of the brain and nervous system, are electrically excitable cells that communicate with each other through specialised connections called synapses. Our own neural networks allow us to receive input from our senses about the external world, send motor commands to control our muscles and relay electrical signals around the body. We have roughly 100 billion neurons, and their interactions define who we are as people, our personalities. A big difference between natural and artificial neural networks is that ANN neurons make very specific sequences of connections, whereas our brain’s neurons can brilliantly connect to many other neurons within their vicinity and alter these connections over time. Inspired by natural systems, these technological designs for various types of neural networks in the ML family, particularly deep learning, are helping to drive the use and applications of AI across the world. They are among the most widely utilised forms of AI in today’s technology.
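As a rough illustration of that learning-by-repetition idea, here is a sketch of a single artificial neuron – nothing remotely brain-like, just weighted inputs, an activation function and connection weights nudged a little after every example, over many passes through a toy dataset. The task (learning a logical AND) and all the numbers are chosen purely for illustration.

```python
import math
import random

def sigmoid(x):
    # The activation function: squashes any number into the range 0 to 1.
    return 1.0 / (1.0 + math.exp(-x))

# Toy training data: learn the logical AND of two inputs.
DATA = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]

weights = [random.uniform(-1, 1), random.uniform(-1, 1)]
bias = 0.0
learning_rate = 0.5

for epoch in range(5000):               # repeated passes over the same data
    for inputs, target in DATA:
        # Forward pass: weighted sum of the inputs, then the activation.
        output = sigmoid(sum(w * x for w, x in zip(weights, inputs)) + bias)
        # How wrong was it, and in which direction should the weights move?
        error = target - output
        gradient = error * output * (1 - output)
        # Nudge each connection a little in the direction that reduces the error.
        weights = [w + learning_rate * gradient * x for w, x in zip(weights, inputs)]
        bias += learning_rate * gradient

for inputs, target in DATA:
    prediction = sigmoid(sum(w * x for w, x in zip(weights, inputs)) + bias)
    print(inputs, round(prediction, 2), "target:", target)
```

Real deep learning networks stack many such neurons into layers and learn millions of weights, but the nudge-the-weights-after-each-example loop is the same basic idea.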

There are so many types of AI because it always depends on the problem being solved (the purpose); the sort of data, if any, the system has access to; the tasks and goals; and what sort of computational systems are available. Certain forms will flourish while others fall by the wayside, all in the race towards massively beneficial and useful AI. I believe this to be the most powerful of all technologies ever invented by humans. But why is it that only in more recent times it really seems like it’s here to stay? Well, our storage and computing systems are only now catching up to the promise of the technology, and so too is our ability to obtain large volumes of useful data – which is like food to AI. And AI is all over it like a monkey on a cupcake. Mmm, cupcake.

 

If intelligence is the ability to adapt to change,

then artificial intelligence will improve with adaptability.

But how far can it adapt?

What could it achieve?

And what purpose will humans have in the future?