Anyone keeping up with artificial intelligence in the news in recent months might be justified in feeling nervous: AI that learned to bluff and beat a poker world champion. AI that developed its own language and was shut down by Facebook. AI that tech pioneer Elon Musk warns against because it could one day become smarter than people. This fearful attitude reveals strong reservations about artificial intelligence.
It also shows how imprecise our understanding of the term really is. True artificial intelligence doesn't even exist yet. The term is applied to very different machine processes because it currently generates the most buzz – and with good reason: AI may be the most important technology area of all, because it drives progress in so many other segments. Autonomous driving, voice control, augmented reality – all of these cutting-edge technologies depend heavily on advances in AI, for example in image and object recognition. When AI learns, in simple terms, to understand its environment – say, to tell a dog from a small child – that will be an important step toward an autonomous driving system that can handle city traffic. The same goes for augmented reality: when AI not only processes the environment as abstract shapes and objects but truly understands those objects to be things like tables, lamps and chairs, AR technologies will be able to embed digital objects in a context-sensitive, persistent way that makes sense to the user.
Outside of its key positioning with respect to tomorrow's technologies, artificial intelligence is unfortunately also a blanket term used for almost any kind of computerized data processing. No matter what algorithms are running in the background, or how conventional or outdated the methods being used – they're all called artificial intelligence because nearly every software startup wants to catch some of this hype for themselves. "Artificial intelligence is working in the background," "AI generates the selection" or "It wouldn't be possible without AI." And yet in general none of these things have much to do with artificial intelligence.
Often these systems are merely "good old-fashioned AI": rule-based systems (known as expert systems) that don't learn independently. In an expert system, the knowledge of a specific field of expertise is captured in a set of rules. Such systems, developed in the 1980s, are relatively primitive and boil down to the formula: "if A, then B." The indiscriminate labeling of things as AI is also due to the lack of clarity around what artificial intelligence really means.
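To make the "if A, then B" formula concrete, here is a minimal sketch of how such a rule-based expert system works. The rules and facts are invented for illustration – a real expert system would encode the knowledge of domain specialists in hundreds or thousands of such rules.

```python
# A toy expert system: hand-written rules, no learning involved.
# Each rule says: if all conditions are known facts, add the conclusion.
rules = [
    ({"fever", "cough"}, "suspect flu"),
    ({"suspect flu", "high risk"}, "recommend doctor visit"),
]

def infer(initial_facts):
    """Forward chaining: apply rules until no new conclusions appear."""
    facts = set(initial_facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(infer({"fever", "cough", "high risk"}))
# Derives "suspect flu", then "recommend doctor visit" from it.
```

Note that the system can only ever produce what its authors wrote into the rules – there is no mechanism for learning from data, which is precisely what separates these systems from the machine learning methods discussed below.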
Those in the field distinguish between strong and weak AI. Researchers talk about strong AI when an intelligence acts independently and has a form of consciousness. We're familiar with this from films like Terminator and Star Trek. But humans are still a long way from developing this kind of AI. At this point science hasn't even precisely defined what consciousness really is – let alone how to create one.
The systems that exist today are what researchers call weak AI – artificial intelligence that is used for a very specific purpose, such as to analyze patient data to improve diagnoses, or using image recognition to sort through large numbers of photos. The decisive difference, in simple terms: AI that is developed for image recognition can't use its conclusions for other purposes. And certainly not autonomously like a person.
That said, more recent machine learning methods like neural networks are particularly exciting, because they're able to learn independently from large amounts of data, for example to recognize patterns. Artificial neural networks were first described and utilized in the 1950s, and today's more powerful computers create much greater possibilities for these approaches.
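The contrast with a hand-written rule system can be shown in a few lines. Below is a minimal sketch of learning from examples: a single artificial neuron (a perceptron, one of those 1950s-era ideas) that learns the logical AND function purely from labeled data points. All numbers are toy values chosen for illustration.

```python
# Training data: inputs and the desired output (logical AND).
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w = [0.0, 0.0]  # weights, adjusted during learning
b = 0.0         # bias
lr = 0.1        # learning rate

def predict(x):
    # Fire (output 1) if the weighted sum of inputs exceeds zero.
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# Perceptron learning rule: nudge the weights in the direction
# of the error, repeatedly, until the examples are classified correctly.
for _ in range(20):
    for x, target in data:
        error = target - predict(x)
        w[0] += lr * error * x[0]
        w[1] += lr * error * x[1]
        b += lr * error

print([predict(x) for x, _ in data])  # → [0, 0, 0, 1]
```

No rule for AND was ever written down; the behavior emerges from the data. Modern neural networks chain millions of such units in layers, but the principle – adjusting weights to reduce error on examples – is the same.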
However, even if newer machine learning methods like neural networks come into use, it still doesn't mean that the complex structures of the human brain are anywhere near being duplicated. AI researcher Toby Walsh, in an interview with t3n magazine, describes the human brain as “the most complex system that we know of in our universe.” So far there is no other system that can begin to compete with the billions of neurons and trillions of connections and synapses in the human brain.
But where does this powerful narrative around the dangers of AI systems come from? It's interesting that this potential threat was already part of Alan Turing's theoretical reasoning about computers. As part of his research on the universal computing machine, Turing stated that such a machine would be able to solve any problem – provided it could be presented in the form of an algorithm. That would mean that any cognitive process that can be represented by algorithms, including human cognition, could be carried out by a machine. Following this logic, it would only require enough computing power and the corresponding algorithms for machine intelligence to exceed that of humans.
Fierce debate ensued as to whether the human ability to think and reason can really be completely reproduced by algorithms. This thesis met with strong resistance in AI research circles in the 1970s, when philosopher John Searle argued insistently that human thinking is intrinsically connected to the human body and in particular the brain. In other words, a person is also a body and cannot be reduced merely to their thought processes. Their emotions have a direct impact on their thinking and intelligence. According to Searle, human thought can be imitated, but not reproduced. Following this logic, the very term artificial intelligence would be at least questionable, because a machine could never have intelligence in the human sense – no matter how powerful the computer.
The technocentric perspective in Silicon Valley – that technology will someday be able to solve nearly every human problem – also fuels belief in the development of AI that will one day not just equal but surpass human intelligence. This view of technology as capable of anything is reflected in the perspective of someone like Peter Thiel, who applies it to the notion of death in particular. Thiel sees death as a problem that can be solved – like a programming problem that can be overcome with the right lines of code. But even if we could make a backup of our own brains someday, how much of our humanity would really be contained in this backup? And would it still be intelligent?