There are a variety of terms used when it comes to artificial intelligence. We want to explain what they mean.

By Dieter Petereit
Does your head sometimes spin when a bunch of technical terms are tossed around that you can't interpret using simple, everyday common sense? The terms surrounding the notion of artificial intelligence are certainly one thorny example. People talk about deep learning, machine learning, neural networks and natural language processing. We want to give you a clear sense of the meanings of these different terms, without getting too scientific on you.
All technologies that reproduce thinking previously achievable only by humans fall under the umbrella of AI. These days, AI is simply an overall term we can use when we don't want to go into too much detail. Within AI, we differentiate between strong and weak AI. Strong AI describes a situation where a machine is in principle capable of doing anything a person could do. It is strong AI that holds such fascination for film makers, but in reality the concept has not moved past the philosophical level. Weak AI is when individual human abilities are transferred to machines, such as recognizing text or images, playing games, speech recognition, and so on. This is where rapid progress has been made for many years. Machine learning, deep learning, natural language processing (NLP) and neural networks are simply subcategories of AI, sometimes subcategories within subcategories.
Machine learning
Machine learning describes mathematical techniques by which a system or machine generates knowledge independently from experience.
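As a toy illustration of that idea, the short Python sketch below "learns" a hidden rule (here, y = 2x) purely from example pairs. The data and the least-squares helper are invented for illustration; real machine-learning systems use far richer models, but the principle is the same: knowledge is estimated from experience, never hard-coded.

```python
# A minimal sketch of "learning from experience": the program is never
# told the rule y = 2x; it estimates the slope from example pairs alone.

def fit_slope(examples):
    """Least-squares estimate of w in y = w * x from (x, y) pairs."""
    num = sum(x * y for x, y in examples)
    den = sum(x * x for x, _ in examples)
    return num / den

# "Experience": observed input/output pairs following the hidden rule y = 2x.
data = [(1, 2), (2, 4), (3, 6), (4, 8)]

w = fit_slope(data)   # the machine's learned "knowledge"
prediction = w * 10   # applying that knowledge to an unseen input
```

With more (or noisier) example pairs, the estimate simply gets refined; nothing about the rule itself is ever written into the program.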
Natural language processing (NLP)
Natural language processing is a fairly old field of research, part of the exploration of the human-machine interface, but for some years now it has been properly categorized under the general term of machine learning. In the past, people tried to manage machine processing of written and spoken language using comprehensive rule sets, an approach that yielded very few successes.
It was machine learning methods that first boosted the development of NLP, whose most important tasks include optical character recognition (OCR), translation between languages, automatic answering of questions in natural language, and speech recognition. These days, deep learning methods are used in many areas of NLP, above all in speech recognition.
Deep learning and artificial neural networks
Deep learning is a subset of machine learning, and the area that will most powerfully transform our lives in the next few years.
The terms "deep learning" and "artificial neural networks" are sometimes used interchangeably. Strictly speaking, deep learning is a learning method within machine learning that works with artificial neural networks to achieve particularly efficient learning results. Machines using neural networks are able to identify structures independently, evaluate those structures, and improve autonomously through numerous forward and backward passes.
Neural networks are structured in several layers for this purpose. You can picture these as something like a filter that works from rough estimates to more specific detail, thus raising the probability of identifying and outputting a correct result. The human brain works in a similar fashion.
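The layered, filter-like processing described above can be sketched in a few lines of Python. The weights below are made-up numbers, not trained values, so this only shows how data flows through the layers: each layer turns its input into a new set of numbers, and the final layer condenses them into a single score.

```python
# A toy forward pass through a two-layer network. The weights are
# invented for illustration; a real network would learn them during
# training via repeated forward and backward passes.

def relu(values):
    """Common nonlinearity: negative values are filtered out."""
    return [max(0.0, v) for v in values]

def layer(inputs, weights, biases):
    """One fully connected layer: weighted sums of the inputs plus biases."""
    return [sum(w * x for w, x in zip(row, inputs)) + b
            for row, b in zip(weights, biases)]

# Raw input: three made-up pixel intensities.
pixels = [0.2, 0.8, 0.5]

# Layer 1 extracts two rough "features" from the raw input.
h = relu(layer(pixels, [[0.5, -0.3, 0.8], [0.1, 0.9, -0.2]], [0.0, 0.1]))

# Layer 2 combines those features into a single output score.
score = layer(h, [[1.2, -0.7]], [0.05])[0]
```

Deep networks simply stack many such layers, which is what lets them move from rough estimates toward specific detail.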
These days, deep learning methods using neural networks are nearly always what is meant when talking about artificial intelligence. The rapid progress that has been made with deep learning in recent years is due above all to the fact that more powerful hardware has been available for the necessary calculations, but also derives from the ever larger volumes of data that are available for initial training of the neural networks.
After this initial training, "deep learning" means that the running application keeps on learning continuously. Systems like these practically optimize themselves, constantly improving their identification accuracy and the usefulness of their results. Deep learning relies on statistical data analysis rather than on a deterministic algorithm. Statistical analysis is necessary whenever no clear rules can be defined, as in image recognition and similar applications.
Such a task might involve identifying all the pictures in a pool of images that depict cats. Asking about animals in the pictures would be much simpler, but deep learning is meant to do more.
Developers would feed the machine with all kinds of cat pictures: photographed in summer environments, in winter backgrounds, in the rain and the sun, under the sofa, small and large, black and white cats. For us humans, identifying cats is no problem. The machine first has to learn what the animal looks like, to develop an identification model. After the training, developers present the machine with photos that were not part of the initial sets, to see how well the system works.
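The train-then-test workflow just described can be sketched with a deliberately simple stand-in for a real image model: a nearest-neighbour rule over made-up two-number "features" instead of actual photos. All of the data and labels below are invented for illustration; the point is only the structure of the process, namely learning from one set and evaluating on photos the machine has never seen.

```python
# A minimal train-then-test sketch: classify unseen samples by the
# label of the closest training example. Features and labels are
# invented stand-ins for real image data.

def nearest_label(sample, training_set):
    """Return the label of the training example closest to `sample`."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(training_set, key=lambda ex: dist(ex[0], sample))[1]

# Training "experience": (features, label) pairs the machine learns from.
train = [([0.9, 0.8], "cat"), ([0.8, 0.9], "cat"),
         ([0.1, 0.2], "not_cat"), ([0.2, 0.1], "not_cat")]

# Held-out samples that were not part of the initial training set.
test = [([0.85, 0.75], "cat"), ([0.15, 0.15], "not_cat")]

correct = sum(nearest_label(feats, train) == label for feats, label in test)
accuracy = correct / len(test)
```

A real deep-learning system replaces the nearest-neighbour rule with a trained neural network, but the evaluation idea is identical: accuracy is measured only on data the system was never trained on.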
What is valuable here is that once the machine has learned the process, it can work much faster than people. We would surely need several days to identify 1,000 cats from a pool of 50,000 photos, but artificial intelligence could complete this same task in just seconds or minutes, depending on the computing power available.
AI or artificial intelligence is the umbrella term for all areas of research that involve giving machines the ability to think like people. NLP, or natural language processing, involves the recognition and processing, as well as the output of natural language in written and spoken form. Machine learning is the umbrella term for all processes that enable machines to generate knowledge from experiences, i.e. to learn. Deep learning with artificial neural networks is a particularly efficient method of continuous machine learning based on statistical analysis of large volumes of data (big data), and is the most significant future technology within AI.