What happens when machines become smarter and can accomplish an ever-growing number of tasks? This question has been debated for years, but in 2016 it reached a new peak, with artificial intelligence (AI) developing into an all-pervasive topic. The market research and consulting firm Tractica expects global AI revenue to grow from currently US$643.7 million per year to $38.8 billion in 2025. Revenue from corporate AI applications is expected to grow from $358 million per year to $31.2 billion by 2025.
According to Tractica, this translates into an average annual growth rate of 64.3 percent. Companies of all industries, shapes and sizes are thus faced with an important set of questions: Which AI business models and applications can I use to win over my customers? And what technologies and infrastructures are required?
"The goal of artificial intelligence is to develop machines that behave as if they were intelligent," wrote the American logician and computer scientist John McCarthy back in 1955. In other words, it is the responsibility of humans to share their knowledge with machines as if they were sharing it with their children, partners or co-workers. This is the only way for hardware and software to evolve into what could be termed "intelligent". And it is also the only way to create the foundation for self-learning systems. AI research distinguishes between three types of AI:
"Strong AI" describes a self-aware machine endowed with thoughts, feelings and consciousness as well as the corresponding neural networks. Anyone who is already looking forward to this kind of reality, as depicted in movies like "Her" or "Ex Machina", will have to wait a little longer: strong AI does not yet exist, and it is unclear how long its development will take.
"Narrow AI" excels at solving very specific problems – recommending songs on Pandora, say, or optimizing tomato cultivation in a greenhouse. Most current AI applications are focused on exactly this kind of highly specific task.
"General AI" can process tasks from multiple fields and origins. In addition, it can shorten its training intervals by transferring the experience gained in one area to a different, unrelated one. This kind of knowledge transfer can only take place if there is a semantic connection between the areas: the stronger and denser the connection, the faster and easier the transfer. In contrast to narrow AI, a general AI can optimize not only the cultivation of tomatoes but also of cucumbers, eggplant, bell peppers and radishes – i.e. it can solve more than just one specific task.
Without technologies like cloud computing, AI could never have experienced this boom. Cloud services and advanced machine intelligence make it easier for companies to communicate more closely with their customers using AI-based functionality. Companies like Airbnb, Uber or Expedia are already making use of cloud-based systems to process AI-relevant tasks – tasks that require intensive CPU or GPU usage and extensive data processing and analysis services.
Companies which plan to go in this direction first need to develop an AI strategy, on the basis of which they can evaluate the various AI services offered by cloud providers and map out an AI-defined infrastructure. Such an infrastructure needs to be based on a general AI that combines three typical (human) characteristics: learning, understanding and reasoning. On that basis, companies can manage their IT and business processes using AI.
The first characteristic is learning: experts pass on their proven methods, procedures and thought processes to the "general AI" in learning units. Granular knowledge particles convey the important processes piece by piece. In the greenhouse example, the experts would teach the AI all the process steps necessary to produce a tomato, cucumber, eggplant or bell pepper. The AI receives contextual knowledge such as "What needs to be done?" and "Why does it need to be done?"
The second characteristic is understanding: the "general AI" creates a semantic graph from all this information. Based on this graph, it understands the world surrounding the company, including its IT and business mission. In the greenhouse example, this can include various contexts – the characteristics and specifics of the greenhouse, for instance, as well as of cucumber, eggplant and bell pepper cultivation. The "general AI" continually feeds the graph with new knowledge from additional learning units. The company's IT plays an important role in this process, as this is where all the data is collected.
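To make this concrete, here is a minimal, self-contained sketch of how such a semantic graph could be assembled from learning units. The crops, relations and process steps are illustrative assumptions, not part of any real product:

```python
# Minimal sketch: aggregating "learning units" into a semantic graph.
# All concepts and relations below are illustrative assumptions.

from collections import defaultdict

# A learning unit links a concept to related knowledge via a labeled relation.
learning_units = [
    ("tomato",   "is_a",  "fruiting vegetable"),
    ("cucumber", "is_a",  "fruiting vegetable"),
    ("tomato",   "needs", "daily watering"),
    ("cucumber", "needs", "daily watering"),
    ("tomato",   "step",  "pinch out side shoots"),
]

def build_graph(units):
    """Fold learning units into an adjacency map (the semantic graph)."""
    graph = defaultdict(list)
    for subject, relation, obj in units:
        graph[subject].append((relation, obj))
        graph[obj].append((relation + "_of", subject))  # inverse edge
    return dict(graph)

graph = build_graph(learning_units)
```

New learning units simply add edges, which is how the graph grows denser over time.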
The third characteristic is reasoning: machine reasoning can solve problems even in unclear and constantly changing environments. In this way, a "general AI" can continuously react to changing contexts and select the best procedure for each one. The greenhouse general AI does not use machine reasoning merely to learn and understand – that alone would be narrow AI.
Instead, it can also identify the best possible combinations of knowledge in order to find solutions to problems on its own. This not only cuts down on the processing time, it also allows for an exponential increase in the number of possible permutations as the store of knowledge increases. The larger the semantic graph, the more types of vegetables the greenhouse general AI can cultivate.
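The search for such knowledge combinations can be pictured as graph traversal. The following sketch – again with purely illustrative greenhouse knowledge – finds the shortest semantic link between two crops; a short path suggests that knowledge can be transferred between them:

```python
# Sketch of machine reasoning as search over a semantic graph. The graph
# contents are illustrative assumptions, not real agronomic knowledge.

from collections import deque

graph = {
    "tomato":             ["fruiting vegetable", "daily watering"],
    "cucumber":           ["fruiting vegetable", "daily watering"],
    "radish":             ["root vegetable"],
    "fruiting vegetable": ["tomato", "cucumber"],
    "daily watering":     ["tomato", "cucumber"],
    "root vegetable":     ["radish"],
}

def connecting_path(graph, start, goal):
    """Breadth-first search for the shortest semantic link between two
    concepts; no path means no basis for knowledge transfer."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None
```

Here tomato and cucumber connect through shared concepts, while the radish stays isolated until further learning units link it in.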
AI can not only improve existing infrastructures, such as cloud environments; it can also drive a whole new generation of infrastructure technologies. Not only does this require new programming frameworks, it also places completely different demands on the hardware.
Mobile applications and applications for the Internet of Things have so far placed few demands on the runtime environment of the respective infrastructure. On the other hand, they are heavily dependent on backend services. In contrast, AI applications expect sophisticated backend services and optimized runtime environments that are tailored to the GPU-intensive requirements of AI solutions. That is because AI applications challenge the infrastructure by processing tasks in parallel in very short cycles.
GPU processors are consequently preferred for accelerating deep learning applications. GPU-optimized applications offload the compute-intensive parts of the application to the GPU, i.e. the graphics processor, while the CPU continues to handle the simpler, largely sequential calculations. This speeds up the execution of the application as a whole.
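The resulting division of labor – split the heavy, data-parallel work, offload it, keep the light bookkeeping on the main processor – can be sketched with nothing but the Python standard library. A thread pool stands in here for the GPU's many cores; this is an analogy for the pattern, not actual GPU code (real offloading would go through a framework such as CUDA, TensorFlow or Torch):

```python
# Conceptual sketch of the offload pattern: ship the heavy, parallelizable
# work to a pool of workers (standing in for the GPU's cores), while the
# main thread (the "CPU") only splits the work and sums the results.

from concurrent.futures import ThreadPoolExecutor

def heavy_kernel(chunk):
    """Stand-in for a data-parallel kernel, e.g. one slice of a matrix op."""
    return sum(x * x for x in chunk)

def run(data, workers=4):
    chunks = [data[i::workers] for i in range(workers)]   # split the work
    with ThreadPoolExecutor(max_workers=workers) as pool:
        partials = list(pool.map(heavy_kernel, chunks))   # "offload" it
    return sum(partials)  # the light reduction stays on the main thread
```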
Nvidia was previously known primarily as a producer of high-performance graphics cards, for example for gaming PCs. However, because AI applications run particularly well on GPUs, the company is experiencing a real upswing. Pictured here: the Titan-X graphics card. (Photo: Nvidia)
The advantage of GPUs over CPUs is evident in the corresponding architectures: a CPU is designed for serial processing and contains only a small number of cores. A GPU, on the other hand, has a parallel architecture with numerous small cores which process tasks simultaneously. According to Nvidia, the application throughput of a GPU is 10 to 100 times greater than that of a CPU. An AI infrastructure should therefore be able to provide a deep learning framework such as TensorFlow or Torch with hundreds or even thousands of nodes in an ideal GPU configuration.
This gives companies the basis for compiling a checklist of requirements for their AI infrastructure.
Over the past few years, companies have invested an enormous amount in improving the AI functionality of their cloud platforms. The leading public cloud providers, Amazon, Microsoft and Google, are well ahead of the pack. But many PaaS providers have also expanded their offerings to include AI services. Currently, the AI technology map consists of three main groups: cloud machine learning (ML) platforms, AI cloud services and technologies for private and public cloud environments.
Cloud ML platforms include Azure Machine Learning, AWS Machine Learning and Google Machine Learning. Companies can use these to create machine learning models based on the providers' proprietary technologies. But that also means that most of the existing cloud ML services – the exceptions being Google Cloud ML, which relies on TensorFlow, and AWS Machine Learning – do not support AI applications based on Theano, Torch, TensorFlow or Caffe.
Using AI cloud services like Microsoft Cognitive Services, Google Cloud Vision or the Natural Language API, companies can access complex AI or cognitive computing capabilities by means of a simple API. This allows them to develop AI-capable applications without having to invest in the requisite AI infrastructure themselves.
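As an illustration of how simple such an API can be, the following sketch constructs a label-detection request for the Google Cloud Vision REST endpoint. The image bytes and the API key are placeholders, and the request is only built here, not sent:

```python
# Sketch: what a call to an AI cloud service typically looks like, using the
# Google Cloud Vision REST API as an example. Nothing is sent over the
# network; the API key below is a placeholder.

import base64
import json

ENDPOINT = "https://vision.googleapis.com/v1/images:annotate"

def label_request(image_bytes, max_results=5):
    """Build the JSON body for a label-detection request."""
    return {
        "requests": [{
            "image": {"content": base64.b64encode(image_bytes).decode("ascii")},
            "features": [{"type": "LABEL_DETECTION", "maxResults": max_results}],
        }]
    }

body = json.dumps(label_request(b"<image bytes go here>"))
# Sending it would be a single POST to ENDPOINT + "?key=<API_KEY>".
```

The entire "AI infrastructure" behind the scenes – models, GPUs, scaling – is the provider's problem; the application only deals with JSON in and JSON out.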
Technologies for private and public cloud environments can be deployed in public clouds such as Amazon Web Services or in private cloud environments such as OpenStack or VMware. Using these, companies can develop and operate cross-context, AI-based business models built on a "general AI".
Finding new talent and implementing effective change are only some of the challenges facing today's entrepreneurs. They are now also forced to compete with high-tech companies like Amazon, Google, Microsoft and Facebook, which are inexorably marching into established markets. With their huge financial resources, these players can capture the entire customer lifecycle in markets whose competitive landscape once seemed ironclad.
Amazon is just one case in point: the U.S.-based company has eliminated the middlemen in its supply chain – and with "Amazon Prime Air" it is putting strong pressure on companies like DHL, UPS and FedEx. How long will it take for Facebook to apply for a banking license? It already has ample access to potential customers, along with more than enough user information and capital. Established companies must therefore come up with an effective response.
AI is capable of providing the answers and can serve as an innovation driver that helps companies gain a competitive edge. An AI-defined infrastructure has thus become an essential part of the modern enterprise stack: it creates the foundation for AI-capable companies. The development of AI applications will help transform IT infrastructure from a merely supportive tool into a functional model – one that supports not only web applications and services, as is currently the case, but AI applications as well.