Faster, easier, cheaper: AI is becoming more accessible


Over the next three years, global corporate spending on machine learning technologies is expected to triple and reach $52.2 billion. Which developments in this area will have a decisive impact on business over the next few years? Which key AI trends should you pay attention to now in order to keep up with the leaders?


Artificial intelligence rests on three pillars: hardware, data, and algorithms. A smart video surveillance system in a store, a robot lawyer that automatically drafts contracts, or a complex banking investment system: all of them use computing power to process information, run on a “fuel” of video files, images, and documents, and produce the result of an algorithm at the output, whether that is recognizing a customer’s face, drawing up a contract, or spotting a source of financial risk. Let’s look at what is happening in the market with each of these components of an AI solution.

Hardware


One of the reasons for today’s boom in AI development is linked, paradoxically, to video games: it was for them that companies initially developed and improved graphics processing units (GPUs). When you played a shooter twenty years ago, you hardly suspected that you were helping the development of artificial intelligence. The point is that training artificial neural networks turned out to be more convenient and cheaper on GPUs than on classical processors (CPUs). Computing power that seemed fantastic ten years ago is now available to any enthusiast. Moreover, analysts predict that video cards will keep getting cheaper as interest in cryptocurrencies and mining wanes. Beyond video cards, more and more specialized hardware built for machine learning is appearing on the market. Simply put, hardware is becoming faster and cheaper on its own, while cloud technologies make it even more accessible and let you scale easily if your product suddenly starts to grow rapidly. Machine learning as a service (MLaaS) is available in one form or another in the Amazon, Microsoft, and Google clouds. These companies rent out ready-made and customizable machines, which simplifies and speeds up the development of products and applied solutions that use machine learning.

Data


While things are going well with hardware, the picture with data is not so rosy. On the one hand, developers know how important high-quality datasets are, and several major players have announced initiatives in this direction. For example, Google recently launched Google Dataset Search: for now the service can find data on the environment, the social sciences, and government sources, with new sources to be added over time. On the other hand, more and more companies are beginning to understand that data is a valuable resource that should be protected and hidden from competitors. It is not yet clear which of the two trends will prevail. In any case, there will be more open data; however, with the growing popularity of the Internet of Things, wearable electronics, and the further spread of the mobile internet into the most remote corners of the world, the share of the data “iceberg” visible to scientists is likely to keep shrinking. Businesses and users generate new data far faster than we can process it.


At the same time, machine learning often requires a huge amount of highly specialized data. For example, with X-ray images, an expert marks the areas containing a tumor in order to teach the technology to identify such problem areas on its own. But what if there is too little data about a specific type of tumor? New machine learning methods are designed to solve this “small data” problem. So far these are isolated experiments, but they will influence the development of AI in business in the near future. Two of them are transfer learning and one-shot learning. And here we move smoothly from data to algorithms.

Algorithms


The idea of transfer learning is that if you train a deep neural network to perform one task, you can reuse the same network when training on a different dataset. To put it simply, whatever machine vision problem you are solving, your algorithm first needs to be able to find lines and corners in an image. If one algorithm has already learned to do this while you were teaching it to distinguish photos of dogs from photos of cats, that same algorithm can be retrained to pick out a person’s face in an image. The “ability” to find lines and corners is useful in any machine vision task; only the last layers of the network, responsible for what are called high-level features, need to be retrained. ABBYY uses transfer learning in the new version of its FlexiCapture intelligent platform: we pre-train the network to classify documents on our side, so that the algorithm can start working from a minimal number of examples on the partner’s side.
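To make the idea more concrete, here is a minimal transfer-learning sketch in Python using PyTorch and torchvision. The library choice, the four-class document example, and the train_loader name are illustrative assumptions, not a description of ABBYY’s implementation: a network pretrained on generic images is frozen, and only its last layer is replaced and retrained on a small dataset.

# Minimal transfer-learning sketch (assumes PyTorch and torchvision are installed).
import torch
import torch.nn as nn
from torchvision import models

# Start from a network pretrained on a large, generic dataset (ImageNet).
model = models.resnet18(pretrained=True)

# Freeze the early layers: they already "know" how to find lines and corners.
for param in model.parameters():
    param.requires_grad = False

# Replace only the last, task-specific layer, e.g. for four document classes.
num_classes = 4
model.fc = nn.Linear(model.fc.in_features, num_classes)

# Train just the new layer on a small number of labeled examples.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# for images, labels in train_loader:   # train_loader is a hypothetical data loader
#     optimizer.zero_grad()
#     loss = criterion(model(images), labels)
#     loss.backward()
#     optimizer.step()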


One-shot learning boils down to the following idea: if we want an algorithm to recognize certain objects, we can build a basic model of the subject area into it, which allows the algorithm to form plausible hypotheses from a minimal number of examples. This is another way to get an algorithm working effectively after receiving only a minimal amount of data.
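As an illustration only, one common way to classify from a single example is to compare embeddings: each class is represented by one reference vector, and a new item is assigned to the nearest one. The embed function and the two document classes below are hypothetical placeholders, not part of any ABBYY product.

# One-shot classification sketch: nearest-neighbour matching in an embedding space.
import numpy as np

def embed(image: np.ndarray) -> np.ndarray:
    # Placeholder: in practice this would be the output of a pretrained network.
    return image.reshape(-1).astype(np.float32)

# One reference example per class -- that is the whole "training set".
support_set = {
    "invoice": embed(np.random.rand(32, 32)),
    "contract": embed(np.random.rand(32, 32)),
}

def classify(image: np.ndarray) -> str:
    query = embed(image)
    # Pick the class whose single example lies closest in embedding space.
    return min(support_set, key=lambda label: np.linalg.norm(query - support_set[label]))

print(classify(np.random.rand(32, 32)))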


Another significant direction is reinforcement learning. With this method, the neural network analyzes situations in an artificial model that reproduces the features of the external environment. In this virtual world, the machine learns to find the optimal chain of decisions for a given complex task, receiving information about whether a particular chain of actions was successful only at certain moments, not at every step. One example is chess: before the famous match in which the AlphaZero program defeated the strongest chess engine, it had played against itself for 9 hours. During that time it played 44 million games, several times more than all chess professionals in the entire history of mankind. And the algorithm learned whether its decisions were right or wrong only at the end of each game, when it “found out” which side had won. Algorithms trained with reinforcement have also been competing in Dota for several years. A year ago, an algorithm created by OpenAI beat a human player one-on-one. This year, a “team” of bots could not yet cope with a team of professional human players. The more deterministic the environment, the easier it is for the algorithm; the more changeable the environment, the harder it becomes; and if the environment requires effective communication and cooperation, it becomes harder still for algorithms.
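The sketch below is a toy illustration of that principle, not the method behind AlphaZero or OpenAI’s Dota bots: a tabular Q-learning agent on an invented “chain” environment learns a sequence of moves even though the reward appears only when the final state is reached. All names and numbers are illustrative.

# Toy reinforcement learning: tabular Q-learning with reward only at the end of an episode.
import random

N_STATES, GOAL = 6, 5            # states 0..5; reward only on reaching state 5
ACTIONS = (-1, +1)               # step left or step right
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.1, 0.9, 0.2

def choose_action(state):
    # Epsilon-greedy with random tie-breaking: mostly exploit, sometimes explore.
    if random.random() < epsilon or q[(state, -1)] == q[(state, +1)]:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q[(state, a)])

for episode in range(500):
    state = 0
    while state != GOAL:
        action = choose_action(state)
        next_state = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if next_state == GOAL else 0.0   # feedback arrives only at the end
        best_next = max(q[(next_state, a)] for a in ACTIONS)
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = next_state

# After training, the greedy policy simply walks right toward the goal.
print([max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(GOAL)])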


Artificial intelligence algorithms have been studied for many decades, but only now have their capabilities caught up with the needs of large businesses, and new technological developments are making them increasingly accessible to banks, oil and energy companies, retail chains, telecom operators, and other industries. According to Oracle, by 2020 AI elements will be present in 90% of software products and applications for the corporate segment. These technologies will become an investment priority for almost a third of companies worldwide and a basis for global GDP growth, and over the next two to three years they will help you reduce costs, attract more customers, and increase revenues.


Ivan Yamshchikov, Senior Researcher at the Max Planck Institute in Leipzig, AI evangelist at ABBYY
