Artificial Intelligence is a timely topic that sparks constant debate and discussion. Some equate AI with machine learning or cognitive computing, which is understandable, since AI spans many technologies, including robotics and machine learning. Its central aim is to build systems that can carry out tasks and perform analytical functions.
Below are six areas of AI worth paying attention to, each important for future technological progress, together with the companies and researchers working on them.
- Reinforcement Learning
Reinforcement learning is a paradigm for learning by trial and error. In a conventional RL setup, an agent observes the current state of its environment and takes actions to maximize a cumulative reward. The agent must balance exploration of the environment against exploitation of the goal-directed strategies it has already learned.
Researchers: Pieter Abbeel, David Silver, Rich Sutton, John Shawe-Taylor.
Companies: OpenAI, Google DeepMind, University of Alberta, Osaro, Maluuba/Microsoft.
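The trial-and-error loop can be sketched in a few lines of Python. This is a minimal illustration using tabular Q-learning on a toy corridor environment; the environment, hyperparameters, and function names are all illustrative, not taken from any of the systems mentioned above:

```python
import random

def train_q_learning(n_states=5, episodes=500, alpha=0.5, gamma=0.9,
                     epsilon=0.2, seed=0):
    """Tabular Q-learning on a toy corridor: the agent starts at state 0
    and receives reward +1 for reaching the rightmost state."""
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(n_states)]  # q[state][action]; 0 = left, 1 = right
    for _ in range(episodes):
        s = 0
        while s < n_states - 1:
            # Epsilon-greedy: explore the environment, or exploit learned values
            # (ties are broken randomly so early behaviour is unbiased).
            if rng.random() < epsilon or q[s][0] == q[s][1]:
                a = rng.randrange(2)
            else:
                a = 0 if q[s][0] > q[s][1] else 1
            s_next = max(0, s - 1) if a == 0 else s + 1
            reward = 1.0 if s_next == n_states - 1 else 0.0
            # Update toward the reward plus the discounted best future value.
            q[s][a] += alpha * (reward + gamma * max(q[s_next]) - q[s][a])
            s = s_next
    return q

q = train_q_learning()
# After training, the learned policy prefers moving right (action 1)
# in every non-terminal state.
policy = [1 if q[s][1] > q[s][0] else 0 for s in range(4)]
```

The epsilon parameter controls the exploration/exploitation balance mentioned above: with probability epsilon the agent tries a random action rather than the one it currently believes is best.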
- Generative Models
Generative models learn a probability distribution from training examples. By drawing samples from this learned high-dimensional distribution, they produce new data points that resemble the training data.
Researchers: Ian Goodfellow, Soumith Chintala, Shakir Mohamed.
Companies: OpenAI, Facebook AI Research, Berkeley, Google DeepMind, Twitter Cortex, Apple, Creative.ai, Adobe.
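The core idea, fitting a distribution and then sampling from it, can be shown with the simplest possible generative model: a one-dimensional Gaussian. This is a deliberately tiny sketch (real generative models learn far richer, high-dimensional distributions), and the data and function names are invented for illustration:

```python
import random
import statistics

def fit_gaussian(data):
    """Fit a 1-D Gaussian generative model: estimate mean and std from data."""
    return statistics.mean(data), statistics.stdev(data)

def sample(mu, sigma, n, seed=0):
    """Draw new points from the learned distribution -- 'generated' data
    that resembles the training set."""
    rng = random.Random(seed)
    return [rng.gauss(mu, sigma) for _ in range(n)]

# Toy training data clustered near 10
train = [9.5, 10.2, 10.0, 9.8, 10.5, 9.9, 10.1]
mu, sigma = fit_gaussian(train)
new_points = sample(mu, sigma, 5)  # novel points resembling the training data
```

Models such as GANs follow the same recipe conceptually, but replace the hand-picked Gaussian with a neural network that learns the distribution's shape from data.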
- Networks with Memory
To advance and generalize across diverse environments, AI systems must keep mastering new tasks. Classic neural networks, however, cannot learn new tasks without forgetting previously learned ones. To address this, several powerful architectures endow networks with a form of memory. These include long short-term memory (LSTM) networks, which can process and make predictions over time-series data while carrying out a variety of algorithmic computations.
Researchers: Alex Graves, Geoffrey Hinton, Koray Kavukcuoglu, Jason Weston, Antoine Bordes.
Companies: Facebook AI Research, Google DeepMind, Microsoft Research.
- Learning from Less Data and Smaller Models
Deep learning models typically demand large amounts of labelled data to perform well. A good way to improve learning on a new task is to transfer knowledge from a model trained on a related one. A further challenge lies in building deep learning architectures with fewer parameters.
Researchers: Zoubin Ghahramani, Oriol Vinyals, Yoshua Bengio.
Companies: Google, Curious AI Company, Microsoft Research.
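One simple way to learn from very little data is a nearest-prototype classifier: average the few examples of each class into a prototype, then assign new inputs to the nearest one. This is a toy sketch of that idea; the 2-D points and class labels are invented for illustration:

```python
def centroid(points):
    """Mean of a list of 2-D points -- the class 'prototype'."""
    n = len(points)
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)

def classify(x, prototypes):
    """Assign x to the class whose prototype is nearest (squared distance)."""
    def d2(p, q):
        return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
    return min(prototypes, key=lambda label: d2(x, prototypes[label]))

# Just three labelled examples per class -- far less data than a deep
# model would normally need.
few_shot = {
    "cat": [(0.9, 1.1), (1.0, 0.9), (1.1, 1.0)],
    "dog": [(-1.0, -0.9), (-0.9, -1.1), (-1.1, -1.0)],
}
prototypes = {label: centroid(pts) for label, pts in few_shot.items()}
label = classify((0.8, 1.2), prototypes)
```

The model here has only two parameters per class (the prototype coordinates), which also illustrates the second theme of this section: small models with few parameters can still classify well when the task structure is simple.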
- Hardware for Training and Inference
A major catalyst for AI development has been the repurposing of graphics processing units (GPUs). GPUs were originally designed to render video-game graphics, not to train models or run inference, and this mismatch has encouraged a wave of startups building chips designed specifically for AI. These new chips promise to improve the efficiency and performance of AI systems.
Companies: Google, Intel, Graphcore.
- Simulation Environments
Generating realistic training data for AI systems is always difficult. One solution is to build digital environments that simulate real-world situations, giving agents experience they could not easily obtain otherwise. Training in such simulated environments also helps us understand an AI system itself: how it functions, and where it fails.
Companies: Unity 3D, Google DeepMind, OpenAI, Improbable, Unreal Engine.
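Simulation frameworks typically expose an environment through a small reset/step interface, similar in spirit to OpenAI's Gym API. The tiny environment below is a hypothetical sketch of that pattern (the class, its track length, and the random policy are all invented for illustration):

```python
import random

class GridWorld:
    """A tiny simulated environment with a gym-like reset/step interface.
    The agent walks a 1-D track and is rewarded for reaching the goal."""

    def __init__(self, length=6):
        self.length = length
        self.pos = 0

    def reset(self):
        """Start a new episode and return the initial state."""
        self.pos = 0
        return self.pos

    def step(self, action):  # action: -1 = left, +1 = right
        """Apply an action; return (state, reward, done)."""
        self.pos = min(self.length - 1, max(0, self.pos + action))
        done = self.pos == self.length - 1
        reward = 1.0 if done else 0.0
        return self.pos, reward, done

env = GridWorld()
rng = random.Random(1)
state, done, steps = env.reset(), False, 0
while not done and steps < 1000:  # run one episode with a random policy
    action = rng.choice([-1, 1])
    state, reward, done = env.step(action)
    steps += 1
```

Because every transition happens inside the simulator, we can log each state, action, and reward, which is what makes simulated training so useful for inspecting an agent's behaviour and errors.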
In conclusion, the future remains hard to predict. Humanity's principal goal for AI is to make life easier and more accessible.