Neural networks: how artificial intelligence helps in business and life

News about new developments in the field of artificial intelligence appears regularly. In January of this year, Google announced plans to create, in partnership with Movidius, mobile processors with machine learning capabilities; the stated goal of the partnership is to bring machine intelligence to people's handheld devices. In February, MIT engineers presented the Eyeriss processor, which could bring artificial intelligence to portable devices. And all this against the backdrop of year-on-year growth in investment in artificial intelligence systems.

Everything suggests that artificial intelligence will soon find its way into our smartphones, which will become significantly smarter. Does that mean the rise of the machines is not far off? How much smarter do machines need to become to take power over people, and how realistic is that?

Artificial intelligence one, artificial intelligence two, artificial intelligence three

When we read or hear about artificial intelligence, many of us imagine SkyNet and the machines from the famous Terminator movie. What do researchers and developers mean by this concept?

There are three types of AI that we have created, or may yet have to create:

Narrowly targeted artificial intelligence. This is exactly what we will get in our new smartphones in the near future. Such intelligence is superior to human intelligence in certain activities or operations. A computer with narrowly targeted artificial intelligence can beat a world chess champion, park a car, or select the most relevant results in a search engine.

The power of such artificial intelligence lies in the computing capabilities of processors: the greater those capabilities, the more effectively the assigned tasks are solved. And increasing processor power poses no problem today. In the philosophy of artificial intelligence (yes, there is such a field), narrowly focused AI is called weak.

But computing capability alone, according to scientists, is not enough to create truly smart machines. Still, it was precisely the fictional scenario of weak artificial intelligence spontaneously becoming strong that formed the basis of the Terminator films: SkyNet, a US Department of Defense supercomputer designed to control the missile defense system, gains consciousness and begins to make its own decisions.

General artificial intelligence. While we have already created systems with narrowly targeted AI and found practical applications for them, with general AI everything is much more complicated. This type of AI is human-level intelligence: universal and capable of performing the same intellectual operations as the human brain.

If we see fully humanoid robots in our lifetime, they will have exactly this type of intelligence. Remember the android Andrew from Chris Columbus's film Bicentennial Man. Robots with such AI will be able to independently learn, think and make decisions like humans. They will be able to build relationships with people around them, becoming friends and helpers. It is this kind of artificial intelligence that is called strong.

But there is a gulf between weak and strong artificial intelligence. To get from one to the other, it is not enough to increase the computing power of computers; you also need to give them a mind. Scientists do not yet see a clear way to do this.

Artificial superintelligence. It is this type of artificial intelligence that is attracting widespread attention. Largely because the possibility of its creation is perceived by many scientists as a danger to humanity. SkyNet is an illustration of such a threat.

A superintelligence will be smarter than any human. It will surpass people in almost every field, able to solve complex problems and make scientific discoveries. How will such an intelligent machine behave toward humanity?

Scientists suggest three models of interaction:

Oracle - it can give us an answer to any complex question.

Genie - it will do whatever we need by itself, using anything from molecular assemblers to robotic laboratories and factories that operate without human intervention.

Sovereign - it will identify problems itself and solve them itself.

As you can see, the term "artificial intelligence" covers three different forms of artificial intelligence, and the differences between them are significant, as are the consequences of moving from one to another. Can we measure the intelligence of smart machines in order to understand which one we are dealing with?

How to measure artificial intelligence?


People differ from each other in their level of intelligence. To quantify it, special tests are used. The IQ test is known to many. How is the intelligence of machines measured?

If we take an uncritical approach to media reports, then the intellectual level of modern machines varies between the IQ of a 4-year-old child and a 13-year-old teenager. These two numbers illustrate two approaches to measuring machine intelligence.

In 2015, a team of scientists from Illinois tested the ConceptNet artificial intelligence system created at the Massachusetts Institute of Technology using a standard IQ test for children aged 2.5 to 7 years. The machine's results corresponded to the average performance of a four-year-old child.

In addition to tests designed for humans, there is a widely known test designed specifically for machines: the Turing test, intended to determine whether a machine can think.

The test works as follows. One person - the judge - communicates with two interlocutors he cannot see; all interaction takes place in writing through an intermediary computer. One of the interlocutors is a person, the other a computer program posing as a person. If the judge cannot say with certainty which of his interlocutors is the program, the machine is considered to have passed the test.

To date, the Turing test has been passed only once. In 2014, the Eugene Goostman program, which imitated a 13-year-old boy, managed to mislead the judges and pass itself off as a person.

However, there are many objections to such tests. Both computers and their programs today are carriers of weak, narrowly focused artificial intelligence. Such intelligence can only imitate the person who is taking the test.

Everything will change when we move from weak artificial intelligence to strong. A machine endowed with general artificial intelligence, comparable to human intelligence, will have consciousness and self-awareness, and therefore will think. Such a computer could pass a standard IQ test, answering the questions consciously, as a human would.

Most human IQ scores fall between roughly 85 and 130, and general AI would score in the same range. But the upper limit of an artificial superintelligence's IQ would have no such bound: it could be 1,000 or 10,000. What awaits us as AI improves?

Since the invention of computers, their ability to perform various tasks has continued to grow exponentially. People keep increasing the power of computer systems while shrinking their size. The main goal of researchers in the field of artificial intelligence is to create computers or machines as intelligent as humans.

The originator of the term “artificial intelligence” is John McCarthy, inventor of the Lisp language, founder of functional programming, and Turing Award winner for his enormous contributions to the field of artificial intelligence research.

Artificial intelligence is a way of making a computer, computer-controlled robot or program capable of thinking intelligently like a human.

Research in the field of AI is carried out by studying human mental abilities, and then the results of this research are used as the basis for the development of intelligent programs and systems.

AI Philosophy

Ever since powerful computer systems appeared, people have asked the question: "Can a machine think and behave the same way as a human?"

Thus, the development of AI began with the intention of creating in machines an intelligence similar to that of humans.

Main Goals of AI

  • Creation of expert systems - systems that demonstrate intelligent behavior: they learn, demonstrate, explain and give advice;
  • Implementation of human intelligence in machines - creating a machine capable of understanding, thinking, learning and behaving like a person.

What is driving the development of AI?

Artificial intelligence is a science and technology based on disciplines such as computer science, biology, psychology, linguistics, mathematics, and mechanical engineering. One of the main areas of artificial intelligence is the development of computer functions related to human intelligence, such as reasoning, learning and problem solving.


AI Applications

AI has become dominant in various fields such as:

    Games - AI plays a decisive role in strategy games such as chess, poker, tic-tac-toe and so on, where the computer can evaluate a huge number of possible moves based on heuristic knowledge.

    Natural language processing is the ability to communicate with a computer that understands the natural language spoken by humans.

    Speech recognition - some intelligent systems can hear and understand the language in which a person speaks to them, coping with different accents, slang and so on.

    Handwriting recognition - software reads text written on paper with a pen or on a screen with a stylus, recognizes the shapes of the letters and converts them into editable text.

    Smart robots are robots capable of performing tasks assigned by humans. They have sensors to detect physical data from the real world, such as light, heat, movement, sound, shock and pressure. They have high-performance processors, multiple sensors and huge memory. In addition, they are able to learn from their own mistakes and adapt to a new environment.

History of AI development

Here is a brief history of AI development during the 20th century:

Karel Capek's play "R.U.R." (Rossum's Universal Robots) is staged in London - the first use of the word "robot" in English.

Isaac Asimov, a graduate of Columbia University, coins the term robotics.

Alan Turing develops the Turing test to assess intelligence. Claude Shannon publishes a detailed analysis of the intellectual game of chess.

John McCarthy coins the term artificial intelligence. The first running AI program is demonstrated at Carnegie Mellon University.

John McCarthy invents the Lisp programming language for AI.

Danny Bobrow's thesis at MIT shows that computers can understand natural language quite well.

Joseph Weizenbaum at MIT develops ELIZA, an interactive program that carries on a dialogue in English.

Scientists at the Stanford Research Institute develop Shakey, a mobile robot capable of perceiving its surroundings and solving certain problems.

A team of researchers at the University of Edinburgh have built Freddy, the famous Scottish robot capable of using vision to find and assemble models.

The first computer-controlled autonomous vehicle, the Stanford Cart, is built.

Harold Cohen creates and demonstrates AARON, a program that produces drawings.

IBM's Deep Blue chess program beats world chess champion Garry Kasparov.

Interactive robotic pets become commercially available. MIT demonstrates Kismet, a robot with a face that expresses emotions. The robot Nomad explores remote areas of Antarctica and finds meteorites.

In 2017, Yandex launched the voice assistant Alice. The new service lets users listen to news and weather reports, get answers to questions and simply chat with the bot. Alice sometimes gets cocky, sometimes seems almost reasonable and humanly sarcastic, but often cannot figure out what she is being asked and ends up embarrassing herself.

All this gave rise not only to a wave of jokes, but also to a new round of discussion about the development of artificial intelligence. News about what smart algorithms have achieved arrives almost daily, and machine learning is called one of the most promising fields to devote oneself to.

To clarify the main questions about artificial intelligence, we talked with Sergei Markov, a specialist in artificial intelligence and machine learning methods, author of one of the strongest Russian chess programs, SmarThink, and creator of the XXII Century project.

Sergey Markov,

artificial intelligence specialist

Debunking myths about AI

So what is "artificial intelligence"?

The concept of "artificial intelligence" has been somewhat unlucky. Born in the scientific community, it eventually made its way into science fiction, and through it into pop culture, where it underwent a number of changes, acquired many interpretations and was, in the end, thoroughly mystified.

This is why we often hear statements from non-specialists such as "AI does not exist" or "AI cannot be created." Misunderstanding the nature of AI research also leads people to the other extreme: crediting modern AI systems with consciousness, free will and secret motives.

Let's try to separate the flies from the cutlets, as the Russian saying goes.

In science, artificial intelligence refers to systems designed to solve intellectual problems.

In turn, an intellectual task is one that people solve using their own intelligence. Note that experts deliberately avoid defining the concept of "intelligence" itself: before AI systems appeared, the only example of intelligence was human intelligence, and defining intelligence from a single example is like trying to draw a straight line through a single point. Any number of such lines can be drawn, which means the debate over the concept of intelligence could last for centuries.

“strong” and “weak” artificial intelligence

AI systems are divided into two large groups.

Applied artificial intelligence (the terms "weak AI" and "narrow AI" are also used; in the English-language tradition, weak/applied/narrow AI) is AI designed to solve one particular intellectual problem or a small set of them. This class includes systems that play chess or Go, recognize images or speech, decide whether or not to issue a bank loan, and so on.

In contrast to applied AI, the concept of universal artificial intelligence is introduced (also "strong AI"; in English, strong AI / Artificial General Intelligence) - a hypothetical (for now) AI capable of solving any intellectual problem.

Often people, without knowing the terminology, equate AI with strong AI, which is why judgments arise in the spirit of “AI does not exist.”

Strong AI really doesn't exist yet. Almost all the advances we have seen in the last decade are advances in applied systems. These successes should not be underestimated, since applied systems are in some cases able to solve individual intellectual problems better than universal human intelligence does.

I think you've noticed that the concept of AI is quite broad. Mental arithmetic, say, is also an intellectual task, which means any calculating machine could be considered an AI system. What about an abacus? The Antikythera mechanism? Formally, all of these are AI systems, albeit primitive ones. However, when we call a system an AI system, we are usually emphasizing the complexity of the problem it solves.

It is quite obvious that dividing intellectual tasks into simple and complex ones is very artificial, and our ideas about the complexity of particular tasks gradually change. The mechanical calculating machine was a miracle of technology in the 17th century, but today people, who have grown up around far more complex mechanisms, are no longer impressed by it. When machines that play Go or drive cars cease to amaze the public, there will probably be people who wince when someone classifies such systems as AI.

“Excellent Robots”: about AI’s learning abilities

Another amusing misconception is that AI systems must be capable of self-learning. On the one hand, this is not a necessary property of AI systems: there are many remarkable systems that cannot self-learn and nevertheless solve many problems better than the human brain. On the other hand, some people simply do not know that self-learning is a property many AI systems acquired more than fifty years ago.

When I wrote my first chess program in 1999, self-learning was already completely commonplace in this area: programs could remember dangerous positions, adjust their opening repertoires and regulate their style of play, adapting to the opponent. Of course, those programs were still very far from AlphaZero. Systems that learned behavior through experiments in so-called reinforcement learning, interacting with other systems, already existed as well. Yet for some inexplicable reason, some people still think that the ability to self-learn is the prerogative of human intelligence.

Machine learning, an entire scientific discipline, deals with the processes of teaching machines to solve certain problems.

There are two big poles of machine learning - supervised learning and unsupervised learning.

In supervised learning, the machine already has a set of conditionally correct solutions for a certain group of cases. The task of training is to teach the machine, based on the available examples, to make correct decisions in other, unknown situations.
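As a toy illustration of this idea (the library and dataset here are my own choices, not the author's), a minimal supervised-learning sketch in Python with scikit-learn: the model is fitted on labeled examples and then scored on examples it has never seen.

```python
# Minimal supervised-learning sketch: learn from labeled examples,
# then predict labels for unseen cases.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)                    # features and known "correct answers"
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)                           # learn from the labeled examples
print("accuracy on unseen data:", model.score(X_test, y_test))
```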

The other extreme is unsupervised learning: the machine is placed in a situation where the correct decisions are unknown and only raw, unlabeled data is available. It turns out that some success can be achieved even then. For example, you can teach a machine to identify semantic relationships between the words of a language by analyzing a very large corpus of texts.
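For contrast, here is a minimal unsupervised sketch (again scikit-learn, an assumed choice): the algorithm receives points with no labels at all and groups them purely by similarity.

```python
# Minimal unsupervised-learning sketch: no labels are given;
# the algorithm groups the data points by their mutual similarity.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Two unlabeled "blobs" of points; the algorithm is never told which is which.
data = np.vstack([rng.normal(0, 1, size=(50, 2)),
                  rng.normal(5, 1, size=(50, 2))])

clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(data)
print(clusters[:5], clusters[-5:])   # points from the two blobs land in different groups
```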

One type of supervised learning is reinforcement learning. The idea is that the AI system acts as an agent placed in a simulated environment, where it can interact with other agents (for example, with copies of itself) and receive feedback from the environment through a reward function. For example, a chess program that plays against itself gradually adjusts its parameters and thereby gradually strengthens its own play.

Reinforcement learning is a fairly broad field in which many interesting techniques are used, from evolutionary algorithms to Bayesian optimization. The latest advances in game-playing AI are all about enhancing AI through reinforcement learning.
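A tiny reinforcement-learning sketch in plain Python may make the "reward function" idea concrete. The epsilon-greedy "bandit" below is my own minimal illustration, not any system mentioned in the text: the agent is told nothing in advance and learns which action pays off only from the rewards it receives.

```python
# Epsilon-greedy bandit: learn which action yields the most reward,
# using only the reward signal itself as feedback.
import random

true_payoffs = [0.2, 0.5, 0.8]          # hidden from the agent
estimates = [0.0, 0.0, 0.0]
counts = [0, 0, 0]
epsilon = 0.1                            # fraction of the time we explore at random

for step in range(10_000):
    if random.random() < epsilon:
        action = random.randrange(3)                         # explore
    else:
        action = max(range(3), key=lambda a: estimates[a])   # exploit the best estimate
    reward = 1.0 if random.random() < true_payoffs[action] else 0.0
    counts[action] += 1
    estimates[action] += (reward - estimates[action]) / counts[action]  # running average

print("learned estimates:", [round(e, 2) for e in estimates])  # approaches the true payoffs
```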

Risks of technology development: should we be afraid of “Doomsday”?

I am not one of the AI alarmists, and in this I am by no means alone. Andrew Ng, the creator of Stanford's machine learning course, compares the problem of AI danger to the problem of overpopulation on Mars.

Indeed, it is likely that humans will colonize Mars in the future. It is also likely that sooner or later Mars will face an overpopulation problem, but it is not entirely clear why we should deal with that problem now. Yann LeCun, the creator of convolutional neural networks, his boss Mark Zuckerberg, and Yoshua Bengio, a man thanks largely to whose research modern neural networks can solve complex text-processing problems, agree with Ng.

It will probably take several hours to present my views on this problem, so I will focus only on the main points.

1. You cannot limit the development of AI

Alarmists consider the risks associated with the potential destructive impact of AI while ignoring the risks of trying to limit or even halt progress in this area. Humanity's technological power is growing at an extremely rapid rate, leading to an effect I call the "cheapening of the apocalypse."

150 years ago, humanity, with all the desire in the world, could not cause irreparable damage either to the biosphere or to itself as a species. To carry out a catastrophic scenario 50 years ago, one would have had to concentrate the entire technological power of the nuclear powers. Tomorrow, a small handful of fanatics may be enough to bring about a global man-made disaster.

Our technological power is growing much faster than the ability of human intelligence to control this power.

Unless human intelligence, with its prejudices, aggression, delusions and limitations, is replaced by a system capable of making better decisions (whether AI or, as I think more likely, human intelligence technologically enhanced and combined with machines into a single system), a global catastrophe may await us.

2. Creating superintelligence is fundamentally impossible

There is an idea that the AI of the future will necessarily be a superintelligence, superior to humans even more than humans are superior to ants. Here I am afraid to disappoint the technological optimists as well: our Universe contains a number of fundamental physical limitations that, apparently, will make the creation of superintelligence impossible.

For example, the speed of signal transmission is limited by the speed of light, and at the Planck scale the Heisenberg uncertainty principle comes into play. This leads to the first fundamental limit - the Bremermann limit, which restricts the maximum speed of computation for an autonomous system of a given mass m.

Another limit is associated with the Landauer principle, according to which a minimum amount of heat is released when processing one bit of information. Computations that are too fast would cause unacceptable heating and destroy the system. In fact, modern processors are less than a factor of a thousand away from the Landauer limit. A factor of 1,000 may seem like a lot, but another problem is that many intellectual tasks belong to the EXPTIME complexity class, meaning the time required to solve them grows exponentially with the size of the problem. Speeding up the system several times therefore gives only a constant increase in "intelligence".
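As a rough back-of-the-envelope check of the Landauer bound mentioned above (the room-temperature value of 300 K is my assumption for the estimate):

```python
# Landauer bound: the minimum heat released per erased bit is k*T*ln(2).
import math

k_B = 1.380649e-23      # Boltzmann constant, J/K
T = 300.0               # assumed room temperature, K

landauer_joules_per_bit = k_B * T * math.log(2)
print(f"{landauer_joules_per_bit:.2e} J per bit")   # about 2.9e-21 J
```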

In general, there are very serious reasons to believe that super-intelligent strong AI will not work out, although, of course, the level of human intelligence may well be surpassed. How dangerous is this? Most likely not very much.

Imagine that you suddenly started thinking 100 times faster than other people. Does this mean that you will easily be able to persuade any passerby to give you their wallet?

3. We are worried about the wrong things

Unfortunately, as a result of alarmists playing on the fears of a public raised on The Terminator and the famous HAL 9000 of Clarke and Kubrick, the emphasis in AI safety has shifted toward analyzing unlikely but spectacular scenarios. Meanwhile, the real dangers are lost from view.

Any sufficiently complex technology that aspires to occupy an important place in our technological landscape certainly brings with it specific risks. Many lives were destroyed by steam engines - in manufacturing, transportation, and so on - before effective regulations and safety measures were developed.

If we talk about progress in applied AI, we can point to the related problem of the so-called "digital secret court." More and more AI applications make decisions on issues affecting people's lives and health: medical diagnostic systems, for example, or the systems banks use to decide whether or not to issue a loan to a client.

At the same time, the structure of the models used, the sets of factors used and other details of the decision-making procedure are hidden as trade secrets from the person whose fate is at stake.

The models used may base their decisions on the opinions of expert teachers who made systematic errors or had certain prejudices - racial or gender, for example.

An AI trained on the decisions of such experts will faithfully reproduce those biases in its own decisions. In addition, the models themselves may contain specific defects.

Few people are dealing with these problems now, because SkyNet starting a nuclear war is, of course, much more spectacular.

Neural networks as a “hot trend”

On the one hand, neural networks are one of the oldest models used to create AI systems. Having appeared originally as a product of the bionic approach, they quickly moved away from their biological prototypes. The only exception here is spiking neural networks (which, however, have not yet found wide application in industry).

The progress of recent decades is associated with the development of deep learning technologies - an approach in which neural networks are assembled from a large number of layers, each of which is built on the basis of certain regular patterns.

In addition to new neural network models, important progress has also been made in training technology. Today, neural networks are trained not on general-purpose CPUs but on specialized processors capable of rapidly performing matrix and tensor computations. The most common such devices today are graphics cards, though even more specialized devices for training neural networks are being actively developed.

In general, of course, neural networks today are one of the main technologies in the field of machine learning, to which we owe the solution to many problems that were previously solved unsatisfactorily. On the other hand, of course, you need to understand that neural networks are not a panacea. For some tasks they are far from the most effective tool.

So how smart are today's robots really?

Everything is relative. Compared with the technology of 2000, current achievements look like a real miracle. There will always be people who love to grumble: five years ago they insisted that machines would never beat people at Go (or at least not any time soon), and that a machine would never be able to draw a picture from scratch, while today people can hardly distinguish paintings created by machines from those of artists unknown to them. At the end of last year, machines learned to synthesize speech that is practically indistinguishable from a human's, and in recent years the music created by machines has stopped grating on the ear.

Let's see what happens tomorrow. I am very optimistic about these applications of AI.

Promising directions: where to start diving into the field of AI?

I would advise you to master, at a good level, one of the popular neural network frameworks and one of the most popular programming languages in machine learning (the most popular combination today is TensorFlow + Python).

Having mastered these tools, and ideally having a solid foundation in mathematical statistics and probability theory, you should direct your efforts toward the area you personally find most interesting.

Interest in the subject of your work is one of your most important helpers.

Machine learning specialists are needed in the most diverse areas - in medicine, banking, science and manufacturing - so a good specialist today has a wider choice than ever before. The potential benefits of any of these industries seem to me insignificant compared with the fact that you will enjoy the work.

"We are on the threshold of the greatest changes, comparable to human evolution" - science fiction writer Vernor Vinge

How would you feel if you knew you were on the verge of a huge change like the little man in the graph below?

The vertical axis is the development of humanity, the horizontal axis is time

Exciting, isn't it?

However, if you hide part of the graph, then everything looks much more prosaic.

The distant future is just around the corner

Imagine that you find yourself in 1750. In those days, people had not yet heard of electricity, communication at a distance was carried out with the help of torches, and the only means of transportation needed to be fed with hay before the trip. And so you decide to take the “person from the past” with you and show him life in 2016. It is impossible to even imagine what he would have felt if he found himself on wide, level streets along which cars were rushing. Your guest would be incredibly surprised that modern people can communicate even if they are on different sides of the globe, follow sporting events in other countries, watch concerts from 50 years ago, and also save any moment in time in a photo or video. And if you told this man from 1750 about the Internet, the International Space Station, the Large Hadron Collider and the Theory of Relativity, his view of the world would probably collapse. He could even die from an overabundance of impressions.

But here’s what’s interesting: if your guest returned to his “native” century and decided to carry out a similar experiment, taking a person from 1500 for a ride in a time machine, then although a visitor from the past might also be surprised by many things, his experience would not be as impressive — the difference between 1500 and 1750 is not as noticeable as between 1750 and 2016.

If a person from the 18th century wanted to impress a guest from the past, he would have to invite someone who lived around 12,000 BC, before the Agricultural Revolution. Such a visitor really could be "blown away" by the development of technology: seeing the tall bell towers of churches, ships plowing the oceans and cities with thousands of inhabitants, he might faint from the flood of emotions.

The pace of development of technology and society is constantly increasing. The famous American inventor and futurist Raymond Kurzweil calls this the Law of Accelerating Returns: the introduction of new technologies allows society to develop at an ever faster pace. People who lived in the 19th century had more advanced technology than those in the 15th, so it is not surprising that the 19th century brought humanity more achievements than the 15th.

But if technology is developing faster and faster, shouldn't we expect the greatest inventions to come in the future? If Kurzweil and his like-minded colleagues are right, then by 2030 we will experience the same emotions as a person who travelled from 1750 to our time, and by 2050 the world will have changed so much that we will hardly recognize in it the features of previous decades.

All of the above is not science fiction - it is scientifically confirmed and quite logical. However, many are still skeptical about such claims. This happens for a number of reasons:

1. Many people believe that society develops evenly and in a straight line. When we think about what the world will be like in 30 years, we recall what happened over the last 30 years. In doing so, we make the same mistake as the person from the example above who lived in 1750 and invited a guest from 1500. To properly imagine the progress ahead, you have to accept that development is now happening much faster than in the distant past.

2. We misperceive the trajectory of modern society's development. If we look at a small segment of an exponential curve, it may appear to be a straight line (just as a small arc of a circle does). But exponential growth is not smooth and uniform. Kurzweil explains that progress follows an S-shaped curve, as shown in the graph below:

Each “round” of development begins with a sudden jump, which is then replaced by steady and gradual growth.

So, each new “round” of development is divided into several stages:

1. Slow growth (the early phase of development);
2. Rapid growth (the second, "explosive" phase of development);
3. "Leveling off," as the new technology is brought to perfection.

Looking at recent events, we might conclude that we do not fully appreciate how quickly technology is advancing. Between 1995 and 2007, for example, we saw the emergence of the Internet, Microsoft, Google and Facebook, social networks, mobile phones and then smartphones. But the period between 2008 and 2016 was not as rich in discoveries, at least in high technology. Thus, we are now at stage 3 of the S-shaped development curve.

3. Many people are hostages of their own life experience, which distorts their view of the future. When we hear a prediction about the future that contradicts the view formed by our previous experience, we consider it naive. For example, if you were told today that in the future people will live 150-250 years, you would most likely answer: "That's silly - everyone knows that all people are mortal." Indeed, everyone who has ever lived has died, and people continue to die today. But then, no one flew airplanes either until they were finally invented.

In fact, a lot will change in the next few decades, and the changes will be so significant that it is difficult to even imagine it now. After reading this article to the end, you can learn more about what is happening now in the world of science and high technology.

What is artificial intelligence (AI)?

1. We associate AI with movies like “Star Wars”, “Terminator” and so on. In this regard, we treat it as fiction.

2. AI is a fairly broad concept. It applies to both pocket calculators and self-driving cars. Such diversity is confusing.

3. We use artificial intelligence in our daily lives, but we don't realize it. We perceive AI as something mythical from the world of the future, so it is difficult for us to realize that it is already around us.

In this regard, a few things need to be understood once and for all. First, artificial intelligence is not a robot. A robot is merely a shell for AI, one that sometimes has the outline of a human body; the artificial intelligence is the computer inside the robot, comparable to the brain inside the human body. The female voice of a voice assistant, for example, is just a personification of the AI behind it.

Secondly, you have probably already come across the term "singularity" or "technological singularity." It describes a situation in which the usual laws and rules no longer apply; in physics the concept is used to describe black holes or the compressed state of the Universe before the Big Bang. In 1993, Vernor Vinge published his famous essay in which he used the singularity to denote the point in the future when artificial intelligence surpasses our own. In his view, when that moment comes, the world with all its rules and laws will cease to exist as before.

Finally, there are several types of artificial intelligence, among which three main categories can be distinguished:

1. Limited Artificial Intelligence (ANI, Artificial Narrow Intelligence). This is AI that specializes in one specific area. For example, it can beat the world chess champion at chess, but that is all it can do.

2. General Artificial Intelligence (AGI, Artificial General Intelligence). Such AI is a computer whose intelligence resembles that of a human, that is, it can perform all the same tasks as a person. Professor Linda Gottfredson describes this phenomenon as follows: “General AI embodies generalized thinking abilities, which also include the ability to reason, plan, solve problems, think abstractly, compare complex ideas, learn quickly, and use accumulated experience.”

3. Artificial Superintelligence (ASI, Artificial Superintelligence). Swedish philosopher and Oxford University professor Nick Bostrom defines superintelligence as “an intelligence that is superior to that of humans in virtually all areas, including scientific invention, general knowledge, and social skills.”

Currently, humanity is already successfully using limited AI. We are on our way to mastering AGI. The following sections of the article will discuss each of these categories in detail.

A World Ruled by Limited Artificial Intelligence

Limited artificial intelligence is machine intelligence that is equal to or superior to human intelligence in solving narrow problems. Below are some examples:

  • a self-driving car from Google that recognizes and reacts to various obstacles in its path;
  • your smartphone is a "haven" for various forms of limited AI: when you navigate the city using map directions, get music recommendations from Pandora, check the weather forecast or talk to Siri, you are using ANI;
  • spam filters in your email - first they learn to recognize spam, and then, analyzing their previous experience and your preferences, they move such letters to a special folder (a minimal sketch of this idea appears right after this list);
  • the Google Translate translator is a classic example of limited AI that copes well enough with its narrow task;
  • at the moment the plane lands, a special AI-based system determines through which gate passengers should exit.
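Below is a toy version of the spam-filter idea from the list above. The tiny hand-made dataset and the naive Bayes model are illustrative assumptions, not how any real mail provider actually works.

```python
# Toy spam filter: learn from a handful of labeled messages,
# then classify a new one.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

messages = ["win money now", "cheap pills online", "meeting at noon",
            "lunch tomorrow?", "you won a prize", "project status report"]
labels = ["spam", "spam", "ham", "ham", "spam", "ham"]

filter_model = make_pipeline(CountVectorizer(), MultinomialNB())
filter_model.fit(messages, labels)

print(filter_model.predict(["claim your free prize now"]))   # -> ['spam']
```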

Limited artificial intelligence systems do not pose any threat to humans. In the worst case, a failure in such a system could cause a local catastrophe such as a power surge or a small collapse in the financial market.

Every new invention in the field of limited AI brings us one step closer to the creation of general artificial intelligence.

Why is this so difficult?

If you tried to create a computer with intelligence equal to a human's, you would begin to truly appreciate your own ability to think. Designing skyscrapers, launching rockets into space, studying the Big Bang - all of this is far easier than understanding the human brain. At the moment, our mind is the most complex object in the observable Universe.

The most interesting thing is that the difficulty in creating general AI lies in the seemingly simplest things. For example, building a device that can multiply ten-digit numbers in a fraction of a second is easy, while writing a program that can recognize whether it is looking at a cat or a dog is incredibly hard. Create a computer that can beat a human at chess? Easily! Make a machine read a children's book and understand what is written in it? Google is spending billions of dollars to solve that problem. Things like mathematical calculation, building financial strategies and translating from one language to another have already been solved with the help of AI, but vision, perception, gestures and movement in space remain unsolved problems for computers.

These skills seem simple to humans because they have developed over millions of years of evolution. When you reach out to pick up an object, your muscles, ligaments, and bones perform a series of operations that are consistent with what your eyes see.

On the other hand, multiplying large numbers and playing chess are completely new activities for biological beings, which is why it is so easy for a computer to beat us at them. Think about which program you would rather write: one that can quickly multiply large numbers, or one that can recognize the letter B among thousands of letters written in different fonts?
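The contrast can be made concrete: multiplying huge numbers is a one-liner, while recognizing handwritten characters takes a trained model. Here is a minimal sketch with TensorFlow/Keras on the MNIST handwritten digits; the digits (rather than letters) and the dataset, which is downloaded on first run, are assumptions made purely for illustration.

```python
# Multiplying huge numbers is trivial for a computer:
print(123456789012 * 987654321098)

# Recognizing a handwritten character requires training a model on many examples.
import tensorflow as tf

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0     # scale pixel values to 0..1

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(x_train, y_train, epochs=1, verbose=0)
print("test accuracy:", model.evaluate(x_test, y_test, verbose=0)[1])
```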

Another fun example: by looking at the image below, both you and the computer can unmistakably recognize that it represents a rectangle consisting of squares of two different shades:

But, as soon as we remove the black background, the full, previously hidden picture will open before us:

A person will have no trouble naming and describing all the figures he sees in this picture. The computer, however, will fail at the task: analyzing the image below, it will conclude that it is looking at a combination of many two-dimensional objects in white, black and gray, whereas a person can easily say that the picture shows a black stone:

Everything that was mentioned above concerned only the perception and processing of static information. To match the intelligence level of a human, a computer needs to learn to recognize facial expressions, gestures, and so on. But how to achieve all this?

The first step towards creating general AI is increasing computer power

Obviously, if we are going to create "smart" computers, they must have thinking power comparable to a human's. One way to approach this is through the number of operations per second: to do so, we need to estimate how many operations per second each structure of the human brain performs.

Ray Kurzweil did such a calculation and arrived at a figure of 10,000,000,000,000,000 operations per second - roughly the productivity of the human brain.

Currently, the most powerful supercomputer is China's Tianhe-2, with a performance of 34 quadrillion operations per second. Its size, however, is impressive: it covers an area of 720 square meters and cost $390,000,000.

So, from a purely technical standpoint, we already have a computer comparable in performance to the human brain. It is not available to the mass consumer, but within ten years it may become so. However, performance alone is not enough to give a computer human-like intelligence. The next question is: how do we make a powerful computer intelligent?

The second step towards creating general AI is to endow the machine with intelligence

This is the most difficult part of the process, because no one really knows how to make a computer smart. There is still debate about how to enable a machine to distinguish cats from dogs or recognize the letter B. However, there are several strategies, some of which are briefly described below:

1. Copying a human brain

Currently, scientists are working on so-called reverse engineering of the human brain. According to optimistic forecasts, this work will be completed by 2030. Once it is, we will be able to learn all the secrets of our brain and draw new ideas from it. One example of a system inspired by the brain is the artificial neural network.
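A single artificial "neuron" of such a network can be written in a few lines. The weights below are made up for illustration; in a real network they would be learned from data.

```python
# A single artificial neuron, loosely inspired by its biological counterpart:
# it sums weighted inputs and passes the result through a nonlinearity.
import numpy as np

def neuron(inputs, weights, bias):
    z = np.dot(inputs, weights) + bias          # weighted sum of incoming signals
    return 1.0 / (1.0 + np.exp(-z))             # sigmoid "firing" response

x = np.array([0.5, 0.2, 0.9])                    # signals arriving at the neuron
w = np.array([0.4, -0.6, 0.3])                   # connection strengths (illustrative)
print(neuron(x, w, bias=0.1))                    # output between 0 and 1
```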

Another, more extreme idea is to imitate the workings of the human brain completely. The plan is to cut a brain into many thin layers and scan each of them; then, using special software, build a 3D model and implement it on a powerful computer. After that, we would have a device that formally possesses all the functions of the human brain - it would only remain for it to gather information and learn.

How long will we have to wait before scientists can create an exact copy of the human brain? Quite a long time: to date, specialists have not even been able to fully copy the brain of a 1-mm-long worm, which consists of just 302 neurons, while our brain contains about 100,000,000,000 neurons.

2. Recapitulating the evolution of the human brain

Creating a smart computer is theoretically possible, and the evolution of our own brains is proof of that. If we cannot create an exact copy of the brain, we can try to imitate its evolution. After all, you cannot build an airplane simply by copying a bird's wings; to create a good aircraft, a different approach is needed.

How can we simulate the evolutionary process to create general AI? This approach is called a genetic algorithm. Its essence is that optimization and modeling problems are solved using mechanisms similar to natural selection in living nature. A group of computers performs various tasks, the most successful are "crossed" with one another, and the machines that fail are excluded. After many repetitions, this selection process produces better and better computers. The difficulty lies in automating the evolution and the "crossing," because the evolutionary process must run by itself.
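A toy genetic algorithm in the spirit just described might look like the sketch below; the "fitness" function and all the numbers are illustrative assumptions.

```python
# Toy genetic algorithm: keep the fittest candidates, "cross" them,
# add random mutation, and repeat.
import random

def fitness(x):
    return -(x - 3.14) ** 2          # the best candidate is the one closest to 3.14

population = [random.uniform(-10, 10) for _ in range(20)]
for generation in range(100):
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]                       # selection: drop the weakest half
    children = []
    for _ in range(10):
        a, b = random.sample(survivors, 2)
        child = (a + b) / 2 + random.gauss(0, 0.1)    # crossover plus mutation
        children.append(child)
    population = survivors + children

print("best solution found:", max(population, key=fitness))
```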

The drawback of this method is that in nature evolution takes millions of years, whereas we need results within a couple of decades.

3. Transfer all tasks to the computer

When scientists get desperate, they try to create a program that tests itself. This may be the most promising method for creating general AI.

The idea is to create a computer whose main functions will be AI research and coding changes. Such a computer will not only learn independently, but also change its own architecture. Scientists plan to teach a computer to be a researcher whose main task will be to develop its own intelligence.

All this could happen very soon

Continuous improvement of computers and innovative experiments with new software occur in parallel. Artificial general intelligence may emerge quickly and unexpectedly for two main reasons:

1. Exponential growth seems very slow at first, but it can accelerate at any moment.

2. When it comes to software, progress may seem very slow, but a single discovery can take us to a new level of development in the blink of an eye. For example, people long believed that the Earth was at the center of the Universe, which created many difficulties for the study of space. Then the accepted world system suddenly changed to a heliocentric one, and once ideas changed dramatically, new research became possible.

On the Path from Limited AI to Artificial Superintelligence

At some point in the development of limited AI, computers will begin to surpass us. The fact is that artificial intelligence, identical to the human brain, will have several advantages over people, among which the following can be distinguished:

Speed. Our brain's neurons fire at a maximum rate of about 200 Hz, while modern microprocessors run at 2 GHz - 10 million times faster.

Size. The human brain is limited by the size of the skull and cannot grow larger, whereas a computer can be any size, providing far more room for storing information.

Reliability and durability. Computer transistors operate with greater precision than brain neurons. In addition, they can be easily repaired or replaced. The human brain tends to get tired, while a computer can work at full capacity around the clock.

Artificial intelligence, programmed for constant self-improvement, will not limit itself to any limits. This means that once a machine reaches the level of human intelligence, it will not stop there.

Of course, when a computer becomes “smarter” than us, it will be a shock to all humanity. In fact, most of us have a distorted view of intelligence that looks like this:

Our distorted view of intelligence.

The horizontal axis is time, the vertical axis is intelligence.

Levels of intelligence go from bottom to top: ant, bird, chimpanzee, stupid person, Einstein. Between the stupid man and Einstein there is a man who says: “Ha ha! These funny robots act like monkeys!”

The development of artificial intelligence is indicated in red.

So, the artificial intelligence development curve on the graph gradually approaches the human level. We watch as the machine becomes smarter than the animals. But once AI reaches the level of the "dim-witted person" or, as Nick Bostrom puts it, the "village idiot," it will mean that artificial general intelligence has been created - and from there it will take the computer very little time to reach Einstein's level. This rapid development is shown in the figure below:

But what happens next?

Intelligence explosion

Here it would be useful to recall that everything written in this article is a description of real scientific forecasts compiled by respected scientists.

In any case, most models of limited artificial intelligence include a self-improvement function. And even if an AI is created without such a function, once it reaches the level of human intelligence it will acquire the ability to learn on its own, at will. As a result, machine intelligence will gradually develop into a superintelligence many times superior to the human mind.

There is ongoing debate about when AI will reach the level of human intelligence. Hundreds of scientists agree that this will happen around 2040. Not such a long time away, is it?

So, it will take decades for artificial intelligence to reach the level of human intelligence, but eventually it will happen. Computers will learn to understand the world around them the way a 4-year-old child does. Then, suddenly, having absorbed that information, the system will master theoretical physics, quantum mechanics and the theory of relativity, and within an hour and a half the AI will turn into an artificial superintelligence whose capabilities exceed the human brain's 170 thousand times.

Superintelligence is a phenomenon we cannot even partially comprehend. By our standards, a clever person has an IQ of 130 and a stupid one below 85 - but what word can we find for a creature with an IQ of 12,952?

Intelligence is synonymous with power; that is why man currently sits at the pinnacle of evolution, having subjugated all other living beings. With the advent of artificial superintelligence, we will cease to be the "crown of nature" and will be subject to the supermind.

If our limited brains could invent Wi-Fi, imagine what a mind hundreds, thousands, even millions of times more powerful could create. Such an intelligence could control the position of every atom on the planet. Everything we now consider magic or the power of God would become an everyday task for a superintelligence: defeating old age, curing diseases, ending hunger and even death, reprogramming the weather to protect life on Earth. But a superintelligence could just as easily destroy life on the planet in the blink of an eye. In our current understanding of reality, the superintelligence would take on the role of a God living next to us, and the only question we need to ask is: will it be a good God?

Artificial intelligence: how and where to study - experts answer

"I want to do AI. What should I study? What languages should I use? Which organizations should I study and work at?"

We turned to our experts for clarification, and we present the answers received to your attention.

It depends on your background. First of all, you need a mathematical culture (knowledge of statistics, probability theory, discrete mathematics, linear algebra, analysis, etc.) and a willingness to learn a lot, and quickly. Implementing AI methods will also require programming skills (algorithms, data structures, OOP, etc.).

Different projects require knowledge of different programming languages. I would recommend knowing at least Python, Java and any functional language. Experience working with various databases and distributed systems would be helpful. English language skills are required to quickly learn industry best practices.

I recommend studying at good Russian universities! For example, MIPT, MSU, and HSE have corresponding departments. A wide variety of thematic courses are available on Coursera, edX, Udacity, Udemy and other MOOC platforms. Some leading organizations have their own training programs in the field of AI (for example, the School of Data Analysis at Yandex).

Application problems solved by AI methods can be found in a wide variety of places. Banks, the financial sector, consulting, retail, e-commerce, search engines, postal services, the gaming industry, the security systems industry and, of course, Avito - all need specialists of various qualifications.


We have a fintech project involving machine learning and computer vision in which the first developer wrote everything in C++, and then another developer came along and rewrote everything in Python. So language is not the most important thing here: language is first and foremost a tool, and how you use it is up to you. It is just that some problems are solved faster in one language and more slowly in another.

It’s hard to say where to study - all our guys studied on their own, fortunately there is the Internet and Google.


I can advise you to prepare yourself from the very beginning for the fact that you will have to study a lot - regardless of whether "doing AI" means working with big data or neural networks, developing the technology itself, or supporting and training an already developed system.

For the sake of specifics, let's take the trending profession of Data Scientist. What does this person do? In general, they collect, analyze and prepare big data for use - the data on which AI grows and trains. What should a Data Scientist know and be able to do? Statistical analysis and mathematical modeling are a given, and at the level of fluency. Languages: say, R, SAS, Python. Some development experience also helps. And generally speaking, a good data scientist should feel confident with databases, algorithms and data visualization.

It is not as though such a set of knowledge can be obtained at every second technical university in the country. Large companies that prioritize AI development understand this and create their own training programs - there is, for example, the School of Data Analysis from Yandex. But be aware that this is not the kind of course you walk into "off the street" and leave as a ready-made junior. The field is large, and it makes sense to study the discipline once the basics (mathematics, statistics) have already been covered, at least within a university program.

Yes, it will take quite some time. But the game is worth the candle, because a good Data Scientist is very promising - and very expensive. There is another point as well. Artificial intelligence is, on the one hand, no longer just an object of hype but a technology that has reached the productivity stage. On the other hand, AI is still developing, and that development requires a lot of resources, skills and money. For now this is major-league territory. I'll state the obvious: if you want to be at the forefront of the attack and drive progress with your own hands, aim for companies like Facebook or Amazon.

At the same time, the technology is already being used in a number of areas: banking, telecoms, giant industrial enterprises and retail. And they already need people who can support it. Gartner predicts that by 2020, 20% of all enterprises in developed countries will have dedicated employees to train the neural networks used in those companies. So there is still a little time left to learn on your own.


AI is now actively developing, and it is difficult to predict ten years in advance. Over the next two to three years, approaches based on neural networks and GPU computing will dominate. The leader in this area is Python with the Jupyter interactive environment and the numpy, scipy, and tensorflow libraries.

There are many online courses that provide a basic understanding of these technologies and the general principles of AI, for example Andrew Ng's course. As for learning this subject in Russia, the most effective route today is self-education or a local interest group (in Moscow, for example, I know of at least a couple of groups where people share experience and knowledge).


Today, the most rapidly progressing part of artificial intelligence is, perhaps, neural networks.
The study of neural networks and AI should begin with mastering two branches of mathematics: linear algebra and probability theory. This is the mandatory minimum, the unshakable pillars of artificial intelligence. Applicants who want to understand the basics of AI should, in my opinion, pay attention to faculties with a strong mathematics school when choosing a university.

The next step is to study the specific problems of the field. There is a huge amount of literature, both educational and specialized. Most publications on artificial intelligence and neural networks are written in English, but Russian-language materials are also published. Useful literature can be found, for example, in the open digital library arxiv.org.

If we talk about areas of activity, we can distinguish between training applied neural networks and developing entirely new kinds of neural networks. A striking example: there is a very popular specialty now called "data scientist" - developers who, as a rule, study and prepare particular data sets for training neural networks in specific application areas. To summarize, I would emphasize that each specialization requires its own path of preparation.


Before starting specialized courses, you need to study linear algebra and statistics. I would recommend starting your immersion in AI with the textbook "Machine Learning: The Science and Art of Building Algorithms That Extract Knowledge from Data" - a good primer for beginners. On Coursera, it is worth listening to the introductory lectures by K. Vorontsov (I emphasize that they require a good knowledge of linear algebra) and Stanford University's Machine Learning course taught by Andrew Ng, professor and head of Baidu AI Group / Google Brain.

The bulk of the code is written in Python, followed by R and Lua.

If we talk about educational institutions, it is better to enroll in courses at departments of applied mathematics and computer science; they have suitable educational programs. To test your abilities, you can take part in Kaggle competitions, where major global brands offer their problems as cases.


In any undertaking, it is good to get a theoretical foundation before starting projects. There are many places where you can earn a formal master's degree in this field or improve your qualifications. For example, Skoltech offers master's programs in "Computational Science and Engineering" and "Data Science," which include courses in machine learning and natural language processing. You can also mention the Institute of Intelligent Cybernetic Systems at the National Research Nuclear University MEPhI, the Faculty of Computational Mathematics and Cybernetics at Moscow State University, and the Department of Intelligent Systems at MIPT.

If you already have a formal education, there are a number of courses on various MOOC platforms. For example, edX.org offers artificial intelligence courses from Microsoft and Columbia University; the latter offers a micro-master's program at a reasonable price. I would especially note that you can usually get the knowledge itself for free - you pay only for the certificate, if you need it for your resume.

If you want to dive deep into the topic, a number of companies in Moscow offer week-long intensive courses with practical exercises, and even provide equipment for experiments (for example, newprolab.com); however, the price of such courses starts at several tens of thousands of rubles.

Among the companies that develop Artificial Intelligence, you probably know Yandex and Sberbank, but there are many others of different sizes. For example, this week the Ministry of Defense opened the ERA Military Innovation Technopolis in Anapa, one of the topics of which is the development of AI for military needs.


Before studying artificial intelligence, you need to settle a fundamental question: take the red pill or the blue one.
The red pill means becoming a developer and plunging into the cruel world of statistical methods, algorithms and constant grappling with the unknown. On the other hand, you don't have to rush straight down the "rabbit hole": you can become a manager and create AI as, say, a project manager. These are two fundamentally different paths.

The first path is great if you have already decided that you will write artificial intelligence algorithms yourself. Then you need to start with the most popular direction today: machine learning. You need to know the classical statistical methods of classification, clustering and regression. It will also be useful to get acquainted with the main metrics for assessing the quality of a solution and their properties... and with everything else that comes your way.

Only once that base has been mastered is it worth studying more specialized methods: decision trees and ensembles of them. At this stage, you need to dive deep into the basic methods of building and training models - the ones hidden behind the barely decent-sounding words bagging, boosting, stacking and blending.

It's also worth learning about methods for controlling yet another "-ing": overfitting.
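A common first check for overfitting is simply to compare performance on the training data with performance on held-out data; the dataset and model below are arbitrary choices for illustration.

```python
# Detecting overfitting: a model that memorizes the training set
# scores high on it but noticeably lower on data it has not seen.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

tree = DecisionTreeClassifier(max_depth=None, random_state=0).fit(X_train, y_train)
print("train accuracy:", tree.score(X_train, y_train))   # close to 1.0 - memorized
print("test accuracy:", tree.score(X_test, y_test))      # lower - a sign of overfitting
```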

And finally, the true Jedi level: highly specialized knowledge. Deep learning, for example, will require mastering the basic architectures and gradient descent algorithms. If you are interested in natural language processing problems, I recommend studying recurrent neural networks. And future creators of algorithms for processing images and video should take a good look at convolutional neural networks.
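As a taste of what "mastering gradient descent" means in practice, here is a bare-bones gradient descent loop in NumPy fitting a straight line; real deep-learning frameworks automate exactly this kind of parameter update for millions of parameters. The data and learning rate are made up for the sketch.

```python
# Gradient descent in miniature: repeatedly nudge the parameters
# in the direction that reduces the mean squared error.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=100)
y = 2.0 * x + 1.0 + rng.normal(0, 0.1, size=100)   # data from a noisy line

w, b = 0.0, 0.0
lr = 0.1                                            # learning rate
for _ in range(500):
    pred = w * x + b
    grad_w = np.mean(2 * (pred - y) * x)            # d(MSE)/dw
    grad_b = np.mean(2 * (pred - y))                # d(MSE)/db
    w -= lr * grad_w
    b -= lr * grad_b

print(round(w, 2), round(b, 2))                     # close to the true 2.0 and 1.0
```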

The last two structures mentioned are the building blocks of popular architectures today: adversarial networks (GANs), relational networks, and mesh networks. Therefore, it will be useful to study them, even if you do not plan to teach the computer to see or hear.

A completely different approach to studying AI - the "blue pill" - begins with finding yourself. Artificial intelligence gives rise to a host of tasks and entire professions: from AI project managers to data engineers able to prepare and clean data and build scalable, high-load, fault-tolerant systems.

So, with the "managerial" approach, you should first assess your abilities and background, and only then choose where and what to study. For example, even without a mathematical mind you can design AI interfaces and visualizations for smart algorithms. But be prepared: in five years, artificial intelligence will start trolling you and calling you a "humanities type."

The main ML methods are implemented as ready-made libraries that can be hooked up from different languages. The most popular languages in ML today are C++, Python and R.

There are many courses in both Russian and English, such as the Yandex School of Data Analysis and the SkillFactory and OTUS courses. But before investing time and money in specialized training, I think it is worth "getting into the topic": watch open lectures on YouTube from the DataFest conferences of recent years, and take free courses on Coursera and Habrahabr.