Artificial intelligence (AI) is the science of creating intelligent technologies and computer programs.
Artificial intelligence is closely tied to the problem of understanding human intelligence by computational means. It is still impossible to say for certain which computational methods deserve to be called intelligent: some mechanisms of intelligence are well understood, others are not, and today's programs often rely on methods that have no counterpart in humans.
Artificial intelligence is also a scientific discipline that studies how problems of human intellectual activity can be solved. An AI system is aimed at performing creative tasks in some domain, knowledge about which is stored in the system's knowledge base.
A program mechanism called the problem solver works with this knowledge, and the person then receives the result through an intelligent interface. The output of an artificial intelligence program is a re-creation of intellectual reasoning or an intelligent action.
One of the main properties of artificial intelligence is the ability to learn on its own. First of all, this means heuristic learning: the continuous training of the program, the formation of its own learning process and goals, and the analysis and awareness of its own learning.
The scientific direction studying artificial intelligence began to emerge a long time ago:
- philosophers pondered how the inner world of man can be known;
- psychologists studied human thinking;
- mathematicians developed formal methods of calculation.
Soon, the first computers were created, which made it possible to perform calculations faster than humans. Then scientists began to ask the question: where is the limit of the capabilities of computers and can they reach the human level?
Alan Turing, an English scientist and a pioneer of computing, opened his 1950 article "Computing Machinery and Intelligence" with the question "Can machines think?" and described a method for determining at what point a computer can be compared to a person. This method became known as the Turing test.
The essence of the method is that a human judge converses, through written questions and answers, with both a computer and another person, without knowing which is which. If the judge cannot reliably tell the machine's answers from the person's, the computer is considered to have passed the Turing test and, in that sense, to exhibit artificial intelligence.
Thus, if a computer exhibits behavior similar to human behavior in ordinary situations and is able to maintain a dialogue with a person, we can say that it shows artificial intelligence. Other suggested criteria of machine intelligence are the ability to create and the ability to feel.
There are many different approaches to studying and understanding artificial intelligence.
Symbolic approach
The symbolic approach was the first in the digital age of machines. After the creation of Lisp, a language for symbolic computation, its authors set out to implement intelligence with it. The symbolic approach works with loosely formalized representations. So far, only a person can perform intellectual work and tasks related to creativity; the work of computers in this direction is limited and, in practice, cannot proceed without human participation.
Symbolic computation made it possible to create rules for solving problems during the execution of a program. However, only the simplest tasks could be solved this way; whenever a more complex task appears, a person must step in again. Such systems therefore cannot be called intelligent, since they are unable to overcome emerging difficulties or improve already known methods of solution in order to tackle new problems.
Logical approach
The logical approach is based on modeling reasoning and using a logical programming language. For example, the Prolog programming language is based on a set of inference rules without rigid sequential actions to achieve a result.
Agent-based approach
The agent-based approach is based on methods that help an intelligent agent operate in its environment and achieve certain results. The computer perceives its environment and acts on it using the methods at its disposal.
Hybrid approach
The hybrid approach combines expert rules, which can be generated by neural networks, with rules produced by statistical learning.
Modeling reasoning
One direction in the study of artificial intelligence is the modeling of reasoning. It includes the creation of symbolic systems for posing problems and solving them. A problem must first be translated into mathematical form, even though, due to its complexity, it may not yet have a known solution algorithm. Reasoning modeling therefore covers theorem proving, decision making, planning, forecasting, and so on.
Natural language processing
Another important area of artificial intelligence is natural language processing, which analyzes and processes texts written in human language. The goal of this direction is the automatic acquisition of knowledge from natural language. The source of information can be text entered into the program or retrieved from the Internet.
Representation and use of knowledge
Knowledge engineering is the discipline of extracting knowledge from information, systematizing it, and then using it to solve various problems. With the help of dedicated knowledge bases, expert systems obtain the data they need to search for solutions to the tasks they are given.
Machine learning
One of the main requirements for artificial intelligence is the machine's ability to learn on its own, without human intervention. Machine learning includes pattern recognition tasks: character, text, and speech recognition. It also covers computer vision, which is closely related to robotics.
AI biological modeling
There is also a quasi-biological paradigm, otherwise known as biocomputing. This direction of artificial intelligence studies the construction of computers and technologies from living organisms and biological components, so-called biocomputers.
Robotics
The field of robotics is closely related to artificial intelligence. Robots need the properties of artificial intelligence to perform many different tasks: for example, to navigate and determine their location, examine objects, and plan their movements.
Artificial Intelligence Applications
Artificial intelligence is created to solve problems from various fields:
- Intelligent systems for education and recreation.
- Synthesis and recognition of text and human speech is used in customer service systems.
- Pattern recognition systems are used in security systems, optical and acoustic recognition, medical diagnostics, and target determination systems.
- In computer games, AI systems are used to calculate game strategies, simulate character behaviors, and find a path in space.
- Algorithmic trading and decision making systems.
- Financial systems for consulting and financial management.
- Industrial robots for complex routine tasks: robots for patient care and robotic consultants, as well as robots engaged in activities dangerous to human life, such as rescue robots and mining robots.
- Human resource management and recruiting, screening and ranking candidates, forecasting employee success.
- E-mail spam recognition and filtering systems.
These are not all areas where artificial intelligence can be applied.
Today the creation of artificial intelligence is one of humanity's most important tasks. However, there is still no single point of view on what can be considered intelligence and what cannot, and many questions remain controversial. Is it possible to create an artificial mind that will understand and solve people's problems, a mind not devoid of emotions, with abilities inherent in a living organism? Only time will tell.
Artificial intelligence (AI) encompasses more than just the technologies that make it possible to create intelligent machines (including computer programs). AI is also a branch of scientific thought.
Artificial intelligence - definition
Intelligence is the mental faculty of a person that provides the following abilities:
- adaptability;
- learning through the accumulation of experience and knowledge;
- the ability to apply knowledge and skills to manage the environment.
Intellect unites all human abilities to cognize reality: with its help a person thinks, remembers new information, perceives the environment, and so on.
Artificial intelligence is understood as one of the areas of information technology, which deals with the study and development of systems (machines), endowed with the capabilities of human intelligence: the ability to learn, logical reasoning, and so on.
At the moment, work on artificial intelligence is carried out by creating new programs and algorithms that solve problems in the same way as humans do.
Because the definition of AI evolves as the field develops, it is worth mentioning the AI effect: as soon as artificial intelligence makes some progress, critics step in to argue that these successes do not indicate the presence of real thinking in the machine.
Today, artificial intelligence is developing in two independent directions:
- neurocybernetics;
- logical approach.
The first area involves the study of neural networks and evolutionary computation from a biological point of view. The logical approach implies the development of systems that simulate high-level intellectual processes: thinking, speech, and so on.
The first work in the field of AI began in the middle of the last century. The pioneer of research in this direction was Alan Turing, although certain ideas had been expressed by philosophers and mathematicians as far back as the Middle Ages. In particular, at the beginning of the 20th century, a mechanical device capable of solving chess problems was presented.
But this trend really took shape by the middle of the last century. The emergence of works on AI was preceded by research on human nature, ways of knowing the world around us, the possibilities of the thought process and other areas. By that time, the first computers and algorithms appeared. That is, the foundation was created, on which a new direction of research was born.
In 1950, Alan Turing published an article in which he asked whether future machines could surpass humans in intelligence. It was this scientist who developed the procedure later named after him: the Turing test.
After the publication of the work of the English scientist, new research in the field of AI appeared. According to Turing, only a machine that cannot be distinguished from a person during communication can be recognized as thinking. Around the same time that the scientist's article appeared, a concept was born, called the Baby Machine. It provided for the progressive development of AI and the creation of machines, the thought processes of which are first formed at the child's level, and then gradually improve.
The term "artificial intelligence" was born later, coined in 1956 when a group of scientists gathered at Dartmouth College in the United States to discuss issues related to AI. After that meeting, the active development of machines with artificial intelligence capabilities began.
A special role in the creation of new AI technologies was played by military departments, which actively funded this area of research. Subsequently, work in the field of artificial intelligence began to attract large companies.
Modern life poses ever more complex tasks for researchers, so AI is now being developed under fundamentally different conditions from those of its birth. Globalization, the actions of cybercriminals in the digital sphere, the growth of the Internet, and other trends all pose complex problems whose solutions lie in the field of AI.
Despite the successes achieved in recent years (for example, the emergence of autonomous vehicles), the voices of skeptics have not subsided: they doubt that truly intelligent machines, rather than merely very capable programs, can be created. A number of critics also fear that the active development of AI will soon lead to machines completely replacing humans.
Research directions
Philosophers have not yet reached a consensus on the nature and status of human intelligence. As a result, scientific works on AI contain many different ideas about which tasks artificial intelligence solves, and there is likewise no common understanding of which machines can be considered intelligent.
Today, the development of artificial intelligence technologies goes in two directions:
- Top-down (semiotic). This direction develops systems and knowledge bases that imitate high-level mental processes such as speech, the expression of emotions, and thinking.
- Bottom-up (biological). This approach involves research on neural networks, through which models of intelligent behavior are built in terms of biological processes. Neurocomputers are created on the basis of this direction.
The Turing test determines whether artificial intelligence (a machine) can think in the same way as a person. In general terms, this approach provides for the creation of AI whose behavior does not differ from human actions in ordinary situations. In effect, the Turing test assumes that a machine is intelligent only if, when communicating with it, one cannot tell whether one is talking to a mechanism or to a living person.
Science fiction offers a different method of assessing the capabilities of AI: artificial intelligence becomes real once it can feel and create. However, this definition does not hold up in practice. Machines are already being built that respond to changes in their environment (cold, heat, and so on), yet they cannot feel the way a person does.
Symbolic approach
Success in solving problems is largely determined by the ability to respond flexibly to the situation. Machines, unlike humans, interpret the data they receive in a uniform way, so a person must still take part in solving problems. The machine performs operations according to written algorithms that exclude the use of multiple models of abstraction; flexibility can be achieved only by increasing the resources devoted to solving the problem.
The above disadvantages are characteristic of the symbolic approach used in the development of AI. However, this direction of development of artificial intelligence allows you to create new rules in the computation process. And the problems arising from the symbolic approach can be solved by logical methods.
Logical approach
This approach involves the creation of models that mimic the process of reasoning. It is based on the principles of logic.
This approach does not provide for the use of rigid algorithms that lead to a specific result.
Agent-based approach
It employs intelligent agents. This approach assumes that intelligence is the computational part through which goals are achieved. The machine plays the role of an intelligent agent: it perceives the environment through special sensors and interacts with it through mechanical parts.
The agent-based approach focuses on the development of algorithms and methods that allow machines to remain operational in various situations.
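The perceive-decide-act cycle described above can be sketched in a few lines of code. The thermostat agent below is purely an illustrative assumption, not taken from the article: its "sensor" is a dictionary lookup and its "actuator" changes one number, but the loop structure is the point.

```python
# A minimal sketch of the agent-based approach: a hypothetical thermostat
# agent that perceives its environment through a "sensor" and acts on it
# to keep a room near a goal temperature. All names are illustrative.

class ThermostatAgent:
    def __init__(self, target):
        self.target = target  # the goal the agent tries to achieve

    def perceive(self, environment):
        # Sensor reading: the current room temperature.
        return environment["temperature"]

    def decide(self, temperature):
        # Simple decision rule mapping a percept to an action.
        if temperature < self.target - 1:
            return "heat"
        if temperature > self.target + 1:
            return "cool"
        return "idle"

    def act(self, environment, action):
        # Actuator: the chosen action changes the environment.
        if action == "heat":
            environment["temperature"] += 1
        elif action == "cool":
            environment["temperature"] -= 1

def run(agent, environment, steps):
    # The perceive-decide-act loop at the heart of the agent-based approach.
    for _ in range(steps):
        percept = agent.perceive(environment)
        action = agent.decide(percept)
        agent.act(environment, action)
    return environment["temperature"]

room = {"temperature": 15}
final = run(ThermostatAgent(target=21), room, steps=10)
print(final)  # the agent heats until the temperature is inside its tolerance band
```

A real agent would replace the dictionary with hardware sensors and actuators, but the loop keeping the agent "operational in various situations" has the same shape.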
Hybrid approach
This approach combines neural and symbolic models, which together address the full range of problems associated with thinking and computation. For example, neural networks can generate the direction in which the machine moves, while statistical learning provides the basis for solving problems.
According to forecasts by Gartner experts, by the beginning of the 2020s almost all newly released software products will use artificial intelligence technologies. The experts also suggest that about 30% of investment in the digital sphere will go to AI.
According to Gartner analysts, artificial intelligence opens up new opportunities for the cooperation of people and machines. At the same time, the process of displacing a person by AI cannot be stopped and in the future it will accelerate.
Experts at PwC believe that by 2030 global gross domestic product will grow by about 14% thanks to the rapid introduction of new technologies. Approximately half of that increase will come from gains in the efficiency of production processes; the other half will be additional profit from the introduction of AI into products.
The United States will be the first to benefit from artificial intelligence, since it has created the best conditions for operating AI machines. Later it will be overtaken by China, which will extract the maximum profit by introducing such technologies into products and their production.
Experts at Salesforce say AI will increase small-business profitability by about $1.1 trillion, and that this will happen by 2021. This will be achieved partly by implementing AI-driven solutions in customer-communication systems, and partly by automating production processes to make them more efficient.
The introduction of new technologies will also create an additional 800 thousand jobs. Experts point out that this gain offsets the vacancies lost to process automation. Based on a survey of companies, analysts predict that spending on the automation of production processes will rise to about $46 billion by the early 2020s.
Work on AI is also underway in Russia. Over the past 10 years, the state has funded more than 1.3 thousand projects in this area, with most of the investment going to programs unrelated to commercial activity. This shows that the Russian business community is not yet interested in introducing artificial intelligence technologies.
In total, about 23 billion rubles have been invested in Russia for these purposes. This level of government subsidy is lower than the AI funding seen in other countries: in the United States, about $200 million is allocated every year.
In Russia, funds for AI development are allocated mainly from the state budget and are then used in the transport sector, the defense industry, and security-related projects. This indicates that Russia tends to invest in areas that promise a quick return on the invested funds.
The same study showed that Russia has accumulated a high potential for training specialists who can be involved in developing AI technologies: over the past 5 years, about 200 thousand people have completed training in AI-related fields.
AI technologies are developing in the following directions:
- solving problems that make it possible to bring AI capabilities closer to human ones and find ways to integrate them into everyday life;
- development of a full-fledged mind, through which the tasks facing humanity will be solved.
At the moment, researchers are focused on technologies that solve practical problems; so far, scientists have not come close to creating a full-fledged artificial intelligence.
Many companies are involved in developing AI technologies. Yandex has been using them in its search engine for several years, and since 2016 the Russian IT company has been researching neural networks, which are changing how search works. In particular, neural networks map the user's query to a vector representation that most fully reflects its meaning; in other words, the search is conducted not by keywords but by the essence of the information the person requests.
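The idea of matching a query to documents "by meaning" can be sketched with vector similarity. The tiny hand-made "embeddings" below are an illustrative assumption, not the output of any real model; in practice a trained neural network would produce the vectors.

```python
# A toy sketch of vector-based search: queries and documents are mapped
# to vectors, and matching is done by similarity of meaning rather than
# by shared words. The three-dimensional vectors here are invented.
import math

def cosine(a, b):
    # Cosine similarity: 1.0 means the vectors point the same way.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Pretend these vectors were produced by a trained neural network.
documents = {
    "how to bake bread":          [0.9, 0.1, 0.0],
    "sourdough starter recipe":   [0.7, 0.3, 0.2],
    "fixing a flat bicycle tire": [0.0, 0.1, 0.9],
}

# A query like "making a loaf at home" shares no words with the
# best document, but its (assumed) vector lies close to it.
query_vector = [0.85, 0.15, 0.05]

best = max(documents, key=lambda d: cosine(query_vector, documents[d]))
print(best)  # the bread document wins despite zero word overlap
```

This is the essential difference from keyword search: the ranking depends only on the geometry of the vectors, so a query and a document phrased in entirely different words can still match.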
In 2016, Yandex launched the Zen service, which analyzes user preferences.
ABBYY recently introduced its Compreno system, which makes it possible to understand written text in natural language. Other systems based on artificial intelligence technologies have also recently entered the market:
- Findo. The system is capable of recognizing human speech and searches for information in various documents and files, using complex queries.
- Gamalon. This company introduced a self-learning system.
- Watson. An IBM computer that uses a large number of algorithms to search for information.
- ViaVoice. Human speech recognition system.
Big commercial companies are not shying away from advances in artificial intelligence. Banks are actively introducing such technologies into their activities. With the help of AI-based systems, they conduct transactions on exchanges, manage property and perform other operations.
The defense industry, medicine, and other spheres are adopting object recognition technologies, while computer game developers use AI to create new products.
For the past several years, a group of American scientists has been working on the NEIL project, in which a computer learns to recognize what is shown in photographs. The researchers expect that in this way they will be able to create a system capable of learning on its own, without external intervention.
The company VisionLabs presented its LUNA platform, which can recognize faces in real time by picking them out of huge collections of images and videos. The technology is already used by large banks and retail chains: with LUNA, they can match people's preferences and offer them the right products and services.
The Russian company N-Tech Lab is working on similar technologies: its specialists are building a face recognition system based on neural networks. According to the latest data, this Russian development copes with its tasks better than a human does.
According to Stephen Hawking, the development of artificial intelligence technologies in the future will lead to the death of humanity. The scientist noted that people will gradually degrade due to the introduction of AI. And in the conditions of natural evolution, when a person must constantly struggle to survive, this process will inevitably lead to his death.
In Russia, the introduction of AI is viewed positively. Alexei Kudrin once said that the use of such technologies would reduce the cost of maintaining the state apparatus by about 0.3% of GDP. Dmitry Medvedev predicts the disappearance of a number of professions due to the introduction of AI, but stresses that these technologies will drive the rapid development of other industries.
According to experts from the World Economic Forum, by the beginning of the 2020s, about 7 million people in the world will lose jobs due to the automation of production. The introduction of AI is highly likely to cause a transformation of the economy and the disappearance of a number of professions related to data processing.
McKinsey experts state that automation of production will proceed most actively in Russia, China, and India: in these countries, up to 50% of workers may lose their jobs to AI in the near future, replaced by computerized systems and robots.
According to McKinsey, artificial intelligence will replace manual and information-processing professions: retail, hotel staff, and so on.
By the middle of this century, according to the company's experts, the number of jobs worldwide will fall by about 50%, with machines taking people's places and performing the same operations with equal or higher efficiency. The experts do not rule out that this forecast will come true even earlier.
Other analysts point out the harm that robots can do. For example, McKinsey experts draw attention to the fact that robots, unlike humans, do not pay taxes. As a result, due to a decrease in budget revenues, the state will not be able to maintain the infrastructure at the same level. Therefore, Bill Gates proposed to introduce a new tax on robotic technology.
AI technologies improve the efficiency of companies by reducing the number of mistakes. In addition, they can increase the speed of operations to a level that a person cannot achieve.
What is artificial intelligence? Undoubtedly, many have heard of cars that can control their movement without human assistance, speech recognition devices such as Apple's Siri, Amazon's Alexa, Google's Assistant and Microsoft's Cortana. But these are far from all the possibilities of artificial intelligence (AI).
AI was first "discovered" in the 1950s. It has seen ups and downs over the years, but at the present stage artificial intelligence is regarded as a key technology of the future. With advances in electronics and faster processors, more and more applications are using AI. Artificial intelligence is an unusual software technology that every engineer should become familiar with; in this article, we will try to describe it briefly.
Artificial intelligence defined
AI is a subfield of computer science concerned with making smarter use of computers and electronic components by mimicking the human brain. Intelligence is the ability to acquire knowledge and experience and apply it to solve problems. AI is especially useful for analyzing and interpreting data sets and extracting genuinely useful information from them; from that information comes understanding, which can be applied to make decisions or take action.
Research Areas
Artificial intelligence is a broad technology with many possible applications. Usually it is divided into sub-branches. Let's make a small overview of each of them:
- Solving general problems that do not have a specific algorithmic solution. Problems with uncertainty and ambiguity.
- Expert systems - software that contains a knowledge base of rules, facts, and data obtained from individual human experts. The knowledge base can be queried to solve problems, diagnose diseases, or provide advice.
- Natural Language Processing (NLP) - used for text analysis; voice recognition is also part of NLP.
- Computer vision - the analysis and understanding of visual information (photos, videos, and so on). An example is machine vision and face recognition. Used in "autonomous" vehicles and production lines.
- Robotics - making robots smarter, more adaptive, and “self-sufficient”.
- Games: AI plays games well. Computers have already been programmed to play and win at chess, poker, and Go.
- Machine learning is procedures that enable a computer to learn from input data and make sense of the results. Neural networks form the backbone of machine learning.
How artificial intelligence works
Conventional computers use algorithms to solve problems: a sequence of instructions leads step by step to a result. Traditional forms of artificial intelligence are instead based on knowledge bases and inference engines, which apply various mechanisms to the knowledge base through the user interface. Useful results have been obtained with the following methods:
- Search: Search algorithms use a database of information collected in graphs or trees. Search is the main method of artificial intelligence.
- Logic: deductive and inductive reasoning is used to determine whether a statement is true or false. This includes both propositional logic and predicate logic.
- Rules: rules are a series of "if" statements that can be chained to determine an outcome. Rule-based systems are called expert systems.
- Probability and statistics: some problems can be solved by applying standard mathematical probability theory and statistics.
- Lists: Some types of information can be saved in lists, which become searchable.
- Other forms of knowledge are schemas, frames, and scripts, which are structures that encapsulate different types of knowledge. Search methods look for answers to related queries.
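The rule-based "if" systems listed above can be sketched as a tiny forward-chaining engine: rules fire whenever their conditions match the known facts, adding new facts until nothing more can be concluded. The animal-identification facts and rules below are invented for illustration, not drawn from any real expert system.

```python
# A minimal forward-chaining rule engine in the spirit of the
# rule-based expert systems described above.

rules = [
    # (set of conditions that must all be known facts, conclusion to add)
    ({"has feathers"}, "is a bird"),
    ({"is a bird", "can swim", "cannot fly"}, "is a penguin"),
]

def forward_chain(facts, rules):
    facts = set(facts)
    changed = True
    while changed:  # keep firing rules until no rule adds anything new
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)  # "if" part satisfied: assert the conclusion
                changed = True
    return facts

derived = forward_chain({"has feathers", "can swim", "cannot fly"}, rules)
print("is a penguin" in derived)  # the two rules chain to reach this conclusion
```

Real expert systems add conflict resolution, certainty factors, and far larger rule bases, but the core loop of matching "if" conditions against facts is the same.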
Traditional or legacy AI techniques such as search, logic, probability, and rules are considered the first wave of artificial intelligence. These methods are still used and remain well suited to knowledge representation and reasoning, especially over a narrow range of tasks. What the first wave of AI lacks are the human traits of learning and abstraction; these qualities are now available in the second wave of artificial intelligence, thanks to neural networks and machine learning.
Neural networks
Most AI research and development today is based on neural networks, or artificial neural networks (ANNs). These networks are made up of artificial neurons that mimic the neurons of the human brain responsible for our thinking and learning. Each biological neuron is a node in a dense web, connected to many other neurons through synapses; an ANN simulates this network.
Each node has several weighted inputs, an output, and a threshold setting (see the figure above). Such nodes are usually implemented in software, although hardware emulation is also possible. A typical network consists of three layers: an input layer, a hidden (processing or training) layer, and an output layer.
Some mechanisms use backpropagation to provide feedback that changes the input weights of some nodes as new information is received.
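A forward pass through such a network fits in a few lines. The sketch below implements exactly the node just described (weighted inputs summed with a bias and squashed by an activation); the specific weights and layer sizes are arbitrary illustrative values, and backpropagation itself is omitted.

```python
# A sketch of the artificial neuron described above: weighted inputs are
# summed with a bias (threshold) and passed through a sigmoid activation.
import math

def neuron(inputs, weights, bias):
    # Weighted sum plus bias, squashed into the range (0, 1).
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))

def layer(inputs, weight_matrix, biases):
    # One fully connected layer: every node sees all of the inputs.
    return [neuron(inputs, w, b) for w, b in zip(weight_matrix, biases)]

# Forward pass through a 2-input -> 2-hidden -> 1-output network.
# All weights here are arbitrary; training would adjust them.
x = [1.0, 0.5]
hidden = layer(x, [[0.4, -0.6], [0.3, 0.8]], [0.1, -0.2])
output = layer(hidden, [[1.2, -0.7]], [0.05])
print(output[0])  # a single value between 0 and 1
```

Backpropagation, mentioned above, would run this pass, compare the output with a target value, and push corrections backwards through the layers to adjust each weight.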
Machine Learning and Deep Learning
Machine learning is a method of teaching a computer to recognize patterns. The computer or device "learns" from examples and then runs special programs to compare new input with the trained values. Typically, training software requires huge amounts of data. Machine learning programs are designed to learn automatically as they gain more knowledge and experience from new material.
Neural networks are commonly used for machine learning, although other algorithms can be used as well. The software can then modify itself to improve recognition based on new input. Some machine learning systems can now recognize patterns on their own, without explicit training, and then modify themselves to improve further.
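The phrase "the software can modify itself" can be made concrete with the classic perceptron: whenever it misclassifies a training example, it nudges its own weights toward values that would have classified it correctly. The tiny AND-gate dataset is an illustrative assumption; real systems learn from millions of examples.

```python
# A minimal sketch of learning from examples: a perceptron adjusts its
# weights whenever it misclassifies a training sample.

def predict(weights, bias, x):
    # Fire (output 1) if the weighted sum exceeds the threshold.
    return 1 if sum(w * xi for w, xi in zip(weights, x)) + bias > 0 else 0

def train(samples, epochs=20, lr=0.1):
    weights, bias = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, target in samples:
            error = target - predict(weights, bias, x)
            # The program "modifies itself": each mistake moves the
            # weights toward values that reproduce the training labels.
            weights = [w + lr * error * xi for w, xi in zip(weights, x)]
            bias += lr * error
    return weights, bias

# Teach the AND function purely from labeled examples.
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = train(data)
print([predict(w, b, x) for x, _ in data])  # -> [0, 0, 0, 1]
```

Nothing in the code spells out the AND rule; the behavior is encoded entirely in the learned weights, which is the essential idea behind the far larger networks used in deep learning.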
Deep learning is an advanced form of machine learning. It uses neural networks called deep neural networks (DNNs), which include additional hidden layers of computation to further enhance their capabilities. Massive amounts of training data are required. Programmers can improve performance by tuning the interconnection weights, and DNNs also require matrix processing. Note, however, that DNNs use statistical weights, so the results in, say, visual recognition may not be 100% accurate. In addition, debugging such systems is painstaking work.
Machine learning and deep learning are widely used to analyze big data sets, as well as in computer vision and speech recognition. They can also be applied in other areas such as medicine, law and finance.
Artificial intelligence software
Almost any programming language can be used for AI programming, but some have particular advantages. Languages designed specifically for AI include LISP and Prolog: LISP, one of the oldest high-level languages, processes lists, while Prolog is based on logic. C++ and Python are popular today. There is also special software for developing expert systems.
Several large AI users provide development platforms, including Amazon, Baidu (China), Google, IBM, and Microsoft. These companies offer pretrained systems as a starting point for some common applications such as voice recognition. Processor vendors like Nvidia and AMD also offer some support.
Artificial intelligence hardware
Running artificial intelligence software on a computer usually requires high speed and a lot of memory. However, some simple applications can run on an 8-bit processor. Some of the modern processors are more than adequate, and multiple parallel processors may be ideal for certain applications. In addition, dedicated processors have been developed for some applications.
Graphics processing units (GPUs) are an example of an architecture and instruction set tailored to a particular use to optimize performance. Examples include Nvidia's dedicated processors for self-driving cars and AMD GPUs. Google has developed its own processors to optimize its search engines. Intel and KnuPath also offer software support for their advanced processors. In some cases, special logic in an ASIC or FPGA can implement a specific application.
Activity and current status
Artificial intelligence was once considered an exotic piece of software designed for special needs. The requirement for high-speed computers with large amounts of memory limited its use. Today, thanks to super-fast processors, multi-core processors, and cheap memory, AI has become far more widespread. The Google search engine that we all use daily is based on artificial intelligence.
To date, the focus is undoubtedly on neural networks and deep machine learning. While voice recognition and self-driving cars remain in the spotlight, other key applications are emerging such as facial recognition, self-driving navigation, robotics, medical diagnostics, and finance. Advanced military applications (such as autonomous weapons) are also in development.
The future of AI looks promising. According to Orbis Research, the global artificial intelligence market is expected to grow through 2022 at a compound annual growth rate of over 35%. The International Data Corporation (IDC) is also upbeat, projecting that AI spending will rise to $47 billion in 2020, up from $8 billion in 2016.
Many people have a natural question: will artificial intelligence replace people in some professions, and if so, in which ones? The answer is: probably only in a few. Artificial intelligence is more likely to help certain professions by increasing productivity, efficiency, and the speed of decision making. Some industrial jobs will still be lost as robotics gains momentum, but replacing humans with machines will also create new jobs in servicing those machines.
Another question many people ask is whether artificial intelligence can be dangerous to humanity. AI is smart, but not that smart. Its main purpose will be data analysis, problem solving, and decision making based on available information and distilled knowledge. People still dominate, especially when it comes to innovation and creativity. However, it is difficult to predict the future. At least at this stage of development there are no super-smart robots. Not yet...
It is used almost everywhere: from high technology and complex mathematical calculations to medicine, the automotive industry, and even smartphones. We use the technologies underlying modern AI every day, sometimes without even thinking about it. But what is artificial intelligence? How does it work? And is it dangerous?
AI will be everywhere soon!
First, let's define the terminology. If you imagine artificial intelligence as something capable of thinking independently, making decisions, and generally showing signs of consciousness, then we hasten to disappoint you. Almost all systems existing today do not even come close to this definition of AI. And those systems that show signs of such activity, in fact, still operate within the framework of predetermined algorithms.
Neural networks have been around since the 1950s (at least as concepts). But until recently they saw little development, because creating them required huge amounts of data and computing power. In the last few years all of this has become available, so neural networks have come to the fore. It is important to understand that the technology for their full-fledged emergence simply did not exist before, just as it is still insufficient today to take them to the next level.
What is deep learning and neural networks used for?
There are several areas where these two technologies have helped make notable progress. Moreover, we use some of them every day in our life and do not even think about what is behind them.
- Computer vision: the ability of software to understand the content of images and videos. This is one area where deep learning has made great progress. For example, deep learning image processing algorithms can detect various types of cancer, lung disease, heart disease, and so on, and do it faster and more efficiently than doctors. Deep learning is also ingrained in many of the applications you use every day: Apple Face ID and Google Photos use it for facial recognition and image enhancement, and Facebook uses it to automatically tag people in uploaded photos. Computer vision also helps companies automatically identify and block questionable content such as violence and nudity. Finally, deep learning plays a very important role in enabling self-driving cars to understand their surroundings.
- Voice and speech recognition: when you speak a command to your Google Assistant, deep learning algorithms transform your speech into text. Several online applications use deep learning to transcribe audio and video files. Even when you “Shazam” a song, neural networks and deep machine learning algorithms come into play.
- Internet search: when you look for something in a search engine, companies have begun connecting neural network algorithms to their engines so that your query is interpreted more precisely and the results are as accurate as possible. The performance of Google's search engine increased several times over after the system switched to deep machine learning and neural networks.
The limits of deep learning and neural networks
Despite all their advantages, deep learning and neural networks also have some disadvantages.
- Data Dependency: In general, deep learning algorithms require huge amounts of training data to accurately perform their tasks. Unfortunately, to solve many problems, there is not enough high-quality training data to create working models.
- Unpredictability: Neural networks evolve in strange ways. Sometimes everything goes as planned; sometimes, even if the neural network does its job well, even its creators struggle to understand how the algorithms work. This lack of predictability makes it extremely difficult to find and correct errors in neural network algorithms.
- Algorithmic bias: Deep learning algorithms are just as good as the data they are trained on. The problem is that training data often contains hidden or obvious errors or flaws, and algorithms inherit them. For example, a facial recognition algorithm trained primarily on photographs of white people will work less accurately on people with a different skin color.
- Lack of generalization: Deep learning algorithms are good at performing targeted tasks but poor at generalizing their knowledge. Unlike a human, a deep learning model trained on one game will not be able to play another similar game: say, WarCraft. In addition, deep learning handles data that deviates from its training examples poorly.
The future of deep learning, neural networks and AI
It's clear that the work on deep learning and neural networks is far from complete. Various efforts are being made to improve deep learning algorithms. Deep Learning is a cutting-edge technique in artificial intelligence. It has become more and more popular in the past few years due to the abundance of data and the increase in processing power. This is the core technology behind many of the applications we use every day.
But will consciousness ever be born on the basis of this technology? Real artificial life? Some of the scientists believe that at the moment when the number of connections between the components of artificial neural networks approaches the same indicator that exists in the human brain between our neurons, something like this can happen. However, this claim is highly questionable. For real AI to emerge, we need to rethink the way we build AI systems. All that is now is only applied programs for a strictly limited range of tasks. As much as we would like to believe that the future has already come ...
Artificial intelligence is the ability of a digital computer or computer-controlled robot to perform tasks commonly associated with intelligent beings. The term is often applied to projects developing systems endowed with intellectual processes characteristic of humans, such as the ability to reason, generalize, or learn from past experience. In addition, the definition of AI (artificial intelligence) comes down to a description of a set of related technologies and processes, such as machine learning, virtual agents, and expert systems. In simple terms, AI is a crude mapping of the neurons in the brain. Signals are transmitted from neuron to neuron and finally output as a numerical, categorical, or generative result. This can be illustrated with the following example: if the system takes a picture of a cat and is trained to recognize whether it is a cat or not, the first layer may identify the general gradients that define the cat's overall shape. The next layer may identify larger features such as ears and mouth, and the third layer smaller ones (such as whiskers). Finally, based on this information, the program outputs "yes" or "no" to tell whether it is a cat. The programmer does not need to "tell" the neurons that these are the features they should look for; the AI learned them on its own by training on many images (with and without cats).
What is artificial intelligence?
Description of the artificial neuron
An artificial neuron is a mathematical function conceived as a model of biological neurons in a neural network. Artificial neurons are the elementary units of artificial neural networks. An artificial neuron receives one or more inputs and sums them to produce an output, or activation, representing the neuron's action potential, which is transmitted along its axon. Typically, each input is weighted separately, and the sum is passed through a non-linear function known as an activation function or transfer function.
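This description maps directly to a few lines of Python. The weights, bias, and inputs below are arbitrary; the sigmoid is one common choice of activation function.

```python
import math

def artificial_neuron(inputs, weights, bias):
    """Weighted sum of inputs plus bias, passed through a sigmoid activation."""
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-z))  # activation ("transfer") function

print(artificial_neuron([1.0, 0.5], [0.4, -0.2], 0.1))  # a value in (0, 1)
```

With zero inputs or zero weights the sigmoid returns exactly 0.5, its midpoint, which is a quick sanity check on the formula.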
When did AI research start?
In 1935 the British researcher A. M. Turing described an abstract computing machine consisting of a limitless memory and a scanner that moves back and forth through the memory, symbol by symbol, reading what it finds and writing further symbols. The scanner's actions are dictated by a program of instructions, which is also stored in memory as symbols. The earliest successful AI program was written in 1951 by Christopher Strachey. By 1952 the program could play checkers with a human, surprising everyone with its ability to predict moves. In 1953 Turing published a classic early article on chess programming.
The difference between artificial intelligence and natural
Intelligence can be defined as the general mental capacity for reasoning, problem solving, and learning. By its general nature, intelligence integrates cognitive functions such as perception, attention, memory, language, and planning. Natural intelligence is distinguished by a conscious attitude toward the world. Human thinking is always emotionally colored and cannot be separated from corporeality. In addition, a person is a social being, so society always influences thinking. AI has nothing to do with the emotional sphere and is not socially oriented.
How do human and computer intelligence compare?
It is possible to compare human thinking with artificial intelligence based on several general parameters of the organization of the brain and machine. The activity of a computer, like the brain, includes four stages: encoding, storing, analyzing data and issuing a result. In addition, the human brain and AI can self-learn depending on data obtained from the environment. Also, the human brain and machine intelligence solve problems (or tasks) using certain algorithms.
Do computer programs have an IQ?
No. The IQ indicator is tied to the development of a person's intelligence with age. AI exceeds certain human abilities in some ways (for example, it can hold a huge quantity of numbers in memory), but this has nothing to do with IQ.
What is a Turing test?
Alan Turing developed an empirical test that shows whether a program is capable of capturing all the nuances of human behavior to such an extent that a person cannot determine whether he is communicating with an AI or with a live interlocutor. Turing suggested that an outside observer evaluate a conversation between a person and a machine that answers questions. The judge does not see who is answering but knows that one of the interlocutors is an AI. The conversation is limited to a text channel (computer keyboard and screen), so the result does not depend on the machine's ability to render words as spoken speech. If the program manages to fool the person, it is considered to have passed the test.
Symbolic approach
The symbolic approach to AI is the collection of all methods of artificial intelligence research based on high-level symbolic (human-readable) representations of tasks, logic, and search. The symbolic approach was widely used in AI research from the 1950s to the 1980s. One popular form of the symbolic approach is expert systems, which use a combination of specific production rules. Production rules link symbols into logical relationships similar to an If-Then statement. The expert system processes the rules to draw conclusions and to determine what additional information it needs, that is, what questions to ask, using human-readable symbols.
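A minimal forward-chaining sketch of such If-Then production rules in Python. The medical facts and rules are invented purely for illustration; real expert systems hold thousands of rules elicited from domain experts.

```python
# Forward chaining over If-Then production rules: each rule links known
# facts (symbols) to a new conclusion, as in a classic expert system.
rules = [
    ({"has_fever", "has_cough"}, "possible_flu"),
    ({"possible_flu", "short_of_breath"}, "see_doctor"),
]

def infer(facts, rules):
    """Fire rules until no new conclusions can be added to the fact set."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(infer({"has_fever", "has_cough", "short_of_breath"}, rules))
```

Note how the second rule fires only after the first has added "possible_flu": conclusions chain together, which is exactly how an expert system derives what additional questions to ask.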
Logical approach
The term "logical approach" refers to an appeal to logic: thinking and solving problems through logical steps. Logicians back in the 19th century developed precise notations for all kinds of objects in the world and the relationships between them. By 1965, programs existed that could, in principle, solve any solvable problem stated in logical notation (the peak of this approach's popularity came in the late 1950s through the 1970s). Supporters of the logical approach hoped to build intelligent systems on such programs (in particular, ones written in the Prolog language). However, this approach has two limitations. First, it is not easy to take informal knowledge and express it in the formal terms required for AI processing. Second, there is a big difference between solving a problem in theory and solving it in practice. Even problems with a few hundred facts can exhaust the computational resources of any computer if it has no guidance as to which reasoning steps to try first.
Agent-based approach
An agent is something that acts (from Latin agere, “to do”). Of course, all computer programs do something, but computer agents are expected to do more: work autonomously, perceive environmental signals (using special sensors), adapt to change, create goals and fulfill them. A rational agent is one who acts to achieve the best expected result.
Hybrid approach
It is assumed that this approach, which became popular in the late 80s, works most efficiently, as it is a combination of symbolic and neural models. The hybrid approach increases the cognitive and computational capabilities of the machine.
Artificial intelligence technology market
The market is expected to grow to $ 190.61 billion by 2025, at an annual growth rate of 36.62%. Market growth is driven by factors such as the adoption of cloud applications and services, the emergence of big data sets and the strong demand for intelligent virtual assistants. However, there are still few experts developing and implementing AI technologies, and this is holding back the market growth. AI systems need integration and technical support for maintenance.
AI processors
Modern AI tasks require powerful processors that can process huge amounts of data. Processors must have access to large amounts of memory, and the device also needs high-speed data links.
In Russia
At the end of 2018, Russia launched a series of Elbrus-804 servers showing high performance. Each of the computers is equipped with four eight-core processors. With the help of these devices, you can build computing clusters, they allow you to work with applications and databases.
World market
The market is driven and led by two corporations, Intel and AMD, the makers of the most powerful processors. Intel has traditionally focused on making machines with higher clock speeds, while AMD focuses on steadily increasing the number of cores and providing multi-threaded performance.
National Development Concept
Three dozen countries have already approved national strategies for the development of AI. In October 2019, the draft National Strategy for the Development of AI is to be adopted in Russia. It is assumed that a legal regime will be introduced in Moscow to facilitate the development and implementation of AI technologies.
AI Research
The questions of what artificial intelligence is and how it works have concerned scientists from different countries for more than a decade. The US state budget spends $200 million annually on research. In Russia, over 10 years, from 2007 to 2017, about 23 billion rubles were allocated. AI research support sections will be an important part of the national strategy framework. Soon, new research centers will open in Russia, and the development of innovative software for AI will continue.
AI standardization
The rules and regulations in the field of AI in Russia are under constant revision. It is expected that in late 2019 or early 2020 national standards, now being developed by market leaders, will be approved. In parallel, the National Standardization Plan for 2020 and beyond is being formed. Internationally there is the standard “Artificial Intelligence. Concept and Terminology”, and in 2019 experts began developing its Russified version. The document is to be approved in 2021.
Impact of artificial intelligence
The introduction of AI is inextricably linked with scientific and technological progress, and its scope of application expands every year. We encounter it daily: a large online retail chain recommends a product to us, or, as soon as we open a computer, we see an advertisement for a movie we had just wanted to watch. These recommendations are based on algorithms that analyze what the consumer has bought or viewed. Artificial intelligence is behind these algorithms.
Is there a risk to the development of human civilization?
Elon Musk believes that the development of AI can threaten humanity and that the results could be worse than the use of nuclear weapons. The British scientist Stephen Hawking feared that humans could create a superintelligent artificial intelligence capable of harming them.
Economy and business
The penetration of AI technology into all spheres of the economy will increase the volume of the global market for services and goods by $ 15.7 trillion by 2030. The United States and China are still leaders in terms of all kinds of AI projects. Developed countries - Germany, Japan, Canada, Singapore - are also striving to realize all the possibilities. Many countries whose economies are growing at a moderate pace, such as Italy, India, Malaysia, are developing strengths in specific areas of AI applications.
To the labor market
The global impact of AI on the labor market will follow two scenarios. First, the proliferation of certain technologies will lead to the layoff of a large number of people, since computers will take over many tasks. Second, as technological progress develops, AI specialists will be in great demand in many industries.
AI bias
AI system bias is likely to become an increasingly common problem as artificial intelligence moves out of the labs and into the real world. Researchers fear that without proper training in assessing data and identifying its potential for bias, vulnerable groups in society could be harmed or denied services. So far, researchers have no data on whether systems built on machine learning will threaten humanity.
Applications
Artificial intelligence and its applications are undergoing a transformation. The term Weak AI ("weak AI") is used for the implementation of narrow tasks in medical diagnostics, electronic trading platforms, and robot control. Strong AI ("strong AI"), by contrast, researchers define as intelligence set to general tasks of the kind posed to a human.
Defense and military use
By 2025, global sales of related services, software, and hardware will rise to $18.82 billion, and the market will grow by 14.75% annually. AI is used for data aggregation, bioinformatics, troop training, and the defense sector.
In education
Many schools include AI introductory lessons in computer science curricula, and universities make extensive use of big data technologies. Some programs monitor student behavior, grade tests and essays, recognize spelling mistakes, and provide suggestions for corrections.
There are also online courses on artificial intelligence. For example, at the educational portal.
In business and trade
Over the next five years, leading retailers will have mobile apps that work with digital assistants such as Siri to make shopping easier. AI allows you to make huge amounts of money online. One example is Amazon, which is constantly analyzing consumer behavior and improving algorithms.
In the electric power industry
AI helps to predict the generation and demand for energy resources, reduce losses, and prevent resource theft. In the power industry, the use of AI to analyze statistical data helps to select the most profitable supplier or automate customer service.
In the production area
According to a McKinsey survey of 1,300 CEOs, 20% of businesses are already using AI. Recently, Mosselprom introduced AI into its packaging workshop, using AI's ability to recognize images. A camera records all of the employee's actions by scanning the barcode on their clothing and sends the data to a computer. The number of operations performed directly affects the employee's pay.
In brewing
Carlsberg uses machine learning to select yeast and expand its assortment. The technology is implemented on the basis of a digital cloud platform.
In the banking sector
The need for reliable data processing, the development of mobile technologies, the availability of information, and the proliferation of open-source software make AI a technology in demand in the banking sector. More and more banks are raising borrowed funds with the help of mobile application development companies. New technologies are improving customer service, and analysts predict that within five years AI in banks will make most decisions on its own.
By transport
The development of AI technologies is a driver of the transport industry. Road condition monitoring, pedestrian or object detection in the wrong places, autonomous driving, cloud services in the automotive industry are just a few examples of AI applications in transport.
In logistics
AI capabilities enable companies to more efficiently predict demand and build supply chains at minimal cost. AI helps to reduce the number of used vehicles required for transportation, optimize delivery times, and reduce the operating costs of transport and storage facilities.
In the market for luxury goods and services
Luxury brands have also turned to digital technology to analyze customer needs. One of the challenges developers face in this segment is managing and influencing customer emotions. Dior is already adapting AI to manage customer-brand interactions using chatbots. In the future, luxury brands will compete on the level of personalization they can achieve with AI, and that level will be critical.
In public administration
The state apparatus of many countries is not yet ready for the challenges that are hidden in AI technologies. Experts predict that many of the existing government structures and processes that have evolved over the past several centuries are likely to become irrelevant in the near future.
In forensics
AI approaches are used to identify criminals in public places. In some countries, such as Holland, the police use AI to investigate complex crimes. Digital forensics is an emerging science that requires mining huge volumes of highly complex data sets.
In the judicial system
Developments in the field of artificial intelligence will help radically change the judicial system, making it fairer and free from corruption. China was one of the first countries to introduce AI into its judicial system. It can be assumed that over time robotic judges will be able to operate on big data from public service repositories. Machine intelligence analyzes a huge amount of data and does not experience emotions the way a human judge does. AI can have a huge impact on information processing and statistics collection, and it can also predict possible violations based on data analysis.
In sports
The use of AI in sports has become commonplace in recent years. Sports teams (baseball, soccer, etc.) analyze individual player performance data by considering different factors in matchmaking. AI can predict the future potential of players by analyzing game technique, physical condition and other data, as well as estimate their market value.
In healthcare and medicine
This area of application is developing rapidly. AI is used in disease diagnosis, clinical research, drug development, and health insurance. In addition, there is now a boom in investment in numerous medical applications and devices.
Analysis of citizens' behavior
Monitoring the behavior of citizens is widely used in the field of security, including tracking behavior on websites (in social networks) and in instant messengers. For example, in 2018 Chinese scientists managed to identify 20 thousand potential suicides and provide them with psychological assistance. In March 2018, Vladimir Putin ordered state bodies to step up the fight against the negative influence of destructive movements on social networks.
In the development of culture
AI algorithms are starting to generate works of art that are difficult to distinguish from those created by humans. AI offers creative people a variety of tools to make their visions come true. Right now, the understanding of the role of the artist in a broad sense is changing, since AI provides a lot of new methods, but also poses many new questions for humanity.
Painting
Art has long been considered the exclusive sphere of human creativity. But it turns out that machines can do a lot more creatively than humans can imagine. In October 2018, Christie’s sold the first AI painting for $ 432,500. A generative adversarial network algorithm was used that analyzed 15,000 portraits created between the 15th and 20th centuries.
Music
Several music programs have been developed that use AI to create music. As in other areas, AI here also mimics a mental task. A notable feature is the ability of the AI algorithm to learn from the information it receives, for example computer-assisted technology that can listen to and follow a human performer. AI also drives so-called interactive composition technology, in which a computer composes music in response to a live musician's performance. In early 2019, Warner Music signed the first-ever contract with an algorithmic performer, Endel. Under the terms of the contract, the Endel neural network will release 20 unique albums during the year.
Photography
AI is rapidly changing the way we think about photography. Within just a couple of years, most advances in this area will be driven by AI rather than by optics or sensors, as they used to be. For the first time, advances in photography will not be tied to physics and will create an entirely new way of taking pictures. Even now, neural networks recognize the slightest changes in face modeling in photo editors.
Video: face swap
In 2015, Facebook began testing DeepFace technology on the site. In 2017, Reddit user DeepFakes came up with an algorithm to create realistic face swap videos using neural networks and machine learning.
Media and literature
In 2016, Google's AI, after analyzing 11,000 unpublished books, began writing its first literary works. In 2017, researchers at Facebook AI Research came up with a neural network system that can write poetry on any topic. In November 2015, the Russian company Yandex launched a direction for automatic text generation.
Go games, poker, chess
In 2016, an AI beat a human at Go (a game with more than 10^100 possible variations). In chess, a supercomputer defeated the human player thanks to its ability to store in memory the moves people have ever played and to calculate new ones ten steps ahead. Bots now play poker, although it was once thought almost impossible to teach a computer this card game. Every year, developers improve the algorithms further.
Face recognition
Face recognition technology is used for both photos and video streams. Neural networks build a vector, or "digital", template of the face, and these templates are then compared within the system. The system finds anchor points on the face that determine its individual characteristics. The algorithm for calculating the characteristics differs from system to system and is the developers' main secret.
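Template comparison is often done with a similarity measure such as cosine similarity. Here is a sketch with invented low-dimensional "templates"; real systems use vectors of hundreds of dimensions produced by proprietary feature extractors.

```python
import math

def cosine_similarity(a, b):
    """Compare two face template vectors; 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Hypothetical 4-dimensional templates (invented for illustration)
person_a = [0.9, 0.1, 0.4, 0.2]
person_a_again = [0.85, 0.15, 0.42, 0.18]  # same face, new photo
person_b = [0.1, 0.9, 0.2, 0.7]

print(cosine_similarity(person_a, person_a_again))  # high: likely same person
print(cosine_similarity(person_a, person_b))        # low: different people
```

A system would declare a match when the similarity exceeds some tuned threshold; choosing that threshold trades off false accepts against false rejects.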
For the further development and application of AI, it is first of all people who must be trained
Sergey Shirkin
Dean of the Faculty of Artificial Intelligence
Artificial intelligence technologies in the form in which they are applied now have existed for about 5-10 years, but applying them, oddly enough, requires a large number of people. Accordingly, the main costs in the field of artificial intelligence are the costs of specialists. Moreover, almost all basic artificial intelligence technologies (libraries, frameworks, algorithms) are free and publicly available. At one time, finding specialists in machine learning was almost impossible, but now, largely thanks to the development of MOOCs (massive open online courses), there are more of them. Higher educational institutions also supply specialists, though they often have to complete their studies with online courses.
Now artificial intelligence may well recognize that a person is planning to change jobs, and can offer him appropriate online courses, many of which can be taken with only a smartphone. And this means that you can study even while on the road - for example, on the way to work. One of the first such projects was the online resource Coursera, but later many similar educational projects appeared, each of which occupies a specific niche in online education.
You need to understand that AI, like any program, is primarily a code, that is, a text formatted in a certain way. This code needs development, maintenance and improvement. Unfortunately, this does not happen by itself, the code cannot "come to life" without a programmer. Therefore, all fears about the omnipotence of AI are groundless. Programs are created for strictly defined tasks, they do not have feelings and aspirations like a person, they do not perform actions that the programmer has not laid in them.
We can say that today AI possesses only individual human skills, although it can surpass the average person in the speed of applying them. However, developing each such skill takes many hours of effort by thousands of programmers. The greatest thing AI is capable of so far is automating certain physical and mental operations, thereby freeing people from routine.
Does the use of AI carry any risks? If anything, the risk today is failing to see the possibilities of artificial intelligence technologies. Many companies are aware of this and try to develop in several directions at once, hoping that some of them will take off. The example of online stores is telling: only those that recognized the need for AI before it became a trend have stayed afloat, even though it was tempting to "save money" and not hire the necessary mathematician-programmers.
Prospects for the Development of Artificial Intelligence
Computers can now do many things that previously only humans could: play chess, recognize letters of the alphabet, check spelling and grammar, recognize faces, take dictation, speak, win game shows, and more. But skeptics persist: every time another human ability is automated, they say it is just another computer program, not an example of self-learning AI. Meanwhile, AI technologies are only beginning to find widespread use and have enormous growth potential in every area. Over time, humanity will build ever more powerful computers, which in turn will further advance the development of AI.
Is the goal of AI to put the human mind into a computer?
There is only a rough understanding of how the human brain works, so not all properties of the mind can yet be imitated with AI.
Will AI be able to reach the human level of intelligence?
Scientists are working to let AI solve ever more diverse problems, but it is too early to speak of reaching the level of human intelligence, since thinking is not limited to algorithms alone.
When will artificial intelligence reach the level of human thinking?
At humanity's current stage of accumulating and analyzing information, AI is far from human thinking. In the future, however, breakthrough ideas may trigger a sharp leap in the development of AI.
Can a computer become an intelligent machine?
A computer system is part of any complex machine, so one can speak only of intelligent computer systems. The computer by itself has no intelligence.
Is there a connection between speed and the development of intelligence in computers?
No. Speed accounts for only some of the properties of intelligence; the speed of processing and analyzing information is not, by itself, enough for intelligence to appear.
Is it possible to create a children's machine that can develop through reading and self-learning?
Researchers have discussed this for almost a century, and the idea will probably come true someday. Today's AI programs, however, do not process or use as much information as children can.
How are computability and computational complexity related to AI?
Computational complexity theory classifies computational problems according to their inherent difficulty and relates these classes to each other. A computational problem is a problem that can be solved by a computer - that is, by the mechanical application of mathematical steps, i.e. an algorithm.
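The distinction between problems that are merely solvable and problems that are solvable *efficiently* can be illustrated with two small examples. Both functions below are hypothetical sketches written for this article: membership testing takes time proportional to the input size, while a brute-force search for subset sum must, in the worst case, examine a number of candidates that grows exponentially with the input size.

```python
from itertools import combinations

def contains(items, target):
    # Linear scan: running time grows in proportion to len(items),
    # i.e. this problem is solvable in O(n) time.
    for x in items:
        if x == target:
            return True
    return False

def subset_sum_brute_force(numbers, target):
    # Try every subset: there are 2**len(numbers) candidates, so the
    # running time is exponential in the input size. Subset sum is a
    # classic NP-complete problem; no known algorithm avoids this
    # blow-up in the worst case.
    for r in range(len(numbers) + 1):
        for subset in combinations(numbers, r):
            if sum(subset) == target:
                return subset
    return None
```

Both problems are computable in the sense described above - an algorithm exists for each - but complexity theory places them in very different classes based on how their cost scales.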
Conclusion
Artificial intelligence has already had a huge impact on the development of our world - an impact that would have been impossible to predict a century ago. Smart telephone networks route calls more efficiently than any human operator. Cars are built in unmanned factories by automated robots. Artificial intelligence is built into the most ordinary household items, such as robot vacuum cleaners. The mechanisms of AI are not fully understood, but experts predict that in the coming years its development will come ever closer to that of the human brain.