In this blog post, I will walk you through the definition of artificial intelligence.
I guess you’ve heard the terms AI (artificial intelligence), ML (machine learning), and maybe also deep learning. As a starting point, let’s define these terms at a high level and explain how they relate to each other.
Artificial Intelligence Definition
Let’s start with AI. Artificial intelligence was founded as an academic discipline back in 1956. The fundamental idea is that some complex intellectual tasks performed by humans on a daily basis can also be performed by machines.
Such machines can mimic or simulate human cognitive functions such as learning and complex problem solving.
Nobody will argue that today’s machines and computers are already very good at performing certain tasks. They can calculate at amazing speed, and they can be used on a production line to carry out automated, repetitive tasks with great accuracy, but this is not AI.
Those machines and computers are pre-programmed with the knowledge needed to perform those repetitive tasks.
Now consider a different kind of task: recognizing an object in a picture, something our minds do all day long with amazing speed and flexibility.
We can easily scan a picture with our eyes and identify the objects in it within a second.
Such a task triggers billions of neurons in our brains, which identify those objects based on patterns we have learned over the years. Mimicking a task like this in a computer, identifying objects in a picture, is very complex. Until recently, it was impossible to achieve a usable result with standard programming.
So, when a machine can mimic complex cognitive functions such as identifying an object in a picture or recognizing a human voice, among many other very complex tasks, it is often described as artificial intelligence.
Let’s take, for example, the scenario of a chess game played between a computer and a person.
I don’t know if you have played chess before, but it is quite a complex cognitive task.
It is very hard to consider all the options in a chess game, taking into account the future moves the other player may make. Historically, the dominant approach to creating an application that could play chess was to write a huge number of explicit rules that mimic the task.
The computer is programmed with the rules and knowledge needed to select the best next move, while of course leveraging the horsepower of a fast machine.
A computer can perform many complex calculations while searching a very large space of options and strategies.
Such a program can easily beat many professional chess players, so it could be considered an AI entity.
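As a rough illustration (a toy sketch, not real chess), the idea of searching a large space of options while anticipating the opponent’s future moves can be expressed with a minimax search over a small, hand-made game tree. The tree and its evaluation scores below are hypothetical, invented just for this example; note that the program never learns anything, it only searches:

```python
# A minimal sketch of the rule-based, search-driven approach: the program
# explores the tree of possible future moves and picks the move that
# maximizes its guaranteed outcome, assuming the opponent plays optimally.

def minimax(node, maximizing):
    """Return the best achievable score from `node`.
    A leaf is a number: a hand-written evaluation score (our 'explicit rules').
    An inner node is a list of child positions reachable by one move."""
    if isinstance(node, (int, float)):
        return node
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

def best_move(children):
    """Pick the index of the move whose minimax value is highest."""
    values = [minimax(child, maximizing=False) for child in children]
    return max(range(len(values)), key=lambda i: values[i])

# Toy game tree: each of our three moves leads to positions where the
# opponent then picks the outcome that is worst for us.
moves = [
    [3, 5],  # move 0: opponent will force a score of 3
    [2, 9],  # move 1: opponent will force a score of 2
    [4, 6],  # move 2: opponent will force a score of 4
]
print(best_move(moves))  # -> 2
```

Real chess engines work on the same principle but add deep domain knowledge: hand-tuned evaluation functions, opening books, and pruning techniques to cut down the enormous search space.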
And for many years, such complex programs running on fast computers and simulating very complex tasks were considered AI. Now, there are two main downsides to this approach of mimicking complex human thinking with a huge number of explicit rules.
The first is that someone needs to think through all the game logic and strategies and program them, which is very hard work.
You would probably need a team of professional chess players to design that logic and then translate it into many lines of code.
The second downside is that the program is only as good as it was initially programmed to be. Unlike the human brain, it does not learn anything. If it is defeated by a human player, it learns nothing from that experience.
There is no closed feedback loop.
The program will not create a new set of rules that help it win the next game.
Someone has to open the code and improve it by changing or adding rules. So, something is missing here.
And maybe you can already guess what it is.
The missing part of AI is the flexibility to learn.
The human mind has an amazing ability to learn and adapt.