
The 10 most important milestones in the field of artificial intelligence: AlphaGo

[Tencent Technology] An article published in the industry media TechRadar calls artificial intelligence (AI) the most popular buzzword in the field of science and technology. After decades of research and development, many technologies from science fiction have become scientific reality in recent years. The article summarizes ten milestones in the AI field. The following is the original text:

AI has become a very important part of our lives: it determines our search results, transforms our voice into computer instructions, and even helps us sort cucumbers. In the next few years, AI will drive our cars, respond to customer inquiries, and handle countless other tasks.

But how did we get to this stage? Where did this powerful new technology come from? Here is a look at ten major milestones in the development of AI.

The ideas of Descartes

The concept of AI did not appear out of nowhere. Even now, AI remains a theme of philosophical debate: can machines really think like humans? Can a machine be a human being? One of the first people to consider these questions was Descartes, in 1637. In a book called Discourse on the Method, Descartes summed up the key problems and challenges that today's scientists and technologists must overcome.

Even if machines could, for all practical purposes, move like humans and imitate human behavior, Descartes argued, we would still have two very definite ways of recognizing that they were not human. First, a machine "could never use words, or put together signs" to declare its thoughts to others; even if we could imagine such a machine uttering words, it could not arrange them to give an appropriately meaningful answer to what is said in its presence, as even the dullest of humans can. He also identified a challenge we still face today: creating a generalized AI rather than a narrow AI, and how an AI's limitations would expose it to humans.

"Even though some machines might do some things as well as or perhaps better than any of us, they would inevitably fail in others, which would reveal that they were acting not from an understanding of things, but from a simple response."

The imitation game

The second major philosophical benchmark for AI came from computer science pioneer Alan Turing. In 1950 he proposed the "Turing test", which he called the "imitation game", as a way of measuring when we could declare that an intelligent machine had arrived. The test is very simple: if a judge cannot tell which side is human and which is a machine, for example when reading a text conversation between the two, can the machine deceive the judge into thinking it is human?

Interestingly, Turing made a bold forecast about the future of computing: he estimated that by the end of the twentieth century, machines would be able to pass his test. He said: "I believe that in about fifty years' time it will be possible to programme computers, with a storage capacity of about 1GB, to make them play the imitation game so well that an average interrogator will not have more than a 70 per cent chance of making the right identification after five minutes of questioning... I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted."

Unfortunately, his prediction was not very accurate. We are only now beginning to see some truly impressive AI systems; in the 2000s the technology was still at a relatively primitive stage. But the average hard disk at the turn of the century held about 10GB, far more than Turing predicted.

The emergence of the first neural network

Trial and error, the key concept behind the neural network, is central to modern AI. In essence, the best way to train an AI system is to let it guess, receive feedback, and guess again, constantly adjusting the probabilities so that the system converges on the correct answer.

Surprisingly, the first neural network was actually built back in 1951, by Marvin Minsky and Dean Edmonds. Called "SNARC", for Stochastic Neural Analog Reinforcement Calculator, it was built not from microchips and transistors but from vacuum tubes, motors and clutches. The machine helped a virtual rat solve a maze: the system sent instructions to move the rat through the maze, and each time the effect of its actions was fed back into the system, with vacuum tubes storing the results. This meant the machine could learn, adjusting the probabilities to improve the virtual rat's chances of getting through the maze.
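The learning rule SNARC embodied, reinforcing whichever choices led the rat out of the maze, can be sketched in a few lines. This is only an illustrative toy (the three-junction maze, the left/right choices, and the reward value are invented for the example, not a model of SNARC's actual circuitry):

```python
import random

random.seed(0)  # make the run reproducible

# Hypothetical maze: at each of three junctions the rat goes "L" or "R";
# exactly one sequence of choices leads to the exit.
SOLUTION = ["R", "L", "R"]
# One adjustable weight per choice per junction, like SNARC's valves.
weights = [{"L": 1.0, "R": 1.0} for _ in SOLUTION]

def run_trial():
    """Walk the maze, picking at each junction in proportion to the weights."""
    path = []
    for w in weights:
        total = w["L"] + w["R"]
        path.append("L" if random.random() < w["L"] / total else "R")
    return path

def learn(trials=2000, reward=0.1):
    for _ in range(trials):
        path = run_trial()
        if path == SOLUTION:                      # the rat escaped:
            for w, choice in zip(weights, path):  # reinforce every choice taken
                w[choice] += reward

learn()
# After training, the reinforced choice dominates at each junction.
best_path = [max(w, key=w.get) for w in weights]
```

Successful runs make the winning choices more probable, which makes further successes more likely; that feedback loop is the whole trick.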

In essence, this is a very, very simple version of the same process Google uses today to identify objects in a photo, only today's version is far more complex.

The first self-driving car

When we mention self-driving cars today, we may think of Google's Waymo and the like. But amazingly, back in 1995 Mercedes-Benz drove a modified car from Munich to Copenhagen, a total of 1,043 miles, with most of the route driven autonomously. The modified car carried 60 computer chips representing the most advanced parallel computing technology of the time, which let it process large amounts of driving data quickly enough to keep the vehicle responsive. The car reached speeds of 115 miles an hour and behaved much like today's self-driving cars: it could overtake and read road signs.

Turning to a "statistics-based" approach

Although the neural network had existed as a concept for some time, it was not until the late 1980s that AI researchers began to shift from a "rule-based" approach to a "statistics-based" one, namely machine learning. Rather than trying to build systems that imitate intelligence by following human-written rules, we let them learn by trial and error, adjusting probabilities according to feedback; this turns out to be a good way to teach a machine to think. This is very important, because it is this concept that underlies the surprising things today's AI can do. Gil Press of Forbes argues that the shift began in 1988, when the IBM T.J. Watson Research Center published a paper on a "statistical approach" to language translation, marking a turn toward machine learning for translation.

IBM trained the system on 2.2 million pairs of French and English sentences, all drawn from the bilingual proceedings of the Canadian Parliament. 2.2 million sounds like a lot, but Google can draw on the entire internet, which is why Google Translate works pretty well today.
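The core idea of learning translation from aligned sentence pairs can be shown with a toy version of the statistical approach: start with uniform word-translation probabilities and refine them with a few rounds of expectation-maximization, in the spirit of IBM's early alignment models. The three-sentence corpus below is invented for illustration, and this sketch omits almost everything a real system needs:

```python
from collections import defaultdict

# A hypothetical miniature parallel corpus (English, French).
corpus = [
    ("the house", "la maison"),
    ("the blue house", "la maison bleue"),
    ("the flower", "la fleur"),
]
pairs = [(en.split(), fr.split()) for en, fr in corpus]

en_vocab = {e for en, _ in pairs for e in en}
fr_vocab = {f for _, fr in pairs for f in fr}

# Start with uniform translation probabilities t(f | e).
t = {e: {f: 1.0 / len(fr_vocab) for f in fr_vocab} for e in en_vocab}

# A few rounds of expectation-maximization over the sentence pairs.
for _ in range(10):
    count = defaultdict(lambda: defaultdict(float))
    total = defaultdict(float)
    for en, fr in pairs:
        for f in fr:
            # Share the credit for f among the English words present.
            norm = sum(t[e][f] for e in en)
            for e in en:
                frac = t[e][f] / norm
                count[e][f] += frac
                total[e] += frac
    # Re-estimate t(f | e) from the fractional counts.
    t = {e: {f: count[e][f] / total[e] for f in fr_vocab} for e in en_vocab}

# "maison" ends up as the most probable translation of "house".
best = max(t["house"], key=t["house"].get)
```

Note that raw co-occurrence counting alone could not separate "house" from "la" here, since they appear in the same sentences; it is the iterative re-estimation that pulls "la" toward "the" and leaves "maison" for "house".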

"Deep Blue" defeats the chess champion

Although AI's focus had shifted to statistical models, rule-based models were still in use, and in 1997 they showed just how powerful a machine could be: IBM's Deep Blue computer defeated world chess champion Garry Kasparov. It was not the first match between the two. In 1996 Kasparov beat Deep Blue 4-2, but by 1997 the machine had gained the upper hand.

To a certain extent, Deep Blue's intelligence was a bit of an illusion. IBM itself has said that Deep Blue did not use AI, because it relied on brute-force methods, evaluating thousands of chess positions per second. Before the match, IBM fed the system records of thousands of earlier games; each time the opponent moved, Deep Blue would reproduce how past chess masters had responded in the same situation. As IBM put it, Deep Blue was playing like the ghosts of grandmasters past.
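Deep Blue's real search combined specialized hardware with huge opening and endgame libraries, but the brute-force idea itself, examining every line of play to the end and picking the move with the best guaranteed outcome, can be shown on a toy game. The subtraction game below (players alternately take 1-3 stones; whoever takes the last stone wins) is an illustrative stand-in, not chess:

```python
def best_move(pile, is_max=True):
    """Exhaustive minimax search: return (score, move) where score is
    +1 if the maximizing player can force a win from this position,
    -1 if not, and move is the number of stones to take."""
    if pile == 0:
        # The previous player took the last stone, so the side to move lost.
        return (-1 if is_max else 1), None
    best = None
    for take in (1, 2, 3):
        if take > pile:
            break
        score, _ = best_move(pile - take, not is_max)
        if best is None or (is_max and score > best[0]) \
                        or (not is_max and score < best[0]):
            best = (score, take)
    return best

# With 10 stones the first player can force a win by taking 2,
# leaving a multiple of 4 for the opponent.
score, move = best_move(10)
```

Even this tiny game shows why brute force favors the machine: the computer checks every continuation, while a human relies on the pattern (leave a multiple of 4) that the search merely rediscovers.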

Whether or not it was real AI, the match was an important milestone: it made people care not just about the raw computing power of computers, but about the whole field of AI. Since the defeat of Kasparov, beating human players at games has become a major way of benchmarking machine intelligence. We saw it again in 2011, when IBM's Watson system easily defeated two human opponents to win the American quiz show Jeopardy!.

Siri and natural language processing

Natural language processing is a major topic in AI. If we want to issue voice commands to our devices, Star Trek style, we need strong natural language processing capabilities. That is why Siri, built with statistical methods, was so impressive. It was developed by SRI International and even launched in the iOS App Store as an independent app. Soon afterwards the company behind it was acquired by Apple and Siri was deeply integrated into iOS. Today it stands as one of the most notable achievements of machine learning, alongside voice assistants such as Google Assistant, Microsoft Cortana and Amazon Alexa.