HyperAI

Behind the Turing Award: They Chose the Right Track and the Right Scenario 30 Years Ago

6 years ago
Hall of Fame
Headlines
Dao Wei

One of the necessary conditions for winning the Turing Award is to have made significant research breakthroughs in the field of computers and to have made important contributions to society. The three predecessors deserved the award because they not only made great academic breakthroughs in scientific research, but also took the lead in solving practical problems in real scenarios.

Since its establishment in 1966, the Turing Award has mostly been awarded to individuals who have made important contributions to the computer industry.

Three leaders in the field of deep learning won the highest honor in the computer industry for 2018: the Turing Award. They deserved it. From the days when deep learning was neither understood nor valued to its current widespread application, they played an indispensable role in the innovation and promotion of this technology.

We will not go into details about their reports and achievements here. Instead, we will take a look at a few small scenes to see how these three top researchers overcame all the obstacles along the way. 

LeCun's signature technology originated from bank checks

It was actually chance that led LeCun to the handwriting recognition project. At the time, LeCun, then in his prime, was a team leader at Bell Labs (then owned by AT&T). As the top communications technology company of the day, AT&T planned to cooperate with major U.S. banks on new research projects.

The banks' biggest headache was recognizing large volumes of handwritten checks and bills. At the time, handwritten character recognition was a hard problem: traditional methods were slow and their recognition rates were low.

LeCun integrated the back-propagation algorithm into a convolutional neural network (CNN) and trained the system on nearly 10,000 handwritten digit samples provided by the U.S. Postal Service. In the final real-world test, the error rate was only 5%.
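The core operation that LeCun paired with back-propagation is the 2D convolution: sliding a small learned kernel over the image and summing elementwise products. As a minimal illustrative sketch (plain NumPy, not LeCun's original LeNet code), here a hand-written vertical-edge kernel responds to a vertical stroke in a toy "digit" image:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2D convolution: slide the kernel over the image,
    summing elementwise products at each position."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

# A 5x5 toy image with a vertical stroke, and a 3x3 vertical-edge kernel
img = np.zeros((5, 5))
img[:, 2] = 1.0
kernel = np.array([[-1, 0, 1],
                   [-1, 0, 1],
                   [-1, 0, 1]], dtype=float)

response = conv2d(img, kernel)  # strong response where the stroke's edges lie
```

In a real CNN the kernel values are not hand-written but learned by back-propagation, and many such kernels are stacked in layers; this sketch only shows the convolution itself.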

Transforming handwritten characters into normalized images

Subsequently, this technology was deployed in the check recognition systems of many banks' ATMs. In the late 1990s, the system handled 10% to 20% of all checks in the United States.

LeCun's research was the first to propose the CNN, which achieved commercial-grade accuracy under the conditions of the time, demonstrating that deep neural networks have a natural advantage in image processing.

However, deep networks still faced many technical and hardware limitations at the time, such as insufficient computing power. Although the algorithm was a great success, training on the dataset took up to three days.

In the period that followed, AI again fell out of favor. Combined with the ease of use of support vector machines (SVMs), deep learning was sidelined rather than taken seriously.

Paper address: http://yann.lecun.com/exdb/publis/pdf/lecun-89e.pdf

LeCun finds a new scenario: autonomous driving

It was not until 2006, through the long-term persistence of Yoshua Bengio, Yann LeCun, Geoffrey Hinton and others, and the incorporation of new ideas and methods, that this technology, originally dismissed by most people, slowly began to shine.

2006 is also known as the first year of deep learning. That year, Hinton made headway on a problem that had long plagued him: the vanishing gradient problem in deep networks.

Jumping ahead to 2009-2010, LeCun and colleagues at New York University ran an experiment to identify buildings, sky, roads, pedestrians, and vehicles in images using deep learning.

Achievements

The key step in this kind of image recognition is scene parsing: labeling each element of an image with its corresponding category, then dividing and labeling regions. The difficulty is that it combines traditional detection, segmentation, and multi-label recognition in a single task.

To achieve good visual classification accuracy, they used convolutional neural networks. In the study, they demonstrated a feedforward convolutional network, trained end to end with supervision, that extracted features at multiple scales from the raw pixels of large images, achieving state-of-the-art results on the standard scene parsing datasets of the time.


It is worth mentioning that the model does not rely on feature engineering; instead, it is trained with supervised learning on fully labeled images, learning its low-level and mid-level features directly.
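The multi-scale idea can be sketched in a toy form: compute the same feature at several resolutions of an image pyramid, then upsample and stack the results so every pixel carries both fine and coarse context. This is only an illustration of the principle, not the paper's architecture (which applies a shared convnet to a Laplacian pyramid); the helper functions here are hypothetical:

```python
import numpy as np

def downsample(img, factor):
    """Average-pool the image by an integer factor (a crude pyramid step)."""
    h, w = img.shape
    h2, w2 = h // factor, w // factor
    return img[:h2*factor, :w2*factor].reshape(h2, factor, w2, factor).mean(axis=(1, 3))

def upsample(img, factor):
    """Nearest-neighbour upsampling back toward the original resolution."""
    return np.repeat(np.repeat(img, factor, axis=0), factor, axis=1)

def multiscale_features(img, scales=(1, 2, 4)):
    """Compute the same 'feature' (here, just intensity) at several scales
    and stack the upsampled maps, so each pixel sees multiple contexts."""
    feats = []
    for s in scales:
        f = downsample(img, s) if s > 1 else img
        feats.append(upsample(f, s)[:img.shape[0], :img.shape[1]])
    return np.stack(feats)  # shape: (num_scales, H, W)

img = np.random.rand(8, 8)
feats = multiscale_features(img)
```

In the real system, each pyramid level is passed through the same convolutional feature extractor before stacking, and a classifier labels each pixel from the stacked features.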

Paper address: http://yann.lecun.com/exdb/publis/pdf/farabet-pami-13.pdf

Google's best PR: Hinton and diabetic retinopathy

If deep learning gradually attracted more researchers after 2006, then after 2012 its development officially shifted into high gear.

In 2012, the team led by Hinton used deep neural networks to win the ImageNet image recognition competition by a wide margin.

In 2016, AlphaGo, built on deep learning, defeated Lee Sedol, and deep learning made AI known to the general public. After years of silence, deep learning officially entered a period of explosive growth, and its potential in fields such as visual processing and speech recognition was fully demonstrated.

A very small example: in 2017, Geoffrey Hinton led Google Brain in using a new classification method to assist medical diagnosis. By modeling individual labelers to improve classification, they also demonstrated that this labeling approach improves the accuracy of computer-aided diagnosis of diabetic retinopathy.

Sample images of different categories

This innovative method is used to process huge amounts of real-world data that require expert labeling. 

At the time, the task of labeling a dataset was usually divided among many experts: each labeled only a small portion of the data, and the same data point would carry labels from multiple experts.

Such an approach reduces each individual's workload and also helps uncover hard-to-find truths in the data. When experts disagree on a data point's label, the standard approach is to take the label with the most expert support as correct, or to model the distribution over correct labels.

However, this approach discards potentially useful information about which expert produced which label. For example, the findings of an expert with unique expertise may be drowned out by the majority simply because the others lack that expertise.

The Google Brain team proposed modeling each expert individually and then learning averaging weights to combine their predictions, possibly in a sample-specific way. In this way, more weight can be given to more reliable experts, and the unique strengths of individual experts can be exploited when classifying certain types of data.
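A minimal sketch of this combination step, assuming each expert outputs a class-probability vector and the per-expert reliability weights have already been learned elsewhere (the experts, weights, and numbers below are made up for illustration, not taken from the paper):

```python
import numpy as np

def combine_expert_predictions(expert_probs, weights):
    """Combine per-expert class-probability predictions using
    learned reliability weights.

    expert_probs: (num_experts, num_classes) -- each expert's predicted distribution
    weights:      (num_experts,)             -- learned reliability scores (logits)
    """
    w = np.exp(weights - weights.max())  # numerically stable softmax over experts
    w = w / w.sum()
    return w @ expert_probs              # weighted average distribution

# Three hypothetical graders of one retinal image, five severity classes
probs = np.array([
    [0.7, 0.2, 0.1, 0.0, 0.0],   # expert A
    [0.1, 0.6, 0.2, 0.1, 0.0],   # expert B
    [0.6, 0.3, 0.1, 0.0, 0.0],   # expert C
])
weights = np.array([2.0, 0.0, 1.0])  # expert A judged most reliable

combined = combine_expert_predictions(probs, weights)
```

A sample-specific variant would let a small network predict the weights from the input image itself, so that each expert's influence varies by case; this sketch uses fixed weights for simplicity.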

Schematic diagram of different neural networks

By applying deep neural networks with this classification scheme, they improved the diagnosis of diabetic retinopathy from retinal images, and the resulting algorithm also outperformed other methods.

Choose the right track and get closer to the Turing Award

On March 27, 2019, the 2018 Turing Award was announced: three long-term deep learning practitioners, Yoshua Bengio, Geoffrey Hinton, and Yann LeCun, won the award for playing a crucial role in the development of deep neural networks.

Indeed, their contributions to the development of deep learning are too numerous to list, and the three scenarios in this article are just a few of the moments behind their success. We see the glory after the award, but for them the most precious thing must be their decades of dedication and enthusiasm for the track and the technology they believed in.
