A computer beat a human at chess for the first time in the 1980s, but computers still can’t recognize a cat. Or can they? Quoc Le is a researcher at Google Brain, and his job is to teach computers to recognize images, speech and language using deep learning. The work Mr Le and his colleagues have done at Google Brain is now used in over 1,200 applications. If you have an Android smartphone, you’ve likely used it already.
“The error rate in understanding images has gone down from 28% in 2012 to 5% today,” says Mr Le.
Enabling computers to recognize and understand images is essential to several of Google’s products. It’s key to the development of self-driving cars and to healthcare applications. Deep learning is also an important component in the development of artificial intelligence.
Lately, influential figures such as Elon Musk and Stephen Hawking have expressed concerns about AI, but Mr Le, who works for the company with perhaps the most resources and expertise in the area, is not concerned.
“AI will become very open. It’s essential to get more people working on the technology. I think we will have one AI that many people will have access to.”
Chinese search giant Baidu open-sourced its deep learning algorithms earlier this month, following the lead of Google and Facebook. The openness is driven partly by deep learning’s appetite for big data sets: the more people working on it, the better.
The biggest challenge Mr Le and his team are working on now is cracking unsupervised learning: letting the computer learn by itself.
“Our understanding of deep learning is still very limited. Technology gives a lot of surprises. I’m surprised by the fast improvements in image recognition, but robotics is still very far away,” says Mr Le.