Jeffrey ‘Jeff’ Dean, a Senior Fellow in Google’s Research Group, is responsible for the conception, design, and implementation of much of Google’s software infrastructure. He leads Google Brain, a machine intelligence team focused on deep learning, and is a recipient of the ACM Infosys Foundation Award in Computing. This year, ACM celebrates 50 years of the ACM Turing Award. Below, Dean answers five quick questions on AI.
Does the misappropriation of the term AI annoy you?
A lot of people who work on machine intelligence have resisted the term “AI” because it’s so broad and nebulous, and it carries all those misconceptions from pop culture. But it can be useful as an overall umbrella for describing this new approach to computing – that these systems can learn, adapt, and behave effectively in an intelligent way. We are building at least parts of that sci-fi dream – the useful, practical parts that could make people’s lives better.
What are the most important examples of AI in mainstream society today?
There are many, but they are often “under the covers”, so people don’t realise they are using machine-learned systems. Examples include language understanding systems present in products like Google Search, Google Translate, and Gmail’s Smart Reply feature, speech recognition on people’s phones, recommendation systems on sites like Amazon and Netflix, and image understanding systems that underlie products like Google Photos.
Our research group also did a Reddit AMA on /r/MachineLearning recently. Readers might find that discussion interesting as well. See “AMA: We are the Google Brain team. We'd love to answer your questions about machine learning”.
What have been the biggest breakthroughs in AI in recent years, and what impact are they having in the real world?
The biggest breakthrough in the last five or so years has been the use of deep learning, a particular kind of machine learning that uses neural networks. Stacking the network into many layers that learn increasingly abstract patterns as you go up the layers seems to be a fundamentally powerful idea, and it’s been very successful in a surprisingly wide variety of applications, from speech recognition to image recognition to language understanding. What’s interesting is that we don’t seem to be near the limit of what deep learning can do; we’ll likely see many more powerful uses of it in the coming years.
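The “stacking layers” idea Dean describes can be illustrated with a few lines of plain Python: each fully connected layer re-represents the previous layer’s output, and composing several layers is what makes the network “deep”. This is purely an illustrative sketch, not how production systems are built; the weights in `tiny_net` are hypothetical hand-picked numbers (real networks learn theirs from data, using frameworks rather than hand-written loops).

```python
def relu(v):
    """Elementwise rectified linear unit, a common nonlinearity between layers."""
    return [max(0.0, x) for x in v]

def dense(inputs, weights, biases):
    """One fully connected layer: out[j] = sum_i inputs[i] * weights[i][j] + biases[j]."""
    return [sum(x * w for x, w in zip(inputs, col)) + b
            for col, b in zip(zip(*weights), biases)]

def forward(x, layers):
    """Pass x through a stack of (weights, biases) layers, applying ReLU after each.
    Each layer transforms the previous layer's output, so deeper layers operate
    on increasingly abstract representations of the original input."""
    for weights, biases in layers:
        x = relu(dense(x, weights, biases))
    return x

# A hypothetical tiny 2 -> 2 -> 1 network with hand-picked weights.
tiny_net = [
    ([[0.5, -0.2], [0.1, 0.4]], [0.0, 0.1]),  # layer 1: 2 inputs -> 2 units
    ([[1.0], [-1.0]], [0.0]),                 # layer 2: 2 inputs -> 1 unit
]
```

Calling `forward([1.0, 2.0], tiny_net)` runs the input through both layers in sequence; adding more `(weights, biases)` pairs to the list deepens the stack without changing any other code, which is the sense in which depth is a simple, composable idea.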
How well prepared is the AI community and society to deal with the deep ethical issues that come with using AI approaches in life-critical areas such as health and transportation?
I believe there’s important work to do here, but it’s important to actually focus on what’s practical and feasible in the next decade or so – not the really far-flung hypotheticals that seem to consume too many headlines, and often don’t have much basis in a technical understanding of machine learning. For an example of a rigorous multi-organisation collaboration including researchers from our group, see “Concrete Problems in AI Safety”, by Amodei et al.
Much has been made of the potential for AI in pop culture. What are some of the biggest myths you’ve seen? Can you think of examples where science fiction is getting close to reality?
Probably the biggest myth is that AI is one singular thing that you can just “flip on” like a switch, and suddenly you’ve got human-style intelligence. In fact, AI is a huge field involving many techniques, only very loosely inspired by human intelligence. The good news is these techniques are already quite practical for some kinds of real-world applications today – this is why you can talk to Google on your phone, and it understands what you mean and can give you good answers. It’s not magic, but it already works well enough that it’s really impressive compared to what we could do just a few years ago.