4 takeaways from Re:Work's Deep Learning Summit

Probability and efficiency behind the scenes, emotional awareness and omnipresence at the front.

At last year’s Re:Work Deep Learning Summit, many spoke about the data problem within Deep Learning: most of the high-quality data from which these intelligent algorithms can learn is held by big names such as Apple, Facebook, and Google, who are often unwilling to share it.

At this year’s event, held this week in London, there was less talk about data and more focus on not only making these models more efficient, but more human.

Here are four takeaways from this year’s event.


1: Deep Learning needs to be more efficient

Learning is still intensive. Even training on relatively simple Atari games needs thousands or even millions of data points, said DeepMind researcher Marta Garnelo, while something as complex as Grand Theft Auto would break many such learning models. Shubho Sengupta of Facebook’s FAIR unit said that it can take tens of exaflops to train models on tasks such as text-to-speech. What’s needed are models that not only learn faster on less data, but also learn in a more parallel, less linear way.


2: We need more uncertainty in our learning

Andreas Damianou, an ML scientist at Amazon, said that adding levels of probability and certainty would help decision-making within AI systems. In his words, a “system that knows what it doesn’t know” is likely to make more informed decisions. For example, the behaviour of a driverless car should be very different if it’s 99% sure a road is clear than if it’s only 55% sure. Adding levels of probability also helps prevent “overfitting” to your data.
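The driverless-car example can be sketched as a decision rule that treats the model’s estimated probability as a first-class input, so the system acts more cautiously when it is less sure. This is a minimal illustration of the idea, not code from the talk; the function name and thresholds are hypothetical.

```python
def act_on_road_clear(p_clear: float) -> str:
    """Choose an action given the model's estimated probability
    that the road ahead is clear.

    The thresholds are purely illustrative, not from any real system.
    """
    if p_clear >= 0.99:
        return "proceed"    # model is highly confident the road is clear
    elif p_clear >= 0.55:
        return "slow down"  # plausibly clear, but too uncertain to commit
    else:
        return "stop"       # likely blocked, or the model simply doesn't know


# A 99%-sure system and a 55%-sure system behave very differently:
print(act_on_road_clear(0.99))  # proceed
print(act_on_road_clear(0.60))  # slow down
print(act_on_road_clear(0.30))  # stop
```

The point is that the downstream behaviour keys off the probability itself, not just the most likely label: a plain classifier would answer “clear” in both the 99% and 60% cases, while a system that knows what it doesn’t know can hedge.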


3: AI is getting more emotional so it can be more personal

This year’s event also featured a stream focused on AI Assistants in their different guises. Some of the speakers emphasised the power of emotional understanding in chatbots and other assistants.

Anne Hsu of the University of London highlighted research where chatbots were made to act in a more emotional way. These “psychologically aware chatbots” would parse information about the context of a scenario and how someone is feeling, and then provide a response that is “sensitive” to the user’s wants and needs.

Hsu said such chatbots would be ‘very domain specific’ and their creators would ‘need to understand the psychology of the space it was designed for’. In the example shown, a chatbot designed to help people lose weight praises users when they avoid overeating in a particular scenario, or uses certain motivational methods when told that the user wants to eat unhealthy food. So if a person told the bot that they were tired and just wanted to eat some chips, it would remind them of a similar recent occasion when they were tired but still chose to eat healthily.

Similarly, Fabian Ringeval from the University of Grenoble Alpes outlined research on how AI can be trained to understand not just what we say but also how we say it. Currently, machines can’t understand our responses beyond the expressed message; detecting the emotions with which we ask for things would allow for far more personal services.


4: From in our screens to on all our surfaces

With current assistant technology we’re either limited to chatbots on our screens or voice assistants on expensive devices. They are perfectly good for now, at least until you fall out of their decision tree. However, argues Adi Chhabra, Vodafone’s Chief of Product for AI, the future is “beyond devices, beyond any of the screens.”

The future is all about “surface interactions”, where instead of existing within websites or screens, interactions will be on walls, windows, our glasses, and any other surface imaginable.


Bonus: Nerds are cool

Stickers proclaiming “I’m interested in facial recognition” and “I <3 Neural Networks” are the height of cool.