Intel Fellow on Stephen Hawking and maximising human potential

Lama Nachman is using AI and other technology to augment the brain with digital assistance.

Lama Nachman is an Intel fellow and director of the company's Anticipatory Computing Lab, and is perhaps best known for her work with Professor Stephen Hawking and the so-called "human cyborg" Peter Scott-Morgan. Her specialism lies in creating contextually aware experiences via algorithms and applications that understand users through sensing, helping people live richer lives. Due to her packed schedule, we conducted the interview over email; what follows is a lightly edited version of our exchange.

 

You've been quoted as saying: "The theme that cuts across all of the different research that I'm doing is really how do you amplify human potential and reduce inequity in the society." When did you realise that this was your goal and what was there in your background that you think may have fed into this?  

I think I have always been over-sensitised to fairness; it is core to my character. It might come from my early experiences of injustice in the Middle East as a woman and a Palestinian in diaspora, living in Kuwait. As I grew up, this fairness value became more prominent and started to have more of an impact on my actions.

At the same time, I was always interested in math and science at school because it all made sense, and I had an affinity for and curiosity about technology. Over time, I started to see the potential of technology to improve equity at scale and amplify human impact in health care, education, citizen science and assistive computing, all areas I have worked on in my research. As AI started getting deployed at scale thanks to advances in deep learning, I saw the narrative shifting towards human-AI competition, automation and taking people out of the loop. This got me more interested in focusing on human-AI collaboration and working towards a different path, one in which AI amplifies human potential.

 

You're well known for your work on behalf of Stephen Hawking. Can you tell me a little about how that relationship came about and how closely you worked with him?

I started my work with Prof. Hawking in 2011, when he approached Intel looking for a way to upgrade his older communication system, which he had already been using for a couple of decades. Stephen's relationship with Intel goes back decades, to when Intel's co-founder Dr. Gordon Moore promised Stephen that Intel would support all his computing needs throughout his life. There was a team at Intel that upgraded his computer on a regular cadence. So, Stephen reached out to Dr. Moore and asked if Intel could help improve his communication system. Due to the progression of the disease, his words-per-minute rate had gone down dramatically. Since our research at Intel Labs was focused on sensing, machine learning and user experience, our CTO asked us to explore whether there was something we could do to help.

So, we went out there and started observing him, understanding his unmet needs, and trying to figure out what we could leverage from existing technology to help, especially given the many options out there, from gaze tracking to brain-computer interfaces. Over time we realised that we really needed to build a software platform from scratch that would enable him to be independent in performing many of his day-to-day tasks, including research, teaching and communicating with people.

We needed an open and configurable software platform that could facilitate all his tasks from a simple trigger that he could control with a cheek movement. Nothing out there provided this capability, so we figured building such a platform would enable not only Stephen but also many people across the world living with ALS [amyotrophic lateral sclerosis, a motor neurone disease] who were being left out.
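To make the single-trigger idea concrete, the sketch below is a minimal, hypothetical illustration of single-switch row/column scanning, the general technique such an interface builds on: options are highlighted in turn, and one binary trigger (such as a cheek-movement sensor firing) selects whatever is currently highlighted. This is not ACAT's actual code; the keyboard layout, function names and the simulated trigger are invented for illustration.

```python
"""Minimal sketch of single-switch scanning (illustrative only, not ACAT)."""

import itertools

# A tiny on-screen keyboard laid out as rows of keys.
KEYBOARD = [
    list("ABCDEF"),
    list("GHIJKL"),
    list("MNOPQR"),
    list("STUVWX"),
    list("YZ .,?"),
]


def scan_select(options, trigger_fired, label):
    """Cycle through `options`, checking the trigger at every highlight step.

    `trigger_fired(step)` stands in for polling the real sensor during the
    window in which the current option is highlighted."""
    for step, option in enumerate(itertools.cycle(options)):
        print(f"highlighting {label}: {option}")
        if trigger_fired(step):
            return option


def type_one_character(trigger_fired):
    """Two scans select one character: first a row, then a key in that row."""
    row = scan_select(KEYBOARD, trigger_fired, "row")
    return scan_select(row, trigger_fired, "key")


if __name__ == "__main__":
    # Simulated trigger: "fires" on the 3rd highlighted row and the 2nd
    # highlighted key; a real system would read the cheek-movement sensor.
    schedule = iter([2, 1])
    pending = [next(schedule)]

    def fake_trigger(step):
        if step == pending[0]:
            pending[0] = next(schedule, -1)
            return True
        return False

    print("typed:", type_one_character(fake_trigger))
```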

Stephen was very aligned with this vision and adamant about open sourcing the solution. So, we started working together on understanding the requirements of the platform, designing the solution, testing it out and iterating over time. Stephen participated actively in all of these activities. As a result, I was coming out to Cambridge multiple times a year and spending about a week each time observing him, identifying unmet needs, proposing solutions, and watching him test (and break) the system.

In late 2014, we had a version that was good enough for him to fully transition to ACAT (Assistive Context Aware Toolkit), and over the next four years I continued to visit him, work with him, discover more ways to improve the system, build these capabilities into this platform and get it out to open source.

 

How did you get on with Prof. Hawking?

By far this has been the most precious experience of my entire professional life. Stephen had always been someone I admired and thought the world of from a distance. So, imagine how I felt when I got to meet him in person and work with him. I was in awe. As I started working closely with him, I came to admire him even more. I realised the level of hardship he had to go through to get a single thought communicated, and the patience he had as he struggled to get the system to work for him. His focus was on helping others, spending his time and effort to help us design this system well for people with disabilities and ensuring it was open sourced to guarantee access for all.

I always joked with him that he was not only a designer on this project but also our validation engineer. We would literally spend weeks testing each version of ACAT, then we would put it in front of him and he would break it in minutes. He used to give me this smile, like "I got you again", every time he found a bug in the system. On a personal level, this relationship grew into a friendship; we talked about science, AI, politics in the Middle East, the Syrian refugee crisis and many other topics on each of my trips. This eight-year journey was the most meaningful of my career and showed me, up close and personal, the role that technology can play in improving people's lives and leaving no one behind. It also cemented my belief in the need to ensure that technology can personalise to people's unique and specific needs, and in building it with an eye towards that. There is no one solution that fits all, so creating technology that is configurable, understands the context and adapts as needed is paramount. This is why we have continued to work in this area at Intel Labs, adding different types of capabilities, sensors, languages and so on.

We are currently working on adding a brain-computer interface to ACAT to ensure that people who are not able to move any muscle can still communicate. By providing such an open platform in open source, we hope to reduce the effort needed by developers and researchers to innovate and provide solutions for people with disabilities.

 

Another association is with Peter Scott-Morgan. Can you tell me about how that came about and how your work with him is progressing?  

I was approached by the team working on this project because of my previous experience with Stephen and the work we had done on ACAT. However, as I started discussing with Peter what he wanted out of technology, it became very obvious that he was on the other end of the control spectrum. While Stephen wanted to control every letter and word (including his word predictor), Peter had a different approach. He was focused on improving the spontaneity of communication, removing the silent gap that occurred while he formulated a response in an ongoing conversation.

This meant that he was willing to give up some of that control and embrace an AI system that helps him respond quickly with a reasonable answer, rather than being 100 per cent faithful to what he would have said word for word. This opened up an opportunity to explore using a speech recognition system to listen to the conversation and to develop a response generation system that could be controlled with minimal user input. We are currently working on such a system, leveraging a lot of the current innovation in deep learning and building on top of it to enable control of the answers with minimal input, personalisation with limited user data, and continuous in-situ learning to improve the system.
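As a rough illustration of that pipeline shape (and not Intel's actual system), here is a hypothetical Python sketch: a stand-in speech recogniser feeds the conversation into a stand-in response generator, a single low-effort choice picks a reply, and past selections crudely bias future suggestions. Every function and name below is invented for illustration.

```python
from collections import Counter


def transcribe(audio_chunk):
    """Stand-in for a real speech recognition model listening to the room."""
    return audio_chunk["text"]


def generate_candidates(conversation, n=3):
    """Stand-in for a deep-learning response generator conditioned on the
    conversation so far; here it just returns canned drafts."""
    last_word = conversation[-1].rstrip("?.!").split()[-1]
    return [
        "Yes, I agree with that.",
        f"Could you say more about {last_word}?",
        "Let me think about that for a moment.",
    ][:n]


class ResponseAssistant:
    """Hear the conversation, suggest replies, let one low-effort selection
    speak a reply, and use past selections to personalise the ranking."""

    def __init__(self):
        self.conversation = []
        self.selection_history = Counter()  # crude personalisation signal

    def hear(self, audio_chunk):
        self.conversation.append(transcribe(audio_chunk))

    def suggest(self):
        candidates = generate_candidates(self.conversation)
        # Rerank so phrasings the user has picked before float to the top,
        # a stand-in for learning in situ from limited user data.
        return sorted(candidates, key=lambda c: -self.selection_history[c])

    def select(self, index):
        """Minimal input: a single index (e.g. one gaze dwell) picks a reply."""
        choice = self.suggest()[index]
        self.selection_history[choice] += 1
        self.conversation.append(choice)
        return choice


if __name__ == "__main__":
    assistant = ResponseAssistant()
    assistant.hear({"text": "How was the trip to Cambridge?"})
    for i, option in enumerate(assistant.suggest()):
        print(i, option)
    print("spoken:", assistant.select(0))
```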

However, while we didn't initially consider ACAT, since Peter was able to use gaze tracking and many existing systems already provided this capability, we realised that there was still a need for an open platform for innovation, whether to integrate the response generation capability or the other capabilities that the rest of the teams are working on (avatar, personalised voice, robotics, etc).

So once again the need for an open platform like ACAT became apparent. We integrated gaze tracking into ACAT and developed a different interaction design, working with [user experience design studio] Fjord, to support Peter's needs. This provided an opportunity to learn with Peter how to optimise ACAT for gaze interaction and he has been using ACAT for communication since his surgery. We continue to improve on the system and we plan to release this new version to open source as well.

People often use the word 'cyborg' with reference to your work. How do you feel about that?

I know that is how Peter refers to himself, and it is an interesting term because ultimately it is really about augmenting human capability with mechanical and electronic innovation. With his operations, Peter is definitely transitioning down this path on the physiological level, and with all of the innovation on the verbal spontaneity, personality retention and mobility research threads, it is a really exciting direction for people with disabilities.

However, I think there is a wide range of perspectives on what it means to be human and how much control people want to have, and I think the need here is to ensure that people have access to a wide range of options so they can make their own choices. In my research at Intel Labs, I focus on what a collaborative (not adversarial) relationship between humans and AI could look like and explore the many dimensions of that work, which touch on privacy and ethical issues as well.

Ultimately, I see AI as something that complements the human condition and can impact it positively in some very powerful ways (even for populations that are not disabled) but getting to that sweet spot involves understanding if/where/how much control we cede to AI to avail of its tremendous benefits but in ways that feel empowering, not marginalising, to the human.

 

Can you tell me about Intel's investment here and how the company benefits from this work?

As a leader in computing innovation, Intel's mission is really about driving technological progress forward in ways that enrich human lives. As Intel's research organisation, Intel Labs also carries the banner on that same mission.

We're all about driving technological breakthroughs with our research, which spans several domains of computing, but always with an eye toward driving broader societal impact. My domain — Anticipatory Computing — is a key area of research within Intel Labs, with a multi-disciplinary team of researchers that explore new user experiences, sensing systems, algorithms and applications and transfer these capabilities to business units to impact future Intel products.

Do you see the work you're doing in assistive computing affecting mainstream user experiences?

My team at Intel's Anticipatory Computing Lab and I are thinking about the way that gestures, touch, voice and other inputs are already changing how we interact with digital devices.

In fact, many of our projects are in mainstream areas like education, manufacturing and enterprise, among other domains. For example, we have been working on a smart environment experience for early childhood education, since children in preschool and early elementary school really benefit from learning with their whole bodies, touching objects, moving around, etc.

We created an experience with a virtual agent (a teddy bear we call Oscar) that can interact with kids and help them learn math, as they use physical objects (for example, flower pots) to practise their 10s and 1s and see the digital and physical worlds blend in magical ways. The agent needs to be able to understand the scene, listen to what they are saying and interact with them accordingly.

Kids interact in unexpected ways: they shake their heads, point and nod instead of formulating perfect sentences for a smart assistant to understand. The agent also needs to comprehend implicit signals like frustration and confusion to be able to help and respond appropriately.
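As a purely illustrative sketch of what fusing those signals might look like (this is not the actual Oscar implementation; the event types, values and rules are invented), one turn of interaction could combine vision, gesture, speech and affect cues before choosing a response:

```python
from dataclasses import dataclass


@dataclass
class Event:
    """One observation from one modality during a turn of interaction."""
    modality: str   # "speech", "gesture", "objects" or "affect"
    value: object


def decide_response(events):
    """Fuse whatever signals arrived this turn into a single tutoring action."""
    signals = {event.modality: event.value for event in events}

    # Implicit signals such as frustration take priority over the task itself.
    if signals.get("affect") == "frustrated":
        return "Let's try it together. Can you put one more pot in the tens pile?"
    # Kids often nod or gesture instead of answering in full sentences.
    if signals.get("gesture") == "nod" or signals.get("speech") == "yes":
        return "Great! How many tens do we have now?"
    # Scene understanding: count the physical flower pots the child arranged.
    if signals.get("objects") == {"tens": 3, "ones": 4}:
        return "Three tens and four ones: that makes thirty-four!"
    return "Show me what you built."


if __name__ == "__main__":
    turn = [
        Event("objects", {"tens": 3, "ones": 4}),
        Event("affect", "engaged"),
    ]
    print(decide_response(turn))
```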

Another example is in manufacturing, where we are researching technology to help people perform their tasks, which requires the system to understand the scene and the actions being performed, converse with people, and learn with them over time so it continues to evolve.

 

People have talked about brain-computer interfaces for a long time. How far do you think we have come and how do you see this progressing?

BCI is a complex technological endeavour and we're making slow but steady progress. There is a lot of potential in more invasive technologies that connect directly to the brain, which will increase the fidelity of communication. However, I have been really focused on capturing EEG [electroencephalogram, brain wave tracking] data from electrodes that people can wear in a cap.
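For a sense of what processing cap-based EEG involves at its simplest, here is a toy sketch (not Intel's BCI work, and heavily simplified): take a short window of samples, extract a frequency-band power feature, and map it to a binary command. The sampling rate, band and threshold below are illustrative assumptions; real systems add artefact rejection, per-user calibration and far better classifiers.

```python
import numpy as np

FS = 250        # sampling rate in Hz (assumed for this toy example)
WINDOW_S = 2.0  # analysis window length in seconds


def band_power(signal, fs, low, high):
    """Average spectral power of `signal` between `low` and `high` Hz."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    mask = (freqs >= low) & (freqs <= high)
    return spectrum[mask].mean()


def decode_window(window, fs=FS, threshold=1000.0):
    """Map one window of single-channel EEG to a binary 'select' decision.

    The 8-12 Hz alpha band is used purely as an example feature; the
    threshold is arbitrary and would normally come from calibration."""
    return band_power(window, fs, 8, 12) > threshold


if __name__ == "__main__":
    t = np.arange(0, WINDOW_S, 1.0 / FS)
    noise = 0.5 * np.random.randn(len(t))

    resting = noise                                        # little alpha activity
    alpha_burst = 10 * np.sin(2 * np.pi * 10 * t) + noise  # strong 10 Hz rhythm

    print("resting window selects:    ", decode_window(resting))
    print("alpha-burst window selects:", decode_window(alpha_burst))
```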
