Deep learning in diagnostic healthcare: The future?

The accumulation of X-rays, CT scans and MRIs means that doctors face an enormous task in sifting through this medical data to reach diagnoses. But now, thanks to advances in machine learning, this task is becoming much easier. Enlitic is one such machine learning startup, using deep learning techniques to revolutionise diagnostic healthcare.

We catch up with Jeremy Howard, Founder and CEO of Enlitic, and Ahna Girshick, Enlitic’s Senior Data Scientist, to find out how Enlitic is using deep learning to assist doctors.

How does Enlitic use deep learning to assist doctors?

Our technology works by first learning from the vast existing archives of medical data. Let's say a patient gets a lung CT screening. Enlitic's software could "read" the CT scan and determine the probability that the patient has lung cancer, find clinically similar patients, and show their treatments and outcomes. The clinician could use this information to decide whether or not to perform a biopsy. By immediately giving the clinician the most accurate information, the patient is less likely to have a missed diagnosis or an unneeded biopsy. Overall, this leads to better patient outcomes, because detecting lung cancer as early as possible is so important.
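To make that workflow concrete, here is a minimal sketch in Python. It is not Enlitic's actual pipeline, and the model and data interfaces are entirely hypothetical; it only illustrates the general pattern described above, where a model scores a scan for malignancy and produces an embedding that is compared against prior cases to surface clinically similar patients and their recorded outcomes.

```python
# Illustrative sketch only (not Enlitic's pipeline): score a CT volume for
# malignancy and retrieve the most similar prior cases by embedding similarity.
import numpy as np

def score_and_embed(ct_volume, model):
    """Hypothetical model interface: returns (malignancy_probability, embedding)."""
    return model(ct_volume)

def most_similar_cases(query_embedding, case_embeddings, case_records, k=5):
    """Rank prior cases by cosine similarity to the query embedding."""
    norms = np.linalg.norm(case_embeddings, axis=1) * np.linalg.norm(query_embedding)
    similarity = case_embeddings @ query_embedding / np.maximum(norms, 1e-8)
    top_k = np.argsort(similarity)[::-1][:k]
    return [(case_records[i], float(similarity[i])) for i in top_k]
```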

IBM’s Watson also analyses masses of medical information to assist doctors. What’s the difference between what you are doing and what Watson is doing?

In many ways we are kindred spirits with similar goals: To use computers to improve medicine to better people's lives. However, we go about it from the opposite direction. Watson consumes as much as it can from medical textbooks and journal articles and then attempts to construct meaningful relationships. Enlitic consumes as much as it can from medical data: patient records, histories, reports, medical images, and ultimately genetic data. Enlitic learns what normal bodies look like, and what diseased bodies look like, by seeing many examples of them.

What are some of the challenges in using deep learning methods?

Existing deep learning methods have not been developed for large 3D medical images, such as CTs or MRIs. We've had to develop the core technology for that. Machine learning in general relies on having either good data or lots of data (ideally both!).
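For illustration, the sketch below shows the kind of 3D convolutional classifier that volumetric scans call for, written in PyTorch. It is a toy example, not Enlitic's architecture; the point is that every operation becomes three-dimensional, which is why memory and compute dominate the engineering effort on real CT volumes (hundreds of slices, each 512 x 512).

```python
# A toy 3D convolutional classifier for a volumetric scan, sketched in PyTorch.
# Illustrative only; real medical volumes make memory the central challenge.
import torch
import torch.nn as nn

class Tiny3DClassifier(nn.Module):
    def __init__(self, num_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=3, padding=1),   # 1 input channel: scan intensity
            nn.ReLU(),
            nn.MaxPool3d(2),                              # halve depth, height and width
            nn.Conv3d(8, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),                      # pool to a single feature vector
        )
        self.classifier = nn.Linear(16, num_classes)

    def forward(self, volume):                            # volume: (batch, 1, D, H, W)
        x = self.features(volume).flatten(1)
        return self.classifier(x)

# Example: a small random volume stands in for a preprocessed CT scan.
logits = Tiny3DClassifier()(torch.randn(1, 1, 32, 64, 64))
```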

What if an algorithm goes wrong? Is this not potentially dangerous for the patient?

Even though we know our doctors and healthcare system are dedicated to helping us, many of us have often questioned their credibility or have been frustrated with the system. Imagine how perceptions would change if we had insight into a specific doctor's or hospital's performance on breast cancer diagnosis, compared to the performance of an algorithm that did as well as or better than the best doctor in the world. What if an algorithm was available immediately and anywhere, whereas a doctor had a two-month wait? Enlitic can improve the quality of healthcare by helping medical experts diagnose more accurately and more quickly.

How much inspiration do you take from the human brain in the work you do?

We are very inspired by the human brain and its astonishingly powerful ability to make sense of information. Deep learning algorithms are based on neural networks which, like the human brain, create deeper and deeper abstractions of visual input. This means that early neurons are sensitive to simple features such as edges or colours, but later neurons are sensitive to complex features such as faces, objects or scenes. While deep learning is inspired by biological vision, it is a massive oversimplification of the brain, and thus much more limited in its capabilities.
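A small worked calculation makes the "deeper abstractions" point concrete: each stacked convolution enlarges the region of the input a neuron responds to (its receptive field), so early layers can only see local edges while later layers see whole structures. The layer sizes below are generic examples, not taken from any particular model.

```python
# How the receptive field grows as convolutional layers are stacked.
def receptive_field(layers):
    """layers: list of (kernel_size, stride) pairs, applied in order."""
    rf, jump = 1, 1
    for kernel, stride in layers:
        rf += (kernel - 1) * jump   # each layer widens the region a neuron sees
        jump *= stride              # striding spreads that region further apart
    return rf

# Three 3x3 convolutions with 2x downsampling between them:
print(receptive_field([(3, 1), (2, 2), (3, 1), (2, 2), (3, 1)]))  # -> 18 pixels
```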

What is the state of data science in the healthcare field today? Has it made enough progress or do we still have a long way to go?

Data science in healthcare has a long way to go! Medicine has been slow to digitise, for cultural, regulatory and cost reasons. For example, in Europe and Canada, digital pathology is widely used. In the US, it isn't approved, so pathologists still look at glass slides through microscopes, and the slides need to be physically shipped around the country for second opinions.

Many people in the developing world still don’t have access to medical diagnostics. Will your technology be able to meet those needs in the developing world? How?

Absolutely. There are four billion people in the world who don’t have access to modern medical diagnostics, and it’ll take hundreds of years to train enough medical experts. We’re creating flexible and powerful tools so that these people don't have to wait for diagnoses, which is very exciting!

How many medical images have you used so far? What has been your success rate?

Deep learning is known for improving with access to more data because, when looking at photos of the world, the algorithm needs to build representations that are invariant to camera angle, lighting, and the variety of our world. Medical images actually have a lot less variability, since they are generally constrained: We know the camera angle, there are fixed known colours to each type of scan, and the structures of human anatomy are constrained as well.

So deep learning for medical imaging requires fewer images than traditional deep learning, although we have used as many images as we can get our hands on, and our database is continually growing.

As humans, most of our learning comes from the environments we are exposed to. If, in time, self-learning in machines becomes possible, is there not a danger of machines learning bad things as well as good things?

Enlitic's machine learning reads medical images and medical data and learns from the past wise judgements of human doctors. So in this context, you are really asking whether we know that all the doctors' judgements we use are correct and came from a benevolent intent. It's a good question, and the way we protect ourselves from the variation of human doctors is to use a lot of high-quality data, and to check our results against those of trusted experts.

Humans have the capacity to reflect and change the way we think about solving problems. Will machines be capable of doing that?

There are two ways that you can get computers to do things for you. The first is to tell the computer the steps, traditionally called programming. The second is to let the computer figure out the steps for itself, called machine learning. By definition, machine learning is already capable of figuring out the optimal way to solve a well-specified problem. "Reflecting" and "thinking" are terms that I would not attribute to even the very best machine learning algorithms, though, because they imply not only outstanding problem-solving skills but also a complex human-like consciousness.
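A toy example, with made-up numbers, illustrates the distinction: in the first approach a human writes the decision rule by hand, while in the second the computer derives an equivalent rule from labelled examples.

```python
# Programming vs. machine learning on a toy one-dimensional problem:
# classify a measurement as "abnormal" when it is high.
import numpy as np

# 1. Programming: a human writes the rule explicitly.
def programmed_rule(x):
    return x > 5.0                       # threshold chosen by hand

# 2. Machine learning: the computer fits the rule from labelled examples.
measurements = np.array([1.0, 2.0, 3.0, 6.0, 7.0, 8.0])
labels       = np.array([0,   0,   0,   1,   1,   1  ])   # 1 = abnormal

w, b = 0.0, 0.0                          # parameters of a logistic model
for _ in range(5000):                    # plain gradient descent
    p = 1.0 / (1.0 + np.exp(-(w * measurements + b)))
    w -= 0.1 * np.mean((p - labels) * measurements)
    b -= 0.1 * np.mean(p - labels)

def learned_rule(x):
    return 1.0 / (1.0 + np.exp(-(w * x + b))) > 0.5

print(programmed_rule(6.5), learned_rule(6.5))   # both classify 6.5 as abnormal
```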

Ayesha Salim

Ayesha Salim is Staff Writer at IDG Connect
