Why and how to fix biotechnology

Tim Fell, CEO at Synthace, looks at the future of biotechnology

This is a contributed piece by Tim Fell, CEO at Synthace

Biotechnology offers potential solutions to food insecurity, antimicrobial resistance, our reliance on the oil economy and more, but it has yet to deliver the goods. Bringing a new drug, crop trait or industrial bioprocess to market can easily take over a decade and a billion dollars. Simply put, there is a productivity crisis in biotechnology.

Yet there is so much opportunity to unlock. 3D printing may be a game changer for making many physical objects, but when it comes to constructing really complex products from simple molecular building blocks, biology is the ultimate distributed manufacturing system. Look around you: it is everywhere.

Crops, livestock and fermentation products have all been produced locally for thousands of years, with new varieties created by selective breeding or, more recently, genetic intervention. In truth, though, this has been a sculpting of nature rather than design in an engineering sense.

Why? Because when it comes to understanding and harnessing the exquisite complexity of biology, we are hampered by old-fashioned and largely manual ways of working.

An aptitude for dealing with complexity is one of our defining attributes as human beings. Abstracting general rules from specific examples has lifted us to the top of the evolutionary tree, but this pattern recognition ability only stretches so far. Biology is so challenging because its patterns are too complex for us to understand, at least without very sophisticated help.

In the recent Agenda article ‘Has medicine neglected theory in favour of big data?’ Peter Coveney and Edward R Dougherty articulate the problem very well when they say: “We need deeper conceptualisation, but the prevailing view is that the complexities of life do not easily yield to theoretical models.” To build these essential models, what biotechnology needs most is better data, not just bigger data. The fundamental problem is that today’s experimental tools and working practices are outdated, and the information they produce is unstructured and difficult to reproduce.

But here is the good news. The digital technologies and processes that have already transformed how we work with complexity in other industries are equally applicable to biotechnology.

As an example, consider the semiconductor industry. Back in the 1960s and 1970s, integrated circuits were designed by hand and manually laid out. That was fine when chips only contained dozens of transistors, but when that number reached 10,000 the complexity started to become overwhelming. Clearly no human mind laid out the 2.6 billion transistors on today’s i7 processor (coincidentally about the same number as proteins in a human cell).

The advance that enabled chip designers to move on from this manual, almost artisanal approach was the advent of Electronic Design Automation, first introduced by Carver Mead and Lynn Conway in their groundbreaking 1980 book ‘Introduction to VLSI Systems’. They advocated the use of high-level languages that would allow chip designers to specify desired behaviour in a textual programming language, from which tools derived the detailed physical design and, crucially, automated the chip manufacturing process itself. In the parlance of the industry, these languages compile to silicon.


The immediate result was a massive increase in the complexity of the chips that could be designed. They also became more likely to function correctly, as logic-based verification tools simulated the designs more thoroughly prior to construction. Electronic Design Automation remains the basis of integrated circuit design today. Without it, the digital revolution that has defined so many aspects of our lives would not have occurred.
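The behaviour-to-structure idea behind Electronic Design Automation can be illustrated with a deliberately tiny sketch. The code below is not a real EDA tool: it "compiles" a behavioural specification (an ordinary Python function) into a flat netlist of logic gates via a sum-of-products expansion, then "verifies" the specification by exhaustively simulating every input pattern, echoing the logic-based verification mentioned above. All names here are illustrative.

```python
from itertools import product

def compile_to_gates(spec, inputs):
    """Toy 'silicon compiler': enumerate the truth table of `spec`
    and emit one AND term per true row (a sum-of-products form).
    Inverted inputs get an explicit NOT gate; duplicates are not
    deduplicated, since this only illustrates the concept."""
    netlist = []
    for values in product([0, 1], repeat=len(inputs)):
        env = dict(zip(inputs, values))
        if spec(**env):
            term = []
            for name, v in env.items():
                if v:
                    term.append(name)
                else:
                    netlist.append(("NOT", name, f"n_{name}"))
                    term.append(f"n_{name}")
            netlist.append(("AND", *term))
    return netlist

def simulate(spec, inputs):
    """Logic-style verification: evaluate spec on every input pattern."""
    return {vals: spec(**dict(zip(inputs, vals)))
            for vals in product([0, 1], repeat=len(inputs))}

# A half-adder's 'sum' output, specified behaviourally as XOR.
xor = lambda a, b: a != b
netlist = compile_to_gates(xor, ["a", "b"])
table = simulate(xor, ["a", "b"])
```

The point of the toy is the workflow, not the output: the designer writes only the behaviour (`a != b`), and the tooling derives the gate-level structure and checks it exhaustively before anything is built.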

DNA sequencing technologies are rapidly digitising our knowledge of nature. The deluge is swamping us in big data, but unless that data leads to a better understanding of the underlying biological systems, we cannot go from this digital representation back to the physical world. Without that linkage we cannot properly design and engineer biology to heal, feed and fuel our population.


Thankfully, biotechnology’s Electronic Design Automation moment is close at hand with the arrival of new platforms such as Antha, which allow us to abstract away many of the complexities of working with biology by linking software and hardware with laboratory working practices. These include high-level languages for describing biological processes, robust computational models of those systems, and flexible, low-cost robots for physically performing experiments. Collectively they are turning biology into a fully fledged engineering discipline.
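To make the parallel with chip design concrete, here is a hypothetical sketch (in Python, and not Antha's actual syntax or API) of what a high-level language for biological processes might look like: a scientist declares protocol steps in terms of intent, and a separate planner translates them into device-level commands for whichever liquid-handling robot happens to be available. All class and method names are invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class Step:
    action: str   # e.g. "mix" or "incubate"
    params: dict

@dataclass
class Protocol:
    """A protocol declared in terms of intent, not machine motions."""
    name: str
    steps: list = field(default_factory=list)

    def mix(self, sample, reagent, volume_ul):
        self.steps.append(Step("mix", {"sample": sample,
                                       "reagent": reagent,
                                       "volume_ul": volume_ul}))
        return self

    def incubate(self, minutes, temp_c):
        self.steps.append(Step("incubate", {"minutes": minutes,
                                            "temp_c": temp_c}))
        return self

def plan(protocol):
    """'Compile' the high-level protocol into concrete robot commands.
    A real planner would also schedule devices, track labware and
    record structured data for every step."""
    commands = []
    for step in protocol.steps:
        p = step.params
        if step.action == "mix":
            commands.append(f"ASPIRATE {p['volume_ul']}uL {p['reagent']}")
            commands.append(f"DISPENSE {p['volume_ul']}uL -> {p['sample']}")
        elif step.action == "incubate":
            commands.append(f"HOLD {p['temp_c']}C {p['minutes']}min")
    return commands

induction = (Protocol("induction")
             .mix("culture", "IPTG", 50)
             .incubate(minutes=120, temp_c=30))
commands = plan(induction)
```

Because the protocol is data rather than hand-written robot instructions, the same declaration can be re-planned for different hardware, and every execution can be logged in a structured, reproducible form, which is exactly the kind of data the article argues biology is missing.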

Most exciting of all, these advances generate reproducible, structured biological information to which we can apply machine learning, letting us benefit from the exquisite complexity of living systems. In turn this makes sophisticated biomanufacturing local to the point of use and, for some products, brings it literally into people’s own back yards.