Moore’s Law – the philosophy of incremental improvement the semiconductor industry has lived by for over 40 years – is under threat.
People have long predicted that its premise – that the number of transistors on a chip doubles roughly every two years – would come to an end, but the data released recently suggests the end is finally nigh. The 2015 International Technology Roadmap for Semiconductors – the biennial roadmap created jointly by an association of America’s major chipmakers – says shrinking transistors will be a thing of the past by 2021.
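To see what that premise implies, here’s a quick back-of-the-envelope compounding calculation. The starting point – the Intel 4004’s roughly 2,300 transistors in 1971 – is our own illustrative baseline, not a figure from the roadmap:

```python
# Rough illustration of Moore's Law compounding: start from the Intel
# 4004's ~2,300 transistors (1971) and double every two years.
def projected_transistors(start=2300, start_year=1971, year=2015):
    doublings = (year - start_year) // 2
    return start * 2 ** doublings

# 22 doublings later, the projection lands in the billions - roughly
# where real 2015-era chips actually sit.
print(f"{projected_transistors():,}")  # -> 9,646,899,200
```

The point of the exercise: four decades of doubling turns thousands into billions, which is exactly why even a few more doublings matter so much to the industry.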
Why? Aside from silicon’s limitations at high temperatures and the extreme cost of further miniaturisation, the scales eventually become so small that, once chip features approach just a few atoms wide, electrons start to behave differently and the traditional laws of how microchips work go out of the window. Other predictions beyond the 2015 roadmap say the end of Moore’s Law could come as soon as 2022.
So what next? Some researchers are suggesting looking for efficiencies in other areas, creating chips specific to individual use cases. Other scientists, however, are looking at ever more interesting and radical materials and techniques to ensure that if we can’t stick to Moore’s Law, semiconductors don’t come to a standstill.
We’ve rounded up some of the ways scientists hope to extend or replace Moore’s Law and change computing forever.
Extreme Ultraviolet (EUV) lithography
Currently, lithography is used to etch chip designs onto silicon wafers. However, the wavelengths used by today’s technology are too long, making continued miniaturisation increasingly difficult and expensive. Extreme Ultraviolet (EUV) lithography uses much shorter wavelengths of light and so can create higher-density chips, but has been difficult to scale due to dim light sources. Recent breakthroughs mean EUV could become a fully viable process by 2020.
Vacuum tubes

Though long since outdated and replaced, some researchers are returning to vacuum tube technology. Caltech’s Nanofabrication Group is developing microscopic tubes that could be used instead of traditional transistors. The tubes offer a way to avoid the unpredictable behaviour silicon exhibits once you reach low-nanometre measurements. Boeing is one of the organisations currently funding the research, but there’s no telling if or when vacuum tubes will make their big comeback.
Graphene

Probably the best-known supermaterial, graphene promises to change the world. Made up of a single layer of carbon atoms, this 2D material is stronger than steel, harder than diamond, flexible, transparent, and an excellent conductor.
Graphene could lead to flexible screens (making the foldable phone a reality), better solar panels, more efficient batteries, and a host of other novel applications. But it’s the material’s potential use within circuits that has many in technology excited.
Mass production of graphene – first isolated by scientists in Manchester using little more than graphite pencils and Scotch tape – is currently difficult, and ways to incorporate the material’s qualities into actual electronics are still being worked out.
There’s no shortage of research into the material: Rice University and Berkeley Lab are working on making it more scalable, MIT is working on a way to use graphene to create light-based chips, the Georgia Institute of Technology is working on ways to create band gaps on par with silicon’s, and IBM has invested billions of dollars into graphene development.
Though still a while off from mass application, graphene potentially offers leaps in performance far beyond the incrementalism of Moore’s Law.
Carbon nanotubes

Discovered in 1991, carbon nanotubes (CNTs) are microscopic hollow cylinders made up of a single layer of carbon atoms (think graphene rolled up like a newspaper). Incredibly strong and conductive, they are seen by some researchers as a potential replacement for silicon. The first CNT processor, built in 2013, used 178 transistors and had the processing power of a computer from the early 70s.
This month, researchers at McMaster University found a way to create nanotubes more efficiently, potentially paving the way towards a more scalable technology. Stanford University and IBM Research are among the other institutions looking at CNTs, the latter recently devising a way to use the nanotubes for encryption.
“We feel that CNTs have a chance to possibly replace silicon transistors sometime in the future - if critical problems are solved,” Supratik Guha, director of physical sciences at IBM Research, told Computerworld last year. As with many emerging technologies, turning something that works in a lab into a viable, scalable product is still some years off.
Stanene and 2D friends
Graphene research has barely been going for a decade, but there are already newer, hotter 2D materials in town. Stanene is made of a single layer of tin atoms arranged in a hexagonal formation. In theory – creating the material has proved difficult thus far – it can conduct electricity without heat loss, a very exciting prospect for chipmakers.
But stanene is just one of a whole new family of single-atom-layer materials. There’s also silicene (a single layer of silicon), germanene (germanium), white graphene (boron nitride), phosphorene (phosphorus), molybdenum disulfide, and tin monoxide. Each offers its own thermal and electrical properties, and all could lead to higher-performing and longer-lasting electronics. Graphene may be the trendy material with all the research budget, but it likely won’t be the only 2D material in the computers of the future.
Diamond

No, we’re not talking about Vertu’s garish crystal-studded phones. Akhan Semiconductor says that using (synthetic) diamond-based transistors, capacitors, and resistors instead of silicon could remove much of the worry around overheating, allowing for better performance and the removal of heatsinks from devices. Production has only just begun, so it could be a couple of years before diamond-based devices become a reality.
MIT are also looking at using diamonds within quantum computing (see below).
Quantum computing

There’s no shortage of people getting excited about quantum computing. The idea – that instead of bits that are either 1 or 0, qubits can be both at the same time – could improve our current computing power by orders of magnitude.
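To make the superposition idea slightly more concrete, here’s a toy statevector sketch in plain NumPy – an illustration of the maths, not real quantum hardware:

```python
import numpy as np

# Toy illustration: a single qubit is a two-amplitude state vector.
# |0> is [1, 0]; a Hadamard gate puts it into an equal superposition
# of 0 and 1 - the "both at the same time" the article describes.
ket0 = np.array([1.0, 0.0])
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

superposition = H @ ket0
probs = np.abs(superposition) ** 2  # Born rule: measurement probabilities
print(probs)  # -> [0.5 0.5]: equal chance of reading 0 or 1
```

Chain n such qubits together and the state vector has 2^n amplitudes, which is where the “orders of magnitude” claim comes from – and also why simulating quantum machines classically becomes intractable so quickly.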
AI could be smarter, encryption unbreakable, Big Data bigger and data modelling pinpoint accurate. Weather reports would never be wrong, medicine would be more effective, self-driving cars would never crash, and our robot overlords would be infallible.
But despite the promise, quantum computing may never end up being used outside large data crunching. The expense and infrastructure required to create such machines are astronomical now, and even over the next couple of decades – the sort of timeline many think it will be before we really get to grips with the technology – it might never be feasible to make them small enough to put in your standard server rack or desktop.
Brain-inspired computing

When it comes to computing power, the human brain is a pretty nifty thing. It’s compact, powerful, efficient, and extremely adaptable. And as we approach the limits of traditional computing, more and more time is being spent looking at how we can take a more brain-like approach to computing. Aside from creating machine learning algorithms that learn in a similar way to how humans do, there’s a growing breed of neuroelectronics designed to function more like our own neurons. IBM is working on artificial neurons that can fire and carry an electric pulse in a similar fashion to our own organic ones, meaning machines will be able to think more like we do. This will allow for better real-time pattern analysis, for example image recognition.
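IBM’s devices are built from phase-change materials, but the integrate-and-fire behaviour they mimic can be sketched in a few lines of software. This is a toy leaky integrate-and-fire model of our own devising, not IBM’s actual design:

```python
# Toy leaky integrate-and-fire neuron: membrane potential accumulates
# input current, leaks over time, and emits a spike when it crosses a
# threshold - the behaviour artificial hardware neurons try to mimic.
def simulate(inputs, threshold=1.0, leak=0.9):
    potential, spikes = 0.0, []
    for current in inputs:
        potential = potential * leak + current  # integrate with leak
        if potential >= threshold:              # fire when threshold crossed
            spikes.append(1)
            potential = 0.0                     # reset after spiking
        else:
            spikes.append(0)
    return spikes

# Weak inputs accumulate until the fourth step triggers a spike.
print(simulate([0.3, 0.3, 0.3, 0.3, 0.1]))  # -> [0, 0, 0, 1, 0]
```

Because such neurons only “compute” when they spike, networks built from them can be far more power-efficient than conventional logic that clocks every cycle.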
Another IBM project, 5D electronic blood, aims to help with the power-delivery and cooling issues that arise from trying to increase the density of circuits in our electronics. Based on the company’s liquid-cooling and microfluidics research – and a rough extension of how our own brains use fluid to cool all those firing neurons – it could lead to multiple CPUs and GPUs being stacked on top of one another. The concept is still at a very early stage but does demonstrably work; last year IBM was able to deliver 100 milliwatts of power to a chip while dissipating all of its generated heat.
Some scientists, however, aren’t just looking at ways to mimic the computational power of biology; some are looking to create actual organic computers. Researchers at McGill University have created a book-sized biological supercomputer which uses adenosine triphosphate (ATP), a molecule found in human bodies, to move proteins (in lieu of electrons) along the chip. The use of proteins means less heat and less energy.
Harvard, meanwhile, has developed a technique to write code into bacteria. The information, encoded using the organism’s own immune system and passed on to its descendants, can then be read back by genotyping the bacteria. While impressive, it’s not a particularly useful technique yet; the process is only good for up to 100 bytes.
Researchers at MIT are working on what they call “gene circuits” which “integrate both analogue and digital computation in living cells, allowing them to form gene circuits capable of carrying out complex processing operations”. Although interesting, research in this area is focused on how it could treat diseases in the gut.
DNA data storage
What if, instead of storing photos on hard drives, clouds, USBs, SD cards and whatever else, you could fit all of the world’s data onto a device that could fit on your desk and survive for millions of years? The University of Washington and Microsoft are working on using synthetic DNA – the same building blocks that make up all biological life – for data storage.
Earlier this year the researchers released a paper explaining how they had achieved the first complete system to encode, store and retrieve digital data using DNA molecules. The team encoded four images into DNA snippets, then retrieved and reconstructed them from a larger pool of DNA without any information loss.
DNA can store far more information, at far greater density, than any current digital storage medium and, if stored correctly, can preserve it for as long as required without degradation. It works by converting 1s and 0s into the four building blocks of DNA – adenine, guanine, cytosine and thymine – and then synthesising DNA with the resulting sequence. Although promising, it may end up being useful only for long-term archiving, as it can take the best part of a day to sequence the DNA and retrieve the information.
CNF – biodegradable microchips
Though it doesn’t offer the same jump in computing power, the concept of environmentally friendly, biodegradable chips is one that could have an equally large impact on people. The eWaste farms in Africa and Asia are filled with carcinogenic materials, harming both the land and the workers reclaiming gold and other valuable metals from discarded electronics.
Cellulose nanofibril (CNF) replaces the support layer of a semiconductor (also called the substrate, and made from harmful materials such as gallium arsenide) with a thin layer of wood crystals bonded together with epoxy resin. The resulting material shares the same conductive properties but dissolves harmlessly. Work on the material is currently being conducted at the University of Wisconsin-Madison.
There are numerous other breakthroughs coming through all the time: spintronics, atom-wide storage and phase-change memory, to name a few. Of course, most of these discoveries have been made in labs, often at costs far larger than an entire lifetime’s worth of pay cheques. Some may never go beyond mere experiments. But some may end up in your computer, data centre, smartphone – or in the AI robots that will someday take over the world.