Cambridge Engineers Build Brain-Like Memristor Chip That Could Cut AI Energy Use by 70%
A new hafnium oxide device behaves like a synapse, switching at currents one million times lower than today's silicon — and could ease the AI industry's looming power crisis.
Engineers at the University of Cambridge have built a new kind of computer chip that mimics the way neurons in the human brain pass signals to one another, running artificial-intelligence calculations at switching currents roughly one million times lower than today's silicon. The results could cut the energy demands of large AI systems by as much as 70 percent. The work, published April 22 in the journal Nature Electronics, is the most stable and energy-efficient demonstration yet of a 'memristor' — a circuit element first theorized in 1971 — and the researchers describe it as a credible alternative to the conventional architectures that have dominated computing for half a century.
The Cambridge team, led by Dr. Babak Bakhit and Professor Manish Chhowalla in the university's Department of Materials Science and Metallurgy, fabricated the device from a thin film of hafnium oxide, a material already common in commercial semiconductor manufacturing. By precisely controlling the structure of the film at the atomic scale, the researchers were able to create a stable, reliable switch whose conductance can take on a continuous range of values, rather than the binary on-or-off states that dominate conventional logic chips. That property allows a single device to behave like a synapse — the connection between brain cells whose strength can grow or weaken with use.
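To give a feel for why analog conductance matters, the sketch below illustrates the basic in-memory computing idea that memristor crossbars exploit: each crosspoint stores a weight as a conductance, and Ohm's and Kirchhoff's laws turn a set of input voltages directly into a multiply-accumulate operation where the data already sits. The conductance range, array size, and voltages are illustrative assumptions, not measurements from the Cambridge device.

```python
import numpy as np

# Illustrative sketch of in-memory computing with an analog crossbar.
# All values are assumptions; this is not the Cambridge device's physics.
rng = np.random.default_rng(0)

g_min, g_max = 1e-6, 1e-4                   # assumed conductance range (siemens)
G = rng.uniform(g_min, g_max, size=(4, 3))  # 4 input rows x 3 output columns

V = np.array([0.10, 0.00, 0.20, 0.05])      # input voltages on the rows (volts)

# Each column current sums V[i] * G[i, j] over the rows: Ohm's law at each
# crosspoint, Kirchhoff's current law along each column, i.e. a vector-matrix
# multiply performed where the weights are stored, with no data movement.
I = V @ G
print(I)  # column currents in amperes
```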
In benchmark tests reported in the paper, the device exhibited spike-timing-dependent plasticity, the same biological learning rule that allows real neurons to strengthen connections when one cell fires shortly before another. The team trained a small simulated neural network using arrays of the new memristors and showed that it could classify handwritten digits and identify simple images with accuracy comparable to a conventional graphics processing unit, while consuming a small fraction of the power. 'The energy advantage is not incremental,' Bakhit said in a Cambridge briefing. 'We are talking about orders of magnitude. That is the kind of step change you need to make models like the ones running today's chatbots viable on a phone or in a satellite.'
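The learning rule itself is well documented in the neuroscience literature. Below is a minimal sketch of the standard pair-based form of spike-timing-dependent plasticity: a connection strengthens when the presynaptic spike arrives shortly before the postsynaptic one and weakens when the order is reversed, with an exponential falloff in the timing difference. The amplitudes and time constants here are textbook-style assumptions, not parameters reported in the Nature Electronics paper.

```python
import math

# Pair-based STDP rule (illustrative parameters, not the paper's fitted values).
# dt_ms > 0 means the presynaptic spike preceded the postsynaptic spike.
def stdp_delta_w(dt_ms, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    if dt_ms > 0:                                 # pre before post: potentiation
        return a_plus * math.exp(-dt_ms / tau_plus)
    if dt_ms < 0:                                 # pre after post: depression
        return -a_minus * math.exp(dt_ms / tau_minus)
    return 0.0

# Weight changes for a few spike-timing differences (milliseconds).
for dt in (-40, -10, 10, 40):
    print(f"dt = {dt:+d} ms -> delta_w = {stdp_delta_w(dt):+.5f}")
```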
The research arrives as the energy footprint of artificial intelligence is becoming a serious global concern. A May report from the International Energy Agency projected that data centers running AI workloads could consume more than 1,500 terawatt-hours of electricity by 2030, roughly equivalent to the entire current electricity demand of Japan. Hyperscalers including Microsoft, Google and Amazon have signed long-term contracts for nuclear power and have warned investors that energy availability — not chip supply — may become the binding constraint on continued AI growth. Hardware that can perform machine learning at a thousandth of current power demand could ease that pressure substantially.
The Cambridge devices remain a laboratory demonstration. Scaling them from individual cells to wafer-scale arrays compatible with existing semiconductor fabs is the next major engineering challenge, and Chhowalla estimated that commercial deployment is still 'three to five years away in best case.' Several startups, including Mythic AI in Texas and Rain AI in San Francisco, are racing toward similar in-memory computing architectures, and a partnership between the Cambridge group and the British semiconductor firm Pragmatic Semiconductor is already exploring early commercial pathways. If the technology delivers on the laboratory results, the era of brute-force GPU scaling that has defined the current AI boom may be approaching an inflection point.
Originally reported by the University of Cambridge.