Tufts Researchers Build Robotic AI That Uses 100 Times Less Energy and Outperforms Conventional Models
A neuro-symbolic system from Tufts University trained in 34 minutes versus 36 hours for a standard baseline, used just 1% of the baseline's training energy, and solved the Tower of Hanoi with 95% accuracy against 34% for deep-learning rivals, pointing to a more efficient path for AI.
Researchers at Tufts University have demonstrated a robotic AI system that uses 100 times less energy than conventional deep-learning approaches while dramatically outperforming them on complex planning tasks — a result that challenges assumptions driving a global data-center building boom and points toward a fundamentally different path for efficient artificial intelligence.
The system, developed in the laboratory of Matthias Scheutz, Karol Family Applied Technology Professor at Tufts' School of Engineering, combines traditional neural networks with symbolic reasoning — a hybrid architecture known as neuro-symbolic AI. Rather than relying on brute-force trial-and-error learning that requires massive computational resources, the system applies explicit logical rules to break problems into structured steps, mirroring the way humans approach novel challenges.
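The core idea can be illustrated with a small sketch. The following Python snippet (an illustration of the general neuro-symbolic principle, not the Tufts implementation, whose code has not been described here) shows how an explicit symbolic rule can prune the action space before any learning happens, so a learner never wastes trials on moves the task's logic forbids:

```python
# Illustrative sketch: a symbolic legality rule for Tower of Hanoi
# filters candidate actions, limiting trial and error up front.
# This is a generic example, not the Tufts system's actual code.

def is_legal_move(pegs, src, dst):
    """Hanoi rule: only a topmost ring may move, and it may not land
    on a smaller ring. pegs is a list of stacks; smaller int means
    smaller ring; the top of each stack is its last element."""
    if not pegs[src]:
        return False
    return not pegs[dst] or pegs[src][-1] < pegs[dst][-1]

def legal_moves(pegs):
    """Enumerate every (src, dst) pair that satisfies the rule."""
    return [(s, d) for s in range(3) for d in range(3)
            if s != d and is_legal_move(pegs, s, d)]

# Three rings stacked on peg 0; ring 1 is the smallest.
pegs = [[3, 2, 1], [], []]
print(legal_moves(pegs))  # → [(0, 1), (0, 2)]
```

Of the six possible peg-to-peg moves, the rule admits only two, so any learner wrapped around this filter explores a far smaller search space than one that must discover the constraints by trial and error.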
The team tested its approach on Tower of Hanoi puzzles, a classic planning problem that requires moving stacked rings between pegs in a specific sequence without ever placing a larger ring on a smaller one. The neuro-symbolic model solved the three-ring version with a 95 percent success rate, compared with just 34 percent for the best-performing conventional visual-language-action baseline. When presented with a four-ring variant it had never encountered during training, the neuro-symbolic model still succeeded 78 percent of the time, while both conventional baselines failed to complete a single episode.
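The puzzle's appeal as a benchmark is that it admits a fully structured optimal plan. The classic recursive solution, shown below as a minimal sketch (again purely illustrative, not the Tufts system), makes that structure explicit: solving for n rings reduces to two smaller subproblems around a single forced move.

```python
# Classic recursive Tower of Hanoi planner (illustrative only).
# Moves n rings from peg src to peg dst using peg aux as scratch.

def hanoi(n, src, dst, aux, moves=None):
    """Return the optimal move list as (src, dst) peg pairs."""
    if moves is None:
        moves = []
    if n > 0:
        hanoi(n - 1, src, aux, dst, moves)   # clear the top n-1 rings
        moves.append((src, dst))             # move the largest ring
        hanoi(n - 1, aux, dst, src, moves)   # restack the n-1 rings
    return moves

plan = hanoi(3, 0, 2, 1)
print(len(plan))  # → 7, the optimal 2**3 - 1 moves for three rings
```

Because the same recursion handles any ring count, a system that has internalized this structure generalizes from three rings to four without retraining, which is exactly the kind of transfer the conventional baselines failed to achieve.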
The energy savings were equally striking. Training the conventional baseline required more than 36 hours of GPU computation; the neuro-symbolic model trained in 34 minutes — a 63-fold reduction in training time. Energy consumption during training fell to just 1 percent of the conventional model's requirement, and operational energy during execution dropped to 5 percent. Scheutz noted that "a neuro-symbolic VLA can apply rules that limit the amount of trial and error during learning and get to a solution much faster."
The research arrives as global electricity demand from AI data centers is projected to double between 2026 and 2030, straining power grids in the United States, Europe, and Asia. The International Energy Agency estimates that AI-related electricity consumption could rival the entire output of Japan by the end of the decade. Technology companies are spending hundreds of billions of dollars on new data centers, and nuclear power plant restarts are being fast-tracked specifically to meet AI electricity demand. Against that backdrop, the Tufts findings offer a tantalizing possibility: that architectural innovation, not just more hardware, could be a viable route to scaling AI capability without a proportional energy cost.
Important caveats apply. The comparison was conducted in simulation rather than on physical robots, and the Tower of Hanoi problem is a highly structured domain that may not reflect the full complexity of real-world AI tasks. The paper itself acknowledges the approach is not a universal solution for all AI workloads. Nonetheless, independent AI researchers said the energy-efficiency margins demonstrated are significant enough to warrant serious follow-up research. The work will be presented at the International Conference on Robotics and Automation (ICRA) in Vienna in May 2026.
Originally reported by Tufts University.