THOR AI Framework Solves 100-Year-Old Physics Problem 400 Times Faster Than Supercomputers
A system built by University of New Mexico and Los Alamos researchers uses tensor networks and machine learning to directly evaluate atomic interaction calculations that previously took weeks of supercomputer time.
A new artificial intelligence framework called THOR has solved one of computational physics' most persistent and expensive problems — the evaluation of configurational integrals — performing calculations more than 400 times faster than advanced supercomputer simulations while preserving scientific accuracy. The system, developed by researchers at the University of New Mexico and Los Alamos National Laboratory, uses tensor network mathematics combined with machine learning to directly compute how atoms interact inside materials, bypassing the months-long indirect simulations that have bottlenecked materials discovery for decades.
The core challenge THOR addresses is known as the "curse of dimensionality." Configurational integrals describe the statistical behavior of atoms in a material across all possible arrangements and thermal states. As the number of variables grows, the computational complexity increases exponentially, making direct calculation impractical. Traditional approaches — including molecular dynamics simulations and Monte Carlo methods — work around this by sampling many random configurations over time, eventually building up an approximation of the true answer. But even these indirect methods, run on the world's most powerful supercomputers, can take weeks to converge on results for moderately complex materials.
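The scaling problem described above is easy to see in miniature. The following sketch uses a toy harmonic "energy landscape" (an assumption for illustration, not a material THOR was tested on): a dense quadrature grid needs exponentially many nodes as dimensions grow, while Monte Carlo sampling trades that cost for slow statistical convergence.

```python
import math
import random

def grid_points(d, n_per_dim=10):
    """Nodes a dense quadrature grid needs in d dimensions: n^d."""
    return n_per_dim ** d

def mc_partition_function(d, beta=1.0, n_samples=100_000, half_width=6.0, seed=0):
    """Monte Carlo estimate of the configurational integral
    Z = ∫ exp(-beta * U(x)) dx over the box [-L, L]^d,
    for a toy harmonic potential U(x) = sum(x_i^2) / 2."""
    rng = random.Random(seed)
    volume = (2.0 * half_width) ** d
    total = 0.0
    for _ in range(n_samples):
        x = [rng.uniform(-half_width, half_width) for _ in range(d)]
        u = sum(xi * xi for xi in x) / 2.0
        total += math.exp(-beta * u)
    return volume * total / n_samples

d, beta = 3, 1.0
print(grid_points(d))    # 1,000 nodes already at d = 3
print(grid_points(30))   # 10^30 nodes at d = 30: the curse of dimensionality
estimate = mc_partition_function(d, beta)
exact = (2.0 * math.pi / beta) ** (d / 2)  # analytic answer for this toy model
```

With 100,000 samples the estimate lands within a few percent of the exact value, and the error shrinks only as 1/sqrt(N), which is why sampling-based methods need so much supercomputer time to converge.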
THOR, which stands for Tensors for High-dimensional Object Representation, sidesteps this approximation by applying a mathematical technique called tensor train cross interpolation to compress the enormous multidimensional data landscape into a series of connected, manageable tensors. Rather than sampling the space randomly, the framework identifies a sparse set of configurations that collectively capture the full structure of the configurational integral, then evaluates it directly. Machine learning potentials — neural network models trained on quantum mechanical calculations — provide the underlying energy landscape that the tensor framework then integrates over.
Testing on real materials demonstrated the framework's power concretely. For copper, argon, and tin — benchmark materials with well-characterized physical properties — THOR reproduced established thermodynamic results including free energies, heat capacities, and phase transition temperatures with accuracy matching state-of-the-art simulations, but in a fraction of the time. In the most demanding test cases, THOR outperformed comparable Los Alamos simulations by a factor of more than 400. The framework proved particularly effective at computing properties across ranges of temperatures and pressures, a capability important for applications in metallurgy, battery materials, and pharmaceutical design.
"This is the kind of breakthrough that changes what questions you can even ask," said one of the lead researchers, noting that the speed improvement effectively opens up whole classes of materials that were previously too computationally expensive to model from first principles. The full THOR codebase has been released on GitHub under an open-source license, giving research groups worldwide immediate access. The Los Alamos and New Mexico teams are now working on extending the framework to more complex materials including alloys, oxides, and molecular crystals, where the configurational space is even larger and the payoff for fast, accurate calculation even greater.
Originally reported by ScienceDaily.