Machine learning the geometry of the universe
Using machine learning to search the vast space of 10-dimensional geometries for ones that predict the Standard Model from string theory.
String theory is the most promising candidate for unifying gravity with the Standard Model. It predicts 10 dimensions, four of which form our familiar space-time. The remaining six are folded up at a tiny scale, and their geometry determines the properties of our universe. However, there is a vast number of candidate manifolds, and finding the right one is hard: there is no known theoretical selection principle, and exhaustive search is computationally infeasible.
In this project, we use techniques from machine learning to search for a geometry that reproduces our observed universe. The search is informed by recently compiled datasets, such as the complete intersection Calabi-Yau (CICY) manifolds, the Kreuzer-Skarke list of reflexive polytopes, and databases of G2 manifolds. One advantage of these datasets is that they are expressed in the tensor language of algebraic and combinatorial geometry, which is well suited to deep neural networks, especially convolutional and graph networks.
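To illustrate why this tensor language is CNN-friendly, here is a minimal sketch of preparing one such object for a convolutional network. The configuration matrix below and the padding bounds are illustrative assumptions, not a specific entry from the CICY dataset; the hand-rolled convolution stands in for a real network layer.

```python
import numpy as np

# Hypothetical example: a CICY manifold is specified by a configuration
# matrix whose entry q[i][j] records the degree of the j-th defining
# polynomial in the i-th projective-space factor. (Illustrative values,
# not a particular manifold from the dataset.)
config = [[1, 1],
          [3, 1]]

# Pad every matrix to a fixed shape so an entire dataset stacks into one
# tensor -- exactly the input format a convolutional network expects.
MAX_ROWS, MAX_COLS = 12, 15  # assumed bounds for illustration
x = np.zeros((MAX_ROWS, MAX_COLS), dtype=np.float32)
q = np.asarray(config, dtype=np.float32)
x[: q.shape[0], : q.shape[1]] = q

# One hand-rolled 3x3 convolution pass, to show the padded tensor is
# directly consumable by CNN-style operations.
kernel = np.ones((3, 3), dtype=np.float32) / 9.0
out = np.array([
    [np.sum(x[i : i + 3, j : j + 3] * kernel)
     for j in range(MAX_COLS - 2)]
    for i in range(MAX_ROWS - 2)
])
print(x.shape, out.shape)  # (12, 15) (10, 13)
```

In practice a framework such as PyTorch or JAX would supply the convolution layers; the point here is only the fixed-shape tensor encoding.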
This work aims to uncover new patterns in the space of manifolds and to suggest principles for selecting the right one, ultimately bringing us closer to a Theory of Everything. It will also generate new benchmark problems for machine learning, and could lead to new conjectures in pure mathematics by uncovering simpler or unexpected formulae.
Papers in this project
A neural network learns to classify different types of spacetime in general relativity according to their algebraic Petrov classification.
Neural networks find efficient ways to compute the Hilbert series, an important counting function in algebraic geometry and gauge theory.
Unsupervised machine learning of the Hodge numbers of Calabi-Yau hypersurfaces detects new patterns, including an unexpected linear dependence.
Neural networks find numerical solutions to Hermitian Yang-Mills equations, a difficult system of PDEs crucial to mathematics and physics.
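The unsupervised pattern-finding mentioned above can be sketched with principal component analysis: if a collection of Hodge-number pairs lies near a line, PCA on the centered data reveals one dominant direction. The data below are synthetic stand-ins built to satisfy an approximate linear relation; they are not the actual Hodge numbers from the paper.

```python
import numpy as np

# Synthetic stand-in for (h11, h21) pairs: points constructed to obey an
# approximate linear relation h21 = 2.5*h11 + 10 plus small noise,
# mimicking the kind of linear dependence unsupervised methods surface.
rng = np.random.default_rng(0)
h11 = rng.integers(1, 100, size=500).astype(float)
h21 = 2.5 * h11 + 10.0 + rng.normal(0.0, 0.5, size=500)
data = np.column_stack([h11, h21])

# PCA via SVD on centered data: a near-linear point cloud makes the
# second singular value tiny relative to the first.
centered = data - data.mean(axis=0)
_, s, vt = np.linalg.svd(centered, full_matrices=False)
explained = s[0] ** 2 / np.sum(s ** 2)

print("singular values:", s)
print("fraction of variance along first component:", explained)
print("detected linear direction:", vt[0])
```

A large gap between the two singular values is the unsupervised signal that a linear relation is present; the first principal direction then gives its slope.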