# AI-assisted maths discovery

## 10 am – 10 pm, 28 Oct 2022

The London Institute hosts a one-day symposium on using machine intelligence to assist and automate mathematical conjecture formation.

Good conjectures can inspire new branches of mathematics. They usually come from spotting patterns and applying instinct. Because mathematics is exact, an apparent equivalence is never a statistical coincidence, so automated pattern detection is free of the bias normally found in high-dimensional search over empirical data. Can machines help identify candidate conjectures and so speed up theoretical research?

There has been a recent surge of interest in using machine learning to study various aspects of geometry, topology, knot theory and representation theory. Theoretical physics has helped drive these developments, either through the percolation of ideas, or by providing an effective testbed for machine learning methods, such as the classification of Calabi-Yau geometries in string theory.

In this one-day symposium, we discuss how to assist and automate new mathematical discovery by exploiting novel computation and deep learning, as well as insights into the theory of learning itself. Several approaches in the fields above have relied on the deep learning architectures of neural networks and their remarkable property of acting as universal function approximators. Number theory, normally resistant to such approximations, is now proving susceptible to certain forms of learning. Well-known feature attribution techniques were recently used to find dramatic new results in representation theory and knot theory.
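As a toy illustration of the kind of experiment described above (the architecture, target function and hyperparameters here are illustrative choices of ours, not those of any speaker), one can train a small neural network to approximate a number-theoretic function, here the divisor-counting function d(n) read off the binary digits of n:

```python
import numpy as np

rng = np.random.default_rng(0)

# Dataset: binary digits of n -> number of divisors d(n), for n = 1..256.
def num_divisors(n):
    return sum(1 for d in range(1, n + 1) if n % d == 0)

N, BITS = 256, 9
X = np.array([[(n >> b) & 1 for b in range(BITS)] for n in range(1, N + 1)], float)
y = np.array([num_divisors(n) for n in range(1, N + 1)], float)
y = (y - y.mean()) / y.std()        # normalise target to zero mean, unit variance

# One-hidden-layer MLP, trained by full-batch gradient descent on mean squared error.
H, lr = 64, 0.05
W1 = rng.normal(0, 0.5, (BITS, H)); b1 = np.zeros(H)
W2 = rng.normal(0, 0.1, (H, 1));    b2 = np.zeros(1)

losses = []
for step in range(2000):
    # Forward pass
    h = np.tanh(X @ W1 + b1)
    pred = (h @ W2 + b2).ravel()
    err = pred - y
    losses.append(float((err ** 2).mean()))
    # Backpropagation of the MSE gradient
    g2 = 2 * err[:, None] / len(y)          # gradient at the output layer
    dW2, db2 = h.T @ g2, g2.sum(0)
    gh = (g2 @ W2.T) * (1 - h ** 2)         # through the tanh nonlinearity
    dW1, db1 = X.T @ gh, gh.sum(0)
    # Gradient-descent update
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

print(f"MSE: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

Even when such a fit succeeds, the interesting question for conjecture formation is what structure the network has latched onto, which is where feature-attribution techniques come in.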

As we develop better theories of deep learning, profound connections to well-studied areas of mathematics are becoming apparent. The Kolmogorov-Arnold representation theorem, which has deep connections to Hilbert's 13th problem, expresses multivariate functions in a layered form strikingly similar to a neural network. More recently, researchers established that training in neural networks, viewed geometrically, is a variant of Ricci flow. Leveraging machine learning algorithms with deep insights such as these will lead to new mathematical discoveries, as well as a better understanding of deep learning itself.
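For reference, the Kolmogorov-Arnold representation theorem states that any continuous function of $n$ variables on the unit cube decomposes into sums and compositions of continuous single-variable functions:

```latex
f(x_1, \dots, x_n) \;=\; \sum_{q=0}^{2n} \Phi_q\!\left( \sum_{p=1}^{n} \phi_{q,p}(x_p) \right)
```

The inner functions $\phi_{q,p}$ play the role of a first layer and the outer functions $\Phi_q$ of a second layer. The analogy is structural rather than exact, since the functions guaranteed by the theorem can be highly irregular, whereas neural networks use smooth activations.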

## Programme

This one-day symposium will take place from 10am to 6pm at the London Institute, followed by drinks and dinner at 6:30pm at the nearby Davy's Wine Bar.

## Venue

We will meet in the large Bragg room of the London Institute, which is on the second floor of the Royal Institution.

## Financial support

This symposium is funded by the Henderson Institute, Boston Consulting Group. For attendees based outside London, we can cover travel to London and overnight accommodation.

## Chalk talks

Chalk talks are a step back from the usual technical seminar towards a more interactive way of communicating. In a chalk talk, one researcher leads a discussion at the blackboard. The format is flexible: it can be anything from a high-level overview of results to a panel discussion on a topic that needs rethinking. Chalk talks are a chance to leverage the tremendous expertise we have gathered and to form long-term collaborations.

## Speakers

Dr Challenger Mishra is a Fellow at Cambridge and a Stipendiary Lecturer at Oxford. His research includes machine learning, Calabi-Yau manifolds and string compactifications. He was previously a Rhodes Scholar at the Rudolf Peierls Centre for Theoretical Physics in Oxford.

Prof Yang-Hui He is a Fellow at the London Institute, Professor at City, University of London, Chang-Jiang Chair at Nankai University and Lecturer at Merton College, Oxford. He studied at Princeton, Cambridge and MIT and works at the interface of string theory, geometry and machine learning.

Dr Thomas Fink is the founding Director of the London Institute and Chargé de Recherche in the French CNRS. He studied physics at Caltech, Cambridge and École Normale Supérieure. His work includes statistical physics, combinatorics and the mathematics of evolvable systems.