Converging futures: AI and finance
2 pm, 25 Jul 2023
The London Institute brings together experts from the worlds of finance and AI to discuss the potential and the pitfalls of AI-driven markets.
Mathematical theories have informed our understanding of financial markets since the stochastic modelling of Black, Scholes and Merton in the early 1970s—and also helped to shape them. Generative AI, which can identify and mimic patterns in data, is already gaining traction in risk assessment, portfolio optimisation and trading strategies. Will AI challenge the belief that global financial markets are fundamentally unpredictable? Are we comfortable with the idea of it steering those markets? Can we extend the principles behind large language models to create large market models, which will predict market trends better than any human?
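The Black-Scholes pricing formula mentioned above can be sketched in a few lines of Python. This is the textbook closed-form price for a European call, not code from any of the speakers, and the parameter values in the usage note are purely illustrative.

```python
import math

def black_scholes_call(S, K, T, r, sigma):
    """European call price under the Black-Scholes model.

    S: spot price, K: strike, T: years to expiry,
    r: risk-free rate, sigma: annualised volatility.
    """
    d1 = (math.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    # Standard normal CDF, expressed via the error function
    Phi = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    return S * Phi(d1) - K * math.exp(-r * T) * Phi(d2)
```

For example, an at-the-money call with S = K = 100, one year to expiry, a 5% risk-free rate and 20% volatility prices at roughly 10.45.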
In this half-day event, the London Institute brings together experts in finance, mathematical physics, machine learning and large language models to discuss the potential and pitfalls of AI-driven markets. There will be six talks, then a panel discussion, followed by a drinks reception.
This event will take place on Tuesday 25 July at the London Institute for Mathematical Sciences, which is on the second floor of the Royal Institution. Coffee will be served from 13:30. Talks take place in Tyndall’s Parlour from 14:00. Afterwards, from 18:00, there will be drinks and further discussion in the Old Post Room. To register to attend, email firstname.lastname@example.org.
- 13:30 Arrival
- 14:00 Nikolaos Kaplis: Geometry of language
- 14:30 Hao Ni: Generating high-fidelity synthetic time series via the path characteristic function
- 15:00 Guido Caldarelli: The physics of financial networks
- 15:30 Coffee break
- 16:00 Mikhail Burtsev: Beyond attention: breaking the limits of transformer context length with recurrent memory
- 16:30 Challenger Mishra: Mathematical conjecture generation and machine intelligence
- 17:00 Jon Hammant: Machine learning for finance in the cloud
- 17:30 Panel discussion: Challenger Mishra, Yang-Hui He, Bharat Kasturi, Mikhail Burtsev, Nikolaos Kaplis
- 18:00 Drinks reception
This workshop is funded by Arabesque AI. Arabesque AI was born out of a desire to tackle key challenges faced by the asset management industry. The company operates through its cloud-based technology platform, Aether. Aether leverages AI and machine learning to deliver hyper-customisable investment strategies at scale.
Geometry of language
We discuss initial results on characterising the geometry of sentence embeddings, a task in natural language processing. This is based on the work of Daniel Platt, Moritz Platt, Rodrigo Barbosa, Christopher Evans, et al., "The Effect of Active and Passive Voice on Sentence Embeddings".
Dr Nikolaos Kaplis is Chief Technology Officer of Arabesque AI. He studied physics at the National and Kapodistrian University of Athens and then Cambridge, before doing his PhD in theoretical physics at Oxford. He researches string theory and the geometric aspects of AI.
Generating high-fidelity synthetic time series
Synthetic time series generation is gaining attention, particularly in finance where sensitive time series data is abundant. We present PCF-GAN, a state-of-the-art generative adversarial network inspired by the path characteristic function. This work is joint with Hang Lou and Siran Li.
Prof. Hao Ni is a Professor of Financial Mathematics at UCL and a Fellow of the Turing Institute. She has held research positions at ICERM, Brown and the Oxford-Man Institute. Her research includes non-parametric modelling of the effects of data streams and the applications of rough path theory.
The physics of financial networks
As the total value of global financial markets outgrew that of the real economy, a web of interactions to model systemic risk was built, demanding new theoretical approaches. Through statistical physics, we develop new ways to study the network’s structure, dynamics, and stability.
Prof. Guido Caldarelli is a professor at Ca' Foscari University of Venice and an Associate Fellow of the London Institute. He was President of the Complex Systems Society and is a founder of Società Italiana di Fisica Statistica. His recent research is on digital twins of cities.
Breaking the limits of transformer context length
The computational cost of a transformer's self-attention scales quadratically with input length. Investigating recurrent memory augmentation of pre-trained transformer models, we find scope for storing up to 2 million tokens, which we explore further with language-modelling experiments.
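The cost gap the talk addresses can be illustrated with a schematic operation count: full self-attention builds an n-by-n score matrix, whereas a recurrent-memory scheme processes the sequence segment by segment with a fixed-size memory. This is a back-of-the-envelope cost model, not the speaker's implementation; the segment and memory sizes are illustrative.

```python
def full_attention_cost(n):
    # Full self-attention compares every token with every other token:
    # an n x n score matrix, hence quadratic growth.
    return n * n

def recurrent_memory_cost(n, segment=512, memory=16):
    # Process the sequence one segment at a time, each step attending
    # over its own segment plus a fixed-size memory of past context.
    n_segments = -(-n // segment)  # ceiling division
    return n_segments * (segment + memory) ** 2
```

For a 100,000-token input, the full-attention count is 10 billion entries, while the segmented scheme needs only a few tens of millions, growing linearly rather than quadratically in sequence length.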
Dr Mikhail Burtsev is a Landau AI Fellow at LIMS. He did his PhD at the Keldysh Institute, and has worked at the Anokhin and Kurchatov Institutes and Cambridge. He set up and led the Neural Nets and Deep Learning Lab at MIPT, which created the award-winning framework, DeepPavlov.
Mathematical conjecture generation
Good mathematical conjectures epitomise milestones in discovery, and crafting them is a pattern-recognition task, for which machine learning is tailor-made. We present a method for machine-learning a space of conjectures, and generate new ones in number theory and group theory.
Dr Challenger Mishra is a Research Fellow at Cambridge and a Fellow of Queens’ College. His research includes machine learning, Calabi-Yau manifolds and string compactifications. He was previously a Rhodes Scholar at the Rudolf Peierls Centre for Theoretical Physics in Oxford.
Machine learning for finance in the cloud
How can we use machine learning for finance in the cloud? We discuss how high-performance computing shortens development cycle times, quickening the pace at which the technology progresses. We also discuss advances in heterogeneous computing that are helping shape the future of the field.
Jon Hammant leads Compute Business UK & Ireland at Amazon Web Services, including Autonomous Computing, Quantum Computing, and Framework AI. He is a Fellow of the British Computer Society and was Computing Magazine's 2020 DevOps Manager. He previously led DevOps teams at Accenture.
Panel discussion: Challenger Mishra, Yang-Hui He, Bharat Kasturi, Mikhail Burtsev, Nikolaos Kaplis
Prof. Yang-Hui He is a Fellow at the London Institute, Professor at City, University of London, Chang-Jiang Chair at Nankai University and Lecturer at Merton College, Oxford. He studied at Princeton, Cambridge and MIT and researches string theory, algebraic geometry and machine learning.
Bharat Kasturi is a quantitative researcher at Arabesque AI. He studied at the Indian Institute of Technology before doing his masters degree in the mathematics of finance at Columbia. His interests include perspectives on the practicalities of applying AI to portfolio construction.