From physics into maths

NAUTILUS, 4 SEP 2024

Why are physical insights from the real world proving so useful for solving abstruse problems in pure mathematics, Ananyo Bhattacharya asks.

A real triumph for mathematics. That was how Albert Einstein hailed his own general theory of relativity of 1915. What he meant was that the warped fabric of spacetime in his theory of gravity, one of the 20th century’s greatest breakthroughs in theoretical physics, was perfectly described by the purely mathematical work of Carl Friedrich Gauss, Bernhard Riemann, and others from more than half a century earlier.

Einstein was still marvelling at the phenomenon years later. “How can it be,” he mused, “that mathematics, being after all a product of human thought which is independent of experience, is so admirably appropriate to the objects of reality?”

Others too have pondered the mystery. In 1960, the Nobel prize-winning physicist Eugene Wigner even devoted a whole essay to what he called the “Unreasonable Effectiveness of Mathematics in the Natural Sciences.” One example he cited was Newton’s law of gravity, which states that the force of attraction between two particles is inversely proportional to the square of the distance between them. Although Newton proposed the law “based on very scanty observations,” in the words of Wigner, the rule has since been found to be accurate to less than a ten thousandth of a percent. “It is difficult,” he concluded, “to avoid the impression that a miracle confronts us here.”

Recently, though, the tables have turned. Now it’s insights and intuitions from physics that are unexpectedly leading to breakthroughs in mathematics. After going their own way for much of the 20th century, mathematicians are increasingly turning to the laws and patterns of the natural world for inspiration. Fields stuck for decades are being unstuck. And even philosophers have started to delve into the question of why physics is proving “unreasonably effective” in mathematics.

Take Hugo Duminil-Copin, who in 2022 won the Fields Medal, an accolade awarded every four years to mathematicians under the age of 40, and often described as the Nobel Prize of mathematics. Duminil-Copin worked in an area called “percolation theory.” Exactly as it sounds, this is mathematics rooted in how a fluid, such as hot water, flows through a porous medium, like freshly ground coffee.

Among other things, Duminil-Copin used the theory to understand phase transitions—abrupt changes of behavior in complex systems that come at a critical point, like water freezing into ice or a magnet losing its magnetism when heated. In percolation theory, gaps and channels in the medium have some probability of being linked to each other. When that probability exceeds a certain threshold—the critical point—there’s guaranteed to be an unbroken path from top to bottom. There’s a phase transition—fluid can suddenly flow freely through the medium—and, in the case of coffee, the aromatic liquor begins to drip.
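The threshold behavior at the heart of percolation theory is easy to see in a toy simulation. The sketch below is a minimal illustration only, not Duminil-Copin’s model: it uses site percolation on an n×n square grid, where each site is open with probability p, and asks whether open sites connect the top row to the bottom. The crossing probability jumps sharply near the critical point (roughly p ≈ 0.593 for this lattice).

```python
import random

def percolates(grid):
    """Return True if open sites connect the top row to the bottom row (flood fill)."""
    n = len(grid)
    frontier = [(0, c) for c in range(n) if grid[0][c]]
    seen = set(frontier)
    while frontier:
        r, c = frontier.pop()
        if r == n - 1:
            return True  # reached the bottom: fluid can flow through
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < n and 0 <= nc < n and grid[nr][nc] and (nr, nc) not in seen:
                seen.add((nr, nc))
                frontier.append((nr, nc))
    return False

def crossing_probability(n, p, trials=200, seed=0):
    """Estimate the chance a random n-by-n grid (sites open with probability p) percolates."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        grid = [[rng.random() < p for _ in range(n)] for _ in range(n)]
        hits += percolates(grid)
    return hits / trials
```

Running `crossing_probability(20, p)` for p well below the threshold gives a value near 0, and for p well above it a value near 1, with the transition steepening as the grid grows.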

An important unsolved problem in the mathematics of phase transitions is to prove that systems on the cusp of phase change are conformally invariant, meaning they display rotational, translational, and scale symmetry. This trio of symmetries means that turning, moving, and expanding or shrinking part of the system will not affect the properties of the whole. An example would be spinning a circle—it’d look the same after rotation as before. Physically this means that exactly how coffee grains are oriented in the filter doesn’t change how the water flowing through them behaves. Conformal invariance is a key property of theories that hope to accurately describe phase transitions, since in the real world there are often dramatic shifts and swirls within materials close to their critical points.

In 2020, Duminil-Copin and his collaborators were able to prove that a well-studied and important two-dimensional model of percolation, in which fluid “flows” through a sort of network or grid, is rotationally invariant. This is a key step toward proving conformal invariance, which has only been demonstrated to be true for a handful of special cases. Their long-term goal is to widen the net to include all models and so prove the phenomenon is a universal property of all systems at their critical points—a result the community has been chasing for over half a century.

But if the mathematicians do finally crack that problem, they will not surprise physicists. That’s because, in 1970, the Russian theoretical physicist Alexander Polyakov predicted that all substances that switch phase—suddenly or “discontinuously” like water, or gradually or “continuously” like magnets—are conformally invariant at their critical points. Though Polyakov’s prediction lacks the rigor of a mathematical proof—his claim is a bit too hand-wavy for mathematicians—he was able to leap ahead to his conclusion by drawing on his intuitions about the way real materials behave. And, in turn, Polyakov’s work gives mathematicians like Duminil-Copin confidence that the problem can be solved. “It’s very important to me,” Duminil-Copin has said, “to keep trying to discover new mechanisms in percolation in order to build the most general and robust theory possible.”

“Physicists are much less concerned than mathematicians about rigorous proofs,” says Timothy Gowers, a mathematician at the Collège de France and a Fields Medal winner. “So if a mathematical statement is obviously correct—for example, because it is backed up by heuristic arguments and computational evidence—that may well be good enough for physicists’ purposes. And since finding proofs is hard, that sometimes allows physicists to explore mathematical terrain more quickly than mathematicians, who like to stop and prove results before building on them.”

“But,” Gowers adds, “that isn’t really an answer to the question of why the process of trying to understand the physical world leads to so much interesting mathematics.”

This pattern of discovery—in which physicists’ hunches, often prompted by empirical evidence, drive mathematical progress—is arguably a more puzzling phenomenon than the one that inspired Wigner’s essay. After all, mathematics was invented for reckoning. In Mesopotamia, for example, the Sumerians developed a counting system and wrote multiplication tables on clay tablets. Their purpose? To tally and divide goods and wares.

What began as a tool to grease the wheels of government and commerce took on a life of its own, slowly expanding into areas of abstraction so obscure they can only be grasped after years of training. That mathematics should still be fit for its original purpose of surveying, quantifying, and understanding the physical world, albeit often in quite unexpected ways, is only really surprising if one ignores its origins.

But why should the opposite be true?

That is, why should physics—rooted in making sense of real things in the world like apples and electrons—provide such good leads for solving some of the toughest problems in mathematics, which deal with intangible stuff, like functions and equations?

That it does so is a truth as old as science itself. In a letter to his friend, the polymath Eratosthenes, the ancient Greek mathematician and inventor Archimedes described how the laws of mechanics had spurred some of his most important mathematical discoveries. He showed his friend various theorems about the relative areas and volumes of shapes, like triangles, spheres, and parabolas. But as well as proffering rigorous proofs for his results, Archimedes gave Eratosthenes physical arguments, too, using the science of levers to show how these shapes would balance each other on a beam.

Then of course there’s Newton, who (alongside Leibniz) famously developed a new kind of math—calculus—while trying to understand the motion of falling objects. Calculus made it possible to mathematically describe continuous change. During the 1700s, Leonhard Euler, one of the greatest mathematicians in history (he has his own number, e), was often driven by trying to solve real-world engineering problems. His talent for number and abstraction allowed him to roam freely through astronomy, optics, hydrodynamics, and mechanics, creating new mathematical tools as he went.

In the early 20th century, new mathematics spun off the development of quantum mechanics, the theory of the subatomic world, even as physicists struggling to make sense of the perplexing behavior of particles like electrons re-discovered half-forgotten mathematics, such as matrix theory. For instance, one of the pioneers of the new science, theoretical physicist Paul Dirac, invented a handy new mathematical device in 1930 that radically simplified calculations in the new theory. His “delta function” appeared to give answers in line with experimental observations (of microscopic phenomena quantum mechanics was trying to explain) like the fact that electrons orbiting the atomic nucleus could only assume certain fixed values of energy. But, as a spike that would be both infinitesimally thin and infinitely high if plotted on a graph, Dirac’s function outraged the sensibilities of mathematicians. For example, the Hungarian-American mathematician John von Neumann, regarded by his peers as a genius among geniuses, found the function so reprehensible—“improper,” “impossible,” and a “mathematical fiction,” he said—that he took pains to rework quantum mechanics to avoid Dirac’s dubious math.

Still, both avenues would lead to a wealth of mathematical pickings. Laurent-Moïse Schwartz cleaned up the math of Dirac’s delta function so that physicists could use it with a clean conscience (not that his problematic math had stopped them). Along the way, Schwartz pioneered a vital new field that deals with the theory of distributions, and won himself a Fields Medal for the feat in 1950.

Meanwhile, galvanized by his own work on quantum mechanics, von Neumann would make his most profound contribution to pure mathematics by inventing a new kind of algebra, now named after him, that describes relationships between mathematical operations in weird, impossible-to-picture spaces, including those with infinite or fractional dimensions. A half-century later, in 1990, Vaughan Jones would be awarded a Fields Medal for work applying von Neumann algebras to the mathematics of knots.

None of this was inevitable. There was no reason to believe, for instance, that Dirac’s delta function would turn out to be mathematically rigorous. “An interesting way physics has helped advance mathematics is when physical intuitions like Dirac’s lead to mathematically impossible results,” says Mark Colyvan, a philosopher at the University of Sydney who has studied the relationship between math and physics. “The physics urges the mathematician to come up with the relevant mathematical theory to support the physics but there is no guarantee, in advance, that there should be any such theory.”

Dirac, for his part, was convinced that physics would continue to be a fertile source of new math. “As more and more of the reasons why Nature is as it is are discovered,” he said during a talk he gave in his early 20s, “the questions that are of most importance to the applied mathematicians will become the ones of most interest to the pure mathematician.”

Von Neumann too recognized how closely math and physics cross-pollinated each other. “It is undeniable,” he said, speaking at the University of Chicago in 1946, “that some of the best inspirations in mathematics—in those parts of it which are as pure mathematics as one can imagine—have come from the natural sciences.” A mathematical subject that moves too far from “its empirical source,” he warned, “is in danger of degeneration.” And once an area of math shows “signs of becoming baroque,” he added, “the only remedy” lay in “the re-injection of more or less directly empirical ideas.”

Yet a separation between mathematicians and physicists was precisely what was developing. By the time von Neumann gave his lecture, relations between the two cohorts had soured. The leading lights of each field had lost interest in what was exciting the other.

In mathematics, a new movement had taken hold. Named after a 19th-century French general, the Bourbakis were a group of young French mathematicians who, starting from the mid-1930s, sought to get “back to basics” by laying down new foundations for mathematics. A generation of brilliant French mathematicians had died in the First World War. The task the Bourbaki collective set themselves, in a series of textbooks, was to create building blocks, or structures, for the next generation.

“Rigor consisted in getting rid of an accretion of superfluous details,” wrote Catherine Chevalley about her father, Claude, a founding member of the Bourbakis. “Once that filth was taken away, one could get at the mathematical object, a sort of crystallized body whose essence is its structure ... which could thereafter remain unchanged.” He was one of the more extreme adherents of the collective’s principles. Her father, Chevalley wrote, “thought of mathematics as a way to put objects to death for aesthetic reasons.”

Physicists found little of value in this austere vision of mathematics. Their interest in math did not run deep. Many had contributed to weapons and defense research during the Second World War and had an engineering mindset—math was a tool to get them where they needed to go. They were developing ideas like the Standard Model, physicists’ best current theory of the atomic and subatomic world, which tells us how particles like protons and electrons interact and what they’re made of; and quantum electrodynamics (QED). Initially formulated by Dirac, QED for the first time united special relativity, Einstein’s theory of space and time, with quantum physics, the rules of the subatomic world.

Richard Feynman, who helped give birth to QED, once declared, “If all mathematics disappeared, it would set physics back precisely one week.” Physicists, he implied, had no need for fancy maths and what they did need, they could quickly make up off the cuff. In this view, there’s no special role for math in physics or vice versa—math is just a convenient, concise language for communicating physics.

This view persisted as late as the mid 1980s, wrote historian Peter Galison, who studies both physics and the history of science at Harvard University. “Mathematics, it was taught—I was taught—was to physics a kind of cleanup squad that came after the parade had passed.”

The indifference to math sometimes bordered on contempt. When John Wheeler, Feynman’s PhD advisor, asked him to attend a conference for both physicists and mathematicians, his former graduate student’s response was withering: “Dear John, I am not interested in what mathematicians find interesting. Sincerely yours, Dick.”

The animosity became so dire that during a lecture in 1972, Freeman Dyson lamented that “the marriage between mathematics and physics, which was so enormously fruitful in past centuries, has recently ended in divorce.” Dyson made clear that the loss was to his field: theoretical physics.

Yet a reconciliation was afoot, spearheaded by the late British-Lebanese mathematician Michael Atiyah. Like many of his generation, born between the World Wars, Atiyah, who died in 2019, admired the sparse elegance of the Bourbakis. But his interests were too broad to be constrained by the sort of rigor and focus the group espoused.

“He was at heart a geometer,” says Nigel Hitchin, a mathematician and emeritus professor at the University of Oxford who collaborated with Atiyah, “but in the mid 1970s he became convinced that theoretical physics was by far the most promising source of new ideas. From that point on he became a facilitator of interactions between mathematicians and physicists, attacking mathematical challenges posed by physicists, using physical ideas to prove pure mathematical results, and feeding the physicist community with the parts of modern mathematics he regarded as important but were unfamiliar to them.”

Over the next few years, Atiyah again and again brought mathematicians and physicists together in the same room. That he was already respected by other mathematicians—he had been awarded the Fields in 1966 for work in topology—certainly helped. His efforts were boosted when one of his graduate students, Simon Donaldson, began to use Yang-Mills theory, the math underlying the Standard Model, to produce astonishing mathematical insights about four-dimensional spaces or “manifolds”—including a new class of “exotic” spaces that only exists in four dimensions.

One of Atiyah’s long-term collaborators was the mathematical physicist Edward Witten, whom he met at MIT in 1977. More than 20 years Atiyah’s junior, Witten later became a pioneer of string theory, in which tiny one-dimensional vibrating strings are mooted to be the fundamental building blocks of the universe, rather than the particles of the Standard Model.

Initially hailed as a possible “theory of everything” that would unite quantum theory with Einstein’s theory of gravity, string theory has to date arguably had a bigger impact on some of the most abstract fields of mathematics, such as algebraic geometry and differential topology, than in physics. In these areas, Witten and other string theorists have been able to produce precise conjectures that mathematicians have later proved.

Encouraged by Atiyah, for instance, Witten was in 1989 able to use string theory to get to Vaughan Jones’ earlier Fields-Medal winning result in knot theory. Jones had discovered a powerful algebraic expression called a knot polynomial that can be used to determine whether one type of knot can be turned into another one without cutting the loop of string used to tie it. But while Jones had used von Neumann algebras to derive his eponymous polynomial, Witten did so by reimagining knots as the tangled-up paths of a fundamental particle through spacetime.

In quantum theory, a particle travelling between two points can get from A to B via any number of different routes, some more probable than others, and physicists calculate the average of all these to work out, for example, the probability of the particle being in a particular place. What Witten found was that this average is equivalent to calculating the value of the Jones polynomial. And two years later, string theorist Philip Candelas and his colleagues also applied the theory to a decades-old puzzle in “enumerative geometry,” an ancient branch of mathematics dedicated to counting the number of solutions to geometrical problems. At its simplest, this means questions such as “How many lines can pass through two points on a plane?” (one), or “How many circles can be drawn that touch (are tangent to) three given circles?” The answer to the latter problem, eight, was deduced over two thousand years ago by the ancient Greek geometer Apollonius.

Candelas and his collaborators were able to use tools from string theory to solve a particularly sticky problem in enumerative geometry, counting the number of certain kinds of curves in Calabi-Yau manifolds, the strange six-dimensional shapes that are central to the theory. Their result connected two kinds of geometry, “symplectic” and “complex,” that mathematicians had studied in isolation from each other for decades, thinking they were unrelated.

Then, in 1995, Witten proposed that five different versions of string theory, each requiring ten dimensions, were all different aspects of a single 11-dimensional theory he called “M-theory.” Though M-theory remains unproven, mapping the correspondences between the different theories has led to startling mathematical discoveries. “It feels like every month string theory is giving new structures to mathematicians in an unprecedented way,” says mathematical physicist Yang-Hui He of the London Institute for Mathematical Sciences.

That string theory is a rich source of such unexpected relationships, or “dualities,” between two mathematical worlds, continues to excite mathematicians today. Physicist He and his collaborator, string theorist Federico Carta, also of the London Institute, were studying the simplest type of Calabi-Yau manifold, called a K3 surface, when they stumbled across a relationship between the surface’s “homotopy groups,” which are used to classify shapes in topology, and a symmetry group called “Mathieu 24.” The pair’s discovery reveals a completely unanticipated connection between two disparate fields of pure mathematics—topology, the study of shapes, and an area of modern algebra called group theory, which concerns the types of symmetry that objects possess.

Why physics should give rise to interesting mathematics, says He, is a “deep question.” There are an infinite number of patterns and structures that mathematicians could study, he says. “But the ones which come from reality are ones which we have an intuition about at some level.”

Hitchin, the emeritus Oxford mathematician, agrees. “Mathematical research doesn’t operate in a vacuum,” he says. “You don’t sit down and invent a new theory for its own sake. You need to believe that there is something there to be investigated. New ideas have to condense around some notion of reality...or someone’s notion maybe.”

This raises the question of whether physics feeds maths merely by providing a keener motivation for exploring it and a focus for mathematicians’ energies. Guided by intuitions about how the world should work, and a plausible endpoint, mathematicians can sometimes make speedier progress on a problem.

While this doesn’t address Dirac and von Neumann’s conviction that physics points to particularly rich insights in mathematics, it would explain a curious fact—“bad” physics can sometimes lead to good math.

Vortex theory, for instance, was an early attempt by British mathematical physicist William Thomson (Lord Kelvin) to explain why atoms came in a relatively small number of varieties. He pictured atoms as spinning rings which could be tied in intricate knots, with each knot corresponding to a different chemical element. The theory was abandoned after the discovery of the electron—but the mathematics led to the development of knot theory, the field Jones and Witten have advanced with their work.

Physics often galvanizes mathematicians in much the same way nature inspires artists, who might be driven to paint or sculpt by an awe-inspiring vista or the contours and textures of a landscape. Atiyah even co-authored a brain imaging study in 2014 that concluded the experience of mathematical beauty, for those able to appreciate it, excites the same parts of the brain as beautiful music, art, or poetry.

For Atiyah, the answer to Wigner’s question (why is mathematics so effective?) came down to the human brain. “Humans are a product of long evolution, in which powerful brains were an advantage. Such brains evolved in the physical world, so evolutionary success was measured by physical success,” he explained in a 2018 interview. “Hence human brains evolved to solve physical problems and this required the brain to develop the right kind of mathematics.” Wigner was only surprised by the effectiveness of mathematics, Atiyah concluded, “because he focused on mathematics and physics, but ignored biology.”

In a 2010 paper with Hitchin and Robbert Dijkgraaf, Atiyah went on to highlight the successful use of physics in mathematics. Since then, however, there has been scant work to try to understand the phenomenon.

One philosopher who has recently looked at the issue is Daniele Molinini, at the University of Bologna. His 2023 paper, published in The British Journal for the Philosophy of Science, flips Wigner’s question to address “The Unreasonable Effectiveness of Physics in Mathematics.” Molinini’s surprising answer is that some laws of physics may be as incontrovertible as a mathematical theorem. “There are some principles about the world that we have to take as fundamental,” he says.

Philosophers broadly agree that mathematical truths are “necessary,” in that they have to be true in all possible worlds. Truths about nature, empirical facts, are different—they’re contingent. Light travels at a certain speed, but arguably it could have been otherwise in a differently set up universe. Mathematical truths, by contrast, have been, and will always be, true, no matter what.

Might there be certain laws of physics that are also “necessary” in the same way? In his paper, Molinini argues that the principle of conservation may be one such law. In physics, some properties of a system, like energy or momentum, can’t change. A bicyclist freewheeling down a hill, for example, is converting her gravitational potential energy into movement energy but the total amount of energy she and her bike have stays the same.

One direct consequence of energy conservation is the set of laws of mechanics that govern how a beam balances—the clockwise and anticlockwise turning forces (moments) acting around the pivot must cancel out if the beam is to remain stationary. This, Molinini contends, explains how Archimedes is able to successfully infer the truth of geometric proofs by mechanical considerations, which is otherwise puzzling. The physics and the math, in this case, are two sides of the same coin: Both are true because they draw on the same fundamental principle.
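The balance condition itself fits in one line. This is the standard statement of the law of the lever, not Molinini’s own formalism: for weights $w_i$ placed at signed distances $d_i$ from the pivot, the beam stays level when the moments cancel,

```latex
\sum_i w_i d_i = 0,
\qquad\text{e.g.}\qquad
(2\,\mathrm{kg})(0.3\,\mathrm{m}) - (1\,\mathrm{kg})(0.6\,\mathrm{m}) = 0,
```

so a 2-kilogram weight 0.3 meters from the pivot balances a 1-kilogram weight 0.6 meters out on the other side—the relation Archimedes exploited to weigh areas and volumes against each other.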

Another view, famously articulated in the early 17th century by Galileo Galilei and often championed by mathematicians, is that the universe is written in the language of mathematics. That idea has ancient origins, going back to at least Pythagoras and his followers, but a more recent, and extreme, version is Max Tegmark’s mathematical universe hypothesis, in which the universe itself is not just described by mathematics but is made of mathematics.

In Tegmark’s telling, our universe is just one of an infinite number of parallel universes and all the infinite possibilities of mathematics—every theorem, every proof—are realized somewhere in this multiverse. So, no wonder physics inspires new discoveries in math—the reality physics describes is at bottom mathematical anyway. “There’s an intimate connection between empirical science and mathematics,” says Colyvan. “One conclusion one could draw is that somehow the world itself is mathematical.”

In both cases, however, the mathematics of known physics is just a tiny fraction of all the maths out there (nearly all of which will be dull), so this view doesn’t really explain why math emerging from physics should be unusually rich.

Molinini is now taking on a popular philosophical explanation for math’s applicability, “mapping,” which he believes can’t account for why good math can flow from physics. Mapping suggests that maths is applied to physics by turning physical concepts, like mass or separation, into mathematical entities, such as the equation for Newton’s law of gravitation, which can then be used to calculate something that is then mapped back into a physical property—for instance, the attraction between two objects. But Molinini argues that that process of mapping breaks down when one tries to reverse it to explain how math can emerge from physics.

There is burgeoning interest in this question from philosophers, he says, who have until now focused on the converse problem of why math can be applied to the empirical sciences. Whether it is because we see the world through mathematical eyes, as Atiyah thought; or because physics offers particularly rich pickings for mathematicians, as Dirac and von Neumann maintained; or if it is simply a convenient aid to the mathematical imagination, a long-standing mystery may finally be starting to get the attention it deserves.
