Our papers are the official record of our discoveries. They allow others to build on and apply our work. Each paper is the result of many months of research, so we take special care to make them clear, beautiful and inspiring, and publish them in leading journals.
The ability of deep neural networks to generalise can be understood by using path integral methods to compute their typical Boolean functions.
Statistical methods that normally fail for very high-dimensional data can be rescued via mathematical tools from statistical physics.
Bayesian networks describe the evolution of orthodontic features in patients receiving treatment for malocclusion versus untreated patients.
We generalise neural networks into a quantum framework, demonstrating the possibility of quantum auto-encoders and teleportation.
Analysis of a network's internal memory dynamics shows that memristive networks preserve memory and are able to learn.
Exact equations of motion provide an analytical description of the evolution and relaxation properties of complex memristive circuits.
A local model of preferential attachment with short-term memory generates scale-free networks, which can be readily computed by memristors.
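The classic degree-proportional rule underlying this result can be sketched in a few lines; note that this is plain preferential attachment, not the paper's local, short-term-memory variant, and the function name and parameters below are illustrative.

```python
import random

def preferential_attachment(n, m=2, seed=0):
    """Grow a graph on n nodes: each new node attaches to m existing
    nodes, chosen with probability proportional to their degree."""
    rng = random.Random(seed)
    # Start from a small complete core of m + 1 nodes.
    edges = [(i, j) for i in range(m + 1) for j in range(i)]
    # Degree-weighted sampling: each node appears in this list once
    # per incident edge, so a uniform draw is degree-proportional.
    endpoints = [v for e in edges for v in e]
    for new in range(m + 1, n):
        targets = set()
        while len(targets) < m:
            targets.add(rng.choice(endpoints))
        for t in targets:
            edges.append((new, t))
            endpoints += [new, t]
    return edges
```

Repeated growth under this rule yields the heavy-tailed, scale-free degree distributions discussed in the paper; the memristive implementation replaces the global degree lookup with local memory dynamics.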
A less developed country must first acquire simple capabilities in order to begin a stable process of industrialisation and development.
Short loops (cycles) in real networks are a theoretical challenge for modeling.
We compute the entropies of tailored random graph ensembles, covering bipartite graphs, generalized degrees, and node neighbourhoods.
Ensembles of tailored random graphs allow us to reason quantitatively about the complexity of a system.
An intriguing analogy exists between neural networks and immune networks.
Our approach gives a rigorous quantitative method for prioritising network properties.