Statistical physics
Why AI works: the iteration of simple rules in a fully connected architecture imposes Occam's razor
Draft (2026)
We show that the repeated application of simple logical rules in a fully connected architecture gives rise to an exponential bias towards simple output functions. This suggests an explanation for why neural networks and other learning methods are biased towards simplicity in the models they generate.
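
The claimed mechanism can be illustrated with a small numerical experiment. The sketch below is not the paper's construction: it assumes, purely for illustration, that "iteration of simple rules in a fully connected architecture" can be modelled by stacking dense layers of random linear-threshold (sign) units, and it uses an arbitrary transition-count proxy for the complexity of a Boolean truth table. The function names (random_function, complexity) and the parameter choices are illustrative, not taken from the paper.

import itertools
from collections import Counter
import numpy as np

rng = np.random.default_rng(0)

def random_function(n_inputs, depth, width):
    # Enumerate all 2^n input patterns once, coded as +/-1.
    X = np.array(list(itertools.product((-1.0, 1.0), repeat=n_inputs)))
    h = X
    for _ in range(depth):
        # One iteration of the simple rule: a dense ("fully connected")
        # layer of random linear-threshold units.
        W = rng.normal(size=(h.shape[1], width))
        b = rng.normal(size=width)
        h = np.sign(h @ W + b)
    # Single threshold read-out; the result is the sampled function's truth table.
    w = rng.normal(size=h.shape[1])
    out = h @ w + rng.normal()
    return tuple(int(v > 0) for v in out)

def complexity(table):
    # Crude complexity proxy: count 0<->1 transitions along the truth table.
    return sum(a != b for a, b in zip(table, table[1:]))

if __name__ == "__main__":
    n, depth, width, samples = 3, 4, 16, 20000
    counts = Counter(random_function(n, depth, width) for _ in range(samples))
    print(f"distinct functions seen: {len(counts)} of {2 ** 2 ** n} possible")
    for table, freq in counts.most_common(5):
        print(f"P = {freq / samples:.3f}   complexity = {complexity(table)}   {table}")

Under such illustrative settings one would expect a heavily skewed distribution in which a handful of low-complexity truth tables (for example the constant functions) carry most of the probability mass, which is the qualitative signature of the exponential simplicity bias described in the abstract.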