The Geometry of Thinking: A Mathematical Programme Connecting Brain Signals to the Structure of Uncertainty
A walkthrough of four papers on how the mathematics of uncertainty, the physics of spin, and the rhythm of the heart might be more deeply connected than anyone expected.
A signal that shouldn’t be there
In 2022, my colleague David López Pérez and I published a short paper reporting something we couldn’t fully explain.
We had been running magnetic resonance experiments on people’s brains between 2013 and 2015 (yes, science can be so slow) — not the usual kind of brain scan that produces those familiar grey images, but a more specialised measurement designed to detect a particular type of coherence between pairs of proton spins in brain water. The technique is sensitive to whether pairs of spins are doing something coordinated, rather than just fluctuating independently.
What we found was unexpected. There was a signal. It was locked to the cardiac cycle — appearing during the phase when blood perfuses the brain tissue. It was present when people were awake and absent when they were asleep. And its complexity correlated with short-term memory performance.
None of this was supposed to happen. In the conventional picture, proton spins in warm biological tissue at body temperature are overwhelmed by thermal noise. They shouldn’t be doing anything coordinated enough to produce the kind of signal we were seeing. The standard expectation was: thermal noise destroys any delicate quantum correlations almost instantly. End of story.
But the signal was there. It kept showing up, and it was easy to reproduce across many volunteers.
The question was: what could possibly explain it?
Over the past three years, I’ve been developing a mathematical framework that, I believe, provides an answer — or at least the outline of one. This post walks through that framework. It spans four papers, each building on the last. I’ll try to explain what each one does, why it matters, and where the whole programme currently stands.
---
Paper 1: The mathematics of shrinking uncertainty
The story begins not with brains or spins, but with a question about geometry.
Imagine you have a system — any system — and you’re uncertain about its state. You can represent that uncertainty as a cloud of probability: a distribution over the possible states the system could be in. If you’re very uncertain, the cloud is wide. If you’re quite sure, it’s narrow.
Now imagine the uncertainty is shrinking — the system is converging toward a decision, or a measurement is becoming more precise, or an inference is becoming more confident. The cloud is getting smaller. The question is: what is the geometry of that process? As the cloud of uncertainty compresses, what mathematical structure governs how it can and cannot shrink?
This turns out to be a rich question. The space of probability distributions has a natural geometry — the Wasserstein geometry, which comes from optimal transport theory. You can think of it as measuring the cost of reshaping one probability distribution into another. When the distributions are Gaussian (bell-shaped), this geometry simplifies: it becomes a geometry on the space of covariance matrices, which are the mathematical objects that encode how spread out and correlated a distribution is.
That simplified geometry has a name. It’s called the Bures metric. And it has a remarkable property.
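For Gaussians centred at the same point, this distance has a standard closed form. Writing $\Sigma_1$ and $\Sigma_2$ for the two covariance matrices:

$$
d^2(\Sigma_1, \Sigma_2) \;=\; \operatorname{tr}\Sigma_1 \;+\; \operatorname{tr}\Sigma_2 \;-\; 2\,\operatorname{tr}\!\left(\Sigma_1^{1/2}\,\Sigma_2\,\Sigma_1^{1/2}\right)^{1/2}
$$

Roughly speaking, the two trace terms measure the overall sizes of the uncertainty clouds, and the cross term measures how well their shapes can be aligned.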
The first paper establishes what happens when uncertainty compression reaches a critical regime. Every covariance matrix can be decomposed into independent “modes” — think of them as independent axes along which uncertainty can shrink. Each mode has a minimum size, set by the physics of the system. This minimum is called the Casimir floor (the name comes from the analogous concept in the theory of Lie groups, but the physical picture is simple: there’s a limit to how tightly you can squeeze each mode).
Here’s the key result: once one or more modes hit their floor, they can’t shrink any further individually. But the total uncertainty can still decrease — it just has to do so by developing *correlations between modes*. The compression overflows from the individual modes into the connections between them.
This is a mathematical theorem, not a physical claim. It says: in the geometry of covariance matrices, there’s a regime change. In the bulk, each mode can shrink independently. At the floor, continued compression forces cross-mode structure.
The paper proves this, characterises the geometry, and identifies the group-theoretic structure of the overflow. The modes are classified by a mathematical group called Sp(2,ℝ), which is isomorphic to another group called SU(1,1). That identification will become important later.
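A two-mode toy calculation makes the overflow concrete. Below is a minimal numerical sketch, with the determinant of the covariance matrix (the generalised variance) standing in for total uncertainty; the paper’s actual compression functional may differ. With both per-mode variances pinned at a floor value, the total uncertainty still shrinks as cross-mode correlation grows:

```python
import numpy as np

f = 1.0  # Casimir floor: neither per-mode variance may drop below this

for c in (0.0, 0.3, 0.6, 0.9):  # growing cross-mode correlation
    cov = np.array([[f, c],
                    [c, f]])     # both diagonals pinned at the floor
    # det(cov) = f**2 - c**2: the generalised variance keeps falling
    # even though each individual mode is stuck at its minimum.
    print(f"correlation {c:.1f}: per-mode variance {f:.1f}, det = {np.linalg.det(cov):.2f}")
```

The per-mode variances never move, yet the determinant falls from 1.00 to 0.19: all the remaining compression lives in the off-diagonal correlations.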
---
Papers 2 and 3: A model of the inference–action boundary
The second and third papers (now merged into one) ask: what if the brain actually does this?
The idea is that the brain, in its normal operation, is constantly performing inference — building and updating an internal model of the world. This process can be described as a flow on a space of probability distributions: a gradient flow that moves the system toward lower free energy, which in this context means better predictions.
Most of the time, this flow lives in the “bulk” — the wide-open space of distributions far from any boundary. The geometry there is Wasserstein geometry. But when the system approaches a decision — when inference needs to become action — the uncertainty has to compress to a point where a commitment can be made. And as it compresses, the geometry changes.
Near the boundary, the distributions become approximately Gaussian (this is a consequence of a classical mathematical result called Laplace’s method). On the Gaussian submanifold, the Wasserstein geometry becomes Bures geometry. And if the compression continues past the Casimir floor of the individual modes, the cross-mode overflow mechanism of Paper 1 kicks in.
The merged paper develops this into a full dynamical model. It includes a “substrate” — the physical medium in which the inference is implemented — and shows how the geometry of the inference flow is constrained by the geometry of the substrate. The brain doesn’t compute in a vacuum; it computes in tissue, with blood flow, with metabolic constraints. The paper derives how those constraints shape the gradient flow and, in particular, how the cardiac cycle interacts with the boundary regime.
The specific proposal is that the cardiac perfusion phase — the moment when oxygenated blood surges through the brain tissue — acts as a resetting event. It drives the collective covariance scale of the relevant spin system below the Casimir floor, triggering the overflow into cross-mode structure. Between heartbeats, thermal relaxation pushes the system back above the floor. The result is a periodic, cardiac-locked window during which cross-mode structure exists.
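Here is a minimal sketch of the proposed timing, with entirely made-up rate constants (the true competition between the perfusion reset and thermal relaxation is precisely the quantitative question raised later): the covariance scale is reset below the floor at each beat and relaxes back up between beats, producing a periodic sub-floor window.

```python
import numpy as np

# All numbers are illustrative placeholders, not fitted values.
heart_period = 1.0   # s, cardiac cycle
tau_thermal  = 0.4   # s, assumed relaxation time of the covariance scale
floor        = 1.0   # Casimir floor (arbitrary units)
sigma_reset  = 0.7   # value the perfusion event drives the scale down to
sigma_eq     = 2.0   # thermal equilibrium value, well above the floor

dt = 1e-3
n_steps = int(3.0 / dt)                      # simulate three heartbeats
steps_per_beat = int(heart_period / dt)

s = sigma_eq
below = 0
for i in range(n_steps):
    if i % steps_per_beat == 0:
        s = sigma_reset                      # perfusion event: reset below the floor
    s += (sigma_eq - s) * dt / tau_thermal   # relax back toward equilibrium
    below += s < floor                       # time spent in the overflow window

print(f"fraction of each cycle below the floor: {below / n_steps:.2f}")
```

With these placeholder rates the system spends roughly a tenth of each cycle below the floor; whether the real rates allow any sub-floor window at all is an open question.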
This paper also makes a prediction about wakefulness. The mechanism requires two things simultaneously: the vascular reset (the heartbeat) and an active cortical inference process (wakefulness). During sleep, the heartbeat continues but the cortical inference is reorganised into different modes — slow-wave consolidation, REM — that don’t engage the same boundary transition. So the signal should disappear during sleep. And it does.
---
Paper 4: The bridge between geometry and physics
At this point in the programme, there’s a mathematical framework and a physical prediction. But there’s also an obvious question: how can an abstract geometric theory about probability distributions have anything to do with actual physical spins in brain tissue?
This is the question the companion paper addresses, and its answer is, I think, the most conceptually important result in the programme.
The Bures metric — the geometry that governs the boundary regime — was not invented for this purpose. It was discovered twice, independently, by people working on completely unrelated problems.
In 1976, Armin Uhlmann, working on the mathematical foundations of quantum mechanics, showed that the Bures metric is the natural geometry on the space of quantum density matrices — the mathematical objects that describe the states of quantum systems. He wasn’t thinking about transport or probability distributions. He was asking: what’s the most natural way to measure how different two quantum states are?
In 1982, the statisticians David Dowson and Brian Landau showed that the Bures metric is also the Wasserstein distance between Gaussian distributions — the minimum cost of transporting one bell curve into another. They weren’t thinking about quantum mechanics. They were solving a problem in multivariate statistics.
Two derivations. Two completely different starting points. The same formula. Why?
Because both derivations concern the same mathematical object: the space of positive-definite matrices. Covariance matrices are positive-definite. Density matrices are positive-definite. The Bures metric is the unique natural geometry on that space, characterised by invariance and monotonicity properties that both the transport and quantum descriptions independently require.
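The coincidence is easy to verify numerically. The sketch below takes two unit-trace positive-definite matrices and computes the distance both ways: the transport route via the Dowson–Landau formula, and the quantum route via Uhlmann’s fidelity. The two values agree to machine precision.

```python
import numpy as np
from scipy.linalg import sqrtm

rng = np.random.default_rng(0)

def unit_trace_pd(n):
    """A random positive-definite matrix with unit trace (a density matrix)."""
    a = rng.normal(size=(n, n))
    m = a @ a.T + 1e-3 * np.eye(n)
    return m / np.trace(m)

rho, sig = unit_trace_pd(3), unit_trace_pd(3)

root = sqrtm(sqrtm(rho) @ sig @ sqrtm(rho))

# Transport route (Dowson & Landau 1982): squared Wasserstein-2 distance
# between centred Gaussians with covariances rho and sig.
d2_transport = np.trace(rho) + np.trace(sig) - 2 * np.trace(root)

# Quantum route (Uhlmann 1976): squared Bures distance via the fidelity
# F = tr sqrt(sqrt(rho) sig sqrt(rho)).
d2_bures = 2 * (1 - np.trace(root))

print(np.real(d2_transport), np.real(d2_bures))  # equal to machine precision
```

For unit-trace matrices the two formulas are algebraically identical; the code just makes the identity tangible.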
This resolves what might seem like the biggest conceptual gap in the whole programme. The transport framework doesn’t “reach down” to interact with the spins. The spins don’t “reach up” to connect with the transport geometry. Both descriptions independently identify the Bures metric as the natural geometry of the space they’re working in — and they happen to be working in the same space.
An analogy helps. A geographer studies the Earth’s surface as a navigation problem: distances, transport costs, shortest paths. A geologist studies the same surface as a physical substrate: stress, seismic waves, rock composition. Both use the same Riemannian geometry — the intrinsic geometry of the manifold. It would be strange to ask how a navigation metric “interacts with” seismic waves. It doesn’t interact with them. It *is* the geometry of the space in which both navigation and seismology take place.
The paper builds a complete logical chain from the geometric framework to the physical spin system, with each step identified as either a mathematical theorem or a physical identification. There are exactly two places where you have to make a physical commitment (rather than just following the mathematics): the identification of the boundary covariance with the quantum spin covariance, and an architectural assumption about how the bipartition is distributed across brain regions. Everything else in the chain is proven mathematics.
The paper also addresses the decoherence objection head-on. Any proposal involving quantum effects in warm tissue has to deal with Tegmark’s argument: thermal decoherence destroys quantum correlations almost instantly. The framework’s response is structural, not evasive. In the standard framing, thermal noise is the enemy — it destroys quantum correlations. In this framework, thermal noise is the *precondition*. The Casimir floor that drives the cross-mode overflow is set by the thermal noise floor. If there were no thermal noise, there would be no floor; without a floor, single-mode compression could continue indefinitely; without exhaustion of single-mode compression, there would be no cross-mode structure. The mechanism exists *because of* thermal noise, not despite it.
This doesn’t make the decoherence problem vanish. It reframes it. The question becomes: can the cardiac perfusion event drive the collective covariance below the Casimir floor *faster* than thermalisation restores it above the floor? That’s a quantitative question about competing macroscopic rates, not a blanket impossibility argument.
---
What is actually “quantum” here?
This is a good moment to pause and address a question that’s probably forming in your mind. If the framework is about information geometry — probability distributions, covariance matrices, transport costs — then what exactly is quantum about it? And how does that relate to the decoherence problem?
The Bures convergence gives a precise answer, and it’s not the one most people expect.
When people hear “quantum effects in the brain,” they picture something like Schrödinger’s cat: a fragile physical superposition that has to be shielded from the thermal environment or it collapses. In that picture, “quantum” is a property of a physical state — a particular configuration of particles that is delicate, exotic, and easily destroyed. The decoherence objection then follows immediately: thermal noise at body temperature destroys such states in femtoseconds.
But the Bures convergence says something different. It says that the space of positive-definite matrices — the space in which both covariance matrices and quantum density matrices live — has an intrinsic geometry, and that geometry is the same whether you arrive at it from information theory or from quantum physics. What’s “quantum” is not a fragile state sitting inside the space. What’s quantum is the *structure of the space itself*.
Think of it this way. The space of possible states of the spin system has a shape — a geometry. That geometry has certain properties: curvature, boundaries, floors below which certain kinds of compression can’t continue. Those geometric properties don’t depend on which particular state the system is in at any given moment. They’re properties of the arena, not of any particular player on the field.
Decoherence changes which state the system occupies — it pushes the state toward the maximally mixed centre of the space. But it doesn’t change the geometry of the space. The Casimir floor is still there. The overflow mechanism is still there. The cross-mode channels are still there. A state that has been thermalised and pushed back toward the centre can be driven back toward the boundary by the next heartbeat, and when it gets there, it encounters the same geometric structure as before.
This is why the framework can accommodate thermal noise rather than fighting it. The relevant “quantum” structure is not a delicate physical configuration that noise destroys. It is the geometric structure of the information space in which the system’s states are represented — and that structure is permanent, intrinsic, and independent of any particular state’s survival.
The physical spins participate because they are the degrees of freedom whose states populate this space. The Bures metric governs their state geometry not because someone applied a quantum theory to them from outside, but because their state space *is* the space of positive-definite matrices, whose intrinsic geometry *is* the Bures metric. The “quantum” content of the framework is therefore not that individual spins are in exotic superpositions. It is that the information geometry governing the boundary regime — the regime where inference becomes action — has the structure of quantum state space, with all the constraints (Casimir floors, cross-mode overflow, non-compact pair structure) that entails.
This distinction matters enormously for the decoherence question. If you’re trying to protect a fragile state from noise, you face Tegmark’s bounds and you lose. If instead the relevant structure is geometric — a property of the space rather than of any state within it — then the question is entirely different. You don’t need the state to survive. You need the system to be *driven back into the relevant geometric regime* periodically. And that’s exactly what the cardiac cycle does.
---
What the experimental signal actually is
The final step in the programme takes a completely different approach. Instead of working from the geometry down to the physics, it works from the measured signal up.
The question is: what kind of spin dynamics could produce the signal we observe? And what can we say about the quantum state of the spin system based on that signal?
The first thing the paper establishes is an algebraic correction. In the original 2022 paper, we described the signal as “zero-quantum coherence” — a standard NMR term for correlations between spins that don’t change the total magnetic quantum number. But a careful algebraic analysis reveals something more specific.
The two-spin system has two natural subalgebras. One is compact — it generates bounded, oscillatory dynamics (the familiar exchange coupling of NMR). The other is non-compact — it generates unbounded, hyperbolic dynamics (squeezing and pair creation). The compact algebra lives in the zero-quantum sector (coherence order zero). The non-compact algebra — SU(1,1) — lives in the *double-quantum* sector (coherence order ±2).
The SU(1,1) pair operators connect the “both up” and “both down” states of the spin pair. They create and destroy *pairs*. And they carry double-quantum coherence order, not zero-quantum.
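The distinction between the two algebras is visible in the smallest possible example. The 2×2 matrices below are the defining representations of su(2) and su(1,1), used purely as an illustration (they are not the actual spin-pair operators of the paper): a single sign in the commutator separates bounded rotation from unbounded hyperbolic growth.

```python
import numpy as np
from scipy.linalg import expm

# Compact su(2) in its 2x2 (spin-1/2) representation.
Jp = np.array([[0, 1], [0, 0]], dtype=complex)   # raising
Jm = Jp.conj().T                                 # lowering
Jz = np.diag([0.5, -0.5]).astype(complex)

# Non-compact su(1,1) in its 2x2 defining representation.
Kp = np.array([[0, 1], [0, 0]], dtype=complex)
Km = np.array([[0, 0], [-1, 0]], dtype=complex)
K0 = np.diag([0.5, -0.5]).astype(complex)

comm = lambda a, b: a @ b - b @ a
print(np.allclose(comm(Jp, Jm), 2 * Jz))    # su(2):   [J+, J-] = +2 Jz
print(np.allclose(comm(Kp, Km), -2 * K0))   # su(1,1): [K+, K-] = -2 K0

# That sign controls the dynamics: the su(2) flow is a bounded rotation,
# while the su(1,1) flow grows hyperbolically (squeezing / pair creation).
t = 3.0
print(np.abs(expm(t * (Jp - Jm))).max())    # stays O(1), periodic in t
print(np.abs(expm(t * (Kp - Km))).max())    # grows like cosh(t), about 10 here
```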
So how does a double-quantum (DQ) coherence produce a detectable signal through a readout that filters for zero-quantum? The answer is a specific coherence-transfer pathway. The first 45° pulse converts part of the DQ pair coherence into a zero-quantum intermediate. The gradient filter preserves that intermediate (while destroying the original DQ component). The second 45° pulse converts the surviving zero-quantum intermediate into detectable single-quantum magnetisation.
The paper derives this pathway explicitly and shows that the detected signal is real — but the transfer coefficient is smaller than it would be for a directly zero-quantum coherence. This means that for a given measured signal amplitude, the underlying pair coherence is *larger* than a naive calibration would suggest.
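The pathway can be checked in a toy two-spin simulation. In the sketch below, the pulse phases and the absence of evolution delays and relaxation are my simplifying assumptions, not the paper’s calibrated sequence; the point is only that a pure double-quantum input yields a nonzero single-quantum output, with a transfer coefficient below one.

```python
import numpy as np
from scipy.linalg import expm

# Two spin-1/2 nuclei; single-spin operators (hbar = 1).
sx = np.array([[0, 1], [1, 0]], dtype=complex) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
e2 = np.eye(2)
I1x, I2x = np.kron(sx, e2), np.kron(e2, sx)
I1y, I2y = np.kron(sy, e2), np.kron(e2, sy)

# Coherence order of each matrix element: difference of the total magnetic
# quantum numbers of the basis states |uu>, |ud>, |du>, |dd>.
m = np.array([1, 0, 0, -1])
order = m[:, None] - m[None, :]

def keep(rho, orders):
    """Project onto the listed coherence orders (an idealised gradient filter)."""
    return np.where(np.isin(order, orders), rho, 0)

# Pure double-quantum pair coherence: DQx = I1x I2x - I1y I2y.
rho = I1x @ I2x - I1y @ I2y
assert np.allclose(rho, keep(rho, [2, -2]))      # confirms coherence order +/-2
norm_in = np.linalg.norm(rho)

R45 = expm(-1j * (np.pi / 4) * (I1x + I2x))      # hard 45-degree pulse about x
rho = R45 @ rho @ R45.conj().T                   # pulse 1: DQ spreads over orders
rho = keep(rho, [0])                             # filter: only order 0 survives
rho = R45 @ rho @ R45.conj().T                   # pulse 2: order 0 -> observable

sq = keep(rho, [1, -1])                          # single-quantum (detectable) part
print(np.linalg.norm(sq) / norm_in)              # prints 0.375 in this toy sequence
```

In this idealised version the surviving single-quantum term is antiphase; in a real sequence, further J-coupling evolution is what would render it directly observable as in-phase magnetisation.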
The paper then assembles the experimental evidence discriminating between the two algebras. Eight independent features of the measured signal — preparation dependence, time dependence, refocusing frequency, echo parity, amplitude scaling, coherence pathway, magic-angle behaviour, and reservoir coupling structure — are tabulated against the predictions of compact SU(2) and non-compact SU(1,1). Every feature favours SU(1,1).
Finally, the entanglement question — and why it’s harder than it looks
With the signal identified as SU(1,1) pair coherence, the natural next question is: does this mean the spins are entangled?
The paper’s answer is carefully layered. It identifies three levels of interpretation, each requiring stronger assumptions than the last.
Level 1: Metric-regime witness. The signal is inconsistent with compact SU(2) exchange. This alone establishes that the system has entered a non-compact dynamical regime — the regime that, in the companion geometric paper, corresponds to the deep boundary where cross-mode structure becomes necessary.
Level 2: MQC/squeezing witness. The detected signal is a collective double-quantum coherence intensity. In the framework developed by Gärttner, Hauke, and Rey, such multiple-quantum coherence intensities can themselves serve as many-body entanglement witnesses — observable quantities whose values, if they exceed a separable bound, certify that the system cannot be a mixture of unentangled states.
Level 3: Formal entanglement witness. Under the additional assumption that the pathway calibration and separable bound can be quantitatively evaluated, the signal becomes a testable entanglement criterion.
But here’s the critical subtlety. A naive attempt to evaluate entanglement by looking at a single spin pair fails catastrophically in room-temperature NMR. The reason is thermodynamic. At body temperature, the density matrix of any spin system is overwhelmingly dominated by the identity — the completely mixed state. The thermal polarisation is about one part in a hundred thousand. Any pair coherence, no matter how strong relative to the equilibrium magnetisation, is tiny in absolute terms compared to the uniform background. A bipartite entanglement criterion compares the off-diagonal coherence (tiny) against the anti-aligned populations (approximately 1/4 each). The coherence loses by five orders of magnitude. This is the pseudopure-state obstruction, well known in liquid-state NMR quantum information.
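The arithmetic behind that five-orders-of-magnitude statement is quick to reproduce. The 3 T field strength below is my assumption, chosen as a typical clinical scanner; the published experiments’ field may differ.

```python
import scipy.constants as const

B0    = 3.0      # tesla: assumed scanner field strength
T     = 310.0    # kelvin: body temperature
gamma = 2.675e8  # proton gyromagnetic ratio, rad s^-1 T^-1

# High-temperature thermal polarisation p ~ hbar * gamma * B0 / (2 kB T)
p = const.hbar * gamma * B0 / (2 * const.k * T)
print(f"thermal polarisation ~ {p:.1e}")  # about 1e-5: one part in a hundred thousand
```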
The paper identifies this honestly and shows that the resolution is to abandon the bipartite picture entirely. The signal is not the coherence of a single pair against a thermal background. It is a collective coherence of a macroscopic ensemble — and the correct framework for evaluating it is the MQC witness, which compares the detected collective coherence intensity against the separable bound for the *entire thermal ensemble*, not for an isolated pair.
The formal witness takes the form of a ratio: detected DQ coherence intensity divided by the maximum achievable by a fully separable thermal state. If this ratio exceeds one, entanglement is certified.
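In symbols (my notation, not necessarily the paper’s):

$$
\mathcal{W} \;=\; \frac{I_{\mathrm{DQ}}^{\text{detected}}}{\max_{\rho\,\text{separable}}\, I_{\mathrm{DQ}}(\rho)} \;>\; 1 \quad\Longrightarrow\quad \text{entanglement certified}
$$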
The paper does not yet evaluate this ratio numerically. Two things are needed: a full simulation of the transfer coefficient for the actual pulse sequence, and computation of the separable bound for non-compact SU(1,1) generators (the existing theory was developed for compact generators). Both are concrete, well-defined problems. The paper identifies them as the next steps.
---
Where things stand
Let me be direct about what this programme has and hasn’t accomplished.
What it has:
- A mathematical framework in which the transition from inference to action is accompanied by a metric regime change.
- A proof that at the deep boundary, single-mode compression hits a floor and cross-mode structure becomes algebraically necessary.
- A resolution of the apparent conceptual gap between transport geometry and quantum spin physics (they share the same intrinsic geometry).
- A signal-level analysis that identifies the detected spin coherence as non-compact SU(1,1) pair dynamics, distinguishable from compact exchange by eight independent features.
- A formal many-body entanglement witness whose structure is derived but whose numerical evaluation awaits two specific calibrations.
- A set of falsifiable predictions: cardiac-phase dependence, wakefulness dependence, magic-angle behaviour, sensitivity to SU(1,1) preparation.
What it doesn’t have:
- A closed numerical entanglement certification. The formal witness is in place, but the separable bound for non-compact generators hasn’t been computed, and the transfer coefficient hasn’t been fully simulated.
- Measurements designed to test the framework. The existing data — while consistent with the framework — preceded it.
---
The bigger picture
I want to close with what I think is the most important conceptual point, because it’s easy to miss amid the technical details.
The standard framing of “quantum effects in the brain” asks whether fragile quantum states can survive the hostile thermal environment of warm tissue. The answer to that question, as Tegmark showed, is almost certainly no — if you’re asking about the kind of quantum states that require isolation from noise.
But that’s the wrong question. The framework developed here doesn’t require quantum states to survive thermal noise. It doesn’t even require “quantum” to mean what most people think it means in this context.
What the Bures convergence shows is that the geometry governing the boundary regime — the regime where inference becomes action — is quantum state geometry. Not because someone applied quantum mechanics to the brain from outside, but because the information space in which uncertainty is represented has, intrinsically, the same mathematical structure as the space of quantum states. The “quantum” part is the geometry of the arena, not a fragile property of any particular state on the field.
The thermal noise floor sets the Casimir bound. The Casimir bound forces the overflow into cross-mode structure. The cross-mode structure is what produces the signal. And the cardiac cycle periodically drives the system back into the geometric regime where this structure exists — not by protecting a delicate state from noise, but by pushing the collective covariance past a geometric threshold that is itself *defined by* the noise.
The thermal environment is not the obstacle. It is the engine.
Whether that engine actually runs in the living brain — whether the cardiac cycle really does drive the collective spin covariance below the Casimir floor, whether the resulting cross-mode structure really constitutes many-body entanglement in the technical sense — these are open empirical questions. The framework makes them askable. The measurements make them approachable. The next step is to answer them.
---
The papers discussed in this post (their content has shifted and been reshuffled somewhat as the programme evolved):
1. “Metric regime change at the Gaussian boundary of Wasserstein space” — the mathematical foundation.
2. “The inference–action boundary as a geometric regime change” + “Gradient flows on Riemannian manifolds with conformal substrate constraint” — the cognitive model.
3. “Cardiac-Phase-Dependent Spin Coherence as a Probe of Boundary Covariance Geometry in Neural Tissue” — the bridge between geometry and measurement.
4. “SU(1,1) Pair Dynamics and an Entanglement Witness in Brain Proton Spin Ensembles” — the signal-level analysis.
*The original experimental observation:*
5. C. Kerskens and D. López Pérez, “Experimental indications of non-classical brain functions,” *J. Phys. Commun.* **6** (2022), 105001.
