
Spectral Dynamics in E/I Networks

DH and IN

A derivation of Hopfield dynamics through the lens of linear operators and spectral stability.

The Hopfield network stores memories as fixed points of a recurrent dynamical system. The classical treatment builds a weight matrix from correlations and checks that target patterns are stable. We take a different approach: treat the weight matrix as a linear operator, decompose it spectrally, and use the eigenstructure to reason about both memory stability and the effects of structural perturbations.

The notes are in two parts.

Part I develops the operator framework: memories as kets, learning as a sum of projectors, retrieval as an inner product, and Amari's stability numbers as the quantitative measure of basin size.

Part II introduces biological sign constraints (Dale's Law) as low-rank perturbations of the Hebbian operator, culminating in a four-block parameterisation that allows independent control of excitatory and inhibitory coupling. The central question is: how does the balance of excitation and inhibition reshape the attractor landscape?

Part I: The Operator Framework

1. The Memory as a Ket

In a network of $N$ neurons, we represent a specific memory pattern as a vector in an $N$-dimensional Hilbert space, denoted as a ket $|\xi^{(\alpha)}\rangle$.

$$|\xi^{(\alpha)}\rangle = \begin{pmatrix} \xi_1^{(\alpha)} \\ \xi_2^{(\alpha)} \\ \vdots \\ \xi_N^{(\alpha)} \end{pmatrix}$$

Each component corresponds to the state of an individual neuron, constrained to the bipolar discrete state space $\xi_i \in \{-1, 1\}$. Geometrically, these kets point to the vertices of an $N$-dimensional hypercube.

2. The Hebbian Matrix as an Operator

Learning is the construction of a linear operator $\hat{J}_{\mathrm{Hebb}}$ that maps the state space back onto itself. In Dirac notation, the Hebbian rule is a sum of projection operators:

$$\hat{J}_{\mathrm{Hebb}} = \frac{1}{N} \sum_{\alpha=1}^{p} |\xi^{(\alpha)}\rangle \langle \xi^{(\alpha)}|$$

Expanding this into component form, we recover the classic correlation-based weight matrix:

$$J_{ij} = \frac{1}{N} \sum_{\alpha=1}^{p} \xi_i^{(\alpha)} \xi_j^{(\alpha)}$$

In this spectral view, each target memory becomes an eigenstate of the network: if the patterns are orthogonal, $\hat{J}_{\mathrm{Hebb}}$ acts as the identity on the subspace of stored memories, ensuring they are fixed points of the system dynamics.
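These definitions are easy to check numerically. A minimal NumPy sketch, using a small network of $N = 8$ and three mutually orthogonal Hadamard rows as stored patterns (illustrative choices, not from the notes):

```python
import numpy as np

# Sylvester Hadamard matrix: its rows are mutually orthogonal +/-1 vectors,
# a convenient source of exactly orthogonal bipolar patterns
H = np.array([[1]])
for _ in range(3):
    H = np.block([[H, H], [H, -H]])

N = 8
patterns = H[1:4]                    # p = 3 stored memories (kets)
J = patterns.T @ patterns / N        # J_ij = (1/N) sum_a xi_i^a xi_j^a

# each stored ket is an eigenstate of J with eigenvalue 1,
# hence a fixed point of the dynamics
assert all(np.allclose(J @ xi, xi) for xi in patterns)
```

With orthogonal patterns and the $1/N$ normalisation, the eigenvalue attached to each stored ket is exactly 1.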

3. Retrieval as an Inner Product

When the network is in a noisy state $|\sigma\rangle$, the retrieval mechanism calculates the overlap (or correlation) between the current state and each stored ket:

$$m_{\alpha} = \frac{1}{N} \langle \xi^{(\alpha)} | \sigma \rangle$$

The value $m_{\alpha}$ acts as a "similarity score" between $-1$ and $1$. The total input or "local field" $h_i$ acting on neuron $i$ is then the projection of the current state through the Hebbian operator:

$$h_i = \langle i | \hat{J}_{\mathrm{Hebb}} | \sigma \rangle = \sum_{\alpha=1}^{p} \xi_i^{(\alpha)} \left( \frac{1}{N} \langle \xi^{(\alpha)} | \sigma \rangle \right)$$

This reveals the competitive nature of recall: the network effectively "weights" each memory ket by its similarity to the input $|\sigma\rangle$, pulling the state toward the ket with the highest overlap.
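One retrieval step can be sketched in the same toy setup ($N = 8$, three orthogonal Hadamard patterns — illustrative assumptions): corrupt one neuron of a stored pattern and verify that a single synchronous sign update restores it.

```python
import numpy as np

# toy setup as before: three orthogonal Hadamard patterns, N = 8
H = np.array([[1]])
for _ in range(3):
    H = np.block([[H, H], [H, -H]])
N = 8
patterns = H[1:4]
J = patterns.T @ patterns / N

sigma = patterns[0].copy()
sigma[0] *= -1                       # corrupt one neuron

m = patterns @ sigma / N             # overlaps m_alpha (similarity scores)
h = J @ sigma                        # local field: sum_a xi^a m_a
recalled = np.sign(h).astype(int)    # one synchronous update restores the memory
```

Here $m = (0.75, -0.25, -0.25)$: the corrupted state overlaps most with the first pattern, and the field $h = \sum_\alpha m_\alpha\, \xi^{(\alpha)}$ pulls every neuron back to it.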

4. Type I and Type II Networks

The $\langle \text{bra} |$ in our notation defines the receptive field of each neuron. Amari (1972) distinguished between two fundamental ways to pair bras and kets:

  • Type I (Auto-association): The bra matches the ket. $$\hat{J}_{\mathrm{Type\ I}} = \frac{1}{N} \sum_{\alpha=1}^p |\xi^{(\alpha)}\rangle \langle \xi^{(\alpha)}|$$ The operator is symmetric, causing the state to relax into static attractors.
  • Type II (Hetero-association): The bra of one pattern is paired with the ket of the next pattern in a sequence. $$\hat{J}_{\mathrm{Type\ II}} = \frac{1}{N} \sum_{\alpha=1}^p |\xi^{(\alpha+1)}\rangle \langle \xi^{(\alpha)}|$$ Here $\alpha$ is taken cyclically, with $|\xi^{(p+1)}\rangle \equiv |\xi^{(1)}\rangle$. This breaks symmetry and allows the network to recall temporal sequences.
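The hetero-associative operator can be exercised in the same toy setup (orthogonal Hadamard patterns and cyclic indexing — both assumptions of this sketch): starting from one stored ket, repeated updates walk through the sequence.

```python
import numpy as np

# toy setup: three orthogonal Hadamard patterns, N = 8
H = np.array([[1]])
for _ in range(3):
    H = np.block([[H, H], [H, -H]])
N = 8
patterns = H[1:4]
p = len(patterns)

# Type II operator |xi^{a+1}><xi^a| with cyclic indexing
J2 = sum(np.outer(patterns[(a + 1) % p], patterns[a]) for a in range(p)) / N

state = patterns[0]
for _ in range(p):                   # one full cycle: 0 -> 1 -> 2 -> 0
    state = np.sign(J2 @ state).astype(int)
```

Because the patterns are orthogonal, each update maps one stored ket exactly onto the next, so after $p$ steps the state returns to where it began.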

5. Stability

For a Type I network with orthogonal patterns, each stored memory $|\xi^{(\alpha)}\rangle$ is an eigenvector of the Hebbian operator with eigenvalue $\lambda^{(\alpha)}$. The eigenvalue determines not just that the memory is a fixed point, but how robust it is to noise. Amari (1972) showed that the stability number — the maximum number of neurons that can be flipped without disrupting recall — is

$$s\bigl(|\xi^{(\alpha)}\rangle\bigr) = \left\lfloor \tfrac{\lambda^{(\alpha)}}{2} \right\rfloor$$

The stability domain is the Hamming ball of radius $s$ around the memory:

$$D\bigl(|\xi^{(\alpha)}\rangle\bigr) = \bigl\{ |\sigma\rangle : d_H\bigl(|\sigma\rangle,\, |\xi^{(\alpha)}\rangle\bigr) \leq s \bigr\}$$

where $d_H$ denotes the Hamming distance.

Any initial state inside this ball converges to $|\xi^{(\alpha)}\rangle$ in a finite number of steps. Larger eigenvalues mean wider basins and more robust recall. This is the key link between Parts I and II: when a perturbation changes the eigenstructure, the stability numbers — and therefore the basins — change with it.
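The uniform margins behind this result can be verified directly. A sketch with the same toy patterns (note that with the $1/N$-normalised operator of §2 the eigenvalue, and hence every margin, is 1; Amari's formula is stated in his own scaling convention):

```python
import numpy as np

# toy setup: three orthogonal Hadamard patterns, N = 8
H = np.array([[1]])
for _ in range(3):
    H = np.block([[H, H], [H, -H]])
N = 8
patterns = H[1:4]
J = patterns.T @ patterns / N

# per-neuron agreement margin u_i = xi_i * (J xi)_i at a stored pattern
u = patterns[0] * (J @ patterns[0])
# at an eigenstate the margins are uniform and equal the eigenvalue
# (here 1, since the 1/N-normalised operator of Section 2 is used)
```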

(The derivation, via Amari's $u_i$ functions, is given in the Appendix.)

Part II: Biological Constraints

6. Rank-1 Perturbation (Excitatory Neurons Only)

To transform a Hebbian matrix into a purely excitatory network where all weights $J_{ij} \geq 0$, we apply a rank-1 perturbation using the "all-ones" ket $| \mathbf{1} \rangle$:

$$\hat{J}_{\mathrm{Exc}} = \hat{J}_{\mathrm{Hebb}} + c | \mathbf{1} \rangle \langle \mathbf{1} |$$

For the memory attractors to remain spectrally stable, the perturbation must be "invisible" when the operator acts on a memory ket:

$$\hat{J}_{\mathrm{Exc}} |\xi^{(\alpha)}\rangle = |\xi^{(\alpha)}\rangle + c \langle \mathbf{1} | \xi^{(\alpha)} \rangle | \mathbf{1} \rangle$$

This requires the Global Balance Constraint: $\langle \mathbf{1} | \xi^{(\alpha)} \rangle = 0$. If the memory patterns have a mean of zero, the weight shift disappears during recall, and the memories remain fixed points.
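A numerical check, reusing the toy Hadamard patterns (the value of $c$ is an illustrative choice, picked large enough that every entry of $\hat{J}_{\mathrm{Exc}}$ is nonnegative): because these patterns have zero mean, the rank-1 shift leaves them fixed points.

```python
import numpy as np

H = np.array([[1]])
for _ in range(3):
    H = np.block([[H, H], [H, -H]])
N = 8
patterns = H[1:4]                    # zero-mean rows: <1|xi> = 0
J = patterns.T @ patterns / N

ones = np.ones(N)
c = 0.5                              # illustrative value, chosen so J_exc >= 0
J_exc = J + c * np.outer(ones, ones)

# all weights nonnegative, yet the memories are untouched during recall
assert (J_exc >= 0).all()
assert all(np.allclose(J_exc @ xi, xi) for xi in patterns)
```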

7. Rank-2 Perturbation (Balanced E/I Networks)

To enforce Dale's Law with distinct populations of excitatory and inhibitory neurons, we use a sign-alternating "bra" $\langle \mathbf{1}_{+-} |$ which acts as a population map (assigning +1 to E-cells and -1 to I-cells):

$$\hat{J}_{\mathrm{EI}} = \hat{J}_{\mathrm{Hebb}} + c | \mathbf{1} \rangle \langle \mathbf{1}_{+-} |$$

Merely having a global mean of zero is insufficient here. If the memory $|\xi^{(\alpha)}\rangle$ has any overlap with the E/I partition, the operator will inject a global bias. We require the Strict Null-Space Constraint:

$$\langle \mathbf{1}_{+-} | \xi^{(\alpha)} \rangle = 0$$

Combined with the global zero-mean condition of §6, this implies the memory must be balanced within the E and I pools independently. Under this structural symmetry, the biological constraints are spectrally decoupled from the memories.
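Continuing the sketch (toy patterns; first half of the neurons taken as excitatory, second half as inhibitory — all illustrative assumptions): the chosen Hadamard rows happen to be balanced within each half, so the rank-2 shift is spectrally invisible to them.

```python
import numpy as np

H = np.array([[1]])
for _ in range(3):
    H = np.block([[H, H], [H, -H]])
N = 8
patterns = H[1:4]
J = patterns.T @ patterns / N

ones = np.ones(N)
pop = np.concatenate([np.ones(N // 2), -np.ones(N // 2)])  # <1_{+-}|: +1 on E, -1 on I
c = 0.5
J_ei = J + c * np.outer(ones, pop)

# these rows are balanced within each half, so <1_{+-}|xi> = 0
# and the memories remain exact fixed points
assert all(np.allclose(J_ei @ xi, xi) for xi in patterns)
```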

8. Four-Block Generalisation

Sections 6–7 enforce Dale's Law with a single parameter. To study how independent variation of excitatory and inhibitory coupling affects memory, we decompose the perturbation into four blocks. Define indicator kets for the two populations:

$$|\mathbf{1}_E\rangle = \begin{pmatrix} 1 \\ \vdots \\ 1 \\ 0 \\ \vdots \\ 0 \end{pmatrix}, \qquad |\mathbf{1}_I\rangle = \begin{pmatrix} 0 \\ \vdots \\ 0 \\ 1 \\ \vdots \\ 1 \end{pmatrix}$$

where the first $N_E$ components correspond to excitatory neurons and the remaining $N_I$ to inhibitory. The general rank-4 perturbation is:

$$\hat{J} = \hat{J}_{\mathrm{Hebb}} + c_{EE}\, |\mathbf{1}_E\rangle\langle \mathbf{1}_E| + c_{EI}\, |\mathbf{1}_E\rangle\langle \mathbf{1}_I| + c_{IE}\, |\mathbf{1}_I\rangle\langle \mathbf{1}_E| + c_{II}\, |\mathbf{1}_I\rangle\langle \mathbf{1}_I|$$

Each coefficient independently scales one block of the weight matrix: $c_{EE}$ governs recurrent excitation, $c_{EI}$ the excitatory response to inhibitory input, $c_{IE}$ the inhibitory response to excitatory input, and $c_{II}$ recurrent inhibition.
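The four-block operator is straightforward to assemble from indicator vectors. A sketch under the same toy assumptions (the four coefficient values are arbitrary illustrations):

```python
import numpy as np

H = np.array([[1]])
for _ in range(3):
    H = np.block([[H, H], [H, -H]])
N = 8
patterns = H[1:4]
J = patterns.T @ patterns / N

oneE = np.concatenate([np.ones(N // 2), np.zeros(N // 2)])  # |1_E>
oneI = np.concatenate([np.zeros(N // 2), np.ones(N // 2)])  # |1_I>
c_ee, c_ei, c_ie, c_ii = 0.3, -0.2, 0.4, -0.1               # illustrative values
J4 = (J + c_ee * np.outer(oneE, oneE) + c_ei * np.outer(oneE, oneI)
        + c_ie * np.outer(oneI, oneE) + c_ii * np.outer(oneI, oneI))
# each coefficient shifts exactly one block of the weight matrix
```

Since each outer product is nonzero on only one population block, the four coefficients can be tuned independently without touching the Hebbian part.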

Action on a memory

Applying $\hat{J}$ to a stored pattern $|\xi^{(\alpha)}\rangle$ gives:

$$\hat{J}\, |\xi^{(\alpha)}\rangle = |\xi^{(\alpha)}\rangle + \bigl( c_{EE}\, \langle \mathbf{1}_E | \xi^{(\alpha)} \rangle + c_{EI}\, \langle \mathbf{1}_I | \xi^{(\alpha)} \rangle \bigr) |\mathbf{1}_E\rangle + \bigl( c_{IE}\, \langle \mathbf{1}_E | \xi^{(\alpha)} \rangle + c_{II}\, \langle \mathbf{1}_I | \xi^{(\alpha)} \rangle \bigr) |\mathbf{1}_I\rangle$$

The overlaps $\langle \mathbf{1}_E | \xi^{(\alpha)} \rangle$ and $\langle \mathbf{1}_I | \xi^{(\alpha)} \rangle$ measure the imbalance of the memory within each population. The perturbation vanishes — and the memory remains an exact fixed point — if and only if both overlaps are zero:

$$\langle \mathbf{1}_E | \xi^{(\alpha)} \rangle = 0 \qquad \text{and} \qquad \langle \mathbf{1}_I | \xi^{(\alpha)} \rangle = 0$$

That is, the memory must be independently balanced within the excitatory and inhibitory populations. This recovers the constraints from §6–7:

$$\langle \mathbf{1}_E | \xi^{(\alpha)} \rangle = 0,\; \langle \mathbf{1}_I | \xi^{(\alpha)} \rangle = 0 \;\;\Longleftrightarrow\;\; \langle \mathbf{1} | \xi^{(\alpha)} \rangle = 0 \;\;\text{and}\;\; \langle \mathbf{1}_{+-} | \xi^{(\alpha)} \rangle = 0$$

When balance breaks

When $\langle \mathbf{1}_E | \xi^{(\alpha)} \rangle$ or $\langle \mathbf{1}_I | \xi^{(\alpha)} \rangle$ is nonzero, the perturbation injects a bias that is visible to the memory. The margins $u_i$ (§5) become neuron-dependent, and the stability number is set by the worst-case neuron (see Appendix A). Crucially, the four coefficients contribute differently:

  • Reducing $c_{IE}$ or $c_{II}$ (disinhibition): weakens the restoring force on excitatory neurons, inflating their stability values while deflating those of inhibitory neurons. The basin narrows asymmetrically.
  • Increasing $c_{EE}$ (excess recurrent excitation): raises the global activity level. Stored patterns remain attractors but spurious high-activity states can emerge as new fixed points.
  • Asymmetric $c_{EI}$ and $c_{IE}$: breaks the symmetry of the weight matrix. The energy function is no longer well-defined, and the dynamics can exhibit limit cycles rather than convergence to fixed points.

The E/I sweep experiments (TODO) map out these regimes numerically, measuring stability numbers and basin sizes across the four-dimensional parameter space.

Appendix

Appendix A: The Stability Number

Amari (1972) defined

$$u_i\bigl(|\sigma\rangle\bigr) = \sigma_i \left( \sum_j J_{ij}\, \sigma_j \right)$$

which is positive when neuron $i$ agrees with the dynamics at state $|\sigma\rangle$, and negative when it does not. A state is a fixed point when every $u_i$ is positive; the smallest $u_i$ controls how many neurons can be flipped before the network leaves that state.

When $|\xi^{(\alpha)}\rangle$ is an eigenvector with eigenvalue $\lambda^{(\alpha)}$, every $u_i$ is identical:

$$u_i\bigl(|\xi^{(\alpha)}\rangle\bigr) = \xi_i^{(\alpha)} \cdot \lambda^{(\alpha)} \xi_i^{(\alpha)} = \lambda^{(\alpha)}$$

The stability number — the basin radius in Hamming distance — is then

$$s = \left\lfloor \tfrac{\lambda^{(\alpha)}}{2} \right\rfloor$$

Under a perturbation with matrix entries $P_{ij}$, the uniformity breaks:

$$u_i\bigl(|\xi^{(\alpha)}\rangle\bigr) = \lambda^{(\alpha)} + \xi_i^{(\alpha)} \sum_j P_{ij}\, \xi_j^{(\alpha)}$$

The second term depends on $i$, so different neurons now have different margins. The basin radius is set by the weakest — the neuron with the smallest $u_i$. When the null-space constraints from §6–8 hold, this term vanishes and the basins are unchanged. When they are violated, the basins shrink.
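The broken-balance case can be reproduced numerically. In this sketch (illustrative setup) one stored pattern has zero global mean but a nonzero overlap with $|\mathbf{1}_E\rangle$; a perturbation in the $c_{IE}$ block alone then splits the margins between the two populations:

```python
import numpy as np

H = np.array([[1]])
for _ in range(3):
    H = np.block([[H, H], [H, -H]])
N = 8
# H[4] = (+ + + + - - - -): zero global mean, but E/I-unbalanced
patterns = H[[1, 2, 4]]
J = patterns.T @ patterns / N

oneE = np.concatenate([np.ones(N // 2), np.zeros(N // 2)])
oneI = np.concatenate([np.zeros(N // 2), np.ones(N // 2)])
P = 0.1 * np.outer(oneI, oneE)       # c_IE block only, illustrative strength

xi = patterns[2]                     # the unbalanced memory, <1_E|xi> = 4
u = xi * ((J + P) @ xi)              # per-neuron margins
# margins are no longer uniform: 1.0 on E cells, 0.6 on I cells,
# so the basin radius is set by the weakened inhibitory neurons
```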