Efficient & Geometrically Smart: Linear-Memory SE(2)-Invariant Attention Explained

In many real-world tasks, such as forecasting the paths of cars at a busy intersection, coordinating fleets of delivery robots, or simulating pedestrian movement, models must reason not only about where things are but also about how they are oriented relative to one another. That is the SE(2) geometry: 2D position plus heading. Transformer models that are SE(2)-invariant (unaffected by global rotations and translations of the scene) traditionally achieve this by computing the relative pose between every pair of objects. With $n$ objects, memory therefore grows as $O(n^2)$, which becomes prohibitively expensive when $n$ is large. ...
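To see where the quadratic cost comes from, here is a minimal sketch, my own illustration rather than the post's linear-memory method, that materializes the full $n \times n$ tensor of pairwise relative SE(2) poses a naive invariant attention layer would consume (the function name `relative_se2_poses` is hypothetical):

```python
import numpy as np

def relative_se2_poses(poses):
    """Naively materialize all pairwise relative SE(2) poses.

    poses: (n, 3) array of (x, y, heading) per object.
    Returns an (n, n, 3) tensor whose [i, j] entry is object j's pose
    expressed in object i's local frame -- the O(n^2) memory cost
    that a linear-memory formulation avoids.
    """
    xy, theta = poses[:, :2], poses[:, 2]
    # Pairwise displacements in the world frame: shape (n, n, 2).
    d = xy[None, :, :] - xy[:, None, :]
    # Rotate each displacement by -theta_i into object i's frame.
    c, s = np.cos(-theta), np.sin(-theta)
    dx = c[:, None] * d[..., 0] - s[:, None] * d[..., 1]
    dy = s[:, None] * d[..., 0] + c[:, None] * d[..., 1]
    # Relative heading, wrapped to (-pi, pi].
    dtheta = (theta[None, :] - theta[:, None] + np.pi) % (2 * np.pi) - np.pi
    return np.stack([dx, dy, dtheta], axis=-1)

poses = np.random.randn(512, 3)    # 512 objects
rel = relative_se2_poses(poses)    # shape (512, 512, 3): quadratic in n
```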

July 25, 2025

Modern Methods in Associative Memory

Associative memory is the ability to store patterns and retrieve them when presented with partial or noisy inputs. Inspired by how the human brain recalls memories, associative memory models are recurrent neural networks that converge to stored patterns over time. The tutorial ‘Modern Methods in Associative Memory’ by Krotov et al. offers an accessible overview for newcomers and a rigorous mathematical treatment for experts, bridging classical ideas with cutting-edge developments in deep learning. ...
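For readers new to the topic, the classical (binary) Hopfield network makes the "converge to stored patterns" idea concrete. The sketch below is a baseline illustration under my own assumptions, not the tutorial's modern formulation: Hebbian storage plus asynchronous sign updates, which retrieve a stored pattern from a corrupted cue:

```python
import numpy as np

rng = np.random.default_rng(0)

# Store a few random binary (+/-1) patterns with the Hebbian rule.
n, num_patterns = 64, 3
patterns = rng.choice([-1.0, 1.0], size=(num_patterns, n))
W = patterns.T @ patterns / n
np.fill_diagonal(W, 0.0)                    # no self-connections

def recall(cue, sweeps=5):
    """Asynchronous updates: the network energy never increases,
    so the state descends toward a stored pattern (an attractor)."""
    state = cue.copy()
    for _ in range(sweeps):
        for i in rng.permutation(n):
            state[i] = 1.0 if W[i] @ state >= 0 else -1.0
    return state

# Corrupt ~15% of one stored pattern, then retrieve it from the cue.
cue = patterns[0].copy()
flip = rng.choice(n, size=n * 15 // 100, replace=False)
cue[flip] *= -1
recovered = recall(cue)
print("match:", np.array_equal(recovered, patterns[0]))
```

Modern variants, one of the directions the tutorial develops, replace this quadratic Hebbian energy with sharper energy functions that store far more patterns.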

July 9, 2025