
I am a fourth-year physics PhD candidate and NSF GRFP fellow at Princeton University. My advisors are William Bialek (Princeton) and David Schwab (CUNY). My research focuses on the science of AI, drawing on my background in physics and complex systems.
My current research in machine learning/AI includes projects in mechanistic interpretability, in-context learning, and LLM multi-agent interactions.
I’m also investigating chain-of-thought injections as a control method for AI safety in the MARS 3.0 research program. More information on our project can be found on the Geodesic Research website.
I’m happy to chat via email at lindsay.smith@princeton (dot) edu.
Papers
ALICE: An Interpretable Neural Architecture for Generalization in Substitution Ciphers
(Under review)
Project page + demo
When can in-context learning generalize out of task distribution?
(ICML 2025)
Model Recycling: Model component reuse to promote in-context learning
(NeurIPS 2024 SciForDL Workshop)
Specialization-generalization transition in exemplar-based in-context learning
(NeurIPS 2024 SciForDL Workshop)
Learning continuous chaotic attractors with a reservoir computer
(Chaos, 2022)
Selected as an Editor’s Pick and publicized with a Scilight press summary.
Background
Previously, I was an undergrad at the University of Pennsylvania, where I majored in physics (with honors), minored in mathematics and in French and Francophone Studies, and graduated cum laude. My undergrad research advisor was Dani Bassett, and I studied human white matter brain networks, human perception of the stars in the night sky, and abstraction in reservoir computers (a type of RNN).
Before undergrad, I worked on a project at Wichita State University (WSU) designing a flux spectrometer for the DUNE collaboration at Fermilab, advised by Holger Meyer.