About

I am a 4th-year physics PhD candidate at Princeton University. My advisors are William Bialek (Princeton) and David Schwab (CUNY). I study learning and abstraction in artificial neural networks through the lens of physics and complex systems. I’m happy to chat via email at lindsay.smith@princeton (dot) edu.

My current research in machine learning/AI includes projects in mechanistic interpretability, meta-learning, in-context learning, and LLM multi-agent interactions.


Papers

When can in-context learning generalize out of task distribution? (ICML 2025)

Model Recycling: Model component reuse to promote in-context learning (NeurIPS 2024 SciForDL Workshop)

Specialization-generalization transition in exemplar-based in-context learning (NeurIPS 2024 SciForDL Workshop)

Learning continuous chaotic attractors with a reservoir computer (Chaos). Selected as an Editor’s Pick and publicized with a Scilight press summary.


Background

Previously, I was an undergrad at the University of Pennsylvania, where I majored in physics (with honors), minored in mathematics and in French and Francophone studies, and graduated cum laude. My undergrad research advisor was Dani Bassett, and I researched human white-matter brain networks, human perception of stars in the night sky, and abstraction in reservoir computers (a type of RNN).

Before undergrad, I worked on a project at WSU designing a flux spectrometer for the DUNE collaboration at Fermilab, advised by Holger Meyer.


CV

Link to CV