
Hello!

I've just completed a PhD in the Department of Physics at MIT, supervised by Max Tegmark. My PhD work aimed to understand deep neural networks: the internal mechanisms that networks learn, and how and why they learn them. My best-known work was on neural scaling, The Quantization Model of Neural Scaling, though I also worked on grokking (here, here) and on the structure of neural network representations (here, here). My PhD thesis, Decomposing Deep Neural Network Minds into Parts, is available here.

Before my PhD, I studied math at UC Berkeley. As an undergraduate, I worked with radio astronomers on SETI, with Erik Hoel on deep learning theory, and with Adam Gleave at CHAI on AI safety.

I am currently taking some time to think broadly about my next steps. If you'd like to chat about research or life, feel free to schedule something here.

My email is eric.michaud99@gmail.com, and I am on Twitter @ericjmichaud_. Here are also my GitHub, a CV, and my Google Scholar page.


Selected Works

Podcasts

Selected Talks

Papers