I am broadly interested in questions related to the nature of intelligent life and its future. Such questions include: "how frequently does intelligent life arise in the Universe?" (the SETI question), "can simple organizing principles account for the emergent complexity of intelligent behavior?" (which I approach by studying deep learning and reinforcement learning), and "how do we align advanced machine intelligence with human values?" (the AI Safety question).
- Examining the Causal Structures of Deep Neural Networks Using Information Theory.
Scythia Marrow, Eric J. Michaud, Erik Hoel. Entropy, 22(12):1429, 2020. Code. Videos.
- Understanding Learned Reward Functions.
Eric J. Michaud, Adam Gleave, Stuart Russell. Deep RL Workshop, NeurIPS 2020. Code.
- Lunar Opportunities for SETI.
Eric J. Michaud, Andrew Siemion, Jamie Drew, Pete Worden, 2020.
This past summer, I interned with Stuart Russell's AI safety group, the Center for Human-Compatible AI. With Adam Gleave, I worked on a paper exploring the use of machine learning interpretability techniques on learned reward functions. We presented the paper at the Deep RL Workshop at NeurIPS 2020.
Over the last year, I've also been working with the neuroscientist Erik Hoel, measuring effective information and integrated information in deep neural networks. The corresponding paper was published in the journal Entropy, and the code is available here. More broadly, I am interested in the theory of deep learning.
Previously, I worked with the Berkeley SETI Research Center (the Breakthrough Listen Initiative), where I wrote a paper on the idea of conducting radio-frequency SETI searches from the far side of the Moon. More information on the project, with additional links, can be found here. This work was also the subject of a lovely article on supercluster.com.