About me
I am currently a research scientist at Google DeepMind, which I joined via the AI Residency (2019 cohort). Previously, I was a physics PhD student at Stanford University, advised by Daniel S. Fisher. Before that, I studied math and physics as an undergraduate at Swarthmore College.
My PhD research area was, broadly speaking, theoretical biology. During my PhD, I primarily studied evolution - trying to understand how different dynamical processes combine to produce evolutionary dynamics, and to characterize properties like the speed and predictability of evolution. I studied theoretical models of evolution on random fitness landscapes to understand the complexities of evolution in various biologically plausible scenarios. Towards the end of my PhD, I worked at the intersection of ecology and evolution - using modeling and simulation to show that diversity can be maintained dynamically through interactions between hosts and pathogens.
In addition to my theoretical work, I’ve analyzed data from experimental evolution and developed robust methods for inferring fitness from abundance data (code here). I used my code to understand the nature of fitness gains in glucose-limited yeast (in collaboration with experimentalists from the Petrov and Sherlock labs at Stanford).
For more on my PhD work, check out this interview.
At Google DeepMind, my primary focus is understanding optimization and generalization in machine learning models through the lens of dynamical systems. Much of my work focuses on the feedback between choices made during the learning process (optimizer, hyperparameters, data preprocessing, batch size) and the evolution of the local loss landscape geometry. I tend to focus on phenomena that show up robustly across models and datasets, with an eye towards understanding them in the high-dimensional limit (of both model size and dataset size). I employ a mix of theoretical modeling inspired by simple dynamical systems and numerical experiments to gain insights that are robust and relevant at scale. I have a secondary interest in how optimization choices are linked to feature learning and other types of inductive bias.
I also maintain an active interest in the intersection of machine learning and theoretical biology. Previously at Google, I worked on creating benchmarks for protein design; this work involved generating synthetic, data-inspired fitness landscapes using insights from statistical physics and bioinformatics. I’m currently developing ML methods for use in theoretical biology.
Links: