me


Bio

Brando Miranda is currently a Computer Science (AI/ML) Ph.D. student at Stanford University under the supervision of Professor Sanmi Koyejo in the Stanford Trustworthy AI Research (STAIR) group. Previously, Miranda was a graduate researcher at the University of Illinois Urbana-Champaign, a Research Assistant at the Massachusetts Institute of Technology (MIT)'s Center for Brains, Minds and Machines (CBMM), and a graduate student at MIT. Miranda's research interests focus on data-centric machine learning for Frontier Models (FMs), Transformative Artificial Intelligence (TAI), and machine learning for mathematics and verified code. Miranda earned his Master of Engineering in Electrical Engineering and Computer Science at MIT, conducting research on Deep Learning Theory under the guidance of Professor Tomaso Poggio. Miranda has received several awards, including the prestigious NeurIPS Outstanding Main Track Paper Award (top 0.4%; only 2 papers selected), the Most Cited Paper Certificate awarded by the International Journal of Automation & Computing (IJAC), two Honorable Mentions for the Ford Foundation Fellowship, the Computer Science Excellence Saburo Muroga Endowed Fellowship, and a Stanford School of Engineering Fellowship; he is currently an EDGE Scholar at Stanford University. Miranda is more than a researcher: he is an innovator, a communicator, and deeply passionate about the future. To that end, he worked as a Machine Learning advisor at the AI startup Morph Labs, where he was a key contributor to the Morph Prover, a frontier model for mathematics and verified code in Lean 4, and to Moogle.ai, a search engine for verified code in Lean 4. Aligned with his passion for innovation and startups, he has also advised Wise, a Stanford-based startup focused on transforming sales performance with AI.


Shorter Bio

Brando Miranda is currently a Computer Science (AI/ML) Ph.D. student at Stanford University under the supervision of Professor Sanmi Koyejo in the Stanford Trustworthy AI Research (STAIR) group. Miranda's research interests focus on data-centric machine learning for Frontier Models (FMs), Transformative Artificial Intelligence (TAI), and machine learning for mathematics and verified code. Miranda has received several awards, including the prestigious NeurIPS Outstanding Main Track Paper Award (top 0.4%; only 2 papers selected). Miranda is more than a researcher: he is an innovator, a communicator, and deeply passionate about the future. To that end, he worked as a Machine Learning advisor at the AI startup Morph Labs, where he was a key contributor to the Morph Prover, a frontier model for mathematics and verified code, and to Moogle.ai, a search engine for verified code. In addition, he advised Wise, a Stanford-based startup focused on transforming sales performance with AI.


Selected Publications [Full List]

Why Has Predicting Downstream Capabilities of Frontier AI Models with Scale Remained Elusive? Rylan Schaeffer, Hailey Schoelkopf, Brando Miranda, Gabriel Mukobi, Varun Madan, Adam Ibrahim, Herbie Bradley, Stella Biderman, Sanmi Koyejo. Outstanding Paper Award, ICML Trustworthy Multi-modal Foundation Models and AI Agents (TiFA) Workshop. [arxiv] [ICML Award]

Are Emergent Abilities of Large Language Models a Mirage? Rylan Schaeffer, Brando Miranda, Sanmi Koyejo. [NeurIPS Outstanding Main Track Paper Award 2023 & NeurIPS Oral] [OpenReview] [Stanford IEEE Invited Talk 2023]

Morph Prover v0 7b: The 1st Frontier Model for the Lean 4 Formal Verification Programming Language [blog] [HF Model Card]

Is Pre-training Truly Better Than Meta-Learning? Brando Miranda, Patrick Yu, Saumya Goyal, Yu-Xiong Wang, Sanmi Koyejo. ICML Data-Centric Machine Learning Workshop 2023. [Poster] [arxiv] [ICML PDF] [Code Coming Soon]

Beyond Scale: the Diversity Coefficient as a Data Quality Metric Demonstrates LLMs are Pre-trained on Formally Diverse Data. Alycia Lee*, Brando Miranda*, Patrick Yu, and Oluwasanmi Koyejo. ICML Data-Centric Machine Learning Workshop 2023 & ICML Challenges in Deployable Generative AI Workshop 2023. [Poster] [arxiv] [ICML PDF] [Code]

The Curse of Low Task Diversity: On the Failure of Transfer Learning to Outperform MAML and Their Empirical Equivalence. Brando Miranda, Patrick Yu, Yu-Xiong Wang, Oluwasanmi Koyejo. NeurIPS Meta-Learning Workshop 2022, Contributed Talk. [arXiv] [Poster] [5 minute video] [15 minute video Contributed Talk] [Code Coming Soon]

Does MAML Only Work via Feature Re-use? A Data-Centric Perspective. Brando Miranda, Yu-Xiong Wang, Oluwasanmi Koyejo. Preprint; Best Research Project Award in the graduate course CS 598 "Learning to Learn" taught by Professor Y. Wang, UIUC (December 2020). [arXiv] [5 minute video]

Weight and Batch Normalization implement Classical Generalization Bounds. Tomaso Poggio, Andrzej Banburski, Qianli Liao, Brando Miranda, Lorenzo Rosasco, Jack Hidary. ICML Workshop 2019. [PDF]

High-performance and scalable on-chip digital Fourier transform spectroscopy. Derek M Kita, Brando Miranda, David Favela, David Bono, Jérôme Michon, Hongtao Lin, Tian Gu, Juejun Hu. Nature Communications 2018. [PDF]

Why and when can deep-but not shallow-networks avoid the curse of dimensionality: a review. Tomaso Poggio, Hrushikesh Mhaskar, Lorenzo Rosasco, Brando Miranda, Qianli Liao. International Journal of Automation and Computing 2017; Most Cited Paper Certificate awarded by the journal (IJAC). [PDF] [Award]


Media Coverage

Below are selected links showcasing media coverage of some of my work:

Economic Report of the President (Washington): our work was cited in the 2024 Economic Report of the President. Direct quote: “The Report presents an overview of the nation’s economic progress and makes the case for the Biden-Harris Administration’s economic policy priorities.” [Report] [ScreenShot]

The New York Times: Silicon Valley Confronts the Idea That the ‘Singularity’ Is Here.

Y Combinator News: Are emergent abilities of large language models a mirage?

Quanta Magazine: How Quickly Do Large Language Models Learn Unexpected Skills? - A new study suggests that so-called emergent abilities actually develop gradually and predictably, depending on how you measure them.

Stanford’s Institute for Human-Centered Artificial Intelligence (HAI): AI’s Ostensible Emergent Abilities Are a Mirage

Forbes: AI ‘Emergent Abilities’ Are A Mirage, Says AI Researcher

Andrew Ng endorsed our paper and believes it is evidence that AGI won’t come discontinuously, but instead will arrive smoothly and predictably.

This work was also covered by Vice, Medium, Hacker News, the NeurIPS blog, Reddit, and more (via Google Search).


Awards