Selected Publications [Full List]

Are Emergent Abilities of Large Language Models a Mirage? Rylan Schaeffer, Brando Miranda, Sanmi Koyejo. [NeurIPS Outstanding Main Track Paper Award 2023 & NeurIPS Oral] [OpenReview] [Stanford IEEE Invited Talk 2023]

Is Pre-training Truly Better Than Meta-Learning? Brando Miranda, Patrick Yu, Saumya Goyal, Yu-Xiong Wang, Sanmi Koyejo. ICML Data-Centric Machine Learning Workshop 2023. [Poster] [arXiv] [ICML PDF] [Code Coming Soon]

Beyond Scale: the Diversity Coefficient as a Data Quality Metric Demonstrates LLMs are Pre-trained on Formally Diverse Data. Alycia Lee*, Brando Miranda*, Patrick Yu, and Oluwasanmi Koyejo. ICML Data-Centric Machine Learning Workshop 2023 & ICML Challenges in Deployable Generative AI Workshop 2023. [Poster] [arXiv] [ICML PDF] [Code]

The Curse of Low Task Diversity: On the Failure of Transfer Learning to Outperform MAML and Their Empirical Equivalence. Brando Miranda, Patrick Yu, Yu-Xiong Wang, Oluwasanmi Koyejo. NeurIPS Meta-Learning Workshop 2022, Contributed Talk. [arXiv] [Poster] [5 minute video] [15 minute video Contributed Talk] [Code Coming Soon]

Does MAML Only Work via Feature Re-use? A Data Centric Perspective. Brando Miranda, Yu-Xiong Wang, Oluwasanmi Koyejo. Preprint. Best Research Project Award in the graduate course CS 598 “Learning to Learn,” taught by Professor Y.-X. Wang at UIUC (December 2020). [arXiv] [5 minute video]

Weight and Batch Normalization implement Classical Generalization Bounds. Tomaso Poggio, Andrzej Banburski, Qianli Liao, Brando Miranda, Lorenzo Rosasco, Jack Hidary. ICML Workshop 2019. [PDF]

High-performance and scalable on-chip digital Fourier transform spectroscopy. Derek M Kita, Brando Miranda, David Favela, David Bono, Jérôme Michon, Hongtao Lin, Tian Gu, Juejun Hu. Nature Communications 2018. [PDF]

Why and when can deep-but not shallow-networks avoid the curse of dimensionality: a review. Tomaso Poggio, Hrushikesh Mhaskar, Lorenzo Rosasco, Brando Miranda, Qianli Liao. International Journal of Automation and Computing (IJAC) 2017. Awarded the IJAC Most Cited Paper Certificate. [PDF] [Award]

Media Coverage

Below are selected links showcasing media coverage of some of my work:

The New York Times: Silicon Valley Confronts the Idea That the ‘Singularity’ Is Here

Hacker News (Y Combinator): Are emergent abilities of large language models a mirage?

Quanta Magazine: How Quickly Do Large Language Models Learn Unexpected Skills? - A new study suggests that so-called emergent abilities actually develop gradually and predictably, depending on how you measure them.

Stanford’s Institute for Human-Centered Artificial Intelligence (HAI): AI’s Ostensible Emergent Abilities Are a Mirage

Forbes: AI ‘Emergent Abilities’ Are A Mirage, Says AI Researcher

This work was also covered by Vice, Medium, Hacker News, the NeurIPS blog, Reddit, and other outlets.