Embrace AI or Be Left Behind — By People, Not Machines
~6 min read
Brando Miranda — April 2026
TL;DR. The real risk of ignoring AI isn’t that machines replace you — it’s that people who use them will. AI is a force of nature: trained on the world’s data, improving relentlessly. Adapting isn’t optional enthusiasm; it’s survival. I’d rather merge with the wave than fight it.
A conversation with my collaborator Dan this week crystallized something I’ve felt for a while but hadn’t written down. He mentioned wanting to opt out of AI-assisted peer review, half-joking that he didn’t trust me not to use Claude Code for mine. I told him what I tell everyone: I personally feel morally obliged to always say AI is allowed — if not encouraged — to help. Otherwise I’d be inconsistent with my own values, and worse, I’d be promoting a future I think is dangerous.
That sounds strong. Let me explain what I mean.
Trained on the world’s data
These models were trained on something close to the entire written record of the world. Think about that for a moment. Every textbook, every paper, every forum post, every codebase, distilled into a system that can reason across all of it simultaneously. How do we expect to compete with that? It's ridiculous to believe we can, at least on the axis of breadth. A single human who has read a thousand papers is impressive. A model that has ingested millions is operating at a fundamentally different scale.
This isn’t a reason for despair. It’s a reason to change the game. The human advantage was never breadth — it was taste, intent, and knowing what question to ask. But breadth matters enormously for execution, and on execution these systems are already better than the average practitioner in most domains. That includes peer review, code generation, and mathematical reasoning — three things I care about deeply.
A force of nature
I keep coming back to the same phrase: AI is a force of nature. Not because I think it’s mystical or beyond understanding, but because its trajectory has the same quality as other forces we can’t individually opt out of. You don’t negotiate with compound interest. You don’t vote on whether Moore’s Law applies to you. And you don’t get to decide that language models trained on the sum of human knowledge won’t affect your field. They will. The question is whether you’re positioned to benefit or positioned to be displaced.
Once these systems digest the whole universe of recorded human knowledge (and they're close), I don't see the point of betting against them. It's just the way things are. It's better to accept reality and merge with it than to fight a pointless battle.
Left behind — by whom?
Here’s the part people get wrong. The fear isn’t that AI replaces you. Not directly. The fear is that people who embrace AI will leave you behind.
A researcher who uses AI to draft, review, iterate, and verify will produce more, catch more errors, and explore more ideas than one who doesn’t. A formal methods group that integrates language models into proof search will make faster progress than one that spends years grinding for a 0.02% improvement on a benchmark no one uses. The technology doesn’t care about your preferences. But the people who adopt it will outpace the people who don’t, and that gap compounds.
I don’t want to be left behind. And to put it more bluntly — being left behind in a world that moves this fast isn’t a career inconvenience. It’s closer to an existential risk. Not from AI itself, but from the people who wield it while you’re still deciding whether to try.
Why I feel morally obliged
I say “morally obliged” because I think there’s a responsibility that comes with seeing where things are heading. If I believed AI tools were harmful, I’d say so. But I don’t. I think they’re the most powerful amplifier of human capability we’ve ever built, and that restricting access to them — or discouraging their use — actively harms the people who listen.
When a student asks me whether they should use AI for their research, the answer is always yes. When a reviewer asks whether AI-assisted review is acceptable, the answer is always yes. Not because I think the tools are perfect — they aren’t — but because refusing to use them doesn’t make you more rigorous. It makes you slower. And slower, in a competitive landscape that moves at model-training speed, means fewer ideas explored, fewer papers written, and fewer opportunities to do the work that matters.
If I told my students not to use AI, I’d be promoting a future where they’re less competitive, less productive, and less prepared. That’s not a future I’m willing to endorse. It’s inconsistent with everything I believe about technology and human progress.
Adaptation is the only strategy
I think about this through the lens of my own research. I work on AI for formal verification in Lean 4. I co-founded Stanford AI for Lean because I believe the intersection of AI and formal methods is where the future of mathematics lives. I built a multi-agent workflow because I wanted to use AI agents seriously — not as toys, but as structured components of a rigorous process.
Every one of these decisions was an adaptation. I saw where the wave was going, and I chose to ride it instead of watching it from shore. Sometimes that means my tools are better than me at specific subtasks, and that’s fine. The point was never to be the best at everything — it was to be the person who knows how to direct the best tools at the right problems.
That’s the real skill now. Not whether you can write code faster than a model — you can’t. Not whether you can review a paper more carefully than an ensemble of models — you probably can’t. But whether you can choose the right problem, formulate the right question, and structure a process that catches errors before they matter. That’s still ours. For now.
Why I’m here in the first place
I should also say plainly: I find this incredible. The fact that we built systems that learn from data, generalize beyond what they were shown, and now reason across nearly the entire written record of humanity — it’s astonishing. I’m here because I’m fascinated by how it actually works, not only because the wave is unavoidable. Curiosity got me into this field long before it was fashionable.
I’ve been at this since 2012, training base learning algorithms back when neural nets were still a slightly disreputable thing to bring up at a serious ML group meeting. I can pinpoint the exact moment I committed to it: a clip from Andrew Ng’s Stanford machine learning course where the VC-dimension generalization bound goes up on the board. That equation made the argument concrete for me. Learning is a quantifiable thing. Generalization is bounded by structure plus data. If you can automate the loop — hypothesize, test, update — you can automate the scientific method itself. And if you can automate the scientific method, you can solve intelligence; and if you can solve intelligence, you can in principle solve any problem science is capable of solving. That was the argument that hooked me, and it still is.
That conviction is why “force of nature” doesn’t read as fatalism to me. It reads as the field finally catching up to what the theory implied a long time ago.
I have no choice
People sometimes ask why I’m so enthusiastic about AI. The honest answer is: I have no choice. Not in the sense that someone is forcing me, but in the sense that the alternative — pretending these tools don’t change everything — is self-defeating. I embrace AI because the cost of not embracing it is higher than the cost of any mistake I might make along the way.
It’s a force of nature. The only rational response is to learn to work with it.
If you’d like to cite this post:
@misc{miranda2026embraceai,
  author       = {Miranda, Brando},
  title        = {Embrace {AI} or Be Left Behind --- By People, Not Machines},
  year         = {2026},
  month        = {April},
  howpublished = {\url{https://brando90.github.io/brandomiranda/2026/04/27/embrace-ai-or-be-left-behind.html}},
  note         = {Blog post}
}