AI Book Review: Supremacy — AI, ChatGPT, and the Race that Will Change the World
“You didn’t need a brilliant idea to start a successful tech company. You just needed a brilliant person behind the wheel.”
That quote stayed with me long after finishing Parmy Olson’s Supremacy: AI, ChatGPT, and the Race that Will Change the World. It captures something I’ve been thinking about a lot — the uncomfortable tension between innovation and responsibility in the AI space.
Why This Book Matters
Olson, a Bloomberg journalist who has covered tech for over a decade, traces the parallel stories of OpenAI and DeepMind — two organizations that started with idealistic visions of building safe, beneficial AI, and gradually found themselves in a high-stakes commercial race.
The book doesn’t read like a technical manual. It reads like a thriller. And that’s what makes it powerful — it makes the human dynamics behind AI development accessible to anyone.
The Human Paradox at AI’s Center
What struck me most was how the book frames AI development as fundamentally a people story. The brilliant researchers, the clashing egos, the shifting alliances, the moments where commercial pressure won out over safety concerns.
This is the paradox: the technology designed to augment human intelligence is being shaped by very human flaws — ambition, impatience, competitive pressure, and the desire to be first rather than to be careful.
Key Takeaways for Business Leaders
1. The Safety vs. Speed Tension Is Real
Both OpenAI and DeepMind were founded with safety as a core mission. Both eventually faced pressure to ship fast. The book documents how this tension played out in real decisions — and the compromises that followed.
For your organization: If the companies building AI struggle to balance speed and safety, you can expect the same pressure internally. Build governance structures before the pressure hits.
2. AI Concentration Is a Risk
The book highlights how AI development has consolidated around a small number of companies with enormous compute resources. This concentration affects pricing, access, and the diversity of approaches in the market.
For your organization: Don’t build your entire AI strategy around a single provider. Understand vendor dependencies and build in flexibility.
3. Transparency Isn’t Optional
One of the most compelling threads in the book is how both organizations struggled with transparency — internally and externally. When researchers raised safety concerns, the response varied from genuine engagement to quiet dismissal.
For your organization: Create real channels for raising concerns about AI use. Not just a policy document — actual mechanisms where people feel safe speaking up.
What the Book Doesn’t Cover
Olson focuses primarily on the OpenAI and DeepMind story. There’s less coverage of the broader AI ecosystem — the open-source movement, smaller companies doing innovative work, or the regulatory landscape outside the US and UK.
The book also doesn’t spend much time on practical frameworks for responsible AI adoption. It’s diagnostic, not prescriptive. Which is fine — that’s where organizations like ours come in.
Who Should Read This
- Business leaders considering AI strategy — to understand the landscape you’re operating in
- Anyone curious about AI — the narrative is genuinely engaging and requires no technical background
- Teams building AI governance — to understand the organizational pressures that lead to shortcuts
Final Thought
Supremacy reinforced something I believe deeply: responsible AI isn’t just a technical challenge. It’s a leadership challenge. The technology will keep advancing. The question is whether we’ll build the organizational wisdom to use it well.
If you’re leading AI adoption in your organization, this book gives you essential context for the decisions ahead.
How AI Was Used in This Post
AI assisted with drafting and editing this review. The opinions and analysis are entirely human. All contributions were reviewed to ensure accuracy and a human-centered tone.
Frequently Asked Questions
Who should read Supremacy?
Business leaders considering AI strategy, anyone curious about the AI industry (no technical background required), and teams building AI governance frameworks. The book reads like a thriller and makes the human dynamics behind AI development accessible to everyone.
What are the key takeaways for business leaders?
Three main takeaways: the tension between AI safety and speed-to-market is real and will affect your organization too; AI development is concentrated among a few companies, so avoid building your strategy around a single provider; and transparency about AI use is not optional — create real channels for raising concerns.
Does the book offer practical guidance on responsible AI adoption?
The book is diagnostic rather than prescriptive. It documents the organizational pressures and human dynamics that lead to shortcuts in AI safety. It does not provide practical frameworks for responsible AI adoption, but the stories give essential context for why governance structures matter.
Building an AI governance framework?
Get our free AI Readiness Assessment to understand where your organization stands — and what to do next.