In April 2025, a research group called the AI Futures Project published a document that quietly became one of the most read forecasts in tech history. Over one million people read it in the first few weeks - including, reportedly, the US Vice President. The report is called AI 2027, and if you run a business and have not read it, you probably should.
Written by former OpenAI researcher Daniel Kokotajlo alongside Eli Lifland, Thomas Larsen, Romeo Dean, and the writer Scott Alexander, AI 2027 is not a vague gesture at a distant future. It is a month-by-month scenario forecast mapping how AI could progress from today's capable but limited systems to something that surpasses the best human researchers at virtually every task. The key word is "could." But the credibility behind it is not easily dismissed.
What the report actually says
The scenario traces a trajectory across three years. In 2025, AI agents begin handling meaningful autonomous tasks - research, code, customer interactions. By early 2026, the leading AI labs have systems capable of running complex multi-step workflows with minimal supervision. In early 2027, coding becomes fully automatable, and by late 2027, an "intelligence explosion" begins - AI systems improving themselves faster than any human team could manage.
[Chart: AI 2027 - key milestones]
[Chart: AI capability trajectory - original vs revised forecast. Illustrative model based on AI 2027 report milestones and METR task-horizon data.]
The report presents two endings. One where AI development goes badly - an alignment failure, a race between superpowers that nobody wins cleanly. And one that lands somewhere between cautious optimism and genuine human flourishing. The authors are honest that the good ending requires things to go unusually well.
Why this matters for founders right now
Here is the trap most entrepreneurs fall into when they encounter forecasts like this: they either dismiss them as science fiction, or they freeze waiting for more certainty. Both responses are mistakes.
The business case for paying attention is not primarily about whether AGI arrives in 2027 or 2032. It is about what is already true today, and the rate at which it is compounding. The METR data cited throughout AI 2027 shows that the length of complex tasks AI can handle reliably has been doubling roughly every four to six months since 2024. That is not a prediction. That is a measurement.
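The compounding described above is easy to underestimate. As a rough illustration - using an assumed starting horizon and a five-month doubling time within the four-to-six-month range cited, not figures from the report itself - a constant doubling time projects forward like this:

```python
# Illustrative projection of AI task-horizon growth, assuming the
# doubling rate cited alongside AI 2027 (roughly every 4-6 months).
# start_hours and doubling_months are assumptions for illustration,
# not values taken from the METR data or the report.

def task_horizon(months_ahead: float, start_hours: float = 1.0,
                 doubling_months: float = 5.0) -> float:
    """Task length (hours) AI handles reliably after `months_ahead`
    months, under a constant doubling time."""
    return start_hours * 2 ** (months_ahead / doubling_months)

# Over an 18-month window, a 5-month doubling time compounds to
# roughly a 12x increase in reliable task length.
growth = task_horizon(18) / task_horizon(0)
print(round(growth, 1))  # 2**(18/5) is about 12.1
```

The point of the sketch is not the exact multiplier - it is that exponential doubling over a normal planning horizon produces an order-of-magnitude shift, which is why an 18-month window matters.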
The strategic question is not "when does AGI arrive." It is "what happens to your competitive position in the next 18 months as AI systems take over more of what your team currently does?" That window is short enough to matter for decisions you are making this quarter.
For founders, the practical implication is this: any part of your business that runs on knowledge work, software development, content, research, or analysis is about to get structurally cheaper for whoever deploys AI most effectively. That is an opportunity if you move early. It is a threat if your competitors do and you do not.
The timeline revision is not reassurance
In November 2025, Kokotajlo updated his forecast. Progress had been somewhat slower than the original scenario depicted. His revised median is now "around 2030, lots of uncertainty though." Some read this as good news - more time to prepare. That is the wrong interpretation.
The capabilities available in today's models are already sufficient to fundamentally reshape how work gets done. Waiting for AGI as a signal to act means watching competitors deploy current-generation tools while you deliberate. The revision extended the timeline by a few years. It did not change the direction of travel.
What founders should actually do
The honest answer is not a checklist. It is a posture. Start treating AI as infrastructure, not a feature. That means deploying it in your core operations - not as a productivity curiosity but as a structural component of how your product gets built, how your customers get served, and how your team gets leverage.
It means building your business on the assumption that the cost of software will approach zero, that intelligence will be abundant and cheap, and that the moat for every company is shifting from what you know to how well you orchestrate what AI can do on your behalf.
AI 2027 is worth reading not because every prediction will prove correct, but because it forces the kind of structured thinking about the near future that most business plans avoid entirely. The forecasters are not asking you to believe them. They are asking you to take the scenario seriously enough to prepare for it.
That seems like the minimum reasonable response.