The Exit That Rewrote the Script

On March 10, 2026, a company called AMI Labs - Advanced Machine Intelligence - announced it had raised $1.03 billion in a seed round at a $3.5 billion pre-money valuation. It is one of the largest seed rounds in technology history. And it was raised on a single, provocative thesis: the foundation the entire AI industry is built on is wrong.

The man behind it is Yann LeCun. Turing Award winner. Co-inventor of modern deep learning. He spent more than a decade at Meta, where he founded and led FAIR, one of the most influential research labs in the world, and served as chief AI scientist. He left Meta in November 2025 - and his departure was not quiet.

LeCun did not leave to start another LLM company. He left because he believes large language models are fundamentally incapable of producing real intelligence. Not that they need more data. Not that they need more compute. That the architecture itself - predicting the next token - will never get there.

"We are going to have AI systems that have human-level intelligence, but they are not going to be built on LLMs," LeCun has said. "There are major conceptual breakthroughs that have to happen first. And this company is focusing on the next generation."

That is not a hedge. It is a declaration of war against the prevailing orthodoxy of the industry - against OpenAI, against Anthropic, against Google DeepMind, and, pointedly, against Meta itself.

What AMI Labs Is Actually Building

The technology at the centre of AMI Labs is called JEPA - Joint Embedding Predictive Architecture. LeCun developed it during his time at Meta and published a position paper on it in 2022. The concept is deceptively simple, but the implications are enormous.

Current AI systems - ChatGPT, Claude, Gemini - are generative. They learn by predicting the next token. Give them a sentence, they predict the next word. Give them an image, they predict the next pixel. This works remarkably well for language. It does not work well for the physical world, because the physical world is full of unpredictable detail that does not matter.

Consider a video of a tree in the wind. A generative model tries to predict exactly how every leaf will move in the next frame. That is computationally absurd and fundamentally misguided. Most of the movement is random. What matters is the abstract understanding: there is a tree, there is wind, leaves move.

JEPA does not predict details. It predicts abstract representations. Instead of generating the next pixel, it predicts the next meaning. It learns in what researchers call "latent space" - a compressed, high-level representation of reality that captures what matters and ignores what does not.
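The tree-in-the-wind contrast can be made concrete with a toy numpy sketch. This is not AMI Labs' code - the `encode` function here is just a coarse average-pool standing in for a learned encoder - but it shows why a loss computed in latent space is far more forgiving of irrelevant detail than a pixel-level one:

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(frame):
    """Toy 'encoder': a 4x4 average-pool that discards pixel-level detail.
    A real JEPA encoder is a learned network; this stands in for any map
    from raw input to an abstract representation."""
    h, w = frame.shape
    return frame.reshape(h // 4, 4, w // 4, 4).mean(axis=(1, 3))

# Two frames of the same scene: identical structure (the tree),
# different leaf-level noise (the wind).
scene = rng.random((16, 16))
frame_t  = scene + 0.3 * rng.standard_normal((16, 16))
frame_t1 = scene + 0.3 * rng.standard_normal((16, 16))

# Generative objective: predict the next frame pixel by pixel.
# The error is dominated by noise the model can never predict.
pixel_error = np.mean((frame_t - frame_t1) ** 2)

# JEPA-style objective: predict the next frame's *representation*.
# The abstract structure is nearly unchanged, so the error is small.
latent_error = np.mean((encode(frame_t) - encode(frame_t1)) ** 2)

print(f"pixel-space error:  {pixel_error:.4f}")
print(f"latent-space error: {latent_error:.4f}")
```

In a real system both the encoder and the latent predictor are trained jointly, but the asymmetry is the same: the unpredictable leaf motion that swamps a pixel loss mostly averages out of the representation.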

This is not just a technical distinction. It is the difference between an AI that can talk about the world and an AI that can understand it.

The Team That Makes Investors Write Cheques

The funding alone would be notable. The team behind it makes it extraordinary.

Alex LeBrun is CEO. He previously co-founded and led Nabla, a French health AI startup that encountered the limitations of LLMs firsthand - hallucinations in medical contexts where a wrong answer could be dangerous. Before that, he built Wit.ai, which Facebook acquired, and worked under LeCun at FAIR. LeBrun was candid about the timeline, telling TechCrunch that AMI Labs is not a typical startup that ships product in three months. It could take years for world models to reach commercial application.

Laurent Solly, Meta's former vice president for Europe, joined as COO. Saining Xie, who came from Google DeepMind, is chief science officer. Pascale Fung leads research and innovation. Michael Rabbat, a former research science director at Meta, runs world models. The founding bench is twelve people spread across four cities on three continents - Paris, New York, Montreal, and Singapore.

The investor list reads like a who's who of global tech capital. The round was co-led by Cathay Innovation, Greycroft, Hiro Capital, HV Capital, and Bezos Expeditions. Corporate backers include Nvidia, Samsung, Toyota Ventures, and Temasek. Individual investors include Tim Berners-Lee, Mark Cuban, Eric Schmidt, and Xavier Niel.

The World Model Wave

AMI Labs is not operating in isolation. It is part of a wave that has gathered extraordinary momentum in the first quarter of 2026.

Fei-Fei Li's World Labs - focused on spatial intelligence and 3D environments - raised $1 billion in February 2026 from investors including Nvidia, AMD, Autodesk, and Andreessen Horowitz. Its first product, Marble, generates persistent 3D worlds from text and images and is already commercially available. Google DeepMind's Project Genie, powered by Genie 3, is the first real-time interactive world model. Nvidia's Cosmos platform - trained on 20 million hours of real-world data - has been downloaded over 2 million times.

Combined, over $2 billion in new funding has flowed into world model companies in the first ten weeks of 2026 alone. The thesis behind all of them is the same: text-based intelligence has peaked. The next frontier belongs to AI that can see, simulate, and understand physical reality.

Why This Matters If You Do Not Build AI

If you are a business leader reading this, you might be tempted to file it under "interesting research, no immediate impact." That would be a mistake.

The implications cascade quickly. If world models mature on the timeline LeCun envisions - and a billion dollars of funding certainly accelerates that timeline - the entire stack of AI applications needs rethinking.

Start with the obvious. Healthcare, where LLM hallucinations are not inconveniences but patient safety risks. Robotics, where a text-based model cannot tell a machine how to pick up a cup. Autonomous systems, where predicting what happens next in physical space is the entire problem. Industrial automation, where sensor data is continuous, high-dimensional, and noisy - exactly the domain JEPA was designed for.

LeBrun told Forbes that healthcare was a primary reason he took the CEO role. AMI Labs' mission statement explicitly targets industries where reliability, controllability, and safety matter most. These are not academic priorities. They are markets worth trillions.

Real intelligence does not start in language. It starts in the world.

The European Dimension

There is a detail in the AMI Labs story that European leaders should pay close attention to. The company is headquartered in Paris.

Not San Francisco. Not New York. Paris.

This is not merely symbolic. LeCun has deep ties to French research institutions and has been vocal about building European AI capacity. AMI Labs, with its $1 billion war chest and globally distributed team, is the strongest signal yet that frontier AI research can be led from Europe - not just regulated by it.

One investor, Pierre-Eric Leibovici of Daphni, said AMI Labs could be the first European company to reach the scale of the GAFAM companies. That is a bold claim. But it is no longer an absurd one. The combination of a Turing Award founder, a billion in capital, Nvidia and Samsung as strategic backers, and a research thesis that diverges from the American LLM orthodoxy gives AMI Labs something no European AI company has had before: genuine credibility at the frontier.

For European founders, the timing matters. The narrative has been that Europe regulates while America and China build. AMI Labs - and the ecosystem of world model research it is catalysing - rewrites that story. If world models become the next dominant paradigm, Europe will not be playing catch-up. It will be where the paradigm was born.

The Uncomfortable Question for Everyone Else

Here is the question that nobody building on LLMs wants to sit with: what if LeCun is right?

Not completely right. Not right tomorrow. But directionally right. What if language models do plateau? What if the next generation of intelligence requires a fundamentally different architecture? What if the companies that invested everything in scaling token prediction find themselves defending a local maximum while the real summit is somewhere else entirely?

The counter-argument is obvious and strong. LLMs have delivered extraordinary results. They power real products used by hundreds of millions of people. The scaling laws have not broken yet. And the world model approach is, as LeBrun himself admitted, years away from commercial viability.

But the history of technology is not kind to incumbents who mistake the current paradigm for the final one. Nokia had roughly 50% global market share in mobile phones when the iPhone launched. The technology that disrupts you rarely comes from the direction you are watching.

LeCun has spent years arguing - often loudly, often alone - that language models cannot achieve real intelligence. He was dismissed as a contrarian. Now he has a billion dollars, a world-class team, four research hubs on three continents, and Nvidia as a backer.

The dismissal is getting harder to sustain.

What Smart Leaders Do Now

This is not a moment to panic or pivot. It is a moment to think.

LLMs are not going to disappear. They will continue to be enormously useful for language tasks - communication, analysis, coding, and content generation. The products built on them are real and valuable.

But smart leaders do not bet exclusively on one architecture lasting forever. They build systems that are modular. They watch the research. They design their AI strategy so that the intelligence layer can be upgraded without rebuilding everything around it.

The companies that will navigate this transition best are the ones that separate their orchestration from their intelligence. If the model underneath changes - from an LLM to a world model to something nobody has named yet - the system above it should keep working.
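In code, "separating orchestration from intelligence" is just programming against an interface rather than a vendor. The sketch below is illustrative - the names `IntelligenceBackend`, `LLMBackend`, and `WorldModelBackend` are hypothetical stand-ins, not real APIs - but the pattern is the point: the workflow never learns which architecture sits underneath it.

```python
from typing import Protocol

class IntelligenceBackend(Protocol):
    """Minimal contract between orchestration and the model underneath.
    Hypothetical interface, shown only to illustrate the pattern."""
    def answer(self, query: str) -> str: ...

class LLMBackend:
    """Stand-in for today's token-prediction models."""
    def answer(self, query: str) -> str:
        return f"[llm] answer to: {query}"

class WorldModelBackend:
    """Stand-in for a future world-model system; the swap requires
    no changes to the orchestration layer."""
    def answer(self, query: str) -> str:
        return f"[world-model] answer to: {query}"

def run_workflow(backend: IntelligenceBackend, query: str) -> str:
    # Orchestration: routing, retrieval, logging, guardrails would live
    # here. It depends only on the interface, never on the architecture.
    return backend.answer(query)

print(run_workflow(LLMBackend(), "summarise this report"))
print(run_workflow(WorldModelBackend(), "summarise this report"))
```

If the intelligence layer changes paradigm, only the backend class is replaced; everything built above `run_workflow` keeps working.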

That is not just good engineering. In a world where a Turing Award winner just raised a billion dollars to bet against the current paradigm, it is a survival strategy.