Most of the noise in AI right now is about models. Bigger context windows, better reasoning, faster inference. None of that is wrong, but it misses the real shift. The most consequential thing in agentic AI over the last twelve months has not been a model release. It has been the quiet emergence of two open protocols that, taken together, are starting to function as the operating system for autonomous software.

They are called MCP and A2A. Both are now governed by the Linux Foundation. Both have crossed from specification into production. And if you are running a business that will eventually depend on AI agents doing real work - inside your company, with your customers, against your competitors - the protocol layer is now the most important architectural decision you will not get to make twice.

Here is what they are, what they do, and why they matter.

MCP: how an agent reaches into the world

The Model Context Protocol (MCP) was introduced by Anthropic in November 2024. The problem it solves is mechanically boring and strategically enormous. Before MCP, every AI assistant had to be wired by hand to every tool it might use. Five AI clients talking to ten internal systems meant fifty bespoke integrations, each with its own auth, semantics, and failure modes. The industry called this the N-by-M problem. MCP collapses it into N plus M.

In plain terms: MCP is a single, vendor-neutral language that lets any AI agent discover and use any tool, database, or service - Gmail, Stripe, your CRM, your warehouse, a custom internal API - without bespoke wiring on either side. The agent asks a server what it can do. The server answers. The agent calls it.
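That discover-then-call exchange rides on JSON-RPC 2.0, and the two method names below come straight from the MCP spec. Everything else here - the get_invoice tool, its schema, the invoice ID - is invented for illustration; this is a sketch of the message shapes, not a working client.

```python
import json

# Step 1: the agent asks an MCP server what it can do (JSON-RPC 2.0).
list_request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

# Step 2: the server answers with the tools it exposes. The get_invoice
# tool and its schema are invented for this example.
list_response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [
            {
                "name": "get_invoice",
                "description": "Fetch an invoice by ID",
                "inputSchema": {
                    "type": "object",
                    "properties": {"invoice_id": {"type": "string"}},
                    "required": ["invoice_id"],
                },
            }
        ]
    },
}

# Step 3: the agent calls the tool it just discovered.
call_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {"name": "get_invoice", "arguments": {"invoice_id": "INV-1042"}},
}

print(json.dumps(call_request, indent=2))
```

The point of the shape is that nothing in it is specific to either side: any client that speaks these two methods can use any server that answers them.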

Adoption has been faster than that of any prior standard in the AI industry. By the end of its first year, MCP had passed 10,000 active public servers, 97 million monthly SDK downloads, and first-class support inside ChatGPT, Claude, Gemini, Microsoft Copilot, Cursor, and Visual Studio Code. In December 2025, Anthropic donated MCP to the newly formed Agentic AI Foundation under the Linux Foundation, with backers including OpenAI, Block, Google, Microsoft, AWS, Cloudflare, and Bloomberg. Translation: the standard is no longer owned by anyone. It belongs to the industry.

MCP turned the integration layer into a commodity. That sounds dull. It is not. Commoditizing integration is what makes autonomous agents commercially viable.

A2A: how agents talk to each other

The Agent-to-Agent protocol (A2A) was introduced by Google in April 2025 and donated to the Linux Foundation in June. It addresses a different problem. MCP lets an agent reach down into tools and data. A2A lets an agent reach sideways to other agents - across vendors, across frameworks, across organizational boundaries.

Each A2A-compliant agent publishes an Agent Card describing what it can do, what skills it offers, and how to reach it. Other agents can then discover it, negotiate how they want to communicate, and collaborate on long-running tasks without ever exposing their internal state, memory, or tools to each other. Think of it less as an API and more as a way for autonomous services to do business with one another.
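A rough sketch of what that looks like in practice. The field names below follow the style of the A2A spec's Agent Card, but the card is simplified and the agent, domain, and skill are invented; treat it as an illustration of the idea, not the normative schema.

```python
# A simplified Agent Card, modeled loosely on the A2A spec. The agent,
# URL, and skill below are invented for illustration.
agent_card = {
    "name": "Invoice Reconciliation Agent",
    "description": "Matches supplier invoices against purchase orders.",
    "url": "https://agents.example.com/a2a",
    "version": "1.0.0",
    "capabilities": {"streaming": True},
    "skills": [
        {
            "id": "reconcile-invoice",
            "name": "Reconcile invoice",
            "description": "Match an invoice to open purchase orders.",
            "tags": ["finance", "procurement"],
        }
    ],
}

def find_agent(cards, wanted_tag):
    """Discovery in miniature: pick the first agent whose published
    skills carry the tag we need, without seeing inside the agent."""
    for card in cards:
        for skill in card.get("skills", []):
            if wanted_tag in skill.get("tags", []):
                return card["url"], skill["id"]
    return None

print(find_agent([agent_card], "finance"))
```

Notice what is absent: nothing in the card reveals the agent's model, memory, or tools. The card advertises outcomes, and that is the whole contract.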

In April 2026, A2A reached its one-year mark with more than 150 organizations supporting the standard, including AWS, Cisco, Google, IBM, Microsoft, Salesforce, SAP, and ServiceNow. Production deployments now span supply chain, financial services, insurance, and IT operations. The v1.0 release earlier this year added Signed Agent Cards (cryptographic proof that an Agent Card actually came from the domain it claims), multi-tenancy, and version negotiation. Together, those changes turned A2A from a promising spec into something enterprises can actually deploy.

There is also a payments extension worth knowing about. AP2 (Agent Payments Protocol), now an official A2A extension with more than 60 organizations supporting it, provides agents with a way to transact on a user's behalf with cryptographic proof of consent. The agentic commerce layer is already being built.

Why these two protocols matter together

The clearest way to think about this: MCP is the plumbing inside a building. A2A is the telephone network between buildings.

An agent uses MCP to grab data and operate tools inside its own world. It uses A2A to coordinate with other agents that handle parts of the job it cannot do alone. One protocol gives an agent reach. The other gives it relationships. Neither is sufficient on its own. Together, they are the foundation for a multi-agent economy.
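The division of labor can be sketched as two message builders side by side. The method names (tools/call, message/send) follow the respective specs; the field names approximate them, and the tool, task, and invoice ID are invented for illustration.

```python
# One agent, two protocols: MCP to reach a tool, A2A to reach a peer.
# Both ride JSON-RPC; the tool, task text, and IDs are invented.

def mcp_tool_call(tool, arguments):
    """Vertical reach: what the agent sends to an MCP server."""
    return {"jsonrpc": "2.0", "id": 1, "method": "tools/call",
            "params": {"name": tool, "arguments": arguments}}

def a2a_message(text):
    """Horizontal reach: what the agent sends to a peer agent over A2A."""
    return {"jsonrpc": "2.0", "id": 2, "method": "message/send",
            "params": {"message": {"role": "user",
                                   "parts": [{"kind": "text", "text": text}]}}}

# Fetch the data itself, then hand the judgment call to a specialist agent.
fetch = mcp_tool_call("get_invoice", {"invoice_id": "INV-1042"})
delegate = a2a_message("Flag any mismatch between INV-1042 and its PO.")
```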

This matters now because the alternative is the world we already know from SaaS: closed platforms, proprietary connectors, and integration bills that grow faster than the value of the software being integrated. Agentic AI without open protocols would replay that pattern at ten times the speed and ten times the lock-in. That outcome is no longer the default. The two foundational protocols are open, neutral, and governed by the same body that stewards Linux, Kubernetes, and Node.js.

If your AI strategy assumes a single vendor will solve everything for you, you are now betting against the direction the entire industry has aligned on.

What this means for business leaders

Three practical implications, in order of urgency.

First, treat MCP support as a product requirement, not a technical detail. If your software has data or actions that your customers' AI agents will eventually want to reach, exposing an MCP server is now table stakes. Forrester expects 30 percent of enterprise app vendors to ship one in 2026. The ones that do not will spend 2027 explaining why their customers' agents cannot see them.
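Mechanically, "exposing an MCP server" mostly means answering the two methods agents use to discover and invoke tools. A stripped-down, transport-free sketch of the server side - a production server would use an official MCP SDK and handle initialization, auth, and the spec's full result format; the lookup_order tool and its data are invented:

```python
# A transport-free sketch of the MCP server side: register tools, then
# answer tools/list and tools/call. Simplified relative to the real spec;
# the lookup_order tool and ORDERS data are invented for illustration.

ORDERS = {"A-17": {"status": "shipped", "eta_days": 2}}

TOOLS = {
    "lookup_order": {
        "description": "Look up an order's shipping status by ID",
        "inputSchema": {
            "type": "object",
            "properties": {"order_id": {"type": "string"}},
            "required": ["order_id"],
        },
        "handler": lambda args: ORDERS.get(args["order_id"], {"error": "not found"}),
    }
}

def handle_rpc(request):
    """Dispatch the two JSON-RPC methods an agent needs."""
    rid = request.get("id")
    if request["method"] == "tools/list":
        tools = [{"name": n, "description": t["description"],
                  "inputSchema": t["inputSchema"]} for n, t in TOOLS.items()]
        return {"jsonrpc": "2.0", "id": rid, "result": {"tools": tools}}
    if request["method"] == "tools/call":
        params = request["params"]
        result = TOOLS[params["name"]]["handler"](params["arguments"])
        return {"jsonrpc": "2.0", "id": rid, "result": result}
    return {"jsonrpc": "2.0", "id": rid,
            "error": {"code": -32601, "message": "method not found"}}
```

The strategic point survives the simplification: once this surface exists, every MCP-capable agent your customers run can find and use it without a line of bespoke integration code on your side.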

Second, stop thinking about AI as a single agent and start thinking about agent fleets. The most useful real-world deployments already involve multiple specialist agents collaborating - sales, operations, finance, support - each with its own context and tools. A2A is what lets those fleets cross organizational lines without rebuilding the world. Internal coordination is the entry point. External coordination is where the strategic value sits.

Third, audit your vendor choices through a protocol lens. Any AI platform that is not MCP-native and A2A-aware is asking you to bet on a private standard against an open one. There are moments in the history of technology when that bet has paid off. This is unlikely to be one of them.

The bigger picture

The reason both protocols ended up at the Linux Foundation in the same six-month window is no coincidence. It reflects a recognition across Anthropic, OpenAI, Google, Microsoft, AWS, IBM, and Salesforce that the agentic era cannot be built on proprietary plumbing. The model layer will continue to compete fiercely. The protocol layer has to be common ground, or nothing scales.

For European builders, this is good news. Open, vendor-neutral standards make it materially easier to build sovereign, compliant, and competitive autonomous software without surrendering the integration layer to any one hyperscaler. The ground beneath agentic AI has just stabilized. What gets built on top of it over the next 24 months will matter.

Models are a moving target. Protocols compound. Pay attention to the protocols.