Last month, a founder I know showed me his AWS bill. He'd built an AI-powered customer support tool. Ten users. Pre-revenue. His monthly infrastructure cost: $14,200.

He thought that was normal.

It's not. But I understand why he thought so. Because nobody in the AI industry is being honest about what things actually cost.

I run an AI platform from Copenhagen. For the past three months, I've been deep in the economics of AI product development – not the theoretical kind that consultants write reports about, but the actual, line-item, what-does-it-cost-to-ship-this kind.

What I found is both terrifying and, if you know where to look, full of opportunity.

The $690 Billion Elephant in the Room

Let's start at the top. In 2026, global AI infrastructure spending is projected to hit $690 billion. Microsoft alone is investing $80 billion in AI data centers this fiscal year. Alphabet is targeting $93 billion. Meta: $72 billion.

These numbers are so large they've lost all meaning. So let me make them mean something.

A single gigawatt of AI-optimized data center capacity now costs $45 to $55 billion to construct – nearly triple a standard facility.

That cost gets passed down to every API call, every token, every generation. It lands on the invoice of every startup trying to build something with AI. And most founders have no idea how the pricing actually works.

The Token Tax: What AI Models Actually Cost

Every major AI model charges per token – roughly three-quarters of a word. But here's what most people miss: output tokens cost 3 to 10 times more than input tokens. And that changes everything.

Here's what the landscape looks like right now:

| Model | Input $/1M | Output $/1M | Blended* | vs Cheapest | Best For |
|---|---|---|---|---|---|
| Claude Opus 4.6 | $15.00 | $75.00 | $45.00 | 375x | Complex reasoning |
| Claude Sonnet 4.5 | $3.00 | $15.00 | $9.00 | 75x | Balanced quality |
| GPT-5.2 Pro | $21.00 | $168.00 | $94.50 | 788x | Flagship reasoning |
| GPT-4o | $2.50 | $10.00 | $6.25 | 52x | General purpose |
| GPT-4o Mini | $0.15 | $0.60 | $0.38 | 3x | High volume |
| Gemini 2.5 Pro | $1.25 | $10.00 | $5.63 | 47x | Production apps |
| Gemini Flash-Lite | $0.10 | $0.40 | $0.25 | 2x | Speed / cost |
| Kimi K2.5 | $0.60 | $3.00 | $1.80 | 15x | Open-source alt |
| DeepSeek V3.2 | $0.07 | $0.35 | $0.12 | 1x (baseline) | Cost leader |

* Blended = average of input + output at 1:1 ratio. Real-world ratios vary (chatbots often 1:3 input:output).
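The footnote's blended figure generalizes to any input:output ratio. Here's a quick sketch of the arithmetic, using prices from the table above purely as illustrative inputs:

```python
def blended_price(input_per_m: float, output_per_m: float,
                  in_parts: int = 1, out_parts: int = 1) -> float:
    """Blended $/1M tokens, weighting input and output prices by their ratio."""
    total = in_parts + out_parts
    return (input_per_m * in_parts + output_per_m * out_parts) / total

# A 1:1 ratio reproduces the table's blended column:
print(blended_price(15.00, 75.00))       # Claude Opus 4.6 -> 45.0
# The footnote's chatbot-style 1:3 ratio shifts the blend toward the pricier output tokens:
print(blended_price(2.50, 10.00, 1, 3))  # GPT-4o at 1:3 -> 8.125
```

Run your own input/output ratio through this before trusting any vendor's headline price: the blend, not the sticker, is what you actually pay.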

Read that table carefully. The difference between the cheapest and most expensive option is not 2x or 5x. It's 788x.

That means two founders building the exact same product, with the same features, could have cost structures that differ by nearly three orders of magnitude – based solely on which model they picked.

The Real Cost of Building an AI Platform in 2026

Forget the consultant reports that say "AI projects cost between $50,000 and $600,000." That range is so wide it's useless.

Here's what I've actually seen after analyzing costs across 50+ AI platforms, talking to founders, and building one myself:

| Category | Bootstrap | VC-Funded | Enterprise |
|---|---|---|---|
| AI Model API costs | $200–$3K/mo | $5K–$50K/mo | $50K–$500K/mo |
| Cloud infrastructure | $50–$500/mo | $2K–$20K/mo | $20K–$200K/mo |
| Engineering team | $0 (founder) | $40K–$150K/mo | $200K–$1M/mo |
| Data / training | $0–$1K/mo | $5K–$50K/mo | $50K–$500K/mo |
| Compliance (GDPR etc.) | $0–$500/mo | $2K–$10K/mo | $10K–$100K/mo |
| Total monthly burn | $250–$5K | $54K–$280K | $330K–$2.3M |

That bootstrap column is the one that should get your attention. Because it proves something the industry doesn't want you to know:

You can build a production AI platform for the cost of a nice dinner. Or you can spend $2.3 million a month. The technology is the same. The difference is architecture decisions made in week two.

The Dirty Secret: Why Most AI Startups Overspend by 500%

Businesses routinely underestimate AI project costs by 500 to 1,000 percent when scaling from pilot to production. But the opposite failure is just as common: they routinely overspend by similar margins, because they make three predictable mistakes:

Mistake 1: They use the flagship model for everything. For 70 to 80 percent of production workloads, mid-tier models perform identically to premium models. One startup I know cut their monthly AI bill from $3,000 to $150 – a 95% reduction – by switching their chatbot from GPT-4 to GPT-4o Mini with zero quality loss.

Mistake 2: They don't manage token economics. Output tokens cost 3 to 10 times more than input tokens. A seemingly minor change in prompt structure can double inference costs overnight. Yet most founders have never looked at their input/output ratio.
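To see how a prompt-structure change moves the bill, here's a hedged back-of-envelope calculator. Prices are GPT-4o's from the table above; the request volume and token counts are made-up examples:

```python
def monthly_cost(requests: int, in_tokens: int, out_tokens: int,
                 in_price: float, out_price: float) -> float:
    """Total monthly API cost in dollars; prices are $ per 1M tokens."""
    return requests * (in_tokens * in_price + out_tokens * out_price) / 1_000_000

# Same 100k requests/month on GPT-4o ($2.50 in / $10.00 out per 1M tokens):
terse   = monthly_cost(100_000, 500, 300, 2.50, 10.00)  # tight prompts, short answers
verbose = monthly_cost(100_000, 500, 700, 2.50, 10.00)  # same input, chattier output
print(terse, verbose)  # 425.0 vs 825.0
```

Identical input, 400 extra output tokens per response, and the bill nearly doubles. That's the asymmetry most founders never look at.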

Mistake 3: They confuse infrastructure cost with product cost. Cloud providers, API vendors, and consulting firms all benefit from you overspending. Nobody in the value chain is incentivized to help you spend less. The difference between startups that burn $200,000 a month on AI and those that stay under $3,000 is not money. It's architecture.

The AI App Builder Economy: A $20/Month Illusion

There's a booming market of AI app builders – Lovable, Bolt.new, Replit, v0 – all promising to let you build apps from natural language. Most start at $20 a month. Sounds accessible, right?

Here's what actually happens:

Users report burning through 1.3 million tokens in a single day on Bolt.new. Others have spent over $1,000 fixing code issues. On Lovable, there are reports of burning 150 messages just getting a layout right. One user called the credit system a "tax on creativity."

The business model isn't selling you a tool. It's selling you tokens. And tokens are consumed unpredictably, creating what users call "token anxiety" – the fear that every prompt is draining your account.

This is the fundamental tension in AI product economics: the cost of intelligence is variable, but users expect fixed prices. Every AI platform is wrestling with this. Most are losing.

What the Smart Money Actually Does

The Y Combinator S24 batch included an Indian legal AI startup that serves 40,000 users for roughly $5,200 a month in total AI and infrastructure costs. That's $0.13 per user per month.

How? Three patterns that separate survivors from burnouts:

The waterfall pattern. Route queries through a cache first, then a small cheap model, then escalate to the expensive model only when needed. This alone can cut costs by 60 to 80 percent.
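A minimal sketch of the waterfall. The model names, the `call_model` helper, and the confidence score are all hypothetical stand-ins for your real API client and quality gate, not any actual SDK:

```python
cache: dict[str, str] = {}

def call_model(model: str, query: str) -> tuple[str, float]:
    """Stand-in for a real API call; returns (answer, confidence)."""
    # Pretend the cheap model is unsure about anything mentioning 'contract'.
    confidence = 0.4 if "contract" in query else 0.95
    return f"[{model}] answer to: {query}", confidence

def waterfall(query: str, threshold: float = 0.8) -> str:
    # Tier 1: exact-match cache -- free.
    if query in cache:
        return cache[query]
    # Tier 2: small, cheap model; accept its answer if it's confident enough.
    answer, confidence = call_model("flash-lite", query)
    # Tier 3: escalate to the expensive model only when needed.
    if confidence < threshold:
        answer, _ = call_model("flagship", query)
    cache[query] = answer
    return answer
```

In production the confidence gate is usually a classifier, a heuristic on the query, or a self-evaluation pass; the savings come from how rarely tier 3 fires.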

Model mixing. Use the cheapest model that produces acceptable output for each specific task. Customer support? Flash model. Legal contract analysis? Flagship model. One-size-fits-all is the most expensive architectural decision you can make.

Prompt engineering as cost control. Every unnecessary token in your prompt is money burned. Shorter, more precise prompts aren't just better for quality. They're better for your bank account.

The Uncomfortable Truth for Europe

I wrote last week about Europe falling behind in the AI race. The cost data makes it worse.

US hyperscalers are spending $690 billion on AI infrastructure in 2026. That spending creates economies of scale that drive down per-unit costs. European startups pay the same API prices as American ones, but without the same revenue potential, market size, or investor appetite to absorb those costs.

The result: every European AI founder is playing the same game with a smaller stack. You can't outspend Silicon Valley. But you can out-architect them.

The winners in 2026 won't be the companies that spend the most on AI. They'll be the ones that spend the least per unit of value delivered.

What This Means for You

If you're a founder building with AI, here's my honest advice:

Know your numbers. If you can't tell me your cost per user per month, your input/output token ratio, and your blended model cost, you're flying blind.
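All three numbers fall out of data you already have. A sketch, using the YC legal-AI figures from above plus assumed token volumes (the token counts are invented for illustration):

```python
def unit_economics(monthly_cost_usd: float, users: int,
                   input_tokens: int, output_tokens: int) -> dict:
    """The three numbers every AI founder should know cold."""
    total_tokens = input_tokens + output_tokens
    return {
        "cost_per_user": monthly_cost_usd / users,           # $ per user per month
        "io_ratio": input_tokens / output_tokens,            # input:output token ratio
        "blended_per_m": monthly_cost_usd / total_tokens * 1_000_000,  # effective $/1M tokens
    }

stats = unit_economics(5_200, 40_000,
                       input_tokens=30_000_000_000, output_tokens=10_000_000_000)
print(round(stats["cost_per_user"], 2))  # 0.13 -- the per-user figure quoted above
```

If any of these three comes as a surprise when you compute it, that's the blind spot.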

Start with the cheapest model that works. Test Gemini Flash or GPT-4o Mini before reaching for the flagship. You might be surprised.

Design your architecture for cost from day one. Adding cost optimization later is like trying to make a building energy-efficient after it's built. Possible, but painful and expensive.

Watch the open-source wave. DeepSeek, Kimi, and models running on platforms like DeepInfra are closing the quality gap while costing 10 to 100 times less than frontier closed models. For many workloads, the gap between open and closed has effectively collapsed.

The AI industry wants you to believe that building AI products requires massive capital. It doesn't. It requires understanding the economics. And now you have the numbers.