APIs are the connective tissue of our digital world. They quietly power everything from your morning weather check to your evening streaming binge—invisible yet essential to our connected experiences. Now there's a new protocol reshaping the landscape: Model Context Protocol (MCP). And as with any technological shift, misconceptions and hot takes are spreading faster than accurate information.

Debunking the doomsday predictions

Let's be honest with ourselves. When we first encounter a new protocol, our self-serving bias kicks in. Those with API investments fear disruption; newcomers hype revolutionary change. Both miss the symbiotic relationship at play.

Some headlines suggest MCP could spell the end of APIs as we know them. These aren't just wrong—they fundamentally misunderstand how MCP actually works. It's like claiming highways will kill cars, rather than give them somewhere to drive.

What is MCP, really?

MCP (Model Context Protocol) creates a standardized way for AI assistants to interact with tools, services, and data sources. Think of it as a spiritual successor to LSP (Language Server Protocol), which transformed how IDEs work with programming languages.

Remember our 3-tier architecture discussions? MCP represents another evolution in that journey. Instead of terminal emulators connecting to mainframes, we now have AI assistants connecting to digital services.

```mermaid
graph LR
    subgraph "Application Host Process"
        H[Host] --> C1[Client 1]
        H --> C2[Client 2]
        H --> C3[Client 3]
    end

    subgraph "Local machine"
        C1 --> S1[Server 1<br>Files & Git]
        C2 --> S2[Server 2<br>Database]
        S1 <--> R1[("Local<br>Resource A")]
        S2 <--> R2[("Local<br>Resource B")]
    end

    subgraph "Internet"
        C3 --> S3[Server 3<br>External APIs]
        S3 <--> R3[("Remote<br>Resource C")]
    end
```

This diagram reveals the core insight: MCP is built to consume APIs, not replace them. The host (an AI assistant application such as Claude Desktop) communicates with clients, which in turn communicate with servers that access various resources. Beep boop. 🤖
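
To make that concrete, here's a minimal sketch of something like "Server 1" from the diagram, written against the official MCP Python SDK's FastMCP helper. The package path, decorators, and the notes directory are illustrative assumptions, not a definitive implementation:

```python
# Minimal sketch of a local MCP server ("Server 1: Files & Git" in the diagram).
# Assumes the official MCP Python SDK; the notes/ directory is hypothetical.
from pathlib import Path

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("files-and-git")  # the server an MCP client connects to

@mcp.resource("file://notes/{name}")
def read_note(name: str) -> str:
    """Expose a local file ("Local Resource A") to the host's AI assistant."""
    return Path("notes", name).read_text()

@mcp.tool()
def list_notes() -> list[str]:
    """Let the assistant discover which notes exist before reading one."""
    return sorted(p.name for p in Path("notes").glob("*.md"))

if __name__ == "__main__":
    mcp.run()  # speaks MCP over stdio to the client by default
```

Point an MCP-capable host at that script and the resource and tool simply show up in the assistant's toolbox.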

It's APIs all the way down

The relationship between MCP and APIs is symbiotic, not adversarial. MCP servers are essentially specialized API clients with a standardized interface—they're not replacing APIs, they're consuming them en masse.
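
Here's what that looks like in practice: an MCP server whose only real job is to call somebody else's REST API. The weather endpoint and its parameters below are invented for illustration; the shape is the point.

```python
# Sketch of "Server 3: External APIs" -- an MCP server that is really just an
# API client. The endpoint URL and query parameters are hypothetical.
import requests
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("weather")

@mcp.tool()
def current_weather(city: str) -> dict:
    """Fetch current conditions by calling an existing REST API."""
    resp = requests.get(
        "https://api.example.com/v1/weather",  # stand-in for a real weather API
        params={"q": city},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()  # the assistant gets structured data; the API does the work

if __name__ == "__main__":
    mcp.run()
```

Strip away the protocol framing and that's an ordinary HTTP request. The API isn't going anywhere; it just gained a new kind of caller.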

Companies like Dylibso, with their mcp.run platform, are generating MCP servers directly from OpenAPI definitions: feed in an existing API description and, as if by magic, out comes an MCP-compatible interface for AI assistants.
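
I can't speak to how mcp.run does this internally, but the general idea is easy to sketch: walk an OpenAPI document and register one MCP tool per operation, each forwarding to the underlying API. FastMCP, PyYAML, the spec filename, and the base URL here are all illustrative assumptions:

```python
# Sketch only: turn OpenAPI operations into MCP tools. Not mcp.run's code.
import requests
import yaml
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("generated-from-openapi")
HTTP_METHODS = {"get", "post", "put", "patch", "delete"}

def make_tool(method: str, path: str, base_url: str):
    """Build a function that forwards a tool call to one API operation."""
    def call(params: dict | None = None) -> dict:
        resp = requests.request(method.upper(), base_url + path, params=params, timeout=10)
        resp.raise_for_status()
        return resp.json()
    return call

def register_tools(spec_path: str, base_url: str) -> None:
    """Register one MCP tool per operation found in an OpenAPI document."""
    with open(spec_path) as f:
        spec = yaml.safe_load(f)
    for path, operations in spec.get("paths", {}).items():
        for method, op in operations.items():
            if method not in HTTP_METHODS:
                continue  # skip path-level keys like "parameters"
            fn = make_tool(method, path, base_url)
            fn.__name__ = op.get("operationId", f"{method}_{path}").replace("/", "_")
            fn.__doc__ = op.get("summary", f"{method.upper()} {path}")
            mcp.tool()(fn)  # equivalent to decorating a named function

if __name__ == "__main__":
    register_tools("openapi.yaml", "https://api.example.com")
    mcp.run()
```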

Sounds suspiciously like API promotion, not replacement, doesn't it? 🤔

The security elephant in the room

Let's hold on to our britches, because we're getting to the good part: security concerns. They're very real, but they're not entirely new. They mirror issues we've already seen with LSP servers, with one critical difference: blast radius.

While an LSP vulnerability might only impact your development team, a compromised MCP server can reach anyone who wires an AI assistant into their daily work. Imagine a bad actor hijacking the CEO's AI assistant: suddenly they have access to sensitive communications, strategic planning, and more. That darn application server strikes again, but now it's personal.

Most AI hosts ask you to approve every tool use. The problem is that users don't get enough information to make a truly informed decision: