- MCP has crossed 10,000 public servers in its registry as of March 2026 — it is no longer an Anthropic-specific protocol, with OpenAI and Google both adopting it
- MCP's three primitives (resources, tools, prompts) map cleanly to any existing API or data source — the migration path from OpenAI function calling is straightforward
- The stateful server pattern (persistent connections, long-lived context) is MCP's key architectural advantage over REST-based tool calling
- Security and authorization in MCP servers are critical and frequently underimplemented — every production MCP server needs OAuth 2.1 and input validation
Section 1 — What MCP Actually Is
Model Context Protocol (MCP) is an open protocol, built on JSON-RPC 2.0, that standardizes how LLMs interact with external tools, data sources, and systems. Before MCP, every AI application implemented tool calling differently: OpenAI's function calling format, Anthropic's tool use format, and various custom implementations were all mutually incompatible. MCP provides a vendor-neutral protocol that lets any LLM client interact with any server that implements the spec.
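Concretely, every MCP message is a JSON-RPC 2.0 envelope. A minimal sketch of a tool invocation as it appears on the wire (the tool name and argument values here are illustrative):

```typescript
// A tools/call request: standard JSON-RPC 2.0 envelope, MCP-defined method and params.
const request = {
  jsonrpc: '2.0',
  id: 1,
  method: 'tools/call',
  params: {
    name: 'query_database',                    // which tool to invoke
    arguments: { sql: 'SELECT 1', limit: 10 }, // validated against the tool's inputSchema
  },
};

// The server replies with a result keyed to the same id.
const response = {
  jsonrpc: '2.0',
  id: 1,
  result: {
    content: [{ type: 'text', text: '{"rows":[{"?column?":1}]}' }],
    isError: false,
  },
};

console.log(JSON.stringify(request));
```

Because the envelope is plain JSON-RPC, the same message shape works over every transport (stdio, HTTP, or in-process).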
The protocol was released by Anthropic in late 2024 and reached critical mass in 2025 when OpenAI and Google both adopted it. By March 2026, MCP has become the standard for AI agent tool integration in the same way that LSP (Language Server Protocol) became the standard for IDE language features — one server implementation works across all clients.
Section 2 — Building a Production MCP Server
An MCP server exposes three types of primitives: Resources (data the LLM can read), Tools (functions the LLM can call), and Prompts (reusable prompt templates). The simplest useful pattern is a read/write tool server that wraps an existing REST API.
// TypeScript MCP server: database query tool with proper error handling
// Uses the official @modelcontextprotocol/sdk
import { Server } from '@modelcontextprotocol/sdk/server/index.js';
import { StdioServerTransport } from '@modelcontextprotocol/sdk/server/stdio.js';
import {
  CallToolRequestSchema,
  ListToolsRequestSchema,
} from '@modelcontextprotocol/sdk/types.js';
import { Pool } from 'pg';
const pool = new Pool({ connectionString: process.env.DATABASE_URL });
const server = new Server(
  { name: 'database-mcp', version: '1.0.0' },
  { capabilities: { tools: {} } } // advertise only tools; no resource handlers are registered below
);
// List available tools
server.setRequestHandler(ListToolsRequestSchema, async () => ({
tools: [
{
name: 'query_database',
description: 'Execute a read-only SQL query against the production database. Only SELECT statements are permitted.',
inputSchema: {
type: 'object',
properties: {
sql: {
type: 'string',
description: 'A SELECT SQL query. Must not contain INSERT, UPDATE, DELETE, DROP, or other write operations.',
},
limit: {
type: 'number',
description: 'Maximum rows to return (default: 100, max: 1000)',
default: 100,
},
},
required: ['sql'],
},
},
],
}));
// Tool execution with input validation
server.setRequestHandler(CallToolRequestSchema, async (request) => {
if (request.params.name !== 'query_database') {
throw new Error(`Unknown tool: ${request.params.name}`);
}
const { sql, limit = 100 } = request.params.arguments as {
sql: string;
limit?: number;
};
  // Security: allow only a single SELECT statement. This string check is
  // defense-in-depth; the connection should also use a read-only database role.
  const cleanedSql = sql.trim().replace(/;\s*$/, ''); // drop a trailing semicolon
  if (!cleanedSql.toUpperCase().startsWith('SELECT') || cleanedSql.includes(';')) {
    return {
      content: [{
        type: 'text',
        text: 'Error: Only a single SELECT query is permitted. Write operations and stacked statements are not allowed.',
      }],
      isError: true,
    };
  }
  // Enforce a row limit to prevent huge result sets
  const safeSql = `WITH limited AS (${cleanedSql}) SELECT * FROM limited LIMIT ${Math.min(limit, 1000)}`;
try {
const result = await pool.query(safeSql);
return {
content: [{
type: 'text',
text: JSON.stringify({
rowCount: result.rowCount,
rows: result.rows,
fields: result.fields.map(f => ({ name: f.name, type: f.dataTypeID })),
}, null, 2),
}],
};
} catch (e) {
const error = e as Error;
return {
content: [{
type: 'text',
text: `Query error: ${error.message}`,
}],
isError: true,
};
}
});
const transport = new StdioServerTransport();
await server.connect(transport);
Section 3 — MCP Transport and Deployment Patterns
| Transport | Use Case | Connection Type | Production Ready |
|---|---|---|---|
| stdio | Local tools (Claude Desktop, Cursor) | Process-level, single client | Yes — for local use |
| Streamable HTTP (supersedes the older HTTP + SSE transport) | Remote servers, multi-client | HTTP POST with SSE streaming responses | Yes — most common for production |
| WebSocket (custom transport) | Low-latency, bidirectional | Persistent connection | Yes — for agent loops |
| In-process | Same-process embedding | Direct function calls | Yes — for tight integration |
Section 4 — MCP Security: The Underimplemented Layer
The most dangerous pattern in the MCP ecosystem is servers deployed without proper authentication. An MCP server with access to production databases, internal APIs, or code execution capabilities is a high-value target. The MCP spec includes OAuth 2.1 support for HTTP transports, but many community servers skip this entirely.
The production security requirements for an MCP server are: authentication (who is calling?), authorization (what are they allowed to do?), input sanitization (is the tool input safe to use?), and audit logging (what actions were taken?).
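These four layers can be sketched as a request pipeline. Everything here is illustrative (the claim shape, scope name, and audit sink are assumptions, not an MCP SDK API), but the ordering matters: authenticate, authorize, validate, then log:

```typescript
// Illustrative security pipeline for an MCP tool call.
interface Claims { sub: string; scopes: string[] }   // decoded from an already-verified JWT
interface ToolCall { name: string; args: Record<string, unknown> }

function authorize(claims: Claims, requiredScope: string): void {
  // Authorization: the verified token must carry the scope this tool demands.
  if (!claims.scopes.includes(requiredScope)) {
    throw new Error(`Missing required scope: ${requiredScope}`);
  }
}

function validateArgs(call: ToolCall): void {
  // Input sanitization: never trust LLM-generated arguments.
  const name = call.args['service_name'];
  if (typeof name !== 'string' || !/^[a-z0-9-]+$/.test(name)) {
    throw new Error(`Invalid service_name: ${String(name)}`);
  }
}

function auditRecord(claims: Claims, call: ToolCall) {
  // Audit logging: record who did what, regardless of outcome.
  return { at: new Date().toISOString(), caller: claims.sub, tool: call.name, args: call.args };
}

// Example: a caller with the right scope passes all three layers.
const claims: Claims = { sub: 'user-42', scopes: ['tools:deploy'] };
const call: ToolCall = { name: 'deploy_service', args: { service_name: 'billing-api' } };
authorize(claims, 'tools:deploy');
validateArgs(call);
const entry = auditRecord(claims, call);
console.log(entry.caller, entry.tool);
```

The same structure applies whatever framework you use: authentication failures should be rejected at the transport layer, before any tool code runs.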
# Python MCP server with OAuth 2.1 authentication (HTTP transport)
from mcp.server.fastmcp import FastMCP
from mcp.server.auth import OAuthProvider, RequireScopes
import structlog

from company_tools import deploy_api  # illustrative: your internal deployment API client

log = structlog.get_logger()
# Configure OAuth 2.1 with your identity provider
auth_provider = OAuthProvider(
issuer="https://auth.mycompany.com",
audience="mcp-server-production",
jwks_uri="https://auth.mycompany.com/.well-known/jwks.json",
)
mcp = FastMCP("company-internal-tools", auth=auth_provider)
@mcp.tool()
@RequireScopes("tools:deploy") # Scope-based authorization
async def deploy_service(
service_name: str,
version: str,
environment: str,
) -> str:
"""Deploy a service to the specified environment.
Requires the tools:deploy scope.
Only 'staging' and 'production' environments are valid.
"""
# Input validation — never trust LLM-generated inputs without checking
allowed_environments = {"staging", "production"}
if environment not in allowed_environments:
raise ValueError(f"Invalid environment: {environment}. Must be one of {allowed_environments}")
if not service_name.replace("-", "").isalnum():
raise ValueError(f"Invalid service name: {service_name}")
# Audit log every action with caller identity
log.info("mcp_tool_invoked",
tool="deploy_service",
caller=mcp.current_user.sub, # From JWT
service=service_name,
version=version,
environment=environment,
)
# Execute deployment via internal API
result = await deploy_api.trigger_deployment(
service=service_name,
version=version,
environment=environment,
triggered_by=mcp.current_user.sub,
)
return f"Deployment triggered: {result.deployment_id}"
MCP servers that expose tools reading external content (web pages, documents, emails) are vulnerable to prompt injection: malicious content in the external source can manipulate the LLM into taking unintended actions via your tools. The defenses are:
- clearly separate untrusted content from system instructions
- validate and sanitize LLM-generated tool inputs
- implement a confirmation step for destructive actions
- prefer tools with narrow, explicit effects over broad "do anything" tools
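One of those defenses, the confirmation step, can be sketched as a thin gate that refuses to run destructive tools until the specific invocation has been approved out-of-band (the tool names and the in-memory approval store are illustrative):

```typescript
// Gate destructive tools behind an explicit approval step.
const DESTRUCTIVE_TOOLS = new Set(['deploy_service', 'delete_record', 'send_email']);

// Approvals granted out-of-band (e.g. a human clicking "confirm" in a client UI).
const approvals = new Set<string>();

function approve(callId: string): void {
  approvals.add(callId);
}

function executeTool(callId: string, toolName: string, run: () => string): string {
  if (DESTRUCTIVE_TOOLS.has(toolName) && !approvals.has(callId)) {
    // Surface a pending state instead of executing; the client asks the user to confirm.
    return `PENDING_CONFIRMATION: ${toolName} requires explicit approval`;
  }
  return run();
}

// Read-only tools run immediately; destructive ones wait for approval.
const readResult = executeTool('call-1', 'query_database', () => 'rows: 3');
const blocked = executeTool('call-2', 'deploy_service', () => 'deployed');
approve('call-2');
const allowed = executeTool('call-2', 'deploy_service', () => 'deployed');
```

Keying approvals to the individual call (not the tool) prevents an injected prompt from reusing a previously confirmed action.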
Section 5 — The MCP Ecosystem in 2026
The MCP ecosystem has matured into several distinct categories. Official SDKs cover 8 languages with production-quality implementations. Hosted registries (the official MCP registry, Smithery.ai, and company-internal registries) provide discovery and versioning for published servers. Platform integrations mean that Claude, GPT-5, Gemini, Cursor, Windsurf, VS Code Copilot, and JetBrains AI all consume MCP servers natively.
The highest-value public MCP servers as of March 2026 are database connectors (Postgres, MySQL, SQLite, Supabase), version control integrations (GitHub, GitLab), cloud provider tools (AWS, GCP, Azure), observability tools (Datadog, Grafana), and communication platforms (Slack, Linear, Notion).
The enterprise adoption pattern is: start with read-only tools (database queries, log search, documentation lookup), validate that the AI agent uses them correctly and safely, then progressively add write tools (issue creation, deployment triggers, configuration changes) with appropriate authorization and confirmation flows.
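This staged rollout can be encoded directly in the server: tag each tool with its effect, and only list write tools once the deployment has been explicitly graduated (the tool names and the flag are illustrative):

```typescript
// Progressive tool exposure: read-only tools first, write tools behind a flag.
interface ToolDef { name: string; effect: 'read' | 'write' }

const ALL_TOOLS: ToolDef[] = [
  { name: 'query_database', effect: 'read' },
  { name: 'search_logs', effect: 'read' },
  { name: 'create_issue', effect: 'write' },
  { name: 'trigger_deploy', effect: 'write' },
];

function listTools(writeToolsEnabled: boolean): string[] {
  // Phase 1 exposes only read tools; phase 2 adds write tools after validation.
  return ALL_TOOLS
    .filter(t => t.effect === 'read' || writeToolsEnabled)
    .map(t => t.name);
}

const phase1 = listTools(false); // ['query_database', 'search_logs']
const phase2 = listTools(true);  // all four tools
```

Because clients re-fetch the tool list from the server, graduating to phase 2 requires no client-side changes.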
Verdict
If you are building AI agents or AI-assisted developer tools, MCP is the standard to build on — the ecosystem momentum and multi-platform support make it the clear winner over custom tool-calling implementations. Build your internal tools as MCP servers and your AI integrations as MCP clients. This gives you interoperability across AI platforms as the market evolves. Security-first: implement OAuth 2.1, input validation, and audit logging from day one. The attack surface of an authenticated MCP server with production database access is significant and needs to be treated as such.
Data as of March 2026.
— iBuidl Research Team