What does MCP stand for?
MCP stands for Model Context Protocol. It's an open specification that defines how AI models communicate with external tools and data sources through a standard interface.
Before MCP, every AI integration was bespoke. If you wanted Claude to read your HubSpot data, you built a custom integration. If you wanted Cursor to write to your database, you built another one. Each AI client had its own integration pattern, and none of them were compatible.
MCP solves this by defining a universal protocol. You build one MCP server that exposes your tools and data. Any AI client that speaks MCP can connect to it: today that includes Claude Desktop, Claude Code, Cursor, Windsurf, Cline, and a growing number of agents and IDEs.
Analogy
If APIs are like phone numbers (you need to know the number and the protocol to call), MCP servers are like a receptionist: the AI walks in, asks what's available, and gets connected to the right tool automatically.
How MCP servers work
An MCP server exposes three things to any connected AI client:
- Tools: executable functions the AI can call. Each tool has a name, description, and a JSON schema defining its parameters. Example: a create_contact tool that takes a name, email, and company.
- Resources: read-only data the AI can access. These are URIs that return structured content. Example: a contacts://recent resource that returns the last 50 CRM contacts.
- Prompts: reusable prompt templates stored on the server. These give the AI context about how to use the tools effectively for specific tasks.
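The three primitives above are just structured JSON. As a minimal sketch (the tool name and fields here are illustrative, not taken from any real server), a tool definition and a basic required-parameter check might look like:

```python
# Illustrative tool definition: a name, a description, and a JSON Schema
# for its parameters. The field names are hypothetical.
CREATE_CONTACT_TOOL = {
    "name": "create_contact",
    "description": "Create a new CRM contact.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "name": {"type": "string"},
            "email": {"type": "string"},
            "company": {"type": "string"},
        },
        "required": ["name", "email"],
    },
}

def check_required(tool: dict, arguments: dict) -> list:
    """Return the names of required parameters missing from a tool call."""
    required = tool["inputSchema"].get("required", [])
    return [field for field in required if field not in arguments]

# A call missing the required email field is caught before execution.
missing = check_required(CREATE_CONTACT_TOOL, {"name": "Ada Lovelace"})
print(missing)  # ['email']
```

The schema is what makes discovery useful: the AI client reads it at runtime and knows exactly which arguments the tool expects.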
When an AI client connects to an MCP server, it first calls the discovery endpoint to learn what tools and resources are available. The AI then uses this knowledge to decide which tools to call during a conversation or task execution. The entire exchange happens over JSON-RPC, either via stdio (local) or HTTP with SSE (remote).
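The discovery step is an ordinary JSON-RPC 2.0 exchange. A sketch of a tools/list request and a hypothetical response shape (the tool shown is illustrative):

```python
import json

# The client asks the server what tools it offers; MCP uses the
# JSON-RPC method "tools/list" for discovery.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",
}

# A hypothetical response: one tool, with its name, description, and
# the JSON Schema for its parameters.
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [
            {
                "name": "search_contacts",
                "description": "Search CRM contacts by keyword.",
                "inputSchema": {
                    "type": "object",
                    "properties": {"query": {"type": "string"}},
                    "required": ["query"],
                },
            }
        ]
    },
}

# Over the stdio transport, each message travels as one line of JSON.
wire = json.dumps(request)
tool_names = [t["name"] for t in response["result"]["tools"]]
print(tool_names)  # ['search_contacts']
```

After this exchange the client holds a live catalog of tools and schemas, which is what lets it pick the right tool mid-conversation without any pre-written integration code.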
MCP server vs. API
If you already have REST APIs, you might wonder why you need MCP at all. The distinction matters:
| Dimension | Traditional API | MCP Server |
|---|---|---|
| Discovery | Manual: read docs, write integration code | Automatic: AI discovers tools at runtime |
| Consumer | Developers writing code | AI clients and agents |
| Schema | OpenAPI/Swagger (optional) | JSON Schema per tool (required) |
| Transport | HTTP REST/GraphQL | JSON-RPC over stdio or HTTP+SSE |
| Integration effort | Per-client, per-API | Build once, any MCP client connects |
| Controls | Custom (rate limits, API keys) | Tool discovery plus whatever approvals and access rules the server adds |
MCP doesn't replace your APIs. It wraps them in a layer that AI models can understand natively. Your HubSpot API still exists. The MCP server just gives Claude a way to discover and call it without someone manually writing the integration.
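Concretely, "wrapping" usually means the tool handler translates a tool call into an ordinary API request. A stdlib-only sketch, with the endpoint path modeled loosely on a CRM API and the HTTP call stubbed out so nothing touches the network (handler and helper names are hypothetical):

```python
# Hypothetical: a tool handler that maps an MCP tool call onto an
# existing REST endpoint. The transport function is injected so this
# sketch stays offline; a real server would use an HTTP client here.
def make_create_contact_handler(http_post):
    def handle(arguments: dict) -> dict:
        # Translate MCP tool arguments into the REST API's payload shape.
        payload = {
            "properties": {
                "firstname": arguments["name"],
                "email": arguments["email"],
            }
        }
        return http_post("/crm/v3/objects/contacts", payload)
    return handle

# Stubbed transport: echoes what a real call would have sent.
def fake_post(path, payload):
    return {"path": path, "sent": payload, "status": "created"}

handler = make_create_contact_handler(fake_post)
result = handler({"name": "Ada", "email": "ada@example.com"})
print(result["status"])  # created
```

The existing API is unchanged; the MCP layer only adds the discoverable, schema-described entry point in front of it.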
Which AI tools support MCP?
MCP adoption has accelerated rapidly since early 2025. Here's the current state as of March 2026:
| AI Client | MCP Support | Transport |
|---|---|---|
| Claude Desktop | Full (native) | stdio + HTTP+SSE |
| Claude Code | Full (native) | stdio + HTTP+SSE |
| Cursor | Full | stdio + HTTP+SSE |
| Windsurf | Full | stdio + HTTP+SSE |
| Cline (VS Code) | Full | stdio |
| OpenAI Agents SDK | Client support | HTTP+SSE |
| Custom agents | Via MCP SDK | stdio or HTTP+SSE |
Why approvals and control matter
A standard MCP server makes tools available to an AI client. That is useful, but it also means the client may be able to read data or trigger write actions in real systems. For most business use, visibility and control matter as much as the protocol itself.
- Clear plans: your team should be able to see what the AI is trying to do before a risky action happens.
- Approvals before writes: if the tool can change records, send messages, or trigger external actions, important steps should be reviewable first.
- Scoped access and visible history: teams need to know which tools are available, who used them, and what happened.
This is the same trust problem teams face with AI agents more broadly. It is not enough for a tool connection to exist. The team also needs approvals, visibility, and control.
The protocol is not the whole story: once an AI tool can act inside your stack, the practical questions become who can use it, what it can touch, and which actions need a person to review first.
How to set up an MCP server
There are two broad paths depending on your technical depth:
Option A: Build from scratch with the MCP SDK
The official MCP SDK (available in TypeScript and Python) lets you define tools, resources, and prompts programmatically. You write handler functions for each tool, define JSON schemas for parameters, and run the server locally or deploy it to a host. This gives you full control but requires engineering time and a custom control layer.
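For flavor, here is a stripped-down sketch of the dispatch loop such a server runs over stdio. This is stdlib-only and not the official SDK (which handles this plumbing for you); the tool and handler are illustrative:

```python
import json
import sys

# Illustrative tool registry: tool name -> handler function.
def ping(arguments: dict) -> dict:
    return {"ok": True, "echo": arguments.get("message", "")}

TOOLS = {"ping": ping}

def dispatch(message: dict) -> dict:
    """Handle one JSON-RPC request and build the response."""
    method = message.get("method")
    if method == "tools/list":
        result = {"tools": [{"name": name} for name in TOOLS]}
    elif method == "tools/call":
        params = message.get("params", {})
        handler = TOOLS[params["name"]]
        result = handler(params.get("arguments", {}))
    else:
        return {"jsonrpc": "2.0", "id": message.get("id"),
                "error": {"code": -32601, "message": "Method not found"}}
    return {"jsonrpc": "2.0", "id": message.get("id"), "result": result}

def serve(stream_in, stream_out):
    # stdio transport: one JSON-RPC message per line.
    for line in stream_in:
        response = dispatch(json.loads(line))
        stream_out.write(json.dumps(response) + "\n")

# Exercise the dispatcher directly, without a transport.
reply = dispatch({"jsonrpc": "2.0", "id": 7, "method": "tools/call",
                  "params": {"name": "ping",
                             "arguments": {"message": "hi"}}})
print(reply["result"]["echo"])  # hi
```

The SDK's value is everything around this loop: schema validation, capability negotiation, transports, and error handling, so your code is mostly just the handler functions.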
Option B: Use a higher-level product
Some products hide the SDK layer entirely and make the connection easier for the team using it. If you go this route, look for clear tool definitions, approvals before important actions, visible action history, and simple access control.
MCP server examples by use case
MCP servers can expose tools from many systems. Here are a few common patterns:
| Use case | Tools exposed | AI client |
|---|---|---|
| CRM operations | search_contacts, update_deal, create_note | Claude Desktop / Cursor |
| Content management | list_drafts, publish_post, update_metadata | Claude Code |
| Support triage | get_tickets, classify_ticket, assign_agent | Custom agent |
| Data enrichment | lookup_company, enrich_contact, verify_email | Cursor / Windsurf |
| Financial ops | get_invoices, create_payment, reconcile_entry | Claude Desktop |
Common questions
Do I need to write code to create an MCP server?
Often, yes. If you are building directly with the MCP SDK, you usually need TypeScript or Python. Some higher-level products may hide that layer, but the underlying MCP server is still a technical interface.
Is MCP an open standard or proprietary?
MCP is an open specification originally developed by Anthropic and now adopted across the AI ecosystem. It's not locked to any single AI model or platform. Any AI client that implements the MCP protocol can connect to any MCP server.
How is MCP different from function calling?
Function calling is a model-level feature where the AI outputs a structured JSON object that your code interprets. MCP is a transport-level protocol that lets the AI discover and invoke tools at runtime without pre-registration. Function calling requires you to define every tool in the prompt; MCP lets tools be discovered dynamically.
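The difference is easiest to see in where the tool list comes from. A sketch (request shapes are illustrative, not any specific vendor's API):

```python
# Function calling: your code bakes the tool specs into each model
# request. Adding a tool means changing this request-building code.
function_calling_request = {
    "model": "some-model",
    "messages": [{"role": "user", "content": "Add Ada to the CRM"}],
    "tools": [{"name": "create_contact"}],  # pre-registered by you
}

# MCP: the client fetches the tool list from the server at runtime,
# then builds the same kind of request from whatever it found.
def discover_tools(list_response: dict) -> list:
    return list_response["result"]["tools"]

discovered = discover_tools(
    {"jsonrpc": "2.0", "id": 1,
     "result": {"tools": [{"name": "create_contact"}]}}
)
print([t["name"] for t in discovered])  # ['create_contact']
```

In practice the two compose: an MCP client typically feeds the discovered tool schemas into the model's function-calling interface.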
Can I control which AI models access my MCP server?
Yes. In practice, teams usually control which clients can connect, which tools they can call, and whether important actions need approval before anything writes to a real system.
What happens if an AI model calls an MCP tool incorrectly?
The server can return a structured error and the AI client can retry with corrected inputs. This is also why approvals and clear tool definitions matter when the tool can change real data.
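As a sketch of that error-and-retry loop (the error shape follows JSON-RPC conventions; the tool and its validation are hypothetical):

```python
# Hypothetical server-side validation: reject a malformed call with a
# structured error the client can act on, instead of failing opaquely.
def call_tool(arguments: dict) -> dict:
    if "email" not in arguments:
        return {
            "error": {
                "code": -32602,  # JSON-RPC "invalid params"
                "message": "Missing required parameter: email",
            }
        }
    return {"result": {"created": arguments["email"]}}

# First attempt fails; the client reads the message and retries.
first = call_tool({"name": "Ada"})
print(first["error"]["message"])
retry = call_tool({"name": "Ada", "email": "ada@example.com"})
print(retry["result"]["created"])  # ada@example.com
```

The descriptive error message is doing real work here: it is what lets the AI client correct its own input on the next attempt.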
How many MCP servers can a single AI client connect to?
There's no protocol-level limit. Claude Desktop, Cursor, and Windsurf all support multiple simultaneous MCP server connections. In practice, each server exposes a set of tools and the AI model reasons across all of them to determine which tools to use for a given task.