
What is an MCP server? The operator's guide

Quick answer

An MCP server is a standard interface that lets AI tools discover and call external tools and data sources. For business teams, the important question is not just what MCP is. It is how to keep control when an AI tool can touch real systems.


By pinksheep Editorial Team · Updated 24 March 2026 · 8 min read

What does MCP stand for?

MCP stands for Model Context Protocol. It's an open specification that defines how AI models communicate with external tools and data sources through a standard interface.

Before MCP, every AI integration was bespoke. If you wanted Claude to read your HubSpot data, you built a custom integration. If you wanted Cursor to write to your database, you built another one. Each AI client had its own integration pattern, and none of them were compatible.

MCP solves this by defining a universal protocol. You build one MCP server that exposes your tools and data. Any AI client that speaks MCP can connect to it: today that includes Claude Desktop, Claude Code, Cursor, Windsurf, Cline, and a growing number of agents and IDEs.

Analogy

If APIs are like phone numbers (you need to know the number and the protocol to call), MCP servers are like a receptionist: the AI walks in, asks what's available, and gets connected to the right tool automatically.

How MCP servers work

An MCP server exposes three things to any connected AI client:

  1. Tools: executable functions the AI can call. Each tool has a name, description, and a JSON schema defining its parameters. Example: a create_contact tool that takes a name, email, and company.
  2. Resources: read-only data the AI can access. These are URIs that return structured content. Example: a contacts://recent resource that returns the last 50 CRM contacts.
  3. Prompts: reusable prompt templates stored on the server. These give the AI context about how to use the tools effectively for specific tasks.
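A sketch of what one of those tool definitions boils down to, using the create_contact example above (the field names follow the shape a server returns from tools/list; the exact values are illustrative):

```python
# Illustrative shape of an MCP tool definition, modeled on the
# create_contact example above: a name, a description, and a
# JSON Schema describing the parameters.
create_contact_tool = {
    "name": "create_contact",
    "description": "Create a CRM contact from a name, email, and company.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "name": {"type": "string"},
            "email": {"type": "string", "format": "email"},
            "company": {"type": "string"},
        },
        "required": ["name", "email"],
    },
}

def validate_args(tool: dict, args: dict) -> list[str]:
    """Minimal check of required parameters against the tool's schema."""
    schema = tool["inputSchema"]
    return [k for k in schema.get("required", []) if k not in args]
```

Because the schema travels with the tool, a client can tell the model exactly which arguments are required before any call is made; `validate_args(create_contact_tool, {"name": "Ada"})` reports that `email` is still missing.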

When an AI client connects to an MCP server, it first calls the discovery endpoint to learn what tools and resources are available. The AI then uses this knowledge to decide which tools to call during a conversation or task execution. The entire exchange happens over JSON-RPC, either via stdio (local) or HTTP with SSE (remote).
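The discovery step described above can be sketched as a pair of JSON-RPC messages. The `tools/list` method name follows the MCP specification; the single tool in the response is illustrative:

```python
import json

# A client's discovery request and a server's response, as JSON-RPC 2.0
# messages. Over the stdio transport, each message travels as one line
# of JSON on stdin/stdout.
request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [
            {
                "name": "create_contact",
                "description": "Create a CRM contact.",
                "inputSchema": {"type": "object", "properties": {}},
            }
        ]
    },
}

wire = json.dumps(request)  # what actually crosses the transport
tool_names = [t["name"] for t in response["result"]["tools"]]
```

The client caches the returned tool list and hands it to the model, which then picks tools by name during the conversation.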

MCP server vs. API

If you already have REST APIs, you might wonder why you need MCP at all. The distinction matters:

| Dimension | Traditional API | MCP server |
| --- | --- | --- |
| Discovery | Manual: read docs, write integration code | Automatic: AI discovers tools at runtime |
| Consumer | Developers writing code | AI clients selecting tools at runtime |
| Schema | OpenAPI/Swagger (optional) | JSON Schema per tool (required) |
| Transport | HTTP REST/GraphQL | JSON-RPC over stdio or HTTP+SSE |
| Integration effort | Per-client, per-API | Build once, any MCP client connects |
| Controls | Custom (rate limits, API keys) | Tool discovery plus whatever approvals and access rules the server adds |

MCP doesn't replace your APIs. It wraps them in a layer that AI models can understand natively. Your HubSpot API still exists. The MCP server just gives Claude a way to discover and call it without someone manually writing the integration.
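A minimal sketch of that wrapping layer: the MCP tool handler just translates validated tool arguments into the API call you already have. Here `call_rest_api` is a hypothetical stand-in for a real HTTP client, and the endpoint path and payload shape are illustrative, not HubSpot's actual API:

```python
# Sketch of an MCP tool handler that wraps an existing REST API.
# call_rest_api stands in for a real HTTP client (requests, httpx, ...).
def call_rest_api(method: str, path: str, payload: dict) -> dict:
    # In a real server this would issue an HTTP request and return the body.
    return {"status": "ok", "method": method, "path": path, "payload": payload}

def create_contact_handler(args: dict) -> dict:
    """MCP tool handler: forwards tool arguments to the existing API."""
    return call_rest_api("POST", "/crm/v3/objects/contacts", {
        "properties": {
            "firstname": args["name"],
            "email": args["email"],
            "company": args.get("company", ""),
        }
    })
```

The API keeps its own authentication and rate limits; the handler only adds the translation from the tool's JSON Schema arguments to the request your API expects.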

Which AI tools support MCP?

MCP adoption has accelerated rapidly since early 2025. Here's the current state as of March 2026:

| AI client | MCP support | Transport |
| --- | --- | --- |
| Claude Desktop | Full (native) | stdio + HTTP+SSE |
| Claude Code | Full (native) | stdio + HTTP+SSE |
| Cursor | Full | stdio + HTTP+SSE |
| Windsurf | Full | stdio + HTTP+SSE |
| Cline (VS Code) | Full | stdio |
| OpenAI Agents SDK | Client support | HTTP+SSE |
| Custom agents | Via MCP SDK | stdio or HTTP+SSE |

Why approvals and control matter

A standard MCP server makes tools available to an AI client. That is useful, but it also means the client may be able to read data or trigger write actions in real systems. For most business use, visibility and control matter as much as the protocol itself.

  1. Clear plans: your team should be able to see what the AI is trying to do before a risky action happens.
  2. Approvals before writes: if the tool can change records, send messages, or trigger external actions, important steps should be reviewable first.
  3. Scoped access and visible history: teams need to know which tools are available, who used them, and what happened.
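One way to picture that control layer is an approval gate in front of write-capable tools. MCP itself does not mandate this; the server or platform adds it. The `WRITE_TOOLS` set and `request_approval` function below are illustrative:

```python
# Sketch of an approval gate wrapping tool execution. Read-only tools
# pass through; write-capable tools are blocked until a human approves.
WRITE_TOOLS = {"create_contact", "update_deal", "publish_post"}

def request_approval(tool: str, args: dict) -> bool:
    # In a real system this would notify a reviewer and wait for a decision.
    return False  # default-deny for the sketch

def call_tool(tool: str, args: dict, execute) -> dict:
    """Run a tool, but gate anything on the write list behind approval."""
    if tool in WRITE_TOOLS and not request_approval(tool, args):
        return {"status": "blocked", "reason": f"{tool} requires approval"}
    return {"status": "ok", "result": execute(args)}
```

The same gate is a natural place to record an action history entry, which covers the visibility requirement as well.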

This is the same trust problem teams face with AI agents more broadly. It is not enough for a tool connection to exist. The team also needs approvals, visibility, and control.

The protocol is not the whole story: once an AI tool can act inside your stack, the practical questions become who can use it, what it can touch, and which actions need a person to review first.

How to set up an MCP server

There are two broad paths depending on your technical depth:

Option A: Build from scratch with the MCP SDK

The official MCP SDK (available in TypeScript and Python) lets you define tools, resources, and prompts programmatically. You write handler functions for each tool, define JSON schemas for parameters, and run the server locally or deploy it to a host. This gives you full control but requires engineering time and a custom control layer.
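To make the shape concrete without pulling in the SDK, here is a dependency-free sketch of the core loop such a server runs over stdio: read a JSON-RPC line from stdin, dispatch on the method, write the result to stdout. The real SDK layers initialization, capability negotiation, and richer error handling on top of this; the `ping` tool is illustrative:

```python
import json
import sys

TOOLS = {"ping": lambda args: {"reply": "pong", **args}}  # illustrative tool

def handle(message: dict) -> dict:
    """Dispatch one JSON-RPC request to the matching handler."""
    method = message.get("method")
    if method == "tools/list":
        result = {"tools": [{"name": n} for n in TOOLS]}
    elif method == "tools/call":
        params = message.get("params", {})
        result = TOOLS[params["name"]](params.get("arguments", {}))
    else:
        return {"jsonrpc": "2.0", "id": message.get("id"),
                "error": {"code": -32601, "message": f"unknown method {method}"}}
    return {"jsonrpc": "2.0", "id": message.get("id"), "result": result}

def serve(stdin=sys.stdin, stdout=sys.stdout):
    """stdio transport: one JSON-RPC message per line."""
    for line in stdin:
        stdout.write(json.dumps(handle(json.loads(line))) + "\n")
```

Everything the SDK gives you beyond this loop is convenience: decorators for registering tools, schema generation from type hints, and transport plumbing for HTTP+SSE.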

Option B: Use a higher-level product

Some products can hide the SDK layer and make the connection easier for the team using it. If you go this route, look for clear tool definitions, approvals before important actions, visible action history, and simple access control.

MCP server examples by use case

MCP servers can expose tools from many systems. Here are a few common patterns:

| Use case | Tools exposed | AI client |
| --- | --- | --- |
| CRM operations | search_contacts, update_deal, create_note | Claude Desktop / Cursor |
| Content management | list_drafts, publish_post, update_metadata | Claude Code |
| Support triage | get_tickets, classify_ticket, assign_agent | Custom agent |
| Data enrichment | lookup_company, enrich_contact, verify_email | Cursor / Windsurf |
| Financial ops | get_invoices, create_payment, reconcile_entry | Claude Desktop |

Common questions

Do I need to write code to create an MCP server?

Often, yes. If you are building directly with the MCP SDK, you usually need TypeScript or Python. Some higher-level products may hide that layer, but the underlying MCP server is still a technical interface.

Is MCP an open standard or proprietary?

MCP is an open specification originally developed by Anthropic and now adopted across the AI ecosystem. It's not locked to any single AI model or platform. Any AI client that implements the MCP protocol can connect to any MCP server.

How is MCP different from function calling?

Function calling is a model-level feature where the AI outputs a structured JSON object that your code interprets. MCP is a transport-level protocol that lets the AI discover and invoke tools at runtime without pre-registration. Function calling requires you to define every tool in the prompt; MCP lets tools be discovered dynamically.

Can I control which AI models access my MCP server?

Yes. In practice, teams usually control which clients can connect, which tools they can call, and whether important actions need approval before anything writes to a real system.

What happens if an AI model calls an MCP tool incorrectly?

The server can return a structured error and the AI client can retry with corrected inputs. This is also why approvals and clear tool definitions matter when the tool can change real data.
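As a sketch of that retry loop, a failed tool call can come back as a result flagged as an error, with a human-readable message the model can act on (the `isError`/`content` shape follows the MCP tool-result convention; the message text and helper are illustrative):

```python
# Illustrative tool-call error: the server flags the result as an error
# and includes a message the model can use to correct its inputs.
error_result = {
    "content": [{"type": "text", "text": "email is required"}],
    "isError": True,
}

def should_retry(result: dict) -> bool:
    """Client-side sketch: retry with corrected inputs on tool errors."""
    return bool(result.get("isError"))
```

A clear error message here does double duty: it lets the model self-correct, and it gives the approval layer something meaningful to log.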

How many MCP servers can a single AI client connect to?

There's no protocol-level limit. Claude Desktop, Cursor, and Windsurf all support multiple simultaneous MCP server connections. In practice, each server exposes a set of tools and the AI model reasons across all of them to determine which tools to use for a given task.