How to Vibe Code MCP Server in 10 Minutes with AI and Cursor

And Why You Should Build One

Vibe Coding Your MCP in 10 minutes — thumbnail generated with the help of GPT-4o image generation

You might have heard the buzz around a new protocol called MCP (Model Context Protocol). It’s being touted as the game-changer for how AI models connect to data and tools — think of it like the universal API standard but built specifically for the AI era.

APIs vs LSP vs MCP. Source: Anthropic’s workshop on MCP.

Claims like universal standard and game-changing always pique my interest, especially when they promise to simplify complex integrations. As someone who uses AI daily, I wanted to see if MCP lived up to the hype and how quickly someone could actually get started.

So, I set myself a challenge:

Could I build not just one but two functional MCP servers in roughly 10–15 minutes, even starting with minimal MCP knowledge?

And could I leverage AI tools to make it even faster?

Spoiler alert: Yes

The process revealed just how powerful MCP (and modern AI coding assistants) can be.

This article walks you through exactly how I did it, step by step.

If you prefer to watch instead of reading, feel free to check the video version here:

First Off, Why Does MCP Even Matter?

Before diving into the build, let’s quickly touch on why MCP is generating excitement.

Imagine building web apps back before APIs became standard. Every connection to another service (like payments, maps, or social media) would need custom, bespoke code. It was messy, slow, and incredibly inefficient. APIs changed everything by providing a standardized way for software to communicate.

That’s precisely what MCP aims to do for AI models.

The Old Way (Function Calling):
Until recently, connecting AI models like GPT, Claude, or Gemini to external tools relied on function calling. The problem? Each major AI provider (OpenAI, Anthropic, Google) had its own format. If you built a tool integration for GPT, you’d have to rewrite significant parts of it to work with Claude or Gemini.

The New Way (MCP):
MCP introduces a single, standardized protocol. You build your tool integration (your “MCP Server”) once, defining its capabilities clearly. Then, any MCP-compatible client can instantly understand and use it.

Think about the implications:

  • Build Once, Run Everywhere: Massive time savings for developers.

  • Future-Proofing: Your tools remain compatible as new models emerge.

  • Democratization: Easier for anyone to connect AI to their unique data and workflows.

Just like early API adopters gained a significant advantage, getting familiar with MCP now puts you ahead of the curve in the rapidly evolving AI landscape.

What’s even more interesting, OpenAI recently added support for MCP servers to its Agents SDK. This shows that MCP is becoming a standard recognized by big players in the industry.

MCP integration in Agents SDK.

The Experiment: Building 2 MCP Servers

My goal was to test the waters by building two distinct MCP servers:

  1. Weather MCP: A simple server to fetch real-time US weather data. Great for understanding the fundamentals.

  2. Obsidian MCP: A more complex server to connect an AI assistant to my personal notes vault in Obsidian (my “second brain”). This would test AI-assisted development.

My Toolkit:

  • Cursor: My AI-first code editor.

  • Claude 3.7: The AI model assisting within Cursor.

  • Claude Desktop: An easy way to test MCP servers locally.

  • Node.js & TypeScript: The environment for building the servers.

  • MCP SDK (TypeScript): The official library for building MCP servers.

Part 1: Building the Weather MCP

For the first server, I followed Anthropic’s official MCP quickstart documentation closely to grasp the core concepts.

Here’s the rapid breakdown:

1. Quick Setup: Getting the Project Ready

  • Created a Node.js project (weather/).

  • Installed essentials: @modelcontextprotocol/sdk (the core library) and zod (for data validation).

  • Configured tsconfig.json for TypeScript and updated package.json scripts. All code went into src/index.ts.

# Create a new directory for our project
mkdir weather
cd weather

# Initialize a new npm project
npm init -y

# Install dependencies
npm install @modelcontextprotocol/sdk zod
npm install -D @types/node typescript

# Create our files
mkdir src
touch src/index.ts

Here’s the project structure:

Project structure after this step. If you run into any problems, follow the official documentation.
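For reference, here is one plausible tsconfig.json for this setup, adapted from the official quickstart. The exact compiler options may differ in your project; dist is used as the output directory to match the build step later in this article:

```json
{
  "compilerOptions": {
    "target": "ES2022",
    "module": "Node16",
    "moduleResolution": "Node16",
    "outDir": "./dist",
    "rootDir": "./src",
    "strict": true,
    "esModuleInterop": true,
    "skipLibCheck": true
  },
  "include": ["src/**/*"]
}
```

In package.json, you also want "type": "module" and a build script such as "build": "tsc" so that npm run build compiles src/index.ts into dist/index.js.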

2. Defining the Server Logic

  • Initialized the MCP Server: Just a few lines of code to get the server instance ready.

import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { z } from "zod";

const server = new McpServer({
  name: "weather",
  version: "1.0.0"
});
  • Added Helper Functions: Small functions to call the weather API and format the JSON response into clean text.

Most of the time, an MCP server wraps external APIs, so you need to define that request logic inside the server.
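The article doesn’t show these helpers; below is a sketch that closely follows the official quickstart, assuming the National Weather Service (NWS) API. formatPeriod is a hypothetical name for the formatting helper, not code from the article:

```typescript
// Base URL of the National Weather Service API (per the official MCP quickstart)
const NWS_API_BASE = "https://api.weather.gov";
const USER_AGENT = "weather-app/1.0";

// Generic helper: fetch a URL and parse the JSON body, returning null on failure
async function makeNWSRequest<T>(url: string): Promise<T | null> {
  const headers = {
    "User-Agent": USER_AGENT,
    Accept: "application/geo+json",
  };
  try {
    const response = await fetch(url, { headers });
    if (!response.ok) {
      throw new Error(`HTTP error! status: ${response.status}`);
    }
    return (await response.json()) as T;
  } catch (error) {
    console.error("Error making NWS request:", error);
    return null;
  }
}

// Shape of one forecast period in the NWS response
interface ForecastPeriod {
  name: string;
  temperature: number;
  temperatureUnit: string;
  shortForecast: string;
}

// Hypothetical formatting helper: turn one forecast period into clean text
function formatPeriod(period: ForecastPeriod): string {
  return `${period.name}: ${period.temperature}°${period.temperatureUnit}, ${period.shortForecast}`;
}
```

Keeping the fetch-and-parse logic in one generic helper means every tool can share the same error handling and return null on failure instead of throwing.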

  • Registered the Tools: This is the heart of MCP. I defined two tools:

  • get-alerts: Fetches alerts for a US state (e.g., CA).

  • get-forecast: Gets weather for specific latitude/longitude.

Crucially, each tool needs a clear description so the AI knows when and how to use it.

Here’s the code for one of the tools — get-forecast:

server.tool(
  "get-forecast",
  "Get weather forecast for a location",
  {
    latitude: z.number().min(-90).max(90).describe("Latitude of the location"),
    longitude: z.number().min(-180).max(180).describe("Longitude of the location"),
  },
  async ({ latitude, longitude }) => {
    // Get grid point data
    const pointsUrl = `${NWS_API_BASE}/points/${latitude.toFixed(4)},${longitude.toFixed(4)}`;
    const pointsData = await makeNWSRequest<PointsResponse>(pointsUrl);

    if (!pointsData) {
      return {
        content: [
          {
            type: "text",
            text: `Failed to retrieve grid point data for coordinates: ${latitude}, ${longitude}. This location may not be supported by the NWS API (only US locations are supported).`,
          },
        ],
      };
    }

    const forecastUrl = pointsData.properties?.forecast;
    if (!forecastUrl) {
      return {
        content: [
          {
            type: "text",
            text: "Failed to get forecast URL from grid point data",
          },
        ],
      };
    }

    // Fetch the forecast from the URL returned by the points endpoint
    const forecastData = await makeNWSRequest<ForecastResponse>(forecastUrl);
    if (!forecastData) {
      return {
        content: [
          {
            type: "text",
            text: "Failed to retrieve forecast data",
          },
        ],
      };
    }

    // Format each period into clean text (abbreviated from the official quickstart)
    const periods = forecastData.properties?.periods || [];
    const forecastText = periods
      .map(
        (period) =>
          `${period.name}: ${period.temperature}°${period.temperatureUnit}, ${period.shortForecast}`,
      )
      .join("\n");

    return {
      content: [{ type: "text", text: forecastText }],
    };
  },
);

3. Testing with Claude Desktop

  • Ran npm run build to compile the code.

  • Edited Claude Desktop’s configuration (Settings → Developer → Edit Config) to point to my server’s dist/index.js file.

  • Restarted Claude Desktop (pkill -f "Claude" && sleep 2 && open -a "Claude").
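The config edit in the step above looks roughly like this, assuming the project lives at /ABSOLUTE/PATH/TO/weather (replace with your own path):

```json
{
  "mcpServers": {
    "weather": {
      "command": "node",
      "args": ["/ABSOLUTE/PATH/TO/weather/dist/index.js"]
    }
  }
}
```

Claude Desktop launches the server itself using this command, so the path must be absolute.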

The Result? Success! The new weather tools instantly appeared. Asking “Hey Claude, what’s the weather in LA?” triggered my MCP server seamlessly.

Weather forecast in Claude Desktop after configuring the MCP.

Part 2: Building the Obsidian MCP (Accelerated with AI)

Okay, the manual approach worked, but it required carefully following documentation. What if I wanted something more complex, faster?

This is where I brought in AI assistance with Cursor and Claude 3.7. The goal: an MCP server that lets Claude search, read, and update notes in my local Obsidian vault via the Obsidian Local REST API plugin.

Cursor + Claude Sonnet 3.7 is a powerful combo!

My AI-Assisted Workflow:

  1. Feed the AI Context:

  • MCP Docs: Added the official MCP documentation (and the TypeScript SDK docs) to Cursor’s Docs feature. This gives the AI foundational knowledge.

Docs feature in Cursor can be a very helpful way to give the right context to the LLM!

  • Obsidian API Docs: Added the documentation for the Obsidian Local REST API plugin to Cursor Docs. The AI needs to know how to talk to Obsidian.

  • Clear Requirements: Created a requirements.md file detailing exactly what the Obsidian MCP should do (e.g., “Tool to search notes by keyword”, “Tool to read the content of a specific note”, “Tool to append text to a note”).

I'd like you to implement these tools:
"""

### Search

- search - Search for documents matching a specified text query across all files in the vault. Return things like date of creation, etc.
- list_files_in_dir - Lists all files and directories in a specific Obsidian directory

### Read

- get_content - get content from a single file and load it to Claude's context
- get_contents - get contents from selected file paths and load it to Claude's context

### Write:

- update note (change note's content)
- append
- patch
- create note
"""

As well as the prompts:
"""
Prompts

- **Note Summarization**: Generate summaries for long notes
"""

For reference, I also pasted the content of an existing Obsidian MCP server written in Python.

2. Generate the Code: I asked Cursor (powered by Claude 3.7), using my requirements and the provided documentation context, to generate the complete MCP server code.
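To give a flavor of the generated output, here is a hedged sketch of the search tool’s underlying logic, assuming the Obsidian Local REST API plugin’s default local HTTPS endpoint and its /search/simple/ route. OBSIDIAN_BASE_URL, buildSearchUrl, and searchVault are illustrative names, not the exact code Cursor produced:

```typescript
// Illustrative sketch only: endpoint details and names are assumptions,
// not the exact code Cursor generated.
const OBSIDIAN_BASE_URL =
  process.env.OBSIDIAN_BASE_URL ?? "https://127.0.0.1:27124";

// Build the simple-search endpoint URL exposed by the Local REST API plugin
function buildSearchUrl(base: string, query: string): string {
  return `${base}/search/simple/?query=${encodeURIComponent(query)}`;
}

// Call the search endpoint, authenticating with the plugin's API key
async function searchVault(query: string): Promise<unknown> {
  const response = await fetch(buildSearchUrl(OBSIDIAN_BASE_URL, query), {
    method: "POST",
    headers: { Authorization: `Bearer ${process.env.OBSIDIAN_API_KEY}` },
  });
  if (!response.ok) {
    throw new Error(`Obsidian API error: ${response.status}`);
  }
  return response.json();
}
```

A function like searchVault would then be wrapped in a server.tool registration, just like get-forecast in the weather server.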

3. Iterate and Debug (with AI Help):

  • Initial Output: Cursor generated a solid first draft, including tools for searching, reading, and writing files, plus the main server setup. However, it didn’t immediately work flawlessly with Claude Desktop.

  • Refinement 1 (SDK Alignment): I instructed Cursor to review the documentation once again to align with the MCP specification for discovery by clients like Claude Desktop. Cursor searched the web for specifics, cross-referenced the SDK, and updated the code.

  • Refinement 2 (API Key): I noticed my OBSIDIAN_API_KEY wasn’t being accessed correctly when the server ran via Claude Desktop. I asked Cursor to diagnose this. It correctly identified that the environment variable needed to be set within the Claude Desktop configuration file itself, not just in my terminal, so the server process launched by Claude would inherit it.
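Concretely, the fix amounts to adding an env block to the server’s entry in claude_desktop_config.json; the path and key value below are placeholders:

```json
{
  "mcpServers": {
    "obsidian": {
      "command": "node",
      "args": ["/ABSOLUTE/PATH/TO/obsidian-mcp/dist/index.js"],
      "env": {
        "OBSIDIAN_API_KEY": "<your-obsidian-api-key>"
      }
    }
  }
}
```

Because Claude Desktop spawns the server process itself, environment variables exported in your terminal are not inherited; they must be declared here.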

Result: After these AI-assisted iterations, the Obsidian MCP worked seamlessly within Claude Desktop!

The Power of an Obsidian MCP

This was genuinely cool. I can now have conversations with Claude that directly leverage my personal knowledge base. For example:

  • I collect summaries of valuable YouTube videos in my Obsidian vault, organized by creator (like Greg Isenberg).

My Greg Isenberg aggregate note in Obsidian. Each note has a summary of one of Greg’s videos.

  • Instead of manually searching through potentially long notes, I can ask Claude:

Review my notes on Greg Isenberg and extract his top 3 insights on community building.

Example prompt we can use with Obsidian-MCP

  • Claude uses the MCP server to read the relevant notes and provides a synthesized answer, pulling directly from my curated information. I can even ask it to add new insights to those notes.

Synthesized output in Claude Desktop. The artifact can be saved as a new note in my Obsidian Vault

Key Takeaways from the Experiment

  • MCP is Practical: It’s not just theoretical. You can build working MCP servers today using the available SDKs and AI coding assistants like Cursor.

  • Vibe Coding is Real: Using tools like Cursor with models like Claude 3.7 dramatically speeds up development, especially for integrations. The key is providing good context (docs, requirements).

  • Iteration is Still Necessary: AI won’t always get it perfect on the first try. Understanding the underlying tech (MCP, the target API) helps guide the AI during debugging.

  • The “Build Once, Run Everywhere” Promise is Powerful: Knowing these tools work across different models without rewriting is a huge incentive, especially now that big players like OpenAI are adopting the protocol as well!

Get Started with MCP

Building these two servers showed me that MCP is accessible and powerful. It genuinely feels like the next step in making AI more practically useful by connecting it seamlessly to our existing tools and data.

You now have a blueprint to start building your own MCP integrations.

Learning MCP now places you at the forefront of AI development. Give it a try!

Thanks for reading! If this guide was clear and helpful, please clap or share. Follow for more simple AI explanations.