The iconic MCP, the malevolent AI from TRON (1982)
"I can run things 900 to 1200 times
better than any human"
💭 Personal note: At billiger.de we also called our shop management tool "MCP", so I always get nostalgic when I see those three letters! 😄
😫 Copy file contents → Paste into AI
📋 Copy AI response → Paste back to editor
🔄 Repeat 50 times per day
"Is this what I'm paid to do?"
Every developer working with AI has felt this pain...
"I need to make sure something new isn't just a fad, but here to stay"
I don't understand everything yet, but trust me:
This is worth investigating.
Editor A ←→ Language 1
Editor A ←→ Language 2
Editor B ←→ Language 1
Editor B ←→ Language 2
...n×m integrations
Editor A ←→ LSP ←→ Language 1
Editor B ←→ LSP ←→ Language 2
Editor C ←→ LSP ←→ Language 3
...n+m integrations
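The savings are easy to quantify: with n editors and m languages, direct integrations grow multiplicatively, while a shared protocol grows additively. A quick sketch:

```python
# Integration counts with and without a shared protocol (the LSP/MCP argument).
def direct_integrations(n_editors: int, m_languages: int) -> int:
    # Every editor needs its own adapter for every language.
    return n_editors * m_languages

def protocol_integrations(n_editors: int, m_languages: int) -> int:
    # Each side implements the protocol exactly once.
    return n_editors + m_languages

print(direct_integrations(5, 10))    # 50 adapters
print(protocol_integrations(5, 10))  # 15 adapters
```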
┌────────────────────────────┐
│ HOST                       │ ← AI applications (not AI models!)
│ (Claude Desktop, VS Code)  │
│  ┌────────┐                │      ┌─────────────┐
│  │ CLIENT │────────────────┼─────→│ MCP SERVER  │ (your Python code!)
│  └────────┘                │      │  RESOURCES  │
│  ┌────────┐                │      │  TOOLS      │
│  │ CLIENT │────────────────┼──┐   │  PROMPTS    │
│  └────────┘                │  │   └──────┬──────┘
└────────────────────────────┘  │          │ (acts as client)
        ↑                       │          ↓
        │ sampling (ask LLM)    └──→ other MCP server
        └──────────────────────────────────┘
Bidirectional connections
Key insights:
• MCP extends AI applications, not AI models!
• Servers can also be clients to other servers
• Chaining enables complex agent workflows! 🚀
For applications to read
For AI models to execute
For human consumption
Resources: Application chooses what to read | Tools: AI model chooses what to call | Prompts: Human chooses what to use
JSON-RPC was chosen because it's boring! The transfer mechanism doesn't matter, so MCP reuses the proven solution from LSP. Bidirectional connections let servers call back to clients.
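Concretely, every MCP message is plain JSON-RPC 2.0 over the chosen transport. A tool invocation looks roughly like this (the `tools/call` method name comes from the MCP spec; the payload values are illustrative):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "word_count",
    "arguments": {"text": "hello world"}
  }
}
```

and the server answers with a matching-`id` result, e.g. `{"jsonrpc": "2.0", "id": 1, "result": {"content": [{"type": "text", "text": "2"}]}}`.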
Server can ask client's LLM for help without sharing API keys!
Enables chaining: Server A → Server B → Client's LLM
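On the wire, sampling is just another JSON-RPC request, this time sent from the server to the client. A sketch based on the spec's `sampling/createMessage` method (payload values illustrative):

```json
{
  "jsonrpc": "2.0",
  "id": 7,
  "method": "sampling/createMessage",
  "params": {
    "messages": [
      {"role": "user", "content": {"type": "text", "text": "Summarize this diff"}}
    ],
    "maxTokens": 200
  }
}
```

The client routes this to its own LLM and returns the completion, which is why the server never needs an API key.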
# pip install mcp
import asyncio
from mcp.server import Server
from mcp.server.stdio import stdio_server
from mcp.types import Resource, TextContent, Tool

app = Server("my-server")

@app.list_resources()
async def list_resources():
    return [Resource(uri="file://README.md", name="Project README")]

@app.read_resource()
async def read_resource(uri):
    with open(str(uri).replace("file://", "")) as f:
        return f.read()

@app.list_tools()
async def list_tools():
    return [Tool(name="word_count", description="Count words in text",
                 inputSchema={"type": "object",
                              "properties": {"text": {"type": "string"}},
                              "required": ["text"]})]

@app.call_tool()
async def call_tool(name: str, arguments: dict):
    if name == "word_count":
        count = len(arguments["text"].split())
        return [TextContent(type="text", text=str(count))]
    raise ValueError(f"Unknown tool: {name}")

async def main():
    # stdio_server() yields the read/write streams for the stdio transport
    async with stdio_server() as (read, write):
        await app.run(read, write, app.create_initialization_options())

if __name__ == "__main__":
    asyncio.run(main())
This is a complete, working MCP server! ✨
{
"mcpServers": {
"my-server": {
"command": "python",
"args": ["my_mcp_server.py"]
}
}
}
Local file access
SQLite explorer
Repository history
Slack, Discord
Weather, News
Linting, Testing
GitHub Issues
Custom dashboards
What will you build?
Maintained by Anthropic
• Curated list of servers
• Basic quality checks
• Regular updates
• modelcontextprotocol.io/servers
Community-driven list
• More comprehensive
• Faster additions
• Less vetting
• GitHub: awesome-mcp-servers
MCP servers run with YOUR permissions!
• They can access files, run commands, make network requests
• No sandboxing by default - full system access
• Always review code before running unknown servers
• Consider running in containers or VMs for isolation
🔍 Trust but verify - inspect server code before installation!
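One way to get that isolation: have the client launch the server through Docker instead of invoking Python directly, so the server only sees what the container exposes. A sketch of the config (`my-mcp-image` is a hypothetical image name; note `-i`, since the stdio transport needs stdin kept open):

```json
{
  "mcpServers": {
    "my-server": {
      "command": "docker",
      "args": ["run", "--rm", "-i", "my-mcp-image"]
    }
  }
}
```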
Plenty of room for innovation!
Active Discord & GitHub discussions
Thank you for your attention!
Time to build some amazing MCP servers! 🚀
GitHub: github.com/ephes
Mastodon: @jochen@fedi.wersdoerfer.de
Email: jochen-mcp@wersdoerfer.de