I had been wondering about MCP servers for a long time, and finally I gave it a quick try.

idea about what to “mcp”

There are many MCP servers out there, doing all kinds of tasks. For example, there is a nice curated repo here where you can find all kinds of MCP servers.

But where should I start with my own MCP server experience? I wondered for a long time. Finally I settled on something that LLMs historically don’t do well: calculations.

Don’t get me wrong, LLMs nowadays can do calculations, even for complicated math problems. But if you remember, they used to be unable to tell which is larger: 3.9 or 3.11. That problem led me to believe that maybe I don’t want to trust an LLM to do calculations, and would rather trust a programming language to do that.
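As a quick illustration (pure Python, just my own sketch), the ambiguity disappears once a programming language is forced to pick an interpretation: as decimals, 3.9 is larger; as version numbers, 3.11 comes later.

```python
# As plain floating-point numbers, 3.9 is the larger value.
print(3.9 > 3.11)        # True

# Interpreted as version tuples (major, minor), 3.11 comes after 3.9.
print((3, 9) < (3, 11))  # True
```

Either answer is defensible, which is exactly why a deterministic tool beats letting the model guess which interpretation you meant.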

An MCP server provides that bridge.
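To make that concrete, here is a minimal sketch of the kind of calculation tool an MCP server could expose. The `calculate` name and the `ast`-based evaluator are my own assumptions for illustration, not the actual code from the repo:

```python
import ast
import operator

# Whitelist of arithmetic operators the tool is allowed to evaluate.
_OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
    ast.Pow: operator.pow,
    ast.USub: operator.neg,
}

def calculate(expression: str) -> float:
    """Deterministically evaluate a plain arithmetic expression.

    Only numeric literals and whitelisted operators are accepted,
    so there is no risk of executing arbitrary code via eval().
    """
    def _eval(node: ast.AST) -> float:
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_eval(node.left), _eval(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_eval(node.operand))
        raise ValueError(f"unsupported expression: {expression!r}")

    return _eval(ast.parse(expression, mode="eval").body)
```

Registered as an MCP tool, a function like this lets the LLM hand arithmetic off to the language runtime instead of computing it token by token.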

the server

Here is the repo.

references during development

issues when developing

  1. Timing out after connecting successfully to Claude AI. I still don’t know why, but different clients have different tempers. Eventually it worked with GitHub Copilot.

streamable over sse

Key improvements of Streamable HTTP over SSE

  • Single endpoint: Unlike the older SSE model that required separate endpoints, Streamable HTTP uses a single endpoint for both client requests and server responses.
  • Bidirectional communication: It enables real-time bidirectional streaming through a unified protocol, allowing both client and server to send messages on the same connection.
  • Compatibility: It maintains compatibility with standard HTTP infrastructure by operating over plain HTTP, making it suitable for use with various middleware and serverless platforms.
  • Resilience: It includes features like session resume on drops, which improves reliability compared to the traditional SSE model.
  • Flexibility: It can use SSE as an efficient mechanism for streaming, making it a hybrid approach that combines the strengths of both protocols.
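The single-endpoint point can be sketched with a rough example of the wire format: the client POSTs a JSON-RPC message and, via the `Accept` header, tells the server it can handle either a plain JSON reply or an SSE stream on the same connection. The `/mcp` path and host below are assumptions; the spec does not mandate a specific path.

```http
POST /mcp HTTP/1.1
Host: example-server.local
Content-Type: application/json
Accept: application/json, text/event-stream

{"jsonrpc": "2.0", "id": 1, "method": "tools/list"}
```

The server can then answer with a single `application/json` body, or switch the response to `text/event-stream` and push multiple messages before closing.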

further thoughts

People are saying it’s just a wrapper around APIs, which sounds about right based on my quick experience. However, it does provide a decoupled way of connecting APIs to LLMs. I don’t know what will change in the future, but it’s something worth keeping an eye on.