When someone asks an AI assistant about a company, what does it actually know? Usually: whatever it scraped during training months ago, summarized, possibly wrong. The company may have rebranded, changed services, or shipped something interesting — the AI has no idea.
We built something different. MC Soft Solution runs a live Model Context Protocol (MCP) server embedded directly in this website. Any MCP-compatible AI can connect, ask questions, and get current, structured answers about who we are, what we build, and how to reach us. No scraping. No stale data. A live endpoint that we own and control.
This post is the technical story behind that. And in a small meta-twist: it was written and published through the same system it describes.
## What MCP Is
Model Context Protocol is an open standard from Anthropic that gives AI assistants a structured way to connect to external data and tools. Think of it as an API designed specifically for AI agents — instead of parsing HTML, the agent sends a typed request and gets a typed answer back.
MCP is already supported natively in Claude, Cursor, and a growing number of AI-powered tools. For a business, having an MCP server means your information is machine-readable in real time — not a cached snapshot, but a live endpoint that answers exactly what you've told it to say.
## Two Layers of Access
We run two MCP servers in a single Axum application, mounted at different paths with different access controls:
The public endpoint (/mcp) is open to any AI. No keys, no accounts. A Claude user, a Cursor workspace, a custom AI agent — any MCP client can connect and query our knowledge base directly.
The private endpoint (/mcp/admin) requires a Bearer token. This is our toolset for managing content: publishing posts, updating the knowledge base, reviewing drafts before they go live.
Both servers run inside the same Rust binary, served alongside the rest of the site. The overhead is negligible. The infrastructure complexity is zero.
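The guard on the admin router reduces to a single header check. Here is a std-only sketch of that predicate; in the real server it runs inside Axum middleware layered onto the /mcp/admin router, and the wiring details are omitted:

```rust
// Returns true only for "Authorization: Bearer <expected_token>".
// The public /mcp router has no such check: any MCP client may call it.
fn is_authorized(auth_header: Option<&str>, expected_token: &str) -> bool {
    match auth_header.and_then(|v| v.strip_prefix("Bearer ")) {
        Some(token) => token == expected_token,
        None => false,
    }
}

fn main() {
    assert!(is_authorized(Some("Bearer s3cret"), "s3cret"));
    assert!(!is_authorized(Some("Bearer wrong"), "s3cret"));
    assert!(!is_authorized(Some("Basic s3cret"), "s3cret"));
    assert!(!is_authorized(None, "s3cret"));
}
```

Everything else about the two endpoints is identical: same binary, same database pool, same MCP transport, just two routers mounted at different paths.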
## What AI Agents Can Discover
The public knowledge base is organized into named sections — services, about, portfolio, faq — each queryable by any connected AI. An assistant asks for a section by name and gets back exactly the structured entry we published.
The answer the AI gives reflects what we've actually published — not a cached, potentially outdated summary from a training crawl six months ago. We own the endpoint. We decide what it says. When services change, we update one knowledge entry and every AI that queries our server gets the new answer immediately.
## How We Publish Content
The private endpoint is what makes the publishing workflow unusual. Here's the exact sequence this post went through:
1. publish_post accepts full Markdown — headings, tables, code fences with syntax hints, inline SVG, and Mermaid fences for client-side diagram rendering. The server renders it to HTML using pulldown-cmark and sanitizes it through ammonia with extended SVG attribute support. The post lands in the database as a draft — not visible to the public.
2. get_blog_post retrieves the saved post, including the rendered HTML. We review exactly what visitors will see before the post is live.
3. update_post(slug, published: true) makes it live. One tool call. Instant.
No CMS login. No file upload. No deploy trigger. The entire workflow runs through the AI assistant's tool interface, and the output lands directly in PostgreSQL.
## The Stack
Running an MCP server inside a Rust web application turned out to be surprisingly compact:
| Component | Library |
|---|---|
| MCP transport | rmcp (Rust MCP SDK, streamable HTTP) |
| Markdown rendering | pulldown-cmark |
| HTML sanitization | ammonia |
| Database | PostgreSQL via sqlx |
| HTTP framework | Axum |
The whole site — web server, both MCP endpoints, blog, admin dashboard, contact form — ships as a single ~4 MB binary to a VPS. No containers, no orchestration, no cold starts.
## Why This Matters for Businesses
AI assistants are changing how people discover services. The first page of search results is increasingly bypassed in favour of a direct answer from Claude or another AI. If your business information isn't structured and machine-readable at a live endpoint, you're invisible to that layer.
An MCP knowledge base is always current because you own the server. When your service offering changes, you update one entry. Every AI that queries your endpoint gets the new answer immediately — no waiting for a recrawl, no hoping the next training run picks it up.
We built this for ourselves first. But the pattern scales to any business. A services company, a product team, a consulting firm — anything that benefits from being accurately discoverable by AI benefits from a public MCP endpoint.
If this kind of thinking appeals to you — AI-native architecture, Rust backends that actually ship, tools that reduce friction instead of adding it — get in touch. We build things like this for clients too.