FastClient — our in-house CRM, built in Rust + Axum with an integrated MCP server — already worked beautifully from Claude Code. A user drops a Bearer API key into their `mcpServers` config and Claude has secure, user-scoped access to customers and notes:
```json
{
  "mcpServers": {
    "fastclient": {
      "type": "http",
      "url": "https://fastclientcrm.mcsoftsolution.com/mcp",
      "headers": { "Authorization": "Bearer <key>" }
    }
  }
}
```
Solid for developers. But when we tried to connect the same server from claude.ai → Settings → Connectors → Add custom connector — the consumer UI that non-developers actually click — the form took only a URL. No field for pasting a Bearer token. The protocol wants the server to speak OAuth.
This post is the playbook we used to make that work: a minimum-viable OAuth 2.1 authorization server built into the same Axum binary, just enough to satisfy the MCP remote-connector spec, without turning a small self-hosted app into Keycloak.
## What the spec actually asks for
Custom connectors in claude.ai implement the MCP remote-server spec, which sits on top of four RFCs:
| RFC | What it gives you |
|---|---|
| 8414 | OAuth 2.0 Authorization Server Metadata — `/.well-known/oauth-authorization-server` |
| 9728 | OAuth 2.0 Protected Resource Metadata — `/.well-known/oauth-protected-resource` |
| 7591 | Dynamic Client Registration — `POST /oauth/register` |
| 7636 | PKCE with S256 |
The happy surprise: once you accept the shape of it, the whole thing is about 400 lines of Rust plus one SQL migration. The client (claude.ai) doesn't need a pre-provisioned Client ID/Secret — it registers itself on the fly.
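For concreteness, a minimal RFC 8414 discovery document for a server like this could look as follows. The endpoint paths are the ones used throughout this post; the exact field values are illustrative, not a dump of our production response:

```json
{
  "issuer": "https://fastclientcrm.mcsoftsolution.com",
  "authorization_endpoint": "https://fastclientcrm.mcsoftsolution.com/oauth/authorize",
  "token_endpoint": "https://fastclientcrm.mcsoftsolution.com/oauth/token",
  "registration_endpoint": "https://fastclientcrm.mcsoftsolution.com/oauth/register",
  "response_types_supported": ["code"],
  "grant_types_supported": ["authorization_code"],
  "code_challenge_methods_supported": ["S256"],
  "token_endpoint_auth_methods_supported": ["none"]
}
```

`token_endpoint_auth_methods_supported: ["none"]` is what tells clients that public, secret-less clients are acceptable — which is exactly what a dynamically registered claude.ai connector is.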
## The handshake, end-to-end
```mermaid
sequenceDiagram
    participant U as User (browser)
    participant C as claude.ai
    participant S as FastClient (Axum)
    U->>C: "Add custom connector" with MCP URL
    C->>S: POST /mcp (no auth)
    S-->>C: 401 + WWW-Authenticate: resource_metadata=...
    C->>S: GET /.well-known/oauth-protected-resource
    S-->>C: { authorization_servers: [...] }
    C->>S: GET /.well-known/oauth-authorization-server
    S-->>C: endpoints + PKCE methods
    C->>S: POST /oauth/register (public client, redirect_uris)
    S-->>C: { client_id }
    C->>U: redirect /oauth/authorize?... (PKCE challenge)
    U->>S: login + approve consent
    S->>U: 302 to claude.ai/callback?code=...
    U->>C: (browser follows)
    C->>S: POST /oauth/token (code + verifier)
    S-->>C: { access_token }
    C->>S: POST /mcp + Bearer access_token
    S-->>C: tools/list -> ...
```
Every hop except the consent UI is pure JSON-over-HTTP — no sessions, no cookies, no pre-shared secrets.
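The dynamic-registration hop is the one that removes any need for pre-provisioned credentials. A sketch of the exchange — field values here are illustrative, and claude.ai's real redirect URI and metadata may differ:

```http
POST /oauth/register HTTP/1.1
Host: fastclientcrm.mcsoftsolution.com
Content-Type: application/json

{
  "client_name": "Claude",
  "redirect_uris": ["https://claude.ai/api/mcp/auth_callback"],
  "token_endpoint_auth_method": "none",
  "grant_types": ["authorization_code"]
}

HTTP/1.1 201 Created
Content-Type: application/json

{
  "client_id": "c1a9f3e2-7b4d-4a08-9a61-2f5d8e0b1c33",
  "redirect_uris": ["https://claude.ai/api/mcp/auth_callback"],
  "token_endpoint_auth_method": "none"
}
```

The server's only real job here is to validate and persist the `redirect_uris`, because those are what it will later check against at `/oauth/authorize` time.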
## The design choice that paid off most
The biggest win was refusing to introduce a second credential type. FastClient already has an `api_keys` table — SHA-256-hashed, user-scoped, already used by the MCP Bearer-auth extractor. Instead of inventing OAuth access tokens with their own table, lifetime management, and revocation logic, we added one column:
```sql
ALTER TABLE api_keys
  ADD COLUMN oauth_client_id TEXT
  REFERENCES oauth_clients(client_id) ON DELETE SET NULL;
```
When `/oauth/token` succeeds, it does exactly what the manual "Create API Key" UI does — mint a raw key, hash it, insert the row — except it stamps the row with the `oauth_client_id` the user just authorized:
```rust
// Mint a key exactly the way the manual "Create API Key" flow does,
// then stamp the row with the authorizing OAuth client.
let raw_key = format!("fc_{}", Uuid::new_v4().to_string().replace('-', ""));
let key_hash = hash_api_key(&raw_key);

sqlx::query(
    "INSERT INTO api_keys (user_id, key_hash, name, oauth_client_id)
     VALUES ($1, $2, $3, $4)",
)
.bind(row.user_id)
.bind(&key_hash)
.bind(&display_name)
.bind(&client_id)
.execute(&mut *tx)
.await?;

// The raw key is returned exactly once, as the OAuth access token.
Ok(Json(TokenResponse {
    access_token: raw_key,
    token_type: "Bearer".into(),
    scope: row.scope,
}))
```
The payoff cascades:
- Zero changes to the MCP auth extractor. `Authorization: Bearer <whatever>` still just hashes the key and resolves a `user_id`. OAuth tokens and hand-minted keys flow through the same code path.
- The existing Settings → API Keys page doubles as a connector-management UI. The claude.ai connection shows up as a row named after the app with a `Revoke` button. No new screen to build.
- Revocation Just Works. Deleting the row (from the UI, or via `ON DELETE CASCADE` when the user account is gone) kills that access token instantly.
You don't always need a new abstraction.
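To make the "one code path" point concrete, here is a minimal sketch of the header-parsing half of such an extractor. The function name and shape are hypothetical — the real extractor goes on to hash the key and resolve a `user_id` from the database — but the point stands: nothing in this path knows or cares whether the key came from the UI or from `/oauth/token`.

```rust
/// Pull the raw key out of an `Authorization: Bearer <key>` header value.
/// Hypothetical sketch: the real extractor then hashes this and looks the
/// hash up in `api_keys`, identically for UI-minted and OAuth-issued keys.
/// (Note: a production version may also want to match the scheme
/// case-insensitively, as HTTP auth schemes are case-insensitive.)
fn bearer_token(authorization: &str) -> Option<&str> {
    authorization
        .strip_prefix("Bearer ")
        .map(str::trim)
        .filter(|t| !t.is_empty())
}

fn main() {
    assert_eq!(bearer_token("Bearer fc_abc123"), Some("fc_abc123"));
    assert_eq!(bearer_token("Basic dXNlcg=="), None);
    assert_eq!(bearer_token("Bearer "), None);
}
```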
## PKCE verification is a one-liner
PKCE looks intimidating until you realize the entire verification is "re-run the challenge and compare":
```rust
use base64::Engine as _; // brings the `.encode()` trait method into scope
use constant_time_eq::constant_time_eq;
use sha2::{Digest, Sha256};

fn verify_pkce(verifier: &str, expected_challenge: &str) -> bool {
    // Re-derive the S256 challenge from the verifier and compare in constant time.
    let digest = Sha256::digest(verifier.as_bytes());
    let computed = base64::engine::general_purpose::URL_SAFE_NO_PAD.encode(digest);
    constant_time_eq(computed.as_bytes(), expected_challenge.as_bytes())
}
```
The only footgun is requiring S256 and rejecting `plain` — which is a four-line guard in the authorize handler. Modern MCP clients don't send `plain` anyway, but your discovery document should be explicit:
"code_challenge_methods_supported": ["S256"]
## The WWW-Authenticate hint everyone forgets
Easy detail to miss: when an MCP client probes your `/mcp` endpoint unauthenticated and gets a 401, RFC 9728 says you should include a `WWW-Authenticate` header pointing them at your resource metadata. Claude.ai uses this to bootstrap discovery without needing to know the exact URL shape in advance:
```rust
let www_auth = format!(
    "Bearer realm=\"mcp\", resource_metadata=\"{base}/.well-known/oauth-protected-resource\""
);
(
    StatusCode::UNAUTHORIZED,
    [(header::WWW_AUTHENTICATE, www_auth)],
    Json(json!({ "error": "Missing authorization header" })),
)
    .into_response()
```
Most clients will guess the discovery URL correctly from the host — but advertising it explicitly means your server works out of the box with any new RFC-9728-aware client, not just the one you tested against.
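On the wire, the unauthenticated probe and the hint it gets back look like this (response body illustrative, matching the handler above):

```http
POST /mcp HTTP/1.1
Host: fastclientcrm.mcsoftsolution.com

HTTP/1.1 401 Unauthorized
WWW-Authenticate: Bearer realm="mcp", resource_metadata="https://fastclientcrm.mcsoftsolution.com/.well-known/oauth-protected-resource"
Content-Type: application/json

{ "error": "Missing authorization header" }
```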
## Consent UI: surviving the login hop
FastClient's SvelteKit layout bounces unauthenticated users to `/login`. Fine for the customer list; catastrophic for `/oauth/consent?code_challenge=…&state=…` — if those query params are lost on the round-trip, the OAuth flow stalls half-completed and claude.ai's callback errors out.
The fix is pedestrian: preserve the current URL through the hop.
```js
if (!isPublic && !auth.isAuthenticated) {
  const next = $page.url.pathname + $page.url.search;
  goto(`/login?next=${encodeURIComponent(next)}`);
}
```
And post-login:
```js
const next = $page.url.searchParams.get('next');
// Accept only same-origin paths: reject absolute URLs outright, and also
// protocol-relative ones like //evil.com, which browsers treat as a full URL.
goto(next && next.startsWith('/') && !next.startsWith('//') ? next : '/dashboard');
```
The path check is the only security-critical part of that snippet — without it, an attacker can craft `?next=https://evil.com` and turn your login endpoint into an open redirector. Note that `startsWith('/')` alone isn't enough: browsers interpret `//evil.com` as a protocol-relative URL, so that form has to be rejected too. Always validate that the redirect target is a same-origin path.
## Deploying without angering certbot
One operational trap bit us: adding new nginx locations to a Let's Encrypt-managed vhost. Our first-time setup script wrote the server block from a template; certbot later inlined the SSL stanzas — `listen 443 ssl;`, cert paths, `include options-ssl-nginx.conf;` — into the same file. Re-running `setup.sh` would have blown those away and taken HTTPS down until someone thought to re-run certbot.
The fix was a separate in-place patch script that:
- Backs the config up with a timestamp suffix.
- Parses the file and bails if already patched (idempotent — safe to re-run).
- Inserts two new `location` blocks — `/oauth/` and `/.well-known/oauth-` — at a known marker line.
- Runs `nginx -t` before `systemctl reload`, so a syntax error never reaches the running daemon.
One subtlety worth pointing out: the `/.well-known/oauth-` location is deliberately narrow. Broader forms like `/.well-known/` would shadow `/.well-known/acme-challenge/`, which certbot uses for HTTP-01 cert renewals. That's the kind of landmine you discover three months later when a renewal silently fails.
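For reference, the two inserted blocks look roughly like this. This is a sketch, not our exact config: the upstream port and proxy headers are assumptions about a typical Axum-behind-nginx setup.

```nginx
# Narrow on purpose: matches /.well-known/oauth-authorization-server and
# /.well-known/oauth-protected-resource, but NOT /.well-known/acme-challenge/.
location /.well-known/oauth- {
    proxy_pass http://127.0.0.1:3000;   # assumed Axum upstream
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-Proto $scheme;
}

location /oauth/ {
    proxy_pass http://127.0.0.1:3000;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-Proto $scheme;
}
```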
## Bonus: sqlx "dead code" that isn't dead
While tidying the PR, we hit a classic Rust warning:
```
warning: field `key_hash` is never read
```
The instinct is `#[allow(dead_code)]`. The better move is to ask: why is this field here at all?
With sqlx's `FromRow` derive, your struct doesn't have to mirror the database columns — it only has to declare the ones you want to consume. Extra columns in `SELECT *` are silently ignored. For `ApiKey`, the hash was only ever written (at insert and lookup time) and never read through the struct. The field was a holdover from an ActiveRecord-shaped instinct.
Removing it beats silencing the warning, and the struct becomes honest about what the code actually touches:
```rust
#[derive(Debug, Clone, sqlx::FromRow, Serialize)]
pub struct ApiKey {
    pub id: Uuid,
    pub user_id: Uuid,
    pub name: String,
    pub last_used_at: Option<DateTime<Utc>>,
    pub created_at: DateTime<Utc>,
}
```
`#[allow(dead_code)]` is fine for genuinely load-bearing-but-unread code. But when the compiler flags something, it's worth a second look — the warning is often trying to tell you something true.
## Wrap-up
Remote MCP with OAuth 2.1 sounds heavier than it is. Three routes (`/oauth/register`, `/oauth/authorize`, `/oauth/token`), two discovery docs, one table-column addition, and a PKCE verifier boil down to a few hundred lines of Rust. The gnarliest bits aren't the protocol — they're the operational ones: not breaking your existing TLS config, keeping the access-token abstraction singular, and validating `?next` redirect targets.
If you're running your own MCP server and want claude.ai users to be able to plug it in the same way they connect Google Calendar, that's the whole playbook.