Most deploy tutorials for Rust apps go one of two ways: "just use Docker" or "wire up GitHub Actions." Both are fine answers. Neither is the right answer when you're a solo developer with a $6/month VPS, no CI budget, and a project you just want running tonight.
Here's the pipeline I built to deploy a Rust/Axum server to a Debian 12 VPS — no Docker, no GitHub Actions, no third-party services. One command from my Mac, live in under two minutes.
The Stack
- Rust + Axum 0.8 — the server
- cargo-zigbuild + Zig 0.15 — cross-compilation from macOS to Linux x86_64
- Bun + Tailwind v4 — CSS build step
- SCP + SSH — artifact delivery
- Nginx + systemd — serving and process management on the VPS
Why Not Docker?
Docker is the right answer for teams. Reproducible environments, image registries, orchestration — all of that matters when more than one person is deploying, or when you need staging/prod parity across machines you don't control.
For a solo project on a VPS you own? Docker adds:
- A registry to push to (or a build step on the server)
- Networking overhead for a single-process app
- One more thing to understand when something breaks at 11pm
The alternative — cross-compile locally, SCP the binary, restart the service — is three steps. It's been working since before Docker existed.
The Cross-Compilation Problem (And the cargo-zigbuild Solution)
Compiling a Rust binary on macOS for Linux used to mean setting up a cross-compilation toolchain,
fighting linker flags, or maintaining a Linux build VM. cargo-zigbuild eliminates all of that.
Zig ships a C compiler and linker that target every major platform. cargo-zigbuild
wraps them so Cargo can use Zig as a cross-linker transparently: one command produces a Linux binary from a Mac.
```bash
# Install once
cargo install cargo-zigbuild
brew install zig

# Cross-compile for Linux x86_64
cargo zigbuild --release --target x86_64-unknown-linux-gnu
```
The output is a Linux binary, dynamically linked against glibc (cargo-zigbuild can even pin the glibc version by appending it to the target, e.g. `x86_64-unknown-linux-gnu.2.31`; target `x86_64-unknown-linux-musl` if you want a fully static binary). It works. No Dockerfile needed on the dev machine.
The Build Script
```bash
#!/usr/bin/env bash
set -e

echo "==> Building CSS..."
bun run build:css

echo "==> Cross-compiling for Linux..."
cargo zigbuild --release --target x86_64-unknown-linux-gnu

echo "==> Packaging artifact..."
mkdir -p dist
cp target/x86_64-unknown-linux-gnu/release/server dist/server
rm -rf dist/static && cp -R static dist/static  # tar runs from dist/, so static/ must be inside it
tar -czf dist/mcsoftsolution.tar.gz -C dist server static
echo "==> Build complete: dist/mcsoftsolution.tar.gz"
```
Three steps: CSS build, cross-compile, package. The tarball includes the binary and the static/ directory (fonts, images, compiled CSS).
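Before the artifact ships anywhere, it's cheap to list its contents and catch a missing binary or `static/` directory. A sketch with throwaway paths standing in for the real project layout:

```bash
# Recreate the dist/ layout with dummy files, then verify the tarball.
mkdir -p /tmp/pkgdemo/dist/static
printf 'fake binary' > /tmp/pkgdemo/dist/server
printf 'body{}'      > /tmp/pkgdemo/dist/static/site.css
tar -czf /tmp/pkgdemo/dist/app.tar.gz -C /tmp/pkgdemo/dist server static

# List what actually went in — a one-line smoke test before scp.
tar -tzf /tmp/pkgdemo/dist/app.tar.gz
```

Running `tar -tzf` on every build costs nothing and fails loudly the day a path in the build script drifts.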
The Deploy Script
```bash
#!/usr/bin/env bash
set -e

SERVER="user@mcsoftsolution.com"
REMOTE_DIR="/var/www/mcsoftsolution-v2"
TARBALL="dist/mcsoftsolution.tar.gz"

echo "==> Building..."
./scripts/build.sh

echo "==> Uploading to server..."
scp "$TARBALL" "$SERVER:/tmp/mcsoftsolution.tar.gz"

echo "==> Extracting and restarting..."
ssh -t "$SERVER" "
  sudo tar -xzf /tmp/mcsoftsolution.tar.gz -C $REMOTE_DIR &&
  sudo systemctl restart mcsoftsolution
"
echo "==> Done."
```
ℹ️ Why `ssh -t`? The `-t` flag allocates a pseudo-TTY so `sudo` can prompt for a password interactively. Without it, `sudo` has no terminal to prompt on and the remote command fails.
Nginx Serves Static Files Directly
A common pattern in Axum apps is to wire up tower-http's ServeDir
and let Rust handle everything. That works fine for development. In production, Nginx is faster
at static file serving and gives you better control over cache headers.
Nginx handles TLS and routes: static files go direct to disk with long-cache headers; everything else proxies to Axum on port 4200.
```nginx
server {
    listen 443 ssl;
    server_name mcsoftsolution.com;

    # Static files: Nginx serves directly, bypassing Axum
    location /static/ {
        alias /var/www/mcsoftsolution-v2/static/;
        expires 1y;
        add_header Cache-Control "public, immutable";
    }

    # Everything else: proxy to Axum
    location / {
        proxy_pass http://127.0.0.1:4200;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```
One-year cache headers on static assets. Zero Rust overhead for files that never change.
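One caveat with `public, immutable` and a one-year `expires`: it's only safe if a changed file also gets a new URL, otherwise returning visitors keep the stale copy for a year. A minimal content-hash scheme (throwaway paths; `cksum` used because it's POSIX and present on both macOS and Debian):

```bash
# Give the compiled CSS a content-derived filename so the URL changes
# whenever the content does — then "immutable" is harmless.
mkdir -p /tmp/cachedemo/static
printf 'body{}' > /tmp/cachedemo/static/site.css

hash=$(cksum /tmp/cachedemo/static/site.css | cut -d' ' -f1)
cp /tmp/cachedemo/static/site.css "/tmp/cachedemo/static/site.${hash}.css"

ls /tmp/cachedemo/static/
```

The template then references `site.<hash>.css`; how that name gets into the HTML (build-time templating, a manifest file) is left to the build script.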
The Port Conflict Nobody Mentions
When I first deployed, port 3000 was already in use on the server (leftover Node app). The standard advice is to audit what's running. My actual move: pick a different port.
```bash
ss -tlnp | grep -E '3000|4000|4200|8080'
```
Port 4200 was free. One sed on .env and the nginx config, done.
No recompile needed — PORT is read from the environment at startup.
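That "one sed" is literally this shape — shown here against a throwaway file rather than the real `.env` (the `-i.bak` suffix form works with both GNU and BSD sed, so the same line runs on the Mac and the VPS):

```bash
# Stand-in for the real .env
printf 'PORT=3000\n' > /tmp/portdemo.env

# Swap the port in place, keeping a .bak copy of the original
sed -i.bak 's/^PORT=.*/PORT=4200/' /tmp/portdemo.env

cat /tmp/portdemo.env
```

The same substitution, pointed at the nginx config's `proxy_pass` line, covers the other half of the change.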
⚠️ Hard-coding ports in Rust server code is almost never the right call. Read from env so you can change it without a recompile:

```rust
let port: u16 = std::env::var("PORT")
    .unwrap_or_else(|_| "4200".to_string())
    .parse()
    .expect("PORT must be a number");
```
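Taking that one step further, a std-only sketch (not the post's actual server code) that combines the env read with an eager bind check, so a port collision fails loudly at startup instead of surfacing as 502s from Nginx:

```rust
use std::net::TcpListener;

/// Read PORT from the environment (defaulting to 4200, as in this post)
/// and verify the process can actually bind it before doing anything else.
fn checked_port() -> u16 {
    let port: u16 = std::env::var("PORT")
        .unwrap_or_else(|_| "4200".to_string())
        .parse()
        .expect("PORT must be a number");
    // Bind and immediately drop the listener — we only want the error,
    // e.g. when a leftover Node app still holds the port.
    TcpListener::bind(("127.0.0.1", port))
        .unwrap_or_else(|e| panic!("could not bind port {port}: {e}"));
    port
}

fn main() {
    println!("port {} is free", checked_port());
}
```

In a real Axum server the listener would of course be kept and handed to the runtime rather than dropped; the point is that a bad `PORT` value dies with a readable message.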
What the First-Time Setup Looks Like
One-time setup on the server is a separate setup.sh that:
- Creates `/var/www/mcsoftsolution-v2`, owned by `www-data`
- Installs the systemd unit via heredoc (self-contained, no repo needed on the server)
- Backs up the existing nginx config before replacing it
- Prints instructions for creating `.env` with credentials
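For reference, a minimal unit in the shape that setup script would install. The service name, paths, and user are assumptions reconstructed from this post, not the exact file:

```ini
# /etc/systemd/system/mcsoftsolution.service (hypothetical; names assumed)
[Unit]
Description=mcsoftsolution Axum server
After=network.target

[Service]
User=www-data
WorkingDirectory=/var/www/mcsoftsolution-v2
EnvironmentFile=/var/www/mcsoftsolution-v2/.env
ExecStart=/var/www/mcsoftsolution-v2/server
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

`EnvironmentFile` is what makes the `PORT`-from-env pattern work under systemd, and `Restart=on-failure` covers crashes without any extra process manager.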
After that, every future deploy is just:
```bash
./scripts/deploy.sh
# CSS build → cross-compile → SCP → extract → systemctl restart
# One sudo password prompt. Done.
```
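The one thing this flow lacks is rollback: `tar -x` overwrites the old binary in place. For a solo setup, snapshotting a single previous release before extracting is usually enough. A sketch with throwaway paths standing in for `/var/www/mcsoftsolution-v2`:

```bash
# Simulate a release dir with a currently-deployed binary.
REMOTE_DIR=/tmp/releasedemo
mkdir -p "$REMOTE_DIR"
printf 'v1' > "$REMOTE_DIR/server"

# Before extracting the new tarball, keep one previous copy.
cp "$REMOTE_DIR/server" "$REMOTE_DIR/server.prev"

# ...tar -xzf would land here; simulated by writing the new binary.
printf 'v2' > "$REMOTE_DIR/server"

# Rollback is then: cp server.prev server && sudo systemctl restart <service>
```

One extra `cp` in the deploy script's remote command buys a one-command undo for a bad release.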
The Transfer.sh Detour (What I Tried First)
The original plan was to build locally, upload to transfer.sh, then have the server
curl the artifact down. Clean pull-model, no direct SSH needed for the file transfer.
Then three ephemeral file hosts were down simultaneously during the first real deploy, and that plan ended immediately. SCP over the SSH access I already had was the obvious fix in hindsight.
✅ The lesson: If you already have SSH access to the server, SCP is strictly better than routing through a third party. Zero availability dependency, one less URL to manage, and it's already in your toolchain.
Result
The site is live — HTTP 200, correct SSL, long-cache static files.
Deploy command is faster than waiting for a CI runner to spin up.
The v1 PHP site still runs on the same server, sharing the same PostgreSQL database.
No big-bang migration required — just systemctl restart when you're ready to switch.
The Setup If You Want To Copy It
Dev machine prerequisites:
```bash
cargo install cargo-zigbuild
brew install zig bun
rustup target add x86_64-unknown-linux-gnu
```
Server prerequisites:
- Nginx with an existing config you're willing to modify
- A `www-data` user (standard on Debian/Ubuntu)
- SSH key auth (you don't want to type a password on every deploy, just the sudo prompt)
Everything else is in `scripts/build.sh`, `scripts/deploy.sh`, and `deploy/setup.sh`.
CI is the right tool for teams. For a solo project, it's overhead. The Rust ecosystem's cross-compilation story is good enough now that you don't need a Linux build machine — just Zig and five minutes.