Per-Stack Plan Selection¶
Muppy App Server templates are generic: they carry no built-in workload assumptions. Sizing is a workload concern — you pick CPU and RAM based on what your app actually does. This page is the reference for that choice.
Why two columns: Build and Run¶
Many apps need much more memory to build than to run. You can provision one App Server to build and run your app, or — more commonly in production — provision a Build server that compiles the app and a separate Run server that only executes the artifact. Pick each plan according to its actual role.
Add headroom for AI agents¶
If you run an AI coding agent (Claude Code, GitHub Copilot CLI, Cursor, Aider, …) on the server itself, add its footprint on top of the Build+Run numbers:
- Claude Code CLI: reserve ~4 GB extra RAM (Node runtime + context cache + tool processes).
- Smaller agents (shell-based wrappers, language servers): +1–2 GB.
Agents running on the developer's laptop don't count — only server-side agents consume server RAM.
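The headroom arithmetic above can be sketched as a small helper. This is a hypothetical script, not a Muppy tool; the plan size and agent overheads are the estimates from this page:

```shell
#!/bin/sh
# Sketch: add server-side agent headroom on top of a Run plan.
# plan_ram_gb and the overhead table are assumptions taken from this page.
plan_ram_gb=2          # Run plan RAM from the table (e.g. Node, recommended)
agent="claude-code"    # server-side agent, or "none"

case "$agent" in
  claude-code) overhead_gb=4 ;;  # ~4 GB: Node runtime + context cache + tools
  small-agent) overhead_gb=2 ;;  # +1-2 GB: shell wrappers, language servers
  *)           overhead_gb=0 ;;  # laptop-side agents cost the server nothing
esac

total_gb=$((plan_ram_gb + overhead_gb))
echo "provision at least ${total_gb} GB RAM"
```

With the values above the helper asks for a 6 GB plan: the 2 GB Run footprint plus ~4 GB for a server-side Claude Code instance.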
Stack → CPU / RAM recommendations¶
Values are vCPU × RAM. Build and Run are independent roles — size each server for its actual job.
| Stack | Build — minimum | Build — recommended | Run — minimum | Run — recommended |
|---|---|---|---|---|
| Static site (plain HTML/CSS, Hugo, mkdocs) | 1 × 1 GB | 1 × 1 GB | 1 × 1 GB | 1 × 1 GB |
| Pure Python (stdlib only, no native deps) | 1 × 1 GB | 2 × 2 GB | 1 × 1 GB | 1 × 2 GB |
| Go (single-binary) | 2 × 2 GB | 2 × 2 GB | 1 × 1 GB | 1 × 1 GB |
| Node (Next.js, Vite, webpack bundle) | 2 × 2 GB | 2 × 4 GB | 1 × 1 GB | 2 × 2 GB |
| Python with native deps (pillow, numpy, psycopg2) | 2 × 2 GB | 2 × 4 GB | 1 × 1 GB | 2 × 2 GB |
| Ruby / Rails (bundle install, asset precompile) | 2 × 2 GB | 2 × 4 GB | 2 × 2 GB | 2 × 2 GB |
| Java / JVM (Gradle, Maven, Spring Boot, Quarkus) | 2 × 4 GB | 2 × 4 GB | 2 × 2 GB | 2 × 4 GB |
| .NET / MSBuild | 2 × 4 GB | 2 × 4 GB | 2 × 2 GB | 2 × 4 GB |
| Rust (release build) | 2 × 4 GB | 4 × 8 GB | 1 × 1 GB | 2 × 2 GB |
| C++ (CMake, large projects) | 2 × 4 GB | 4 × 8 GB | 1 × 1 GB | 2 × 2 GB |
The recommended columns are the plans we most often see customers settle on after a first-round sizing incident.
Why the build column is often bigger¶
Build-time memory is dominated by the toolchain's daemons and in-memory work, not by your source code size:
- Gradle / Maven / Kotlin compiler: JVM heap defaults to 512 MB – 1 GB per daemon; parallel compilation pushes the peak to 1.5 – 2 GB. On a 1 GB machine, the Linux OOM killer intervenes.
- Node bundlers (Next.js, webpack, Vite production build): V8 heap default is ~1.5 GB; memory-intensive passes (tree-shaking, minification, source maps) sustain high usage for seconds to minutes.
- Python native deps: `pip install` of a wheel is fine on 1 GB, but a source install with C extensions (psycopg2, uwsgi, some ML libs) needs a working compiler and enough RAM for headers/templates — 2 GB minimum.
- Rust release build: `cargo build --release` compiles with optimizations; per-crate peak memory can exceed 1 GB for medium-sized workspaces.
- C++ with templates: instantiation memory can be surprisingly large; 1 GB is usually too tight.
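A practical corollary: you can cap the toolchain heaps explicitly instead of letting their defaults collide with a small machine. A minimal sketch for the Node and Gradle cases — the 3 GB and 2 GB values are examples to adjust, not requirements:

```shell
#!/bin/sh
# Cap the V8 heap for a production bundle (value in MB; default is ~1.5 GB).
export NODE_OPTIONS="--max-old-space-size=3072"

# The Gradle daemon's heap is set in gradle.properties, not on the CLI.
# This appends to the project-local file; adjust -Xmx to your plan.
printf 'org.gradle.jvmargs=-Xmx2g\n' >> gradle.properties

echo "Node heap: ${NODE_OPTIONS}"
```

Capping heaps below physical RAM turns a silent OOM kill into an explicit, diagnosable out-of-memory error from the toolchain itself.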
Why the run column is often small¶
Most apps run in a fraction of their build memory:
- A compiled Go or Rust binary uses the RAM its code actually needs, often under 100 MB.
- A Spring Boot JVM runs comfortably in 256 – 512 MB after JIT warm-up.
- A Next.js production server needs less than its build did — typically 2 GB suffices.
- Python/Ruby servers land anywhere from 50 MB (tiny Flask) to several hundred MB (Rails with multiple Puma workers).
If you size the run server too generously you pay for idle RAM. Size it to the actual runtime peak + headroom; scale up on evidence, not fear.
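To size against evidence, read the process's actual peak resident memory rather than guessing. A Linux-only sketch using `/proc` — here it measures the shell itself via `$$`; substitute your app server's PID in practice:

```shell
#!/bin/sh
# VmHWM ("high water mark") is the peak resident set size, in kB,
# recorded by the kernel for the process's lifetime so far.
pid=$$                                   # stand-in: measure this shell itself
peak_kb=$(awk '/^VmHWM:/ {print $2}' "/proc/${pid}/status")
echo "peak RSS so far: $((peak_kb / 1024)) MB"
```

Run this (with the real PID) after a traffic peak, add headroom, and that is your Run plan — not the build number.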
What to do when sizing was wrong¶
You usually notice two symptoms:
- Build server too small: OOM killer log in `mpy_setup.sh` output (killed, exit code 137, Gradle daemon "disappeared"). Resize to a larger plan (`mgx_resize_server`) and retry.
- Run server too small: service restart storm, OOM in `journalctl`, slow first request. Resize up.
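The "exit code 137" symptom is easy to recognize once you know where it comes from: 137 is 128 + 9 (SIGKILL), the exit status a kernel OOM kill leaves on the child process. A sketch that simulates the signature:

```shell
#!/bin/sh
# Simulate a build step that gets SIGKILLed (as the OOM killer would do it).
sh -c 'kill -KILL $$'
status=$?

# 137 = 128 + SIGKILL(9): the process did not exit, it was killed.
if [ "$status" -eq 137 ]; then
  echo "exit 137: build step was SIGKILLed - likely OOM, resize the build server"
fi
```

If your build wrapper checks for this status, it can distinguish "compiler error, fix the code" from "machine too small, resize the plan".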
Resizes are non-destructive — disk and data survive. Resize up aggressively, resize down conservatively (with a real measurement in hand).