Designing Your mgx¶
TL;DR
One app per mgx. Install whatever that app needs to run and develop — satellites shipped by the upstream, and single-consumer dependencies (Redis, Postgres, Mongo, Mailpit, …). Don't host services meant for other mgx: share them through a dedicated mgx or a SaaS.
What's a mgx?¶
A mgx is Manganese's unit of deployment — an LXC container or VM today, a multi-container Kubernetes atom tomorrow. Whichever form, the rules on this page apply identically: a mgx exists to host one application and everything that application needs.
The rule: one app + what it needs¶
A mgx hosts one application (WSO2 MI, Odoo, a Rails backend, a custom SaaS…) plus two categories of co-located components.
Always IN — Satellites¶
Components that the app's upstream ships with the app. They share the app's release cycle, have the same owner, and break when the app breaks.
Typical satellites:
- Control planes bundled with the app — WSO2 ICP for WSO2 MI, Django Admin, Odoo Database Manager, Strapi Admin.
- Public APIs — the app's API surface, served by the same process or a sibling process, typically path-routed as `/api/*` on the main URL.
- App-specific exporters — `node_exporter` scoped to the app's metrics, WSO2 MI metrics exporter, Odoo Prometheus exporter.
- Webhook receivers and OAuth callbacks shipped as part of the app.
Always IN — App-local dependencies¶
Stateful or technical services that your app needs to run or develop, and that no other mgx talks to. Install them on the same mgx so your app can start.
Typical app-local dependencies:
- Runtime stores — Redis cache, MongoDB, local Postgres, Elasticsearch, message brokers.
- Dev-time companions — Mailpit for email testing, Adminer or pgAdmin for database inspection, Redis Commander, Swagger UI, Storybook.
Dev-time companions never need a public URL — they're reached via Code Server's proxy at <codr-url>/proxy/<N>/ where N is the local port they listen on.
The single-consumer test¶
One question decides whether something stays IN or must move OUT:
"Does any other mgx need to talk to this service?"
- No → IN. The service is app-local. It lives here.
- Yes → OUT. The service is cross-cutting. It belongs on its own mgx.
This test replaces any intuition about "infrastructure vs business logic". What matters is whether the service is shared.
OUT — two paths¶
Services that fail the single-consumer test go outside the mgx, in one of two ways.
Shared services → dedicated mgx. Self-hosted services that serve multiple apps:
- Central identity provider (Keycloak, Authentik).
- Aggregated monitoring dashboards (central Grafana, shared Prometheus).
- Internal artifact or container registries.
- Internal GitLab, Gitea, Jenkins.
- Shared Vault for secrets.
Each gets its own mgx, with its own primary URL.
Commodities with mature SaaS → external. Horizontal services that SaaS providers solve cheaply and at scale:
- Object storage → S3, R2, Backblaze.
- Transactional email → Mailgun, SES, Postmark.
- Message queues for cross-app fan-out → SQS, NATS Cloud.
- Analytics → Plausible Cloud, Matomo Cloud.
- Aggregated dashboards → Grafana Cloud, Datadog.
Decision tree¶
When in doubt about a new component, run the four questions in order and stop at the first match:
1. Is it shipped by the upstream with the app? → IN (satellite).
2. Is it single-consumer for this app — runtime or dev? → IN (app-local dependency).
3. Is it shared across multiple mgx? → Dedicated mgx.
4. Is it a commodity with a mature SaaS and no co-location constraint? → External SaaS.
Answers 1 and 2 keep it on this mgx. Answers 3 and 4 send it elsewhere. There is no fifth option.
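The four questions above are a strict first-match cascade, which can be sketched as a small Python function (the names `Placement` and `place` are illustrative, not part of any Manganese tooling):

```python
from enum import Enum

class Placement(Enum):
    IN_SATELLITE = "IN (satellite)"
    IN_APP_LOCAL = "IN (app-local dependency)"
    DEDICATED_MGX = "dedicated mgx"
    EXTERNAL_SAAS = "external SaaS"

def place(shipped_by_upstream: bool,
          single_consumer: bool,
          shared_across_mgx: bool,
          commodity_saas: bool) -> Placement:
    """Run the four questions in order and stop at the first match."""
    if shipped_by_upstream:
        return Placement.IN_SATELLITE
    if single_consumer:
        return Placement.IN_APP_LOCAL
    if shared_across_mgx:
        return Placement.DEDICATED_MGX
    if commodity_saas:
        return Placement.EXTERNAL_SAAS
    raise ValueError("No fifth option: re-examine the component.")
```

For example, a Redis cache used only by this app is `place(False, True, False, False)` → IN (app-local dependency), while a Redis shared by three apps is `place(False, False, True, False)` → dedicated mgx.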
For AI agents
When asked to install service X on a mgx, run this checklist:
1. Is X the app's upstream satellite? → IN, proceed.
2. Is X consumed only by this app (runtime or dev)? → IN, proceed.
3. Does any other mgx need to talk to X? → STOP. Recommend a dedicated mgx.
4. Is X a commodity with a mature SaaS (mail, S3, analytics, aggregated dashboards)? → STOP. Recommend the SaaS.
Only answers 1 and 2 authorise installation on the current mgx.
When to promote a dependency out¶
App-local dependencies stay IN as long as they remain single-consumer. You don't need to migrate on a deadline. Migrate when one of these signals appears:
- A second mgx needs to read or write the same data store. The service has become shared by definition — move it to a dedicated mgx.
- Backup, HA or compliance requirements exceed what colocation can reasonably provide.
- The dependency's upgrade cycle conflicts with your app's release cadence — you start deferring app releases because upgrading the dep is risky.
- You keep redeploying the same dep with identical configuration on every project. It has become commodity for you — switch to a SaaS.
Start on the mgx. Promote out when the pain emerges. That's the full policy.
Public exposure: two endpoints today¶
A mgx exposes exactly two Internet-facing endpoints — this surface is fixed and cannot be extended by installing anything on the server.
- Your app → `https://<app-url>`. Pipeline: Cloudflare → Sunray Zero-Trust → Traefik → `localhost:$APP_HTTP_PORT`. SSL terminates upstream. Your service binds `$APP_HTTP_PORT` in plain HTTP.
- Code Server IDE → `https://codr-<app-url>`. Pipeline: Cloudflare → Sunray Zero-Trust → Traefik → `code-server`. Authenticated user only.
Multiple concerns within your app — admin panel, API, webhook receivers — are handled by path routing on your main URL: https://<app-url>/api/*, /admin/*, /webhooks/stripe. This fits the vast majority of monolithic apps (Rails, Django, Laravel, Odoo, Next.js).
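A minimal Python sketch of this pattern — one process, plain HTTP on `$APP_HTTP_PORT`, all concerns path-routed on the single main URL. The `route` function and the fallback port are illustrative; only `$APP_HTTP_PORT` comes from this page:

```python
import os
from http.server import BaseHTTPRequestHandler, HTTPServer

def route(path: str) -> str:
    """Map a request path to the concern that handles it — all on one URL."""
    if path.startswith("/api/"):
        return "api"
    if path.startswith("/admin/"):
        return "admin"
    if path.startswith("/webhooks/"):
        return "webhooks"
    return "app"

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(route(self.path).encode())

if __name__ == "__main__":
    # SSL terminates upstream (Cloudflare → Sunray → Traefik), so the
    # app binds $APP_HTTP_PORT in plain HTTP. 8080 is an assumed fallback.
    port = int(os.environ.get("APP_HTTP_PORT", "8080"))
    HTTPServer(("127.0.0.1", port), Handler).serve_forever()
```

A real framework (Rails, Django, Laravel…) replaces `route` with its own router; the point is that one listening port covers admin panel, API, and webhook receivers.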
Dev escape hatch: /proxy/<N>/¶
For any local HTTP port on the server — app-local dependencies, dev-time companions, secondary services — Code Server exposes an authenticated proxy:
<codr-url>/proxy/<N>/
N is any local port. Only the authenticated user reaches this path; external clients (webhook providers, mobile SDKs) cannot.
Use /proxy/<N>/ for:
- Inspecting Adminer, pgAdmin, Redis Commander in dev.
- Testing a Mailpit UI.
- Browsing a Swagger UI or Storybook running locally.
- Checking a secondary service's dashboard or `/health` endpoint.
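The URL convention is simple enough to capture in a one-line helper — a sketch only, where the hostname and the Mailpit port 8025 in the example are illustrative:

```python
def proxy_url(codr_url: str, port: int) -> str:
    """Build the authenticated Code Server proxy URL for a local port.

    The trailing slash matters: the proxy path is /proxy/<N>/.
    """
    return f"{codr_url.rstrip('/')}/proxy/{port}/"

# e.g. a Mailpit UI listening on its default port 8025:
# proxy_url("https://codr-myapp.example.com", 8025)
#   → "https://codr-myapp.example.com/proxy/8025/"
```

Because the proxy sits behind Code Server authentication, this works for dev-time inspection but never for external callers such as webhook providers.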
Coming later — opt-in additional subdomains
A planned extension to App Servers will allow exposing two optional public subdomains (api-<app-url>, webhook-<app-url>, …) for cases that path routing cannot cover — typically external webhooks from Stripe/GitHub, CORS-strict front/back splits, or production-topology mirroring. Not available today. Until this ships, path routing on the main URL plus /proxy/<N>/ for dev-time tools cover the full feature set.
See the services.yml Service Definitions Guide for the service declaration syntax.
Concrete scenarios¶
| You want to… | Answer | Reason |
|---|---|---|
| Add Django Admin to your Django app | IN | Upstream satellite; path-routed as /admin/* |
| Add WSO2 ICP alongside WSO2 MI | IN | Upstream satellite |
| Add a Stripe webhook receiver | IN | Path-routed on the main URL (e.g. /webhooks/stripe) |
| Add Redis as a cache for your app | IN | App-local dependency, single-consumer |
| Use Postgres as your app database | IN | Already provisioned by Manganese — use PG* env vars |
| Add Mailpit for dev email testing | IN | Dev-time companion, reached via /proxy/<N>/ |
| Add Adminer to inspect the local DB | IN | Dev-time companion, reached via /proxy/<N>/ |
| Share a Redis cache across three apps | Dedicated mgx | Multi-consumer — no longer single-consumer |
| Add Grafana to monitor several apps | Dedicated mgx or SaaS | Cross-cutting dashboard |
| Add Keycloak as org-wide SSO | Dedicated mgx | Shared identity service |
| Send real transactional email in prod | SaaS | Commodity, mature SaaS market |
| Store long-term user uploads | SaaS (S3/R2) | Commodity object storage |
FAQ¶
Why not put everything on one mgx to save cost?
One mgx = one app identity. Piling services that serve other apps onto one mgx loses that identity — backups, access control, upgrade schedules, and failure blast radius all become entangled. A dedicated mgx per shared service keeps each unit reasoning-sized.
What if my "single-consumer" Redis starts being used by another team?
It is now multi-consumer, so it stops qualifying as app-local. Plan a migration to a dedicated mgx (or a managed Redis service). Until the second consumer actually shows up, staying IN is fine.
What about my production database?
Treat it exactly like any dependency. If it is single-consumer, it can live on the same mgx. If HA or compliance requirements grow beyond what colocation supports, promote it to a dedicated mgx or a managed service. The decision is based on signals, not on the word "production".
I'm just prototyping — do these rules still apply?
Yes, but the promotion triggers almost never fire during a prototype, so in practice you install everything locally and move on. The rules protect you later, when one of the signals appears.
Related¶
- services.yml Service Definitions Guide — declare services that run on your mgx.
- Environment Variables (mgx) — conventions for runtime configuration.