Self-hosted feature flags give you control and data residency, but many teams assume it means maintaining a second platform, custom integrations, and ops overhead. In practice, the decision is often strategic: who owns the release process, where evaluation data lives, and how the system scales from a single team to many services and environments. Flagent is built so you can run flags and A/B experiments in one place, with one UI and one API, and plug Ktor (or other Kotlin services) in with minimal setup — without turning feature flags into a separate “product” that needs its own ops playbook. This post walks through when to choose self-hosted, what you get in one deploy, how to run and integrate Flagent in production, and how to grow from a pilot to a platform.
When to choose self-hosted
SaaS flag services (LaunchDarkly, Unleash Cloud, etc.) are easy to start: sign up, get an API key, and go. They make sense for small teams or when you want zero ops. The trade-offs become visible as you scale or tighten compliance:
- Data and compliance — Your flag state and evaluation data live in the vendor’s cloud. For regulated industries (finance, healthcare, government) or strict data residency (EU-only, on-prem), self-hosted keeps everything inside your boundary. Audit trails and “who changed what” stay in your control.
- Cost at scale — SaaS pricing often scales with monthly active users (MAU) or evaluation count. At high traffic, the bill can grow quickly. Self-hosted is a fixed cost (your VMs/containers and DB); you can run high throughput without per-request fees. For platform teams serving many product teams, that predictability matters.
- Vendor lock-in — Migrating off a SaaS provider means changing SDKs, config, and workflows across all services. With Flagent you own the deployment and the data; you can export flags (YAML/JSON) and move to another host or fork the code. No “exit tax” when requirements change.
- Multi-environment and on-prem — Staging, production, and air-gapped or on-prem deployments are easier when you deploy the same artifact yourself. You control versions, patches, and network topology (e.g. Flagent only on internal network).
Flagent keeps the “you run it” part small: a single JVM backend (Ktor), PostgreSQL/MySQL/SQLite, and an admin UI (Compose for Web). No separate analytics or experiment product to wire together — flags, experiments, evaluation counts, and basic analytics live in the same deploy. One binary, one DB, one UI.
What you get in one deploy
- Feature flags — Create flags in the UI or via API; enable/disable, segment by rules (region, tier, etc.), and gradual rollout by percentage.
- A/B experiments — Variants, distributions, and deterministic bucketing (MurmurHash3). Same user always gets the same variant.
- Evaluation counts and analytics — Ingest evaluation events and optional custom events; see counts per flag and over time in the UI.
- Crash ingestion — Send crash reports from SDKs; list them in the UI and (in Enterprise) tie crashes to flags for safer rollouts.
- Webhooks and GitOps — Notify external systems on flag changes; export/import YAML/JSON; CLI for flags list, flags create, eval, and sync in CI.
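The deterministic bucketing mentioned above can be sketched in plain Kotlin. This is a simplified illustration, not Flagent's actual code: Flagent does use MurmurHash3, but the seed and the `"flagKey:entityId"` salt scheme below are assumptions for the sake of the example.

```kotlin
// Simplified illustration of deterministic percentage bucketing.
// Flagent uses MurmurHash3; the seed and salt scheme here are assumptions.
fun murmur3x86_32(data: ByteArray, seed: Int = 0): Int {
    val c1 = 0xcc9e2d51.toInt()
    val c2 = 0x1b873593
    var h = seed
    var i = 0
    while (i + 4 <= data.size) {
        var k = (data[i].toInt() and 0xff) or
            ((data[i + 1].toInt() and 0xff) shl 8) or
            ((data[i + 2].toInt() and 0xff) shl 16) or
            ((data[i + 3].toInt() and 0xff) shl 24)
        k *= c1; k = Integer.rotateLeft(k, 15); k *= c2
        h = h xor k; h = Integer.rotateLeft(h, 13); h = h * 5 + 0xe6546b64.toInt()
        i += 4
    }
    // Tail: remaining 1-3 bytes that didn't fill a 4-byte block.
    var k = 0
    val tail = data.size and 3
    if (tail >= 3) k = k xor ((data[i + 2].toInt() and 0xff) shl 16)
    if (tail >= 2) k = k xor ((data[i + 1].toInt() and 0xff) shl 8)
    if (tail >= 1) {
        k = k xor (data[i].toInt() and 0xff)
        k *= c1; k = Integer.rotateLeft(k, 15); k *= c2
        h = h xor k
    }
    // Finalization mix.
    h = h xor data.size
    h = h xor (h ushr 16); h *= 0x85ebca6b.toInt()
    h = h xor (h ushr 13); h *= 0xc2b2ae35.toInt()
    h = h xor (h ushr 16)
    return h
}

// Same entity + same flag always lands in the same bucket (0..99),
// so a "10% rollout" simply means: bucket < 10 gets the treatment.
fun bucket(flagKey: String, entityId: String): Int =
    Math.floorMod(murmur3x86_32("$flagKey:$entityId".toByteArray(Charsets.UTF_8)), 100)
```

Because the bucket depends only on the hash of the flag key and entity ID, every service evaluating the same flag for the same user agrees on the variant, with no coordination or shared session state.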
From pilot to platform
A typical path: one team adopts Flagent for a single Ktor service and a few flags (e.g. new payment flow). Once the rollout and kill-switch workflow prove useful, other teams plug in the same Flagent instance: they create their own flags and segments, and evaluation is consistent across services. You then add staging/production instances (or separate DBs per env), use the CLI and GitOps for CI, and treat Flagent as shared platform infrastructure. Ownership stays clear: platform/SRE runs the service; product teams own their flag keys and rollout decisions. Flagent’s API keys and (in Enterprise) audit log support that split.
5-minute setup: Docker
Run Flagent with Docker. You must pass admin credentials and a JWT secret, or the UI login won’t work. For local development the command below is enough; for production, use strong secrets and a real DB (see Production below).
docker run -d -p 18000:18000 \
-e FLAGENT_DB_DBDRIVER=sqlite3 \
-e FLAGENT_DB_DBCONNECTIONSTR=/data/flagent.sqlite \
-e FLAGENT_ADMIN_EMAIL=admin@local \
-e FLAGENT_ADMIN_PASSWORD=admin \
-e FLAGENT_JWT_AUTH_SECRET=dev-secret-at-least-32-characters-long \
-v flagent-db:/data \
ghcr.io/maxluxs/flagent
Open http://localhost:18000, log in with admin@local / admin. Create a flag (e.g. key new_payment_flow) in the UI. For production-like setup with PostgreSQL, use Docker Compose:
git clone https://github.com/MaxLuxs/Flagent.git && cd Flagent
docker compose up -d
Same UI and API; data persists in PostgreSQL.
Golden path: one command (Flagent + Ktor sample)
From the Flagent repo root you can start both Flagent and the Ktor sample app in one go:
./scripts/run-golden-path.sh
This starts Flagent (Docker Compose), waits for health, optionally seeds the new_payment_flow flag, and runs the Ktor sample on port 8080. Result: Flagent UI at http://localhost:18000, sample at http://localhost:8080. Try:
curl -s "http://localhost:8080/feature/new_payment_flow?entityID=user1"
See Getting Started for details and the --no-seed / --no-sample options.
Ktor integration in 3 steps
Step 1: Add the plugin and configure
In your Ktor Application.module(), install the Flagent plugin and set the base URL (and optional API key for multi-tenant):
installFlagent {
    flagentBaseUrl = "http://localhost:18000"
    enableEvaluation = true
    enableCache = true
    cacheTtlMs = 60_000
}
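Conceptually, enableCache and cacheTtlMs mean that a repeated evaluation within the TTL is served from memory instead of a round trip to Flagent. The sketch below illustrates the idea only; it is not Flagent's actual cache implementation, and the class and names are invented for the example.

```kotlin
import java.util.concurrent.ConcurrentHashMap

// Illustration of what enableCache/cacheTtlMs do conceptually.
// Not Flagent's implementation; names here are invented for the example.
class TtlCache<K : Any, V>(
    private val ttlMs: Long,
    private val clock: () -> Long = System::currentTimeMillis,
) {
    private data class Entry<V>(val value: V, val expiresAt: Long)
    private val map = ConcurrentHashMap<K, Entry<V>>()

    fun getOrPut(key: K, compute: () -> V): V {
        val now = clock()
        map[key]?.let { if (it.expiresAt > now) return it.value } // fresh hit
        val value = compute() // miss or expired: re-evaluate against Flagent
        map[key] = Entry(value, now + ttlMs)
        return value
    }
}
```

The practical consequence: with a 60-second TTL, a segment change in the UI can take up to a minute to reach already-cached evaluations, which is the usual trade-off between freshness and request volume.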
Step 2: Get the client and evaluate in a route
get("/pay") {
    val entityID = call.request.queryParameters["entityID"] ?: "guest"
    val client = call.application.getFlagentClient()
        ?: return@get call.respond(HttpStatusCode.ServiceUnavailable, "Flagent unavailable")
    val result = client.evaluate(
        EvaluationRequest(flagKey = "new_payment_flow", entityID = entityID, entityType = "user")
    )
    when (result.variantKey) {
        "treatment" -> call.respond(mapOf("flow" to "new"))
        else -> call.respond(mapOf("flow" to "legacy"))
    }
}
Step 3: Control rollout in the UI
In Flagent UI, create the flag, add variants (control, treatment), and add a segment with e.g. 10% rollout to treatment. No code change — you move to 50% or 100% by editing the segment. If something breaks, disable the flag: everyone gets control immediately (kill switch).
Gradual rollout and kill switch in practice
A typical flow: ship the new payment code behind a flag, start with 10% of users on treatment, watch metrics, then 50%, then 100%. If error rate or support tickets spike, disable the flag in the UI — no redeploy. The full walkthrough with screenshots and curl examples is in the tutorial Gradual rollout of new payment on Ktor in 30 minutes.
Production: security and resilience
For production deployments, treat Flagent like any critical service:
- Secrets — Set FLAGENT_JWT_AUTH_SECRET and the admin password via env or a secret manager; never commit them. Rotate periodically.
- TLS — Put Flagent behind a reverse proxy (e.g. nginx, Caddy) or load balancer that terminates TLS. Internal services can call Flagent over the internal network.
- Backups — Back up the DB (PostgreSQL/MySQL/SQLite) on a schedule. Flag state is small; restores are straightforward. Exporting flags to YAML/JSON (CLI or API) gives you a human-readable backup and GitOps source of truth.
- Availability — Run at least two replicas behind a load balancer if you need HA; use a shared DB. The Ktor plugin caches evaluation results with a TTL, so brief Flagent unavailability may not affect already-cached evaluations — but plan for “Flagent down” with fallback behavior in your app (e.g. default to control variant).
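The "default to control variant" fallback can be captured in a small wrapper. This is a sketch of the pattern, not a Flagent API: the evaluate parameter stands in for a real call such as client.evaluate(...).variantKey, so that any failure (Flagent unreachable, timeout, bad response) degrades to control instead of breaking the request.

```kotlin
// Sketch of a "Flagent down" fallback: any evaluation failure degrades to
// the control variant. `evaluate` stands in for a real client call.
fun safeVariant(
    flagKey: String,
    entityId: String,
    evaluate: (flagKey: String, entityId: String) -> String,
): String =
    runCatching { evaluate(flagKey, entityId) }
        .getOrDefault("control") // safe default when evaluation fails
```

In a route you would call safeVariant("new_payment_flow", entityID, ...) and branch on the result, so a Flagent outage silently keeps everyone on the legacy path rather than returning errors to users.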
See Deployment for the full production checklist and upgrade path.
CLI for automation and CI
Flagent ships a CLI script (scripts/flagent-cli.sh) so you can list flags, create flags, and evaluate from the command line or CI. That enables scripts, GitOps pipelines, and “smoke tests” (e.g. after deploy, eval a known flag and assert the variant).
./scripts/flagent-cli.sh flags list --url http://localhost:18000
./scripts/flagent-cli.sh flags create --key my_feature --description "My feature" --enabled --url http://localhost:18000
./scripts/flagent-cli.sh eval --flag-key my_feature --entity-id user1 --url http://localhost:18000
Requires curl and jq. See CLI Reference and GitOps for export/import and branch-based flag creation.
Summary
Self-hosted feature flags are a strategic choice: control over data and compliance, predictable cost at scale, no vendor lock-in, and the ability to run the same stack in every environment. Flagent gives you one self-hosted platform — flags, experiments, analytics, crash list, webhooks, and GitOps — in one UI and one API. Docker gets you running in minutes; the Ktor plugin plugs evaluation into your routes in three steps. Use the golden path script to try Flagent and a sample app together, then follow the gradual-rollout tutorial for a real payment-flow scenario. For production, follow the deployment guide: strong secrets, TLS, backups, and optional HA. From a single-team pilot to a multi-team platform, you keep one stack, one language, and your data.
- Getting Started — Quick start and golden path.
- Deployment — Production security, TLS, backups, upgrade path.
- Tutorial: Ktor payment rollout (30 min) — Step-by-step with code and UI.
- Why Flagent — Comparison with other platforms.