2025-04-01 · 6 min read
From Docker Compose to Kubernetes: A Step-by-Step Migration Guide
Your Docker Compose setup got you this far, but you're hitting its limits. Here's how to migrate to Kubernetes methodically — without rewriting everything at once or losing sleep over production.
When Docker Compose Stops Being Enough
Docker Compose is fantastic. It gets a bad reputation in certain circles, but for small teams running a handful of services, it's simple, fast, and it works. We've seen companies run production workloads on Compose for years without issues.
But there comes a point where you start feeling the edges. Maybe it's the third time you've had downtime because a single host restarted. Maybe you need to scale one service independently without scaling everything else. Maybe your on-call engineer is tired of SSH-ing into a box at 2 AM to restart a crashed container.
If you're nodding along, it's probably time to think about Kubernetes. The migration doesn't have to be a big-bang rewrite. Here's how to do it incrementally.
Step 1: Audit Your Compose Setup
Before writing a single Kubernetes manifest, document what you actually have. Pull up your docker-compose.yml and inventory every service, volume, network, and environment variable:
```yaml
# Typical docker-compose.yml you might be starting with
services:
  api:
    build: ./api
    ports:
      - "8080:8080"
    environment:
      - DATABASE_URL=postgres://db:5432/myapp
      - REDIS_URL=redis://redis:6379
    depends_on:
      - db
      - redis
    volumes:
      - ./api:/app
    restart: always
  worker:
    build: ./worker
    environment:
      - DATABASE_URL=postgres://db:5432/myapp
      - REDIS_URL=redis://redis:6379
    depends_on:
      - db
      - redis
    restart: always
  db:
    image: postgres:16
    volumes:
      - pgdata:/var/lib/postgresql/data
    environment:
      - POSTGRES_DB=myapp
      - POSTGRES_PASSWORD=localdev
  redis:
    image: redis:7-alpine

volumes:
  pgdata:
```
Make a list: what's stateless (api, worker), what's stateful (db, redis), what uses host volumes, and what needs external access. This inventory drives your migration order.
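Compose itself can generate most of this inventory. Assuming the Compose v2 CLI (`docker compose`), these commands print the fully resolved configuration, so nothing hiding in override files or interpolated environment variables gets missed:

```shell
# List every service defined across all Compose files and overrides
docker compose config --services

# List named volumes (candidates for PersistentVolumes or managed storage)
docker compose config --volumes

# Dump the fully resolved config with env interpolation applied
docker compose config
```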
Step 2: Containerize Properly
If your Compose setup uses build: directives with local code mounts (like the ./api:/app bind mount above), you need real, production-ready container images first. Kubernetes pulls images from registries, not from local directories.
Write a Dockerfile that produces a self-contained image:
```dockerfile
# api/Dockerfile
FROM node:20-alpine AS builder
WORKDIR /app
COPY package*.json ./
# Install all dependencies (including devDependencies) so the build step can run
RUN npm ci
COPY . .
RUN npm run build

FROM node:20-alpine
WORKDIR /app
COPY package*.json ./
# The production image only needs runtime dependencies
RUN npm ci --omit=dev
COPY --from=builder /app/dist ./dist
EXPOSE 8080
USER node
CMD ["node", "dist/server.js"]
```
Push these images to a container registry. GitHub Container Registry, ECR, GCR, Docker Hub — pick one your team already has access to:
```shell
docker build -t ghcr.io/yourorg/api:v1.0.0 ./api
docker push ghcr.io/yourorg/api:v1.0.0
```
Do this for every service that uses build: in your Compose file. Services using stock images (like postgres:16) can keep their existing image references.
Step 3: Translate to Kubernetes Manifests
Here's where the actual migration happens. Each Compose service becomes a Kubernetes Deployment and a Service. Let's translate the API service:
```yaml
# k8s/api/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
  labels:
    app: api
spec:
  replicas: 2
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
        - name: api
          image: ghcr.io/yourorg/api:v1.0.0
          ports:
            - containerPort: 8080
          env:
            - name: DATABASE_URL
              valueFrom:
                secretKeyRef:
                  name: app-secrets
                  key: database-url
            - name: REDIS_URL
              valueFrom:
                secretKeyRef:
                  name: app-secrets
                  key: redis-url
          resources:
            requests:
              memory: "128Mi"
              cpu: "100m"
            limits:
              memory: "512Mi"
              cpu: "500m"
          readinessProbe:
            httpGet:
              path: /health
              port: 8080
            initialDelaySeconds: 5
            periodSeconds: 10
          livenessProbe:
            httpGet:
              path: /health
              port: 8080
            initialDelaySeconds: 15
            periodSeconds: 20
---
# k8s/api/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: api
spec:
  selector:
    app: api
  ports:
    - port: 8080
      targetPort: 8080
```
Notice the differences from Compose. Environment variables now come from Kubernetes Secrets instead of being hardcoded. There are resource requests and limits. There are health checks. You get two replicas instead of one. These aren't optional extras — they're the features that make Kubernetes worth the migration.
Create the secrets separately:
```shell
kubectl create secret generic app-secrets \
  --from-literal=database-url='postgres://db:5432/myapp' \
  --from-literal=redis-url='redis://redis:6379'
```
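If you'd rather keep secrets declarative (for templating in CI or sealing with a tool like Sealed Secrets), the same secret can be expressed as a manifest. Note that the data field requires base64-encoded values, while stringData accepts plain text:

```yaml
# k8s/app-secrets.yaml — don't commit real credentials to git
apiVersion: v1
kind: Secret
metadata:
  name: app-secrets
type: Opaque
stringData:
  database-url: postgres://db:5432/myapp
  redis-url: redis://redis:6379
```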
Step 4: Handle Stateful Services Carefully
This is where teams make the most mistakes. Databases and other stateful services need different treatment than your stateless application containers.
Our strong recommendation: don't run databases in Kubernetes unless you have a specific reason to. Use a managed service instead. RDS for Postgres, ElastiCache for Redis, Cloud SQL for GCP. Managed services handle backups, failover, patching, and all the operational burden that comes with running stateful workloads.
If you're currently running Postgres in Compose, the migration path is:
- Provision a managed database (RDS, Cloud SQL, etc.)
- Migrate your data using pg_dump/pg_restore
- Update your application's DATABASE_URL to point to the managed instance
- Remove the database container from your setup entirely
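The dump-and-restore step might look like this. The hostnames, users, and database names are placeholders; for large databases or near-zero-downtime cutovers, consider logical replication instead of a one-shot dump:

```shell
# Dump from the Compose-managed Postgres (custom format works with pg_restore)
docker compose exec db pg_dump -U postgres -Fc myapp > myapp.dump

# Restore into the managed instance (hostname is a placeholder)
pg_restore --no-owner -h your-db.example.rds.amazonaws.com -U app -d myapp myapp.dump
```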
If you absolutely must run a database in Kubernetes (air-gapped environments, cost constraints, specific compliance requirements), use a StatefulSet with persistent volumes:
```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgres
spec:
  serviceName: postgres
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:16
          ports:
            - containerPort: 5432
          env:
            - name: POSTGRES_DB
              value: myapp
            - name: POSTGRES_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: postgres-secrets
                  key: password
          volumeMounts:
            - name: pgdata
              mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:
    - metadata:
        name: pgdata
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 20Gi
```
But seriously, use managed services if you can.
Step 5: Set Up Ingress
In Compose, you map ports directly to the host. In Kubernetes, you use an Ingress controller to route external traffic to your services:
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api-ingress
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - api.yourcompany.com
      secretName: api-tls
  rules:
    - host: api.yourcompany.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: api
                port:
                  number: 8080
```
You'll need an Ingress controller installed in your cluster (ingress-nginx and the AWS Load Balancer Controller are the most common) and cert-manager for automatic TLS certificates. These are one-time cluster setup tasks.
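If you haven't done that setup yet, a common approach (assuming Helm and the widely used ingress-nginx and cert-manager charts) looks roughly like this:

```shell
# Install the ingress-nginx controller
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx --create-namespace

# Install cert-manager, including its CRDs
helm repo add jetstack https://charts.jetstack.io
helm install cert-manager jetstack/cert-manager \
  --namespace cert-manager --create-namespace --set installCRDs=true
```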
Step 6: Migrate Incrementally
Don't migrate everything at once. Here's the order we recommend:
- Stateless workers first. Background job processors, queue consumers. They're the easiest to migrate because they have no inbound traffic routing to worry about.
- Internal APIs next. Services that talk to other services but aren't directly exposed to the internet.
- External-facing services last. Your public API, your web frontend. These require Ingress configuration and DNS changes.
- Stateful services never (use managed services) or very last if you must.
Run the Compose and Kubernetes versions side by side during the transition. Use a feature flag or DNS-based traffic shifting to gradually move traffic to the Kubernetes deployment.
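For a worker service, the cutover can be as simple as briefly running both copies and then retiring the Compose one. A sketch, assuming the worker pulls jobs from a shared queue and is safe to run in parallel:

```shell
# Bring up the Kubernetes worker alongside the Compose one
kubectl scale deployment worker --replicas=2

# Watch it process jobs before cutting over
kubectl logs deployment/worker --tail=50 -f

# Once you're confident, stop the Compose copy
docker compose stop worker
```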
What You Gain
After the migration, you'll have:
- Self-healing containers that restart automatically on failure
- Horizontal scaling with a single kubectl scale command or autoscaling policies
- Rolling deployments with zero downtime out of the box
- Resource isolation so one runaway service can't starve the others
- Declarative configuration that's version-controlled and reproducible
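As one concrete example of that scaling, the two-replica API Deployment above can be given an autoscaling policy with a HorizontalPodAutoscaler. The target utilization here is an illustrative choice, and the cluster needs the metrics server installed for CPU-based scaling to work:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: api
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: api
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```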
The operational overhead of Kubernetes is real, but for teams that have outgrown Compose, the reliability improvements pay for themselves quickly.
Outgrowing Docker Compose and considering Kubernetes? We've guided dozens of teams through this migration without downtime or drama. Let's plan your migration together.