Dockerising Your Laravel App for Kubernetes: From Dockerfile to Running Pod
Docker Compose is fine until it isn't. The moment you need redundancy, rolling deploys, or autoscaling, you're looking at Kubernetes — and most guides either skip the Dockerfile entirely or drop you straight into Helm. This guide walks through a complete Laravel Kubernetes deployment, from a production-ready Dockerfile to a live pod you can hit in a browser, using Minikube locally.
If you're evaluating whether K8s is even the right move, Laravel Vapor vs Forge in 2026 covers the trade-offs between managed platforms and self-hosted infrastructure.
The Production Dockerfile for Laravel Kubernetes Deployment
A single-stage Dockerfile that copies everything into one image will work, but you'll end up shipping composer dev dependencies, raw node_modules, and build tooling into production. Multi-stage builds fix this.
The approach: two builder stages (Node for assets, Composer for PHP dependencies), then a lean php:8.3-fpm-alpine runtime image that only gets the artefacts.
# ── Stage 1: Node asset build ─────────────────────────────────────────────
FROM node:20-alpine AS node-builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build
# ── Stage 2: Composer dependency install ──────────────────────────────────
FROM composer:2.8 AS composer-builder
WORKDIR /app
COPY composer.json composer.lock ./
# Install production dependencies only — no scripts yet (APP_KEY not available)
RUN composer install \
    --no-dev \
    --no-scripts \
    --prefer-dist \
    --optimize-autoloader
COPY . .
RUN composer dump-autoload --no-dev --optimize
# ── Stage 3: Runtime image ────────────────────────────────────────────────
FROM php:8.3-fpm-alpine AS runtime
# Install nginx and required PHP extensions
RUN apk add --no-cache nginx \
    && docker-php-ext-install pdo pdo_mysql opcache pcntl
WORKDIR /var/www/html
# Copy application code from builder stages
COPY --from=composer-builder /app /var/www/html
COPY --from=node-builder /app/public/build /var/www/html/public/build
# Nginx config and startup script
COPY docker/nginx.conf /etc/nginx/http.d/default.conf
COPY docker/start.sh /start.sh
RUN chmod +x /start.sh \
    # Writable directories for Laravel
    && chown -R www-data:www-data storage bootstrap/cache \
    && chmod -R 775 storage bootstrap/cache
EXPOSE 8080
CMD ["/start.sh"]
Both nginx and php-fpm run inside the same container here, started by start.sh. This is slightly against the "one process per container" principle, but it keeps the pod topology simple and avoids the volume-sharing complexity of a true nginx sidecar. For this level of setup it's the right trade-off — you can always split them later. The startup script runs Laravel's cache warmup first so the container is fully optimised when the readiness probe fires:
#!/bin/sh
set -e
# Warm caches now that env vars are injected by Kubernetes
php artisan config:cache
php artisan route:cache
php artisan view:cache
# Start php-fpm as a daemon, then nginx in the foreground
php-fpm --daemonize
exec nginx -g 'daemon off;'
The nginx config listens on port 8080 (avoids needing root) and proxies PHP requests to php-fpm on 127.0.0.1:9000:
server {
    listen 8080;
    root /var/www/html/public;
    index index.php;
    charset utf-8;

    add_header X-Frame-Options "SAMEORIGIN";
    add_header X-Content-Type-Options "nosniff";

    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }

    location ~ \.php$ {
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name;
        include fastcgi_params;
        fastcgi_hide_header X-Powered-By;
    }

    location ~ /\.(?!well-known).* {
        deny all;
    }
}
If you want to go further on image optimisation — OPcache tuning, layer caching in CI, and comparing Alpine vs FrankenPHP base images — optimising Laravel Docker images with multi-stage builds covers that in detail.
.dockerignore — Keep the Image Lean
Without a .dockerignore, Docker copies your entire project context on every COPY . ., including git history, local .env, and test fixtures. Add this at the project root:
.git
.env
.env.*
node_modules
vendor
storage/logs
storage/framework/cache
storage/framework/sessions
storage/framework/views
tests
*.md
.DS_Store
Build and Push to a Registry
# Build and tag (replace with your Docker Hub username or ECR repo)
docker build -t yourusername/laravel-app:latest .
# Push to Docker Hub
docker push yourusername/laravel-app:latest
For CI pipelines, build with the commit SHA as a secondary tag so you can roll back to any exact version: docker build -t yourusername/laravel-app:${GITHUB_SHA} .
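As a sketch, here is what that dual-tag build could look like as a GitHub Actions workflow using the official docker/build-push-action. The workflow name, branch, and image name are placeholders, not part of the original setup:

```yaml
# .github/workflows/build.yml — hypothetical CI build step
name: build-and-push
on:
  push:
    branches: [main]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: docker/login-action@v3
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      - uses: docker/build-push-action@v6
        with:
          context: .
          push: true
          # Both tags point at the same image digest; the SHA tag is immutable
          tags: |
            yourusername/laravel-app:latest
            yourusername/laravel-app:${{ github.sha }}
```

Rolling back then becomes a matter of pointing the Deployment at an older SHA tag.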
Kubernetes ConfigMap — Environment Variables Without Baking Them In
Non-sensitive config goes in a ConfigMap. This keeps your image generic — the same image can run in staging and production by swapping the ConfigMap.
# k8s/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: laravel-config
data:
  APP_ENV: "production"
  APP_DEBUG: "false"
  LOG_CHANNEL: "stderr" # Logs go to kubectl logs, not storage/logs
  CACHE_STORE: "redis"
  SESSION_DRIVER: "redis"
  QUEUE_CONNECTION: "redis"
LOG_CHANNEL: "stderr" is worth calling out — it routes logs to stdout/stderr, which is where Kubernetes expects them. Logging to storage/logs in a pod is unreliable because the filesystem is ephemeral.
Kubernetes Deployment — Running the App as Replicated Pods
# k8s/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: laravel-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: laravel
  template:
    metadata:
      labels:
        app: laravel
    spec:
      containers:
        - name: laravel
          image: yourusername/laravel-app:latest
          ports:
            - containerPort: 8080
          # Load all keys from the ConfigMap as env vars
          envFrom:
            - configMapRef:
                name: laravel-config
          # Secrets injected individually
          env:
            - name: APP_KEY
              valueFrom:
                secretKeyRef:
                  name: laravel-secrets
                  key: APP_KEY
            - name: DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: laravel-secrets
                  key: DB_PASSWORD
          # /up is Laravel's built-in health route (available since Laravel 11)
          readinessProbe:
            httpGet:
              path: /up
              port: 8080
            initialDelaySeconds: 10
            periodSeconds: 10
            failureThreshold: 3
          livenessProbe:
            httpGet:
              path: /up
              port: 8080
            initialDelaySeconds: 30
            periodSeconds: 30
          resources:
            requests:
              cpu: "100m"
              memory: "128Mi"
            limits:
              cpu: "500m"
              memory: "512Mi"
Create the Secret before applying the Deployment:
kubectl create secret generic laravel-secrets \
  --from-literal=APP_KEY='base64:your-key-here' \
  --from-literal=DB_PASSWORD='your-db-password'
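If you prefer keeping everything declarative alongside the other manifests, the same Secret can be sketched as YAML using stringData, which lets Kubernetes handle the base64 encoding for you. The values here are placeholders; never commit real ones to git:

```yaml
# k8s/secret.yaml — declarative alternative to the kubectl command above
apiVersion: v1
kind: Secret
metadata:
  name: laravel-secrets
type: Opaque
stringData:
  # stringData takes plain values; Kubernetes base64-encodes them on apply
  APP_KEY: "base64:your-key-here"
  DB_PASSWORD: "your-db-password"
```

In a real pipeline you'd template these values in from a secret store rather than storing them in the repo.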
Kubernetes Service — Exposing the App
# k8s/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: laravel-service
spec:
  selector:
    app: laravel
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
  type: NodePort # Use LoadBalancer for AWS/GCP/Azure
NodePort works with Minikube. For cloud providers, swap type: NodePort for type: LoadBalancer and a cloud load balancer gets provisioned automatically.
Run Locally with Minikube
# Start Minikube with enough resources
minikube start --cpus 2 --memory 4096
# Apply all manifests in the k8s/ directory
kubectl apply -f k8s/
# Watch pods come up
kubectl get pods -w
# Once Running, open the service in your browser
minikube service laravel-service
kubectl get pods -w will show the readiness probe status in real time. If a pod is stuck in 0/1 Running, check the logs:
kubectl logs <pod-name>
kubectl describe pod <pod-name>
The describe output will tell you exactly which probe is failing and why.
Gotchas and Edge Cases
Ephemeral storage. Pod filesystems are wiped when a pod restarts. Never write to storage/app in a K8s deployment — use S3 (or compatible) via Laravel's Flysystem S3 driver. The same applies to compiled views if you're caching them to disk during runtime; bake them in at image build time or accept they regenerate on restart.
Session and cache drivers. The ConfigMap above already sets these to Redis, but this is the most common gotcha for developers moving from a single-server setup. File-based sessions will break instantly across multiple replicas since each pod has its own filesystem. Redis (or database) is non-negotiable.
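Note that the ConfigMap above assumes Redis is reachable from the pods; for local Minikube testing you'd also need a REDIS_HOST entry pointing at an in-cluster instance. A minimal sketch follows — the laravel-redis name and single-replica setup are illustrative assumptions for local testing, not a production-grade Redis:

```yaml
# k8s/redis.yaml — minimal in-cluster Redis, local testing only
apiVersion: apps/v1
kind: Deployment
metadata:
  name: laravel-redis
spec:
  replicas: 1
  selector:
    matchLabels:
      app: laravel-redis
  template:
    metadata:
      labels:
        app: laravel-redis
    spec:
      containers:
        - name: redis
          image: redis:7-alpine
          ports:
            - containerPort: 6379
---
# Service gives the pods a stable DNS name: laravel-redis
apiVersion: v1
kind: Service
metadata:
  name: laravel-redis
spec:
  selector:
    app: laravel-redis
  ports:
    - port: 6379
      targetPort: 6379
```

With this applied, adding REDIS_HOST: "laravel-redis" to the ConfigMap lets every pod share the same session and cache store.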
Queue workers need a separate Deployment. Your web pods should not run queue:work. Queue workers have different resource profiles, different restart policies, and you'll want to scale them independently. Spin up a second Deployment with the same image, override the command to ["php", "artisan", "queue:work", "--tries=3"], and remove the readiness probe. See scaling Laravel queues in production for how to size workers and manage backpressure, and Laravel Horizon queue monitoring in production for how to get visibility into what's happening once workers are running.
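A sketch of that second Deployment, reusing the same image and ConfigMap — the replica count and --tries value are illustrative:

```yaml
# k8s/queue-worker.yaml — worker Deployment sketch
apiVersion: apps/v1
kind: Deployment
metadata:
  name: laravel-queue-worker
spec:
  replicas: 2
  selector:
    matchLabels:
      app: laravel-queue-worker
  template:
    metadata:
      labels:
        app: laravel-queue-worker
    spec:
      containers:
        - name: worker
          image: yourusername/laravel-app:latest
          # Override the image CMD — no nginx or php-fpm, just the worker loop
          command: ["php", "artisan", "queue:work", "--tries=3"]
          envFrom:
            - configMapRef:
                name: laravel-config
          env:
            - name: APP_KEY
              valueFrom:
                secretKeyRef:
                  name: laravel-secrets
                  key: APP_KEY
            - name: DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: laravel-secrets
                  key: DB_PASSWORD
```

No probes are defined because the worker serves no HTTP traffic; Kubernetes restarts the pod if the process exits.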
php artisan migrate on deploy. Don't run migrations in the container startup script — if you have two replicas coming up simultaneously, both will try to run migrations at the same time. Use a Kubernetes Job or an init container that runs before the main container starts.
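One way to sketch that migration Job, using the same image — the Job name and backoffLimit are illustrative choices:

```yaml
# k8s/migrate-job.yaml — run once per deploy, e.g. applied from CI
apiVersion: batch/v1
kind: Job
metadata:
  name: laravel-migrate
spec:
  backoffLimit: 2
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: migrate
          image: yourusername/laravel-app:latest
          # --force skips the interactive confirmation artisan shows
          # when APP_ENV is production
          command: ["php", "artisan", "migrate", "--force"]
          envFrom:
            - configMapRef:
                name: laravel-config
          env:
            - name: APP_KEY
              valueFrom:
                secretKeyRef:
                  name: laravel-secrets
                  key: APP_KEY
            - name: DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: laravel-secrets
                  key: DB_PASSWORD
```

Because a Job runs to completion exactly once per apply, the two web replicas never race each other on schema changes.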
OPcache and code changes. OPcache caches compiled PHP bytecode in memory. When you deploy a new image, new pods start fresh. Old pods drain naturally. No OPcache stale-file issues — this is one area where K8s is actually simpler than long-running server deployments.
Wrapping Up
You now have a production-ready Dockerfile, a working set of K8s manifests, and a running pod in Minikube. The natural next step from here is wiring up a CI pipeline to build, push, and apply on every merge — zero-downtime Laravel deployments with GitHub Actions and Forge covers the deployment pipeline side. When you're ready to move beyond YAML management into proper release tooling, Helm and a proper Ingress controller are where this setup evolves next.
Steven is a software engineer with a passion for building scalable web applications. He enjoys sharing his knowledge through articles and tutorials.