Practice - Container Orchestration & Compose#
This practice guide contains hands-on exercises to reinforce your understanding of Docker Compose. Complete each exercise in order to build your skills progressively.
Prerequisites#
Docker and Docker Compose installed
Completed Docker Fundamentals exercises
Basic understanding of networking concepts
Exercise 1: Docker Compose Basics#
Objective: Create your first multi-container application with Docker Compose.
Skills Practiced:
Writing docker-compose.yml
Starting and stopping services
Viewing logs and status
Steps#
# 1. Create project directory
mkdir compose-basics
cd compose-basics
# 2. Create a simple FastAPI web app
cat > app.py << 'EOF'
from fastapi import FastAPI
import os
import socket

app = FastAPI()

@app.get("/")
def hello():
    return {
        "message": "Hello from Docker Compose!",
        "hostname": socket.gethostname(),
        "environment": os.getenv("APP_ENV", "development")
    }

@app.get("/health")
def health():
    return {"status": "healthy"}
EOF
# 3. Create pyproject.toml
cat > pyproject.toml << 'EOF'
[project]
name = "compose-basics"
version = "0.1.0"
requires-python = ">=3.11"
dependencies = [
    "fastapi==0.109.0",
    "uvicorn[standard]==0.27.0",
]
EOF
# 4. Create Dockerfile
cat > Dockerfile << 'EOF'
FROM python:3.11-slim
WORKDIR /app
COPY --from=ghcr.io/astral-sh/uv:latest /uv /usr/local/bin/uv
COPY pyproject.toml .
# No uv.lock is committed, so resolve at build time (--frozen would fail without a lockfile)
RUN uv sync
ENV PATH="/app/.venv/bin:$PATH"
COPY app.py .
EXPOSE 8000
CMD ["uvicorn", "app:app", "--host", "0.0.0.0", "--port", "8000"]
EOF
# 5. Create docker-compose.yml
cat > docker-compose.yml << 'EOF'
services:
  web:
    build: .
    ports:
      - "8000:8000"
    environment:
      - APP_ENV=docker-compose
EOF
# 6. Build and start services
docker compose up -d --build
# 7. Check running services
docker compose ps
# 8. View logs
docker compose logs web
# 9. Test the application
curl http://localhost:8000
curl http://localhost:8000/health
# 10. Stop services
docker compose down
Verification Checklist#
Service starts successfully
Application responds on port 8000
Environment variable is passed correctly
Logs are visible with docker compose logs
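If you want to sanity-check the response shape before the container is running, the handler logic can be reproduced in plain Python (a sketch only; the field names mirror the app.py above, and `APP_ENV` is the variable set in docker-compose.yml):

```python
import os
import socket

def hello_payload() -> dict:
    # Mirrors the "/" handler in app.py: the environment variable set in
    # docker-compose.yml appears here when running inside the container.
    return {
        "message": "Hello from Docker Compose!",
        "hostname": socket.gethostname(),
        "environment": os.getenv("APP_ENV", "development"),
    }

payload = hello_payload()
# "docker-compose" inside the container, "development" when APP_ENV is unset
print(payload["environment"])
```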
Exercise 2: Multi-service Application#
Objective: Build an application with multiple interconnected services.
Skills Practiced:
Service dependencies
Inter-service communication
Named volumes
Steps#
# 1. Create project directory
mkdir multi-service-app
cd multi-service-app
# 2. Create the FastAPI application with Redis counter
cat > app.py << 'EOF'
from fastapi import FastAPI
from contextlib import asynccontextmanager
import redis.asyncio as redis
import os

# Redis connection
redis_client = None

@asynccontextmanager
async def lifespan(app: FastAPI):
    global redis_client
    redis_client = redis.from_url(
        os.getenv("REDIS_URL", "redis://redis:6379/0"),
        decode_responses=True
    )
    yield
    await redis_client.close()

app = FastAPI(lifespan=lifespan)

@app.get("/")
async def index():
    count = await redis_client.incr("visits")
    return {
        "message": "Hello from FastAPI + Redis!",
        "visits": count
    }

@app.get("/health")
async def health():
    try:
        await redis_client.ping()
        return {"status": "healthy", "redis": "connected"}
    except Exception:
        return {"status": "unhealthy", "redis": "disconnected"}

@app.post("/reset")
async def reset():
    await redis_client.set("visits", 0)
    return {"message": "Counter reset", "visits": 0}
EOF
# 3. Create pyproject.toml
cat > pyproject.toml << 'EOF'
[project]
name = "multi-service-app"
version = "0.1.0"
requires-python = ">=3.11"
dependencies = [
    "fastapi==0.109.0",
    "uvicorn[standard]==0.27.0",
    "redis==5.0.0",
]
EOF
# 4. Create Dockerfile
cat > Dockerfile << 'EOF'
FROM python:3.11-slim
WORKDIR /app
# curl is needed for the compose healthcheck
RUN apt-get update && apt-get install -y curl && rm -rf /var/lib/apt/lists/*
COPY --from=ghcr.io/astral-sh/uv:latest /uv /usr/local/bin/uv
COPY pyproject.toml .
RUN uv sync
ENV PATH="/app/.venv/bin:$PATH"
COPY app.py .
EXPOSE 8000
CMD ["uvicorn", "app:app", "--host", "0.0.0.0", "--port", "8000"]
EOF
# 5. Create docker-compose.yml
cat > docker-compose.yml << 'EOF'
services:
  web:
    build: .
    ports:
      - "8000:8000"
    environment:
      - REDIS_URL=redis://redis:6379/0
    depends_on:
      redis:
        condition: service_healthy
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8000/health"]
      interval: 10s
      timeout: 5s
      retries: 3
      start_period: 10s

  redis:
    image: redis:7-alpine
    volumes:
      - redis_data:/data
    command: redis-server --appendonly yes
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 5s
      timeout: 3s
      retries: 3

volumes:
  redis_data:
EOF
# 6. Start all services
docker compose up -d --build
# 7. Wait for health checks
sleep 10
docker compose ps
# 8. Test the application multiple times
echo "=== Testing visit counter ==="
for i in {1..5}; do
    curl -s http://localhost:8000 | jq .
    sleep 1
done
# 9. Check health endpoint
curl http://localhost:8000/health | jq .
# 10. View Redis data persistence
docker compose exec redis redis-cli GET visits
# 11. Restart services and check persistence
docker compose restart web
sleep 5
curl http://localhost:8000 | jq .
# Counter should continue from previous value
# 12. Clean up
docker compose down
# Note: Volume preserved, counter will persist
# 13. Start again and verify persistence
docker compose up -d
sleep 5
curl http://localhost:8000 | jq .
# Should continue counting
# 14. Full cleanup including volumes
docker compose down -v
Expected Output#
{
    "message": "Hello from FastAPI + Redis!",
    "visits": 5
}
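The counter relies on Redis `INCR` being atomic and on the append-only file persisting the value across restarts. The same semantics can be sketched with a tiny in-memory stand-in (no real Redis involved; `FakeRedis` is illustrative only):

```python
class FakeRedis:
    """Tiny stand-in for the two Redis commands the app uses."""

    def __init__(self, store=None):
        self.store = store if store is not None else {}

    def incr(self, key: str) -> int:
        # INCR creates the key at 0 if missing, then increments atomically.
        self.store[key] = int(self.store.get(key, 0)) + 1
        return self.store[key]

    def set(self, key: str, value) -> None:
        self.store[key] = int(value)

r = FakeRedis()
visits = [r.incr("visits") for _ in range(5)]
print(visits)  # [1, 2, 3, 4, 5]

# "Persistence": a restart that reuses the same backing store keeps counting.
r2 = FakeRedis(store=r.store)
print(r2.incr("visits"))  # 6
```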
Exercise 3: Complete RAG Stack#
Objective: Deploy a complete RAG application stack with API, database, cache, and vector store.
Skills Practiced:
Complex multi-service orchestration
Database initialization
Environment configuration
Service health dependencies
Steps#
# 1. Create project directory
mkdir rag-stack
cd rag-stack
# 2. Create the RAG API application
cat > app.py << 'EOF'
import os
import json
import hashlib
from typing import Optional
from contextlib import asynccontextmanager
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel
import redis.asyncio as redis
import asyncpg

# Configuration from environment
DATABASE_URL = os.getenv("DATABASE_URL")
REDIS_URL = os.getenv("REDIS_URL", "redis://redis:6379/0")

# Connection pools
db_pool = None
redis_client = None

@asynccontextmanager
async def lifespan(app: FastAPI):
    global db_pool, redis_client
    # Initialize connections
    db_pool = await asyncpg.create_pool(DATABASE_URL)
    redis_client = redis.from_url(REDIS_URL, decode_responses=True)
    yield
    # Cleanup
    await db_pool.close()
    await redis_client.close()

app = FastAPI(title="RAG API", version="1.0.0", lifespan=lifespan)

class Document(BaseModel):
    content: str
    metadata: Optional[dict] = {}

class Query(BaseModel):
    question: str
    top_k: int = 5

@app.get("/")
def root():
    return {"service": "RAG API", "status": "running"}

@app.get("/health")
async def health():
    status = {"api": "healthy"}
    # Check Redis
    try:
        await redis_client.ping()
        status["redis"] = "connected"
    except Exception:
        status["redis"] = "disconnected"
    # Check PostgreSQL
    try:
        async with db_pool.acquire() as conn:
            await conn.fetchval("SELECT 1")
        status["postgres"] = "connected"
    except Exception as e:
        status["postgres"] = f"error: {str(e)}"
    return status

@app.post("/documents")
async def create_document(doc: Document):
    # Check cache first (use a stable hash; built-in hash() changes per process)
    cache_key = f"doc:{hashlib.sha256(doc.content.encode()).hexdigest()}"
    cached = await redis_client.get(cache_key)
    if cached:
        return {"message": "Document already exists", "cached": True}
    # Insert into database
    async with db_pool.acquire() as conn:
        doc_id = await conn.fetchval(
            "INSERT INTO documents (content, metadata) VALUES ($1, $2) RETURNING id",
            doc.content, json.dumps(doc.metadata)
        )
    # Cache the result
    await redis_client.setex(cache_key, 3600, str(doc_id))
    return {"id": doc_id, "message": "Document created"}

@app.get("/documents")
async def list_documents():
    # Try cache first
    cached = await redis_client.get("documents:list")
    if cached:
        return {"documents": json.loads(cached), "cached": True}
    async with db_pool.acquire() as conn:
        rows = await conn.fetch(
            "SELECT id, content, metadata FROM documents ORDER BY id DESC LIMIT 100"
        )
    docs = [dict(row) for row in rows]
    # Cache for 60 seconds (JSON, never eval() untrusted cache contents)
    await redis_client.setex("documents:list", 60, json.dumps(docs))
    return {"documents": docs, "cached": False}

@app.post("/query")
async def query_documents(q: Query):
    async with db_pool.acquire() as conn:
        rows = await conn.fetch(
            "SELECT id, content FROM documents WHERE content ILIKE $1 LIMIT $2",
            f"%{q.question}%", q.top_k
        )
    results = [dict(row) for row in rows]
    return {"query": q.question, "results": results}

@app.delete("/cache")
async def clear_cache():
    await redis_client.flushdb()
    return {"message": "Cache cleared"}
EOF
# 3. Create pyproject.toml
cat > pyproject.toml << 'EOF'
[project]
name = "rag-stack"
version = "0.1.0"
requires-python = ">=3.11"
dependencies = [
    "fastapi==0.109.0",
    "uvicorn[standard]==0.27.0",
    "redis==5.0.0",
    "asyncpg==0.29.0",
    "pydantic==2.5.0",
]
EOF
# 4. Create Dockerfile
cat > Dockerfile << 'EOF'
FROM python:3.11-slim
WORKDIR /app
# Install curl for healthcheck
RUN apt-get update && apt-get install -y curl && rm -rf /var/lib/apt/lists/*
COPY --from=ghcr.io/astral-sh/uv:latest /uv /usr/local/bin/uv
COPY pyproject.toml .
RUN uv sync
ENV PATH="/app/.venv/bin:$PATH"
COPY app.py .
RUN useradd -m appuser
USER appuser
EXPOSE 8000
CMD ["uvicorn", "app:app", "--host", "0.0.0.0", "--port", "8000"]
EOF
# 5. Create database initialization script
mkdir -p init-scripts
cat > init-scripts/01-schema.sql << 'EOF'
-- Create documents table
CREATE TABLE IF NOT EXISTS documents (
    id SERIAL PRIMARY KEY,
    content TEXT NOT NULL,
    metadata TEXT,
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);

-- Create index for text search
CREATE INDEX IF NOT EXISTS idx_documents_content ON documents USING gin(to_tsvector('english', content));

-- Insert sample data
INSERT INTO documents (content, metadata) VALUES
    ('Docker is a containerization platform that packages applications with dependencies.', '{"source": "docker-docs"}'),
    ('Kubernetes is a container orchestration platform for managing containerized workloads.', '{"source": "k8s-docs"}'),
    ('Redis is an in-memory data structure store used as cache and message broker.', '{"source": "redis-docs"}'),
    ('PostgreSQL is a powerful open-source relational database system.', '{"source": "postgres-docs"}');
EOF
# 6. Create .env file
cat > .env << 'EOF'
POSTGRES_USER=raguser
POSTGRES_PASSWORD=ragpassword
POSTGRES_DB=ragdb
DATABASE_URL=postgresql://raguser:ragpassword@postgres:5432/ragdb
REDIS_URL=redis://redis:6379/0
EOF
# 7. Create docker-compose.yml
cat > docker-compose.yml << 'EOF'
services:
  # ===================
  # API Service
  # ===================
  api:
    build: .
    container_name: rag-api
    ports:
      - "8000:8000"
    env_file:
      - .env
    depends_on:
      postgres:
        condition: service_healthy
      redis:
        condition: service_healthy
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8000/health"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 10s
    restart: unless-stopped

  # ===================
  # PostgreSQL Database
  # ===================
  postgres:
    image: postgres:16-alpine
    container_name: rag-postgres
    environment:
      - POSTGRES_USER=${POSTGRES_USER}
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
      - POSTGRES_DB=${POSTGRES_DB}
    volumes:
      - postgres_data:/var/lib/postgresql/data
      - ./init-scripts:/docker-entrypoint-initdb.d:ro
    ports:
      - "5432:5432"
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U ${POSTGRES_USER} -d ${POSTGRES_DB}"]
      interval: 10s
      timeout: 5s
      retries: 5
    restart: unless-stopped

  # ===================
  # Redis Cache
  # ===================
  redis:
    image: redis:7-alpine
    container_name: rag-redis
    command: redis-server --appendonly yes
    volumes:
      - redis_data:/data
    ports:
      - "6379:6379"
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 10s
      timeout: 5s
      retries: 5
    restart: unless-stopped

  # ===================
  # Adminer (DB UI)
  # ===================
  adminer:
    image: adminer
    container_name: rag-adminer
    ports:
      - "8080:8080"
    depends_on:
      - postgres
    profiles:
      - tools

volumes:
  postgres_data:
  redis_data:
EOF
# 8. Start all services
docker compose up -d --build
# 9. Wait for services to be healthy
echo "Waiting for services to start..."
sleep 15
# 10. Check service status
docker compose ps
# 11. Check health endpoint
echo "=== Health Check ==="
curl -s http://localhost:8000/health | jq .
# 12. List pre-loaded documents
echo "=== Pre-loaded Documents ==="
curl -s http://localhost:8000/documents | jq .
# 13. Add a new document
echo "=== Adding Document ==="
curl -s -X POST http://localhost:8000/documents \
-H "Content-Type: application/json" \
-d '{"content": "FastAPI is a modern Python web framework for building APIs.", "metadata": {"source": "fastapi-docs"}}' | jq .
# 14. Query documents
echo "=== Querying Documents ==="
curl -s -X POST http://localhost:8000/query \
-H "Content-Type: application/json" \
-d '{"question": "container", "top_k": 3}' | jq .
# 15. Start optional tools
echo "=== Starting Adminer (DB UI) ==="
docker compose --profile tools up -d adminer
echo "Adminer available at http://localhost:8080"
# 16. View logs
docker compose logs api --tail 20
# 17. Clean up
# docker compose down -v
Verification Checklist#
All services start and become healthy
API responds with connected status for Redis and PostgreSQL
Pre-loaded documents are visible
New documents can be added
Query returns relevant results
Cache is working (repeat requests are faster)
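The API uses a cache-aside pattern: check Redis first, fall back to PostgreSQL, then cache the result with a TTL (`setex`). The control flow can be sketched with a dict-backed cache; `TTLCache` and `list_documents` here are illustrative stand-ins, not the redis-py API:

```python
import time

class TTLCache:
    """Dict-backed cache with lazy expiry, mimicking Redis SETEX/GET."""

    def __init__(self):
        self._data = {}  # key -> (expires_at, value)

    def setex(self, key, ttl_seconds, value):
        self._data[key] = (time.monotonic() + ttl_seconds, value)

    def get(self, key):
        entry = self._data.get(key)
        if entry is None:
            return None
        expires_at, value = entry
        if time.monotonic() >= expires_at:
            del self._data[key]  # expired: drop and report a miss
            return None
        return value

def list_documents(cache, fetch_from_db):
    # Cache-aside: hit -> return cached; miss -> query DB, then populate cache.
    cached = cache.get("documents:list")
    if cached is not None:
        return {"documents": cached, "cached": True}
    docs = fetch_from_db()
    cache.setex("documents:list", 60, docs)  # same 60 s TTL as the app
    return {"documents": docs, "cached": False}

cache = TTLCache()
db = lambda: [{"id": 1, "content": "Docker is a containerization platform..."}]
first = list_documents(cache, db)
second = list_documents(cache, db)
print(first["cached"], second["cached"])  # False True
```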
Exercise 4: Development vs Production Configuration#
Objective: Set up different configurations for development and production environments.
Skills Practiced:
Override files
Environment-specific configuration
Development workflow optimization
Steps#
# 1. Create project directory
mkdir env-configs
cd env-configs
# 2. Create FastAPI application
cat > app.py << 'EOF'
from fastapi import FastAPI
import os

app = FastAPI()

@app.get("/")
def index():
    return {
        "environment": os.getenv("APP_ENV", "production"),
        "debug": os.getenv("DEBUG", "false"),
        "database": os.getenv("DATABASE_URL", "not configured"),
        "features": {
            "hot_reload": os.getenv("HOT_RELOAD", "false"),
            "debug_toolbar": os.getenv("DEBUG_TOOLBAR", "false")
        }
    }

@app.get("/health")
def health():
    return {"status": "healthy"}
EOF
cat > pyproject.toml << 'EOF'
[project]
name = "env-configs"
version = "0.1.0"
requires-python = ">=3.11"
dependencies = [
    "fastapi==0.109.0",
    "uvicorn[standard]==0.27.0",
]
EOF
# 3. Create base Dockerfile
cat > Dockerfile << 'EOF'
FROM python:3.11-slim
WORKDIR /app
RUN apt-get update && apt-get install -y curl && rm -rf /var/lib/apt/lists/*
COPY --from=ghcr.io/astral-sh/uv:latest /uv /usr/local/bin/uv
COPY pyproject.toml .
RUN uv sync
ENV PATH="/app/.venv/bin:$PATH"
COPY . .
EXPOSE 8000
CMD ["uvicorn", "app:app", "--host", "0.0.0.0", "--port", "8000"]
EOF
# 4. Create base docker-compose.yml
cat > docker-compose.yml << 'EOF'
# Base configuration - shared across environments
services:
  app:
    build: .
    ports:
      - "8000:8000"
    environment:
      - APP_ENV=production
      - DATABASE_URL=postgresql://user:pass@db:5432/prod
    restart: unless-stopped

  db:
    image: postgres:16-alpine
    environment:
      - POSTGRES_DB=prod
      - POSTGRES_USER=user
      - POSTGRES_PASSWORD=pass
    volumes:
      - db_data:/var/lib/postgresql/data

volumes:
  db_data:
EOF
# 5. Create development override (auto-loaded)
cat > docker-compose.override.yml << 'EOF'
# Development overrides - automatically loaded with docker-compose.yml
services:
  app:
    build:
      context: .
      dockerfile: Dockerfile
    volumes:
      - ./app.py:/app/app.py  # Hot reload
    environment:
      - APP_ENV=development
      - DEBUG=true
      - DATABASE_URL=postgresql://user:pass@db:5432/dev
      - HOT_RELOAD=true
      - DEBUG_TOOLBAR=true
    command: uvicorn app:app --host 0.0.0.0 --port 8000 --reload

  db:
    environment:
      - POSTGRES_DB=dev
    ports:
      - "5432:5432"  # Expose for local tools
EOF
# 6. Create production configuration
cat > docker-compose.prod.yml << 'EOF'
# Production configuration
services:
  app:
    image: myapp:${VERSION:-latest}
    deploy:
      replicas: 2
      resources:
        limits:
          cpus: '0.5'
          memory: 512M
    environment:
      - APP_ENV=production
      - DEBUG=false
      - DATABASE_URL=${DATABASE_URL}
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8000/health"]
      interval: 30s
      timeout: 10s
      retries: 3
    logging:
      driver: json-file
      options:
        max-size: "10m"
        max-file: "3"

  db:
    # In production, typically use a managed database
    # This is just for demonstration
    deploy:
      resources:
        limits:
          cpus: '1.0'
          memory: 1G
EOF
# 7. Create environment files
cat > .env.dev << 'EOF'
APP_ENV=development
DEBUG=true
DATABASE_URL=postgresql://user:pass@db:5432/dev
EOF
cat > .env.prod << 'EOF'
APP_ENV=production
DEBUG=false
DATABASE_URL=postgresql://user:prodpass@db:5432/prod
VERSION=1.0.0
EOF
# 8. Run development environment (default)
echo "=== Starting Development Environment ==="
docker compose up -d --build
sleep 5
echo "Development config:"
curl -s http://localhost:8000 | jq .
docker compose down
# 9. Run production environment
echo "=== Starting Production Environment ==="
docker compose -f docker-compose.yml -f docker-compose.prod.yml --env-file .env.prod up -d --build
sleep 5
echo "Production config:"
curl -s http://localhost:8000 | jq .
# 10. Clean up
docker compose -f docker-compose.yml -f docker-compose.prod.yml down -v
Expected Output#
Development:
{
    "environment": "development",
    "debug": "true",
    "features": {
        "hot_reload": "true",
        "debug_toolbar": "true"
    }
}
Production:
{
    "environment": "production",
    "debug": "false",
    "features": {
        "hot_reload": "false",
        "debug_toolbar": "false"
    }
}
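Compose merges the base file with override files key by key: single-value options are replaced by the later file, while multi-value sections such as `environment` are merged entry by entry with the override winning on conflicts. A simplified sketch of that rule for `environment` lists (an approximation for intuition, not Compose's full merge algorithm):

```python
def merge_environment(base: list[str], override: list[str]) -> dict[str, str]:
    # "KEY=value" entries; entries from later files win on duplicate keys.
    merged = {}
    for entry in base + override:
        key, _, value = entry.partition("=")
        merged[key] = value
    return merged

# Values taken from the base and override files above.
base = ["APP_ENV=production", "DATABASE_URL=postgresql://user:pass@db:5432/prod"]
override = ["APP_ENV=development", "DEBUG=true"]
print(merge_environment(base, override))
```

Running this shows `APP_ENV` replaced by the override while `DATABASE_URL` survives from the base, which matches what the development environment reports.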
Exercise 5: Scaling and Load Balancing#
Objective: Scale services and implement basic load balancing with nginx.
Skills Practiced:
Service scaling
Load balancer configuration
Round-robin distribution
Steps#
# 1. Create project directory
mkdir scaling-demo
cd scaling-demo
# 2. Create FastAPI app that shows instance ID
cat > app.py << 'EOF'
from fastapi import FastAPI
import socket

app = FastAPI()
instance_id = socket.gethostname()
request_count = 0

@app.get("/")
def index():
    global request_count
    request_count += 1
    return {
        "instance": instance_id,
        "requests_handled": request_count
    }

@app.get("/health")
def health():
    return {"status": "healthy", "instance": instance_id}
EOF
cat > pyproject.toml << 'EOF'
[project]
name = "scaling-demo"
version = "0.1.0"
requires-python = ">=3.11"
dependencies = [
    "fastapi==0.109.0",
    "uvicorn[standard]==0.27.0",
]
EOF
cat > Dockerfile << 'EOF'
FROM python:3.11-slim
WORKDIR /app
COPY --from=ghcr.io/astral-sh/uv:latest /uv /usr/local/bin/uv
COPY pyproject.toml .
RUN uv sync
ENV PATH="/app/.venv/bin:$PATH"
COPY app.py .
EXPOSE 8000
CMD ["uvicorn", "app:app", "--host", "0.0.0.0", "--port", "8000"]
EOF
# 3. Create nginx configuration
cat > nginx.conf << 'EOF'
upstream backend {
    server app:8000;
}

server {
    listen 80;

    location / {
        proxy_pass http://backend;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    location /health {
        access_log off;
        proxy_pass http://backend/health;
    }
}
EOF
# 4. Create docker-compose.yml
cat > docker-compose.yml << 'EOF'
services:
  nginx:
    image: nginx:alpine
    ports:
      - "80:80"
    volumes:
      - ./nginx.conf:/etc/nginx/conf.d/default.conf:ro
    depends_on:
      - app

  app:
    build: .
    expose:
      - "8000"
    deploy:
      replicas: 3
EOF
# 5. Start with multiple replicas
docker compose up -d --build --scale app=3
# 6. Wait for startup
sleep 10
# 7. Check running instances
docker compose ps
# 8. Test load balancing
echo "=== Testing Load Balancing ==="
for i in {1..10}; do
    curl -s http://localhost/ | jq -r '.instance'
    sleep 0.5
done
# 9. View all instance responses
echo ""
echo "=== Full responses ==="
for i in {1..6}; do
    curl -s http://localhost/ | jq .
done
# 10. Scale up dynamically
echo ""
echo "=== Scaling to 5 replicas ==="
docker compose up -d --scale app=5
sleep 5
docker compose ps
# 11. Scale down
echo ""
echo "=== Scaling to 2 replicas ==="
docker compose up -d --scale app=2
docker compose ps
# 12. Clean up
docker compose down
Expected Output#
=== Testing Load Balancing ===
abc123def456
xyz789ghi012
abc123def456
xyz789ghi012
...
The instance IDs rotate, showing requests are distributed across containers.
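nginx's default upstream algorithm is round-robin over the resolved backends, so with N replicas each instance handles every Nth request. The rotation can be modelled in a few lines (the hostnames are illustrative placeholders, not real container IDs):

```python
from itertools import cycle
from collections import Counter

# Three replicas, as started with --scale app=3.
backends = ["abc123def456", "xyz789ghi012", "pqr345stu678"]
round_robin = cycle(backends)

# Send 9 requests through the "load balancer" and count who handled each.
handled = Counter(next(round_robin) for _ in range(9))
print(handled)  # every instance handled exactly 3 of the 9 requests
```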
Exercise 6: Complete Development Workflow#
Objective: Set up a complete development workflow with logs, debugging, and database management.
Skills Practiced:
Development tools integration
Log aggregation
Database management UI
Debugging workflow
Steps#
# 1. Create project directory
mkdir dev-workflow
cd dev-workflow
# 2. Create FastAPI application
cat > app.py << 'EOF'
from fastapi import FastAPI
from contextlib import asynccontextmanager
import redis.asyncio as redis
import asyncpg
import os

db_pool = None
redis_client = None

@asynccontextmanager
async def lifespan(app: FastAPI):
    global db_pool, redis_client
    db_pool = await asyncpg.create_pool(os.getenv("DATABASE_URL"))
    redis_client = redis.from_url(os.getenv("REDIS_URL"), decode_responses=True)
    yield
    await db_pool.close()
    await redis_client.close()

app = FastAPI(title="Dev Workflow Demo", lifespan=lifespan)

@app.get("/")
async def root():
    return {"message": "Development workflow demo"}

@app.get("/health")
async def health():
    try:
        await redis_client.ping()
        async with db_pool.acquire() as conn:
            await conn.fetchval("SELECT 1")
        return {"status": "healthy", "postgres": "ok", "redis": "ok"}
    except Exception as e:
        return {"status": "unhealthy", "error": str(e)}
EOF
cat > pyproject.toml << 'EOF'
[project]
name = "dev-workflow"
version = "0.1.0"
requires-python = ">=3.11"
dependencies = [
    "fastapi==0.109.0",
    "uvicorn[standard]==0.27.0",
    "redis==5.0.0",
    "asyncpg==0.29.0",
]
EOF
cat > Dockerfile << 'EOF'
FROM python:3.11-slim
WORKDIR /app
RUN apt-get update && apt-get install -y curl && rm -rf /var/lib/apt/lists/*
COPY --from=ghcr.io/astral-sh/uv:latest /uv /usr/local/bin/uv
COPY pyproject.toml .
RUN uv sync
ENV PATH="/app/.venv/bin:$PATH"
COPY app.py .
EXPOSE 8000
CMD ["uvicorn", "app:app", "--host", "0.0.0.0", "--port", "8000", "--reload"]
EOF
# 3. Create comprehensive docker-compose.yml
cat > docker-compose.yml << 'EOF'
services:
  # Main application
  app:
    build: .
    ports:
      - "8000:8000"
    environment:
      - DATABASE_URL=postgresql://dev:devpass@postgres:5432/devdb
      - REDIS_URL=redis://redis:6379/0
    volumes:
      - ./app.py:/app/app.py  # Hot reload
    depends_on:
      postgres:
        condition: service_healthy
      redis:
        condition: service_healthy
    restart: unless-stopped

  # PostgreSQL
  postgres:
    image: postgres:16-alpine
    environment:
      - POSTGRES_USER=dev
      - POSTGRES_PASSWORD=devpass
      - POSTGRES_DB=devdb
    volumes:
      - postgres_data:/var/lib/postgresql/data
    ports:
      - "5432:5432"
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U dev -d devdb"]
      interval: 5s
      timeout: 3s
      retries: 5

  # Redis
  redis:
    image: redis:7-alpine
    command: redis-server --appendonly yes
    volumes:
      - redis_data:/data
    ports:
      - "6379:6379"
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 5s
      timeout: 3s
      retries: 5

  # Database Admin UI
  adminer:
    image: adminer
    ports:
      - "8080:8080"
    depends_on:
      - postgres
    profiles:
      - tools

  # Redis Commander (Redis UI)
  redis-commander:
    image: rediscommander/redis-commander:latest
    environment:
      - REDIS_HOSTS=local:redis:6379
    ports:
      - "8081:8081"
    depends_on:
      - redis
    profiles:
      - tools

volumes:
  postgres_data:
  redis_data:
EOF
# 4. Start main services
docker compose up -d --build
# 5. Start development tools
docker compose --profile tools up -d
# 6. Display access information
echo "
=== Development Environment Ready ===

Services:
  - API:             http://localhost:8000
  - Health:          http://localhost:8000/health
  - Docs (Swagger):  http://localhost:8000/docs
  - ReDoc:           http://localhost:8000/redoc

Database Tools:
  - Adminer:         http://localhost:8080
      System:   PostgreSQL
      Server:   postgres
      Username: dev
      Password: devpass
      Database: devdb
  - Redis Commander: http://localhost:8081

Commands:
  - View logs:           docker compose logs -f app
  - Enter app container: docker compose exec app bash
  - Enter postgres:      docker compose exec postgres psql -U dev -d devdb
  - Enter redis:         docker compose exec redis redis-cli
"
# 7. Useful development commands
echo "=== Testing API ==="
curl -s http://localhost:8000 | jq .
curl -s http://localhost:8000/health | jq .
# 8. Database operations
echo ""
echo "=== Direct Database Access ==="
docker compose exec postgres psql -U dev -d devdb -c "\dt"
# 9. Redis operations
echo ""
echo "=== Direct Redis Access ==="
docker compose exec redis redis-cli KEYS "*"
docker compose exec redis redis-cli INFO memory | head -5
# 10. Tail logs
echo ""
echo "=== Application Logs (Ctrl+C to exit) ==="
docker compose logs -f app --tail 10
# Cleanup command (run manually)
# docker compose --profile tools down -v
Verification Checklist#
All services running and healthy
FastAPI Swagger docs accessible at /docs
Adminer accessible and can connect to PostgreSQL
Redis Commander showing Redis data
Hot reload working (change app.py and test)
Direct database/redis access working
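The `/health` endpoint pattern above (probe each dependency, report per-service status) generalizes to any number of backends. A synchronous sketch with stubbed probes; `aggregate_health` and the probe names are illustrative, not part of the app above:

```python
def aggregate_health(probes: dict) -> dict:
    # probes: name -> zero-arg callable that raises on failure,
    # analogous to redis_client.ping() or "SELECT 1" in the app.
    status = {"status": "healthy"}
    for name, probe in probes.items():
        try:
            probe()
            status[name] = "ok"
        except Exception as exc:
            status[name] = f"error: {exc}"
            status["status"] = "unhealthy"  # any failing dependency flips overall status
    return status

def redis_ok():
    pass  # a successful ping returns without raising

def postgres_down():
    raise ConnectionError("connection refused")

print(aggregate_health({"redis": redis_ok, "postgres": postgres_down}))
```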
Summary Checklist#
After completing all exercises, verify you can:
Write docker-compose.yml from scratch
Configure multi-service applications with FastAPI
Set up service dependencies with health checks
Configure volumes for data persistence
Use environment files and overrides
Scale services horizontally
Configure nginx as load balancer
Set up development tools (Adminer, Redis Commander)
Additional Challenges#
Add monitoring: Integrate Prometheus and Grafana for metrics
Log aggregation: Add Loki for centralized logging
CI/CD integration: Create GitHub Actions workflow to build and test
Kubernetes migration: Convert docker-compose.yml to Kubernetes manifests