Docker — Containers Explained
Ship Your App in a Box
Real-world analogy
Remember when you shared a school project and your friend said 'it doesn't work on my computer'? Docker solves this. It puts your app in a shipping container (image) that works EXACTLY the same on ANY computer. No more 'works on my machine'!
What is it?
Docker packages your application and all its dependencies into a standardized container. A container is like a lightweight virtual machine that runs consistently on any system with Docker installed.
Real-world relevance
Containers are everywhere in industry: Google launches billions of containers every week, and Spotify, PayPal, and Uber all use Docker for deployment. It has become a standard part of modern development.
Key points
- Containers — A Docker container is a lightweight, isolated environment that packages your application with everything it needs to run: code, runtime, libraries, and system tools. Unlike virtual machines, containers share the host OS kernel so they start in seconds and use minimal resources. Each container is isolated from others.
- Images — A Docker image is the blueprint for creating containers — like a recipe that produces the exact same dish every time. Images are built from a Dockerfile that specifies the base OS, your application code, dependencies, and startup command. Share images via Docker Hub so anyone can run your app identically.
- Docker Compose — Docker Compose lets you define and run multi-container applications with a single YAML file. Your app needs Node.js, MongoDB, and Redis? Define all three services in docker-compose.yml and run docker compose up to start everything together with proper networking. One command replaces complex setup instructions.
- Consistent Environments — Docker ensures development, staging, and production run the exact same containerized environment. If your app works in the Docker container on your laptop, it will work the same way on the production server. This eliminates the entire class of 'works on my machine' bugs caused by environment differences.
- Layering — Docker images are built in layers: each instruction in your Dockerfile creates a new layer. When you rebuild, Docker caches unchanged layers and only rebuilds what changed. Put rarely-changing layers (like npm install) before frequently-changing ones (like copying source code) for dramatically faster builds.
- Image Size Matters — Smaller images download faster and use less disk space. Start with alpine-based images (a tiny 5MB Linux distro instead of 900MB Ubuntu). Use multi-stage builds to compile in one stage and copy only the production artifacts to a minimal final image. Add a .dockerignore file to exclude node_modules and .git.
- Volumes for Persistent Data — Containers are ephemeral — when they stop, their data disappears. Docker volumes persist data outside the container lifecycle. Mount a volume for your MongoDB data directory so database records survive container restarts. Volumes also enable sharing data between containers and backing up important data.
- Networking Between Containers — Docker Compose creates a private network where containers can communicate using service names as hostnames. Your Node.js app connects to MongoDB at mongodb://mongo:27017 where 'mongo' is the service name. Expose only the ports you need to the outside world, keeping internal services like databases safely hidden.
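As a companion to the image-size tip above, here is a minimal .dockerignore sketch. The exact entries depend on your project; these are typical assumptions for a Node app (comments must be on their own lines in this format):

```dockerignore
# .dockerignore — keep the build context (and image) small
# Dependencies are reinstalled inside the image anyway
node_modules
# Build output is regenerated during the image build
dist
# Version-control history isn't needed in the image
.git
# Never bake local secrets into an image
.env
*.log
```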
Code example
# Dockerfile — the recipe for your app container 📝
FROM node:20-alpine
WORKDIR /app
# Install dependencies first (cached layer!)
COPY package.json pnpm-lock.yaml ./
RUN npm install -g pnpm && pnpm install
# Copy source code
COPY . .
# Build the app
RUN pnpm build
# Expose port for documentation
EXPOSE 3000
# Start the app (always last!)
CMD ["node", "dist/main.js"]
# docker-compose.yml — orchestrate multiple containers 🎼
services:
  app:
    build: .
    ports:
      - "3000:3000"
    depends_on:
      - mongodb
      - redis
    environment:
      - DATABASE_URL=mongodb://mongodb:27017/myapp
      - REDIS_URL=redis://redis:6379
  mongodb:
    image: mongo:7
    ports:
      - "27017:27017"
    volumes:
      - mongo_data:/data/db # Data persists!
  redis:
    image: redis:7-alpine
    ports:
      - "6379:6379"
volumes:
  mongo_data:
# One command to start EVERYTHING: 🚀
# docker compose up -d
Line-by-line walkthrough
Dockerfile:
- FROM node:20-alpine — start from the official Node 20 image on tiny Alpine Linux.
- WORKDIR /app — run all following instructions inside /app.
- COPY package.json pnpm-lock.yaml ./ — copy only the dependency manifests first, so the install step below gets its own cacheable layer.
- RUN npm install -g pnpm && pnpm install — install dependencies; this layer is reused until the manifests change.
- COPY . . — copy the source code after the install, so editing code doesn't invalidate the dependency cache.
- RUN pnpm build — compile the app inside the image.
- EXPOSE 3000 — document which port the app listens on (this does not publish the port by itself).
- CMD ["node", "dist/main.js"] — the command a container runs when it starts; always last.
docker-compose.yml:
- services: — each entry under services becomes one container.
- app: — built from the local Dockerfile (build: .), with container port 3000 published to the host ("3000:3000").
- depends_on: — start mongodb and redis before app.
- environment: — the connection URLs use the service names (mongodb, redis) as hostnames on the private Compose network.
- mongodb: — the official mongo:7 image, with the named volume mongo_data mounted at /data/db so records survive restarts.
- redis: — the official redis:7-alpine image.
- volumes: mongo_data: — declares the named volume used above.
- docker compose up -d — builds and starts the whole stack in the background with one command.
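The DATABASE_URL and REDIS_URL values that Compose injects can be consumed in application code by parsing them as URLs. A minimal sketch in Node — the databaseHost helper and the localhost fallback are illustrative assumptions, not part of the Compose file above:

```javascript
// Resolve the database hostname from a Compose-injected env var.
// On the Compose network this is the service name ('mongodb');
// the localhost fallback is only for running outside Docker.
function databaseHost(env) {
  const url = new URL(env.DATABASE_URL || 'mongodb://localhost:27017/myapp');
  return url.hostname;
}

console.log(databaseHost({ DATABASE_URL: 'mongodb://mongodb:27017/myapp' })); // 'mongodb'
console.log(databaseHost({}));                                               // 'localhost'
```

The same code works unchanged inside and outside a container, which is exactly the point: configuration comes from the environment, not from the image.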
Spot the bug
FROM node:20-alpine
WORKDIR /app
COPY . .
RUN npm install
RUN npm run build
CMD ["node", "dist/main.js"]
Need a hint?
Think about Docker layer caching when files change...
Show answer
Copying all files before npm install means any code change invalidates the install cache. Fix: COPY package*.json first, run npm install, THEN copy source code.
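In Dockerfile form, the cache-friendly ordering from the answer looks like this (same app, with dependencies installed before the source copy):

```dockerfile
FROM node:20-alpine
WORKDIR /app
# Dependency manifests first: this layer stays cached until package*.json changes
COPY package*.json ./
RUN npm install
# Source last: editing code no longer invalidates the npm install layer
COPY . .
RUN npm run build
CMD ["node", "dist/main.js"]
```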
Explain like I'm 5
Docker is like a lunchbox for your app. At home, your sandwich is great. But if you bring it to school without a lunchbox, it gets squished! Docker puts your app in a special container so it works perfectly everywhere: your computer, a friend's computer, or a big server.
Fun fact
The Docker logo (a whale carrying containers) is named 'Moby Dock'. And Docker the company was originally called 'dotCloud' — it pivoted entirely to containers after its internal Docker tool proved more popular than the hosting product itself! 🐋
Hands-on challenge
Write a Dockerfile for a NestJS app that uses multi-stage builds: Stage 1 installs ALL dependencies and compiles TypeScript. Stage 2 copies ONLY the compiled JS and production node_modules. Compare the image sizes of single-stage vs multi-stage. Can you get the final image under 150MB? Hint: use node:20-alpine as the base.
More resources
- Docker Get Started (Docker Official)
- Docker in 100 Seconds (Fireship)
- Docker Curriculum