Docker solves the “it works on my machine” problem by packaging your application and all its dependencies into a standardised container that runs identically everywhere. This guide takes you from zero Docker knowledge to deploying a multi-container application with a database, API server, and persistent storage.
Installing Docker
Install Docker Engine on your development machine:
# Ubuntu / Debian
sudo apt-get update
sudo apt-get install -y ca-certificates curl gnupg
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | \
sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
echo "deb [arch=$(dpkg --print-architecture) \
signed-by=/etc/apt/keyrings/docker.gpg] \
https://download.docker.com/linux/ubuntu \
$(. /etc/os-release && echo $VERSION_CODENAME) stable" | \
sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update
sudo apt-get install -y docker-ce docker-ce-cli containerd.io docker-compose-plugin
# Add your user to the docker group (avoids needing sudo)
sudo usermod -aG docker $USER
newgrp docker
# Verify installation
docker --version
docker compose version
On Windows or macOS, install Docker Desktop, which bundles Docker Engine, the Docker CLI, and Docker Compose.
Your First Dockerfile
A Dockerfile is a recipe that tells Docker how to build an image. Let us containerize a Node.js Express application:
// src/index.js
const express = require('express');
const { Pool } = require('pg');

const app = express();
app.use(express.json());

const pool = new Pool({
  host: process.env.DB_HOST || 'localhost',
  port: parseInt(process.env.DB_PORT || '5432'),
  database: process.env.DB_NAME || 'myapp',
  user: process.env.DB_USER || 'postgres',
  password: process.env.DB_PASSWORD || 'postgres',
});

// Initialize database table
async function initDb() {
  await pool.query(`
    CREATE TABLE IF NOT EXISTS tasks (
      id SERIAL PRIMARY KEY,
      title VARCHAR(255) NOT NULL,
      completed BOOLEAN DEFAULT false,
      created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
    )
  `);
}

app.get('/tasks', async (req, res) => {
  const { rows } = await pool.query(
    'SELECT * FROM tasks ORDER BY created_at DESC'
  );
  res.json(rows);
});

app.post('/tasks', async (req, res) => {
  const { title } = req.body;
  if (!title) return res.status(400).json({ error: 'title is required' });
  const { rows } = await pool.query(
    'INSERT INTO tasks (title) VALUES ($1) RETURNING *',
    [title]
  );
  res.status(201).json(rows[0]);
});

app.patch('/tasks/:id', async (req, res) => {
  const { id } = req.params;
  const { completed } = req.body;
  const { rows } = await pool.query(
    'UPDATE tasks SET completed = $1 WHERE id = $2 RETURNING *',
    [completed, id]
  );
  if (rows.length === 0) return res.status(404).json({ error: 'Not found' });
  res.json(rows[0]);
});

app.delete('/tasks/:id', async (req, res) => {
  await pool.query('DELETE FROM tasks WHERE id = $1', [req.params.id]);
  res.status(204).send();
});

const PORT = process.env.PORT || 3000;
initDb()
  .then(() => {
    app.listen(PORT, () => console.log(`API running on port ${PORT}`));
  })
  .catch((err) => {
    // Fail fast instead of leaving an unhandled rejection if the database is unreachable
    console.error('Failed to initialize database:', err);
    process.exit(1);
  });
Now the Dockerfile:
# Dockerfile
# Stage 1: Install dependencies
FROM node:20-alpine AS deps
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci --omit=dev
# Stage 2: Production image
FROM node:20-alpine AS runner
WORKDIR /app
# Create non-root user for security
RUN addgroup -S appgroup && adduser -S appuser -G appgroup
# Copy dependencies from the deps stage
COPY --from=deps /app/node_modules ./node_modules
COPY src/ ./src/
COPY package.json ./
# Switch to non-root user
USER appuser
# Expose port and define the startup command
EXPOSE 3000
ENV NODE_ENV=production
CMD ["node", "src/index.js"]
This uses a multi-stage build. The first stage installs dependencies, and the second stage copies only what is needed for production. The result is a smaller, more secure image. Running as a non-root user does not make container escapes impossible, but it sharply limits the damage an attacker can do if the application is compromised.
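One more file is worth adding before you build: `docker build` sends the entire project directory (the build context) to the Docker daemon, so without a .dockerignore the host's node_modules and Git history get shipped along and can leak into COPY instructions. A minimal sketch:

```
# .dockerignore
node_modules
npm-debug.log
.git
.env
Dockerfile
docker-compose*.yml
```

Excluding node_modules also means the image always uses the dependencies installed by `npm ci` inside the container, never a stale copy from the host.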
Build and run the image:
# Build the image
docker build -t my-task-api:1.0 .
# List images
docker images
# Run the container (won't work yet - no database)
docker run -p 3000:3000 --name task-api my-task-api:1.0
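To make that last command succeed without Compose, you can run PostgreSQL yourself on a user-defined bridge network, on which containers resolve each other by name. This is a sketch: the names task-net and task-db are arbitrary, and the credentials match the app's fallback defaults.

```shell
# Remove the failed container from the previous attempt
docker rm -f task-api

# Create a user-defined network so containers can reach each other by name
docker network create task-net

# Start PostgreSQL with the credentials the app falls back to
docker run -d --name task-db --network task-net \
  -e POSTGRES_DB=myapp \
  -e POSTGRES_USER=postgres \
  -e POSTGRES_PASSWORD=postgres \
  postgres:16-alpine

# Give PostgreSQL a moment to initialize before the API tries to connect
sleep 5

# Point the API at the database container by service name
docker run -d -p 3000:3000 --name task-api --network task-net \
  -e DB_HOST=task-db \
  my-task-api:1.0
```

This works, but the manual ordering and the `sleep` are exactly the kind of fragile glue that Docker Compose replaces in the next section.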
Docker Compose: Multi-Container Applications
Real applications need multiple services. Docker Compose lets you define and run them together. Create a docker-compose.yml:
# docker-compose.yml
services:
  api:
    build: .
    ports:
      - "3000:3000"
    environment:
      DB_HOST: postgres
      DB_PORT: 5432
      DB_NAME: taskdb
      DB_USER: taskuser
      DB_PASSWORD: secretpassword
    depends_on:
      postgres:
        condition: service_healthy
    networks:
      - app-network
    restart: unless-stopped

  postgres:
    image: postgres:16-alpine
    environment:
      POSTGRES_DB: taskdb
      POSTGRES_USER: taskuser
      POSTGRES_PASSWORD: secretpassword
    volumes:
      - pgdata:/var/lib/postgresql/data
    ports:
      - "5432:5432"
    networks:
      - app-network
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U taskuser -d taskdb"]
      interval: 5s
      timeout: 5s
      retries: 5
    restart: unless-stopped

volumes:
  pgdata:
    driver: local

networks:
  app-network:
    driver: bridge
Let us break down the key concepts:
- depends_on with condition: The API waits until PostgreSQL passes its health check before starting.
- volumes (pgdata): Named volumes persist data across container restarts. Without this, your database would lose all data when the container stops.
- networks (app-network): Services on the same network can reach each other by service name. The API connects to postgres (the service name), not localhost.
- healthcheck: Docker periodically runs the check command. Other services can wait for a healthy status.
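Only the postgres service defines a healthcheck above; you can give the api one too, so that `docker compose ps` and downstream dependencies can see whether the API is actually serving requests. A sketch, assuming the node:20-alpine image's BusyBox wget is available (it ships with Alpine); wget exits non-zero on an HTTP error, so this check also fails if the API cannot reach the database:

```yaml
# Added under the api service in docker-compose.yml
  api:
    # ... existing settings ...
    healthcheck:
      test: ["CMD", "wget", "-qO-", "http://localhost:3000/tasks"]
      interval: 10s
      timeout: 3s
      retries: 5
```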
Run everything with a single command:
# Start all services in detached mode
docker compose up -d
# View logs
docker compose logs -f
# View running containers
docker compose ps
# Test the API
curl http://localhost:3000/tasks
curl -X POST http://localhost:3000/tasks \
-H "Content-Type: application/json" \
-d '{"title": "Learn Docker"}'
# Stop everything
docker compose down
# Stop and remove volumes (deletes database data)
docker compose down -v
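One refinement before sharing this setup: hard-coding secretpassword in docker-compose.yml is fine for a demo but risky anywhere else. Compose automatically reads a .env file in the project directory and substitutes `${VARIABLE}` references in the compose file, so you can generate a password once and keep it out of version control. A sketch, assuming you change both DB_PASSWORD and POSTGRES_PASSWORD in docker-compose.yml to `${DB_PASSWORD}`:

```shell
# Generate a random 32-character hex password and write it to .env
DB_PASSWORD="$(openssl rand -hex 16)"
printf 'DB_PASSWORD=%s\n' "$DB_PASSWORD" > .env

# In docker-compose.yml, reference it instead of the literal value:
#   DB_PASSWORD: ${DB_PASSWORD}
#   POSTGRES_PASSWORD: ${DB_PASSWORD}
```

Remember to add .env to .gitignore (and to .dockerignore) so the secret never lands in a commit or an image layer.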
Volumes: Persisting Data
Docker containers are ephemeral — when they stop, any data written inside them is lost. Volumes solve this. There are three types:
# Named volume (managed by Docker, best for databases)
volumes:
  - pgdata:/var/lib/postgresql/data

# Bind mount (maps a host directory, best for development)
volumes:
  - ./src:/app/src

# tmpfs mount (in-memory, best for sensitive temporary data)
tmpfs:
  - /app/tmp
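Because named volumes live outside any container, they are also easy to back up: mount the volume read-only into a throwaway container alongside a host directory and tar it. A sketch; note that Compose prefixes volume names with the project name, so the real volume may be called something like myproject_pgdata (check with `docker volume ls`):

```shell
# Archive the contents of the pgdata volume into the current directory
docker run --rm \
  -v pgdata:/data:ro \
  -v "$(pwd)":/backup \
  alpine tar czf /backup/pgdata-backup.tar.gz -C /data .
```

Restoring is the mirror image: mount an empty volume writable and extract the tarball into it.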
For development, bind mounts let you edit code on your host and see changes inside the container immediately. Here is a development override file:
# docker-compose.override.yml (auto-loaded in dev)
services:
  api:
    build:
      context: .
      target: deps # Build only the deps stage (it skips devDependencies, so npx fetches nodemon on first run)
    volumes:
      - ./src:/app/src # Live-reload source code
    environment:
      NODE_ENV: development
    command: npx nodemon src/index.js
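The deps stage installed only production dependencies, so nodemon is not in node_modules and npx has to download it every time the container starts fresh. A common fix, sketched here as an addition to the existing Dockerfile (it assumes nodemon is listed in your devDependencies), is a dedicated dev stage that installs everything:

```dockerfile
# Appended to the Dockerfile: a development stage
FROM node:20-alpine AS dev
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci                      # installs devDependencies (nodemon) as well
CMD ["npx", "nodemon", "src/index.js"]
```

With this stage in place, the override file can set target: dev and drop the explicit command: line, since the stage's CMD already runs nodemon.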
Networks: Container Communication
By default, Docker Compose creates a network for all services in the file. You can create custom networks to control which services can communicate:
# Example: Frontend can reach API, but not database directly
services:
  frontend:
    image: nginx:alpine
    networks:
      - frontend-net

  api:
    build: .
    networks:
      - frontend-net
      - backend-net

  postgres:
    image: postgres:16-alpine
    networks:
      - backend-net

networks:
  frontend-net:
    driver: bridge
  backend-net:
    driver: bridge
In this setup, the frontend can reach the API, and the API can reach PostgreSQL, but the frontend cannot connect directly to the database. This is a basic security boundary.
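Compose can tighten this further: marking a network as internal removes its route to the outside world, so containers attached only to it cannot make outbound connections at all. A sketch of the hardened networks block:

```yaml
networks:
  frontend-net:
    driver: bridge
  backend-net:
    driver: bridge
    internal: true # no external connectivity; only inter-container traffic
```

With backend-net internal, the database can still talk to the API, but it can never phone home, which blunts the impact of a compromised database container.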
Essential Docker Commands
Here are the commands you will use daily:
# Images
docker build -t myapp:1.0 . # Build image
docker images # List images
docker rmi myapp:1.0 # Remove image
docker image prune -a # Remove unused images
# Containers
docker run -d -p 3000:3000 myapp:1.0 # Run in background
docker ps # List running containers
docker ps -a # List all containers (including stopped)
docker stop <container_id> # Stop a container
docker rm <container_id> # Remove a container
docker logs -f <container_id> # Follow container logs
docker exec -it <container_id> sh # Open a shell inside a container
# Cleanup
docker system prune -a # Remove ALL unused data (careful!)
docker volume prune # Remove unused volumes
Conclusion
Docker transforms deployment from a manual, error-prone process into a repeatable, version-controlled workflow. You write a Dockerfile once, and your app runs identically on your laptop, your colleague's machine, CI/CD pipelines, and production servers. Start with single-container apps, graduate to Docker Compose for multi-service setups, and you will have a skill that is essential for modern development. Major Indian tech companies, including Flipkart, Razorpay, and Swiggy, run containers in their deployment pipelines, making Docker one of the highest-ROI skills you can learn.