Docker for Web Developers: Getting Started with Containers

Every web developer has heard the phrase: “It works on my machine.” Docker all but eliminates this problem by packaging your application with all its dependencies into a container that runs the same way on any system. Whether you are building a WordPress site, a Node.js API, or a full-stack application with multiple services, Docker gives you consistent, reproducible environments from development through production.

This guide covers everything you need to start using Docker for web development: core concepts, writing Dockerfiles, composing multi-service applications, managing data with volumes, configuring networking, and deploying to production.

What Is Docker and Why Should You Care?

Docker is a platform for running applications in containers. A container is an isolated environment that has its own filesystem, network interfaces, and process tree, but shares the host operating system’s kernel. This makes containers dramatically lighter than virtual machines: they start in seconds, use minimal RAM, and you can run dozens on a single development machine. On macOS and Windows, where there is no Linux kernel to share, Docker Desktop transparently runs containers inside a lightweight Linux virtual machine.

For web developers specifically, Docker solves three persistent problems:

  • Environment consistency: Your application runs in the same environment on every developer’s machine, in CI/CD, and in production. No more debugging issues that only appear on one person’s laptop.
  • Dependency isolation: Different projects can use different versions of Node.js, PHP, Python, PostgreSQL, or any other dependency without conflicts. Project A uses Node 18, Project B uses Node 22, and they never interfere with each other.
  • Simplified onboarding: A new developer clones the repository, runs one command, and has a working development environment. No setup guide with 30 steps that are already outdated.

Core Concepts

Before writing any Docker configuration, understand these four foundational concepts:

  • Image: A read-only template that contains your application code, runtime, libraries, and system tools. Think of it as a snapshot of a configured system. Images are built from Dockerfiles and stored in registries like Docker Hub.
  • Container: A running instance of an image. You can run multiple containers from the same image, each with its own state. Containers are ephemeral by default: when they stop, any changes to their filesystem are lost unless you use volumes.
  • Dockerfile: A text file with instructions for building an image. Each instruction creates a layer in the image. Well-structured Dockerfiles produce small, fast-building images.
  • Docker Compose: A tool for defining and running multi-container applications. A single YAML file describes your web server, database, cache, and any other services, along with their configuration and connections.

Installing Docker

Docker Desktop is available for macOS, Windows, and Linux. It includes the Docker Engine, Docker CLI, Docker Compose, and a graphical dashboard. Download it from the official Docker website and follow the installation instructions for your operating system.

After installation, verify it works:

# Check Docker version
docker --version
# Docker version 27.x.x

# Check Docker Compose version
docker compose version
# Docker Compose version v2.x.x

# Run a test container
docker run hello-world

Note on licensing: Docker Desktop changed its licensing model in 2022 and requires a paid subscription for commercial use in organizations with more than 250 employees or more than $10 million in revenue. Docker Engine itself remains free and open source. For smaller teams and individual developers, Docker Desktop is free.

Writing Your First Dockerfile

A Dockerfile defines how to build your application image. Here is a practical example for a Node.js web application:

# Use an official Node.js runtime as the base image
FROM node:22-alpine

# Set the working directory inside the container
WORKDIR /app

# Copy dependency files first (for better layer caching)
COPY package.json package-lock.json ./

# Install production dependencies only
RUN npm ci --omit=dev

# Copy the rest of the application code
COPY . .

# Expose the port the app listens on
EXPOSE 3000

# Define the command to start the application
CMD ["node", "server.js"]

Building and Running

# Build the image and tag it
docker build -t my-web-app .

# Run a container from the image
docker run -p 3000:3000 my-web-app

# Run in detached mode (background)
docker run -d -p 3000:3000 --name web my-web-app

# View running containers
docker ps

# View container logs
docker logs web

# Stop the container
docker stop web

Dockerfile Best Practices

  • Use specific base image tags: node:22-alpine is better than node:latest. Pinned versions prevent unexpected breakages when the base image updates.
  • Order instructions for layer caching: Copy dependency files and install dependencies before copying application code. Dependencies change less frequently than your code, so Docker can cache those layers and skip reinstalling on every build.
  • Use Alpine-based images: Alpine Linux images are often 5 to 10 times smaller than their Debian-based equivalents. Smaller images mean faster builds, faster pulls, and a smaller attack surface. Be aware that Alpine uses musl libc, which can occasionally break native modules; the slim Debian variants (such as node:22-slim) are a good fallback.
  • Use multi-stage builds for production: Build your application in one stage with all build tools, then copy only the compiled output to a minimal final image.

Multi-Stage Build Example

# Stage 1: Build
FROM node:22-alpine AS builder
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci
COPY . .
RUN npm run build

# Stage 2: Production
FROM node:22-alpine
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci --omit=dev
COPY --from=builder /app/dist ./dist
EXPOSE 3000
CMD ["node", "dist/server.js"]

The final image contains only the compiled output and production dependencies, not the source code, build tools, or development dependencies. This can reduce image size by 50 percent or more.

Docker Compose for Multi-Service Development

Most web applications involve multiple services: a web server, a database, maybe a cache or a message queue. Docker Compose lets you define all of these in a single file and manage them with one command.

Full-Stack Compose Example

services:
  web:
    build: .
    ports:
      - "3000:3000"
    volumes:
      - .:/app
      - /app/node_modules
    environment:
      - DATABASE_URL=postgres://user:password@db:5432/myapp
      - REDIS_URL=redis://cache:6379
    depends_on:
      db:
        condition: service_healthy
      cache:
        condition: service_started

  db:
    image: postgres:16-alpine
    environment:
      POSTGRES_USER: user
      POSTGRES_PASSWORD: password
      POSTGRES_DB: myapp
    volumes:
      - db_data:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U user -d myapp"]
      interval: 5s
      timeout: 5s
      retries: 5

  cache:
    image: redis:7-alpine
    volumes:
      - redis_data:/data

volumes:
  db_data:
  redis_data:

Essential Compose Commands

# Start all services in the background
docker compose up -d

# View logs from all services
docker compose logs -f

# View logs from a specific service
docker compose logs -f web

# Stop all services
docker compose down

# Stop and remove volumes (reset database)
docker compose down -v

# Rebuild images after Dockerfile changes
docker compose up -d --build

# Execute a command inside a running container
docker compose exec web npm run migrate

Volumes: Managing Persistent Data

Containers are ephemeral. When a container stops, any data written to its filesystem disappears. Volumes solve this by providing persistent storage that survives container restarts and recreation.

Types of Volumes

  • Named volumes: Managed by Docker, stored in Docker’s data directory. Use these for database data, uploaded files, and anything that needs to persist across container restarts. In the Compose example above, db_data is a named volume.
  • Bind mounts: Map a host directory directly into the container. Use these for development to sync your source code into the container. In the Compose example, .:/app is a bind mount that makes your local code available inside the container.

The node_modules Trick

Notice the /app/node_modules anonymous volume in the Compose example. This prevents the bind mount from overwriting the container’s node_modules directory with your host’s node_modules (which may be built for a different platform). The container keeps its own copy of dependencies while still syncing your source code.
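The relevant excerpt from the Compose file above, with the role of each mount spelled out in comments:

```yaml
services:
  web:
    volumes:
      - .:/app            # bind mount: host source code syncs into the container
      - /app/node_modules # anonymous volume: shadows this path inside the bind
                          # mount, preserving dependencies installed in the image
```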

Networking Between Containers

Docker Compose creates a default network for each project. All services defined in the same Compose file can communicate using their service names as hostnames. In the example above, the web service connects to the database using db:5432 and to Redis using cache:6379. No IP addresses, no complex networking configuration.
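To illustrate with a small Node.js sketch: the connection string the Compose file injects into the web service parses into a plain hostname, which Docker’s embedded DNS resolves to the db container at connect time (the fallback URL here is illustrative):

```javascript
// Parse the connection string provided by Docker Compose. Inside the Compose
// network, the hostname "db" resolves to the Postgres container via Docker's
// embedded DNS; no IP address is ever hardcoded.
const raw = process.env.DATABASE_URL ?? 'postgres://user:password@db:5432/myapp';
const dbUrl = new URL(raw);

console.log(dbUrl.hostname); // "db" -- the Compose service name
console.log(dbUrl.port);     // "5432"
```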

If you need to connect containers from different Compose projects, create an external network:

# In docker-compose.yml
networks:
  shared:
    external: true
    name: my-shared-network

services:
  web:
    networks:
      - shared
      - default

Docker for Common Web Stacks

WordPress Development

services:
  wordpress:
    image: wordpress:6-apache
    ports:
      - "8080:80"
    environment:
      WORDPRESS_DB_HOST: db
      WORDPRESS_DB_USER: wp
      WORDPRESS_DB_PASSWORD: wp_password
      WORDPRESS_DB_NAME: wordpress
    volumes:
      - ./wp-content/themes/my-theme:/var/www/html/wp-content/themes/my-theme
  db:
    image: mariadb:11
    environment:
      MYSQL_ROOT_PASSWORD: root_password
      MYSQL_DATABASE: wordpress
      MYSQL_USER: wp
      MYSQL_PASSWORD: wp_password
    volumes:
      - db_data:/var/lib/mysql
volumes:
  db_data:

PHP/Laravel Development

services:
  app:
    build: .
    volumes:
      - .:/var/www/html
    depends_on:
      - db
      - cache
  nginx:
    image: nginx:alpine
    ports:
      - "8080:80"
    volumes:
      - ./nginx.conf:/etc/nginx/conf.d/default.conf
      - .:/var/www/html
  db:
    image: mysql:8
    environment:
      MYSQL_DATABASE: laravel
      MYSQL_ROOT_PASSWORD: secret
    volumes:
      - db_data:/var/lib/mysql
  cache:
    image: redis:7-alpine
volumes:
  db_data:

From Development to Production

The same Docker images you build for development can run in production, with a few important adjustments:

  • Do not use bind mounts: In production, application code is baked into the image. There is no host filesystem to mount.
  • Use environment variables for configuration: Database credentials, API keys, and feature flags should come from environment variables, not hardcoded values.
  • Set resource limits: In production, set memory and CPU limits on containers to prevent a single runaway process from consuming all server resources.
  • Use a proper orchestrator: For production deployments beyond a single server, Docker Swarm or Kubernetes manages scaling, load balancing, and automatic recovery.
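As a sketch of these adjustments, a production override file might pin a CI-built image (no bind mounts) and cap resources. The registry, tag, and limits below are illustrative; recent Compose versions apply deploy.resources limits even outside Swarm:

```yaml
# docker-compose.prod.yml (hypothetical): run the image built by CI,
# with no bind mounts and explicit resource limits.
services:
  web:
    image: registry.example.com/my-web-app:1.4.2
    environment:
      - DATABASE_URL   # passed through from the host environment, never hardcoded
    deploy:
      resources:
        limits:
          cpus: "1.0"
          memory: 512M
```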

Pair Docker with a CI/CD pipeline that builds images on every merge, runs tests inside containers, and pushes verified images to a container registry. The production deployment then pulls and runs the tested image with zero manual intervention.
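A minimal sketch of such a pipeline, assuming GitHub Actions syntax; the registry name and secret are illustrative placeholders:

```yaml
# .github/workflows/build.yml (hypothetical)
name: build-and-push
on:
  push:
    branches: [main]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build image
        run: docker build -t registry.example.com/my-web-app:${{ github.sha }} .
      - name: Run tests inside the container
        run: docker run --rm registry.example.com/my-web-app:${{ github.sha }} npm test
      - name: Push verified image
        run: |
          echo "${{ secrets.REGISTRY_TOKEN }}" | docker login registry.example.com -u ci --password-stdin
          docker push registry.example.com/my-web-app:${{ github.sha }}
```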

Troubleshooting Common Issues

  • Port conflicts: If docker compose up fails with “port already in use,” another process is using that port on your host. Find it with lsof -i :3000 and stop it, or change the host port in your Compose file.
  • Permission issues on Linux: Files created inside containers may be owned by root on the host. Add a non-root user in your Dockerfile with matching UID/GID, or use user: in your Compose service.
  • Slow file sync on macOS: Docker Desktop on macOS uses a filesystem bridge that can be slow for projects with many files. Make sure you are using Docker’s VirtioFS file sharing backend (the default in recent Docker Desktop versions); on older versions, the :cached or :delegated volume consistency options can help.
  • Out of disk space: Docker accumulates unused images, stopped containers, and orphaned volumes over time. Run docker system prune -a periodically to reclaim space; note that -a also removes images not currently used by a container, so the next build or pull may take longer.
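For the Linux permission issue above, the simplest fix with the official Node images is to run as their built-in non-root user, whose UID (1000) matches the default first user on most Linux distributions. A sketch:

```dockerfile
FROM node:22-alpine
WORKDIR /app
# The official Node images ship a non-root "node" user with UID/GID 1000.
# Files it creates in a bind mount will be owned by the matching host user.
RUN chown node:node /app
USER node
```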

Alternatives to Docker Desktop

While Docker remains the standard, alternatives exist. Podman is a daemonless, rootless container engine that is compatible with Docker commands and Dockerfiles. OrbStack is a lightweight Docker Desktop alternative for macOS that uses fewer resources. Colima provides Docker-compatible container runtimes on macOS using Lima. All of these run the same container images, so your Dockerfiles and Compose files work across tools.

Frequently Asked Questions

Does Docker replace understanding my server platform?

No. Docker packages your application and its dependencies, but you still need to understand how Node.js, PHP, or Python works, how your database is configured, and how web servers handle requests. Docker makes the deployment consistent, but it does not abstract away the need for platform knowledge.

Is Docker overkill for small projects?

Even for small projects, Docker provides value through environment consistency and simplified setup. A docker-compose.yml file that spins up your app and database takes five minutes to write and saves hours of “it works on my machine” debugging over the life of a project. The overhead is minimal; the benefits are immediate.
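For reference, a minimal Compose file of that kind can be this short (service names and credentials are illustrative):

```yaml
services:
  app:
    build: .
    ports:
      - "3000:3000"
    environment:
      - DATABASE_URL=postgres://user:password@db:5432/app
  db:
    image: postgres:16-alpine
    environment:
      POSTGRES_USER: user
      POSTGRES_PASSWORD: password
      POSTGRES_DB: app
    volumes:
      - db_data:/var/lib/postgresql/data
volumes:
  db_data:
```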

How does Docker fit into a modern development workflow?

Docker sits at the foundation of the development and deployment pipeline. Developers run applications in containers locally. CI/CD builds and tests inside containers. Production runs the same container images. This consistency from development through deployment eliminates an entire category of bugs: environment-related failures. Combined with a good code editor and version control, Docker completes the core infrastructure every web developer needs.

Can I use Docker with hot reload during development?

Yes. Bind-mount your source code into the container and use your framework’s development server with hot reload enabled. Changes you make on your host are instantly visible inside the container. The Compose example in this article demonstrates this pattern with the .:/app volume mount. Most modern frameworks support this workflow out of the box.
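A sketch of the pattern for a Node.js service, assuming nodemon as the file watcher (the command override is illustrative):

```yaml
services:
  web:
    build: .
    command: npx nodemon server.js  # dev-only override; restarts on file changes
    ports:
      - "3000:3000"
    volumes:
      - .:/app            # host code syncs into the container
      - /app/node_modules # keep the container's own dependencies
```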

The .dockerignore File

Just as .gitignore prevents unwanted files from entering your repository, .dockerignore prevents unwanted files from being copied into your Docker image. Without it, COPY . . sends everything in your project directory to the Docker daemon, including node_modules, .git, test files, documentation, and local environment files.

# .dockerignore
node_modules
.git
.gitignore
.env
.env.*
docker-compose.yml
Dockerfile
README.md
.vscode
coverage
tests
*.md

A proper .dockerignore reduces build context size, speeds up image builds, and prevents secrets like .env files from accidentally being baked into your images. Create one for every project that uses Docker.

Docker has become the standard infrastructure tool for web development. Start by containerizing your current project: write a Dockerfile, create a Compose file for your services, and run docker compose up. Once you experience the consistency and simplicity of container-based development, you will never go back to installing dependencies directly on your host machine.