Dockerfile Best Practices: Building Leaner, Faster, and More Secure Docker Images

Written By: Seahawk

Docker has become an indispensable tool in modern software development, enabling teams to build, ship, and run applications with unprecedented consistency and ease. At the core of this containerization magic lies the Dockerfile, a simple yet powerful script that acts as the blueprint for creating your Docker images.

Crafting an effective Dockerfile build process is more than just a technical exercise; it’s a pathway to faster development cycles, smaller and more secure application footprints, and more reliable deployments. Whether you’re new to Docker or looking to refine your skills, this guide will walk you through the essentials of writing practical, efficient, and secure Dockerfiles.

Understanding the Docker Ecosystem: A Quick Refresher

Before diving into Dockerfiles, let’s briefly touch upon key Docker concepts:

  • Images: An image is a lightweight, standalone, executable package that includes everything needed to run software, including the code, a runtime, libraries, environment variables, and config files. Images are immutable templates.
  • Containers: A container is a runnable instance of an image. You can create, start, stop, move, or delete containers. They provide isolated environments for your applications.
  • Dockerfile: This is our focus. A Dockerfile is a text document containing a sequence of commands Docker uses to assemble an image automatically.
  • Docker Hub/Registries: These are repositories for storing and sharing Docker images, similar to GitHub for code.

A well-written Dockerfile aims to produce an image that is as lean, fast to build, and secure as possible.

Docker Build Flags Explained

  • -t myapp:1.0: This flag tags your image with a name (myapp) and a version (1.0). Tagging helps with version control and makes it easier to refer to specific image builds later, especially when deploying or pushing to a registry.
  • . (dot): This refers to the build context, the directory where Docker should look for the Dockerfile and any other files required during the build (e.g., files to COPY into the image).
    Docker compresses and sends the contents of this directory to the Docker daemon. It’s important to note that only files inside the build context can be accessed during the build process.
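Putting the two together, a typical build invocation looks like this (the image name myapp is illustrative):

```bash
# Build an image from the Dockerfile in the current directory (.)
# and tag it as myapp:1.0
docker build -t myapp:1.0 .
```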

Essential Dockerfile Instructions: The Building Blocks


Let’s explore the most common instructions and how to use them effectively.

  1. FROM: Every Dockerfile must begin with a FROM instruction. It specifies the base image upon which your image will be built.
    • Purpose: To select a starting point, often an operating system (like Ubuntu:22.04) or a pre-configured application runtime (like node:18-alpine).
    • Example: FROM python:3.9-slim
    • Best Practice: Choose the minimal base image that meets your application’s needs. Alpine versions are tiny but use musl libc, which might have compatibility issues for some C-dependent packages. Slim versions are a good compromise, offering a stripped-down version of a standard distribution (like Debian) with glibc.
  2. WORKDIR: Sets the working directory for any subsequent RUN, CMD, ENTRYPOINT, COPY, and ADD instructions.
    • Purpose: To define the current directory context within the image for subsequent file operations and command executions. If the directory doesn’t exist, Docker creates it.
    • Example: WORKDIR /usr/src/app
    • Best Practice: Use absolute paths for WORKDIR. Changing directories using WORKDIR multiple times is generally cleaner than chaining cd commands within RUN instructions.
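To illustrate that recommendation, the WORKDIR form below is cleaner than chaining cd inside RUN (paths and commands are illustrative):

```Dockerfile
# Avoid: the cd only affects this single RUN layer
# RUN cd /usr/src/app && npm install

# Prefer: WORKDIR persists for all subsequent instructions
WORKDIR /usr/src/app
RUN npm install
```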
  3. COPY: Copies files or directories from your build context into the image’s filesystem.
    • Purpose: Add your application code, configuration files, and other necessary assets to the image.
    • Example:

Dockerfile

WORKDIR /usr/src/app
COPY package.json ./
COPY src/ ./src/

  • Best Practice: For simple file copying, prefer COPY over ADD. COPY is more transparent. ADD has extra features like URL downloading and tar extraction, which can be less predictable. For clarity and security, it’s often better to use RUN with curl or wget and tar if you need to download and extract.
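As a sketch of that recommendation, downloading and extracting with RUN keeps each step explicit (the URL is a placeholder, not a real release):

```Dockerfile
# Explicit download + extract + cleanup in a single layer,
# instead of relying on ADD's implicit tar handling
RUN curl -fsSL https://example.com/tool.tar.gz -o /tmp/tool.tar.gz \
    && tar -xzf /tmp/tool.tar.gz -C /usr/local/bin \
    && rm /tmp/tool.tar.gz
```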
  4. RUN: Executes commands in a new layer on top of the current image and commits the results. This is used for installing software, creating directories, compiling code, etc.
    • Purpose: To modify the image’s filesystem by installing packages, running build scripts, or setting up configurations.
    • Example:

Dockerfile

RUN apt-get update && apt-get install -y --no-install-recommends \
nginx \
curl \
&& rm -rf /var/lib/apt/lists/*

  • Best Practice: Chain related commands using && and clean up temporary files or package manager caches (like rm -rf /var/lib/apt/lists/* for Debian/Ubuntu or yum clean all for CentOS/RHEL) in the same RUN instruction. This minimizes the number of layers and reduces image size, as each RUN instruction creates a new layer.
  5. ENV: Sets environment variables available during the build process (after they are defined) and when containers are run from the image.
    • Purpose: To provide configuration values, paths, or settings needed by your application or build scripts.
    • Example:

Dockerfile

ENV NODE_ENV=production
ENV APP_PORT=3000

  • Best Practice: Use ENV for non-sensitive configuration data. For secrets, use runtime injection methods instead of baking them into the image with ENV.
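For example, a secret can be supplied at container start instead of being written into a layer (the variable name and value are illustrative):

```bash
# The API key exists only in the running container,
# not in any image layer or the image history
docker run -e API_KEY=changeme myapp:1.0
```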
  6. ARG: Defines a build-time variable that users can pass using the --build-arg flag during docker build.
    • Purpose: To allow parameterization of the build process without modifying the Dockerfile.
    • Example:

Dockerfile

ARG APP_VERSION=1.0.0
ENV APP_VERSION_ENV=${APP_VERSION}
RUN echo "Building version ${APP_VERSION_ENV}"

  • Build with: docker build --build-arg APP_VERSION=1.2.3 -t myapp .
  • Note: ARG variables are unavailable in the running container unless explicitly set as an ENV variable, as shown above.
  7. EXPOSE: Informs Docker that the container listens on the specified network ports at runtime.
    • Purpose: Primarily documentation for the image builder and user; it does not actually publish the port.

Example:

Dockerfile

EXPOSE 8080

  • Note: To make the port accessible from the host, you use the -p or -P flag with docker run (e.g., docker run -p 8080:8080 myimage).
  8. CMD and ENTRYPOINT: Define what command gets executed when a container starts.
    • CMD ["executable", "param1", "param2"]: Provides defaults for an executing container. These defaults can be easily overridden by appending a command to docker run. If you have multiple CMDs, only the last one takes effect.
    • ENTRYPOINT ["executable", "param1", "param2"]: Configures a container to run as an executable. Arguments passed to docker run are appended to the ENTRYPOINT command.
    • Example (typical pattern):

Dockerfile

ENTRYPOINT ["python", "app.py"] # Main command
CMD ["--help"] # Default argument if none provided on docker run

  • Best Practice: Use CMD if you want an easily overridable default command. Use ENTRYPOINT to create an image that behaves like a specific executable, often using CMD to supply default arguments. For web applications, CMD ["npm", "start"] or CMD ["python", "manage.py", "runserver"] are standard.
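Assuming the ENTRYPOINT/CMD pair above (image and argument names are illustrative), the override behavior at run time can be sketched as:

```bash
# Runs: python app.py --help   (CMD supplies the default argument)
docker run myimage

# Runs: python app.py server.cfg   (your arguments replace CMD; ENTRYPOINT stays)
docker run myimage server.cfg

# Replacing the ENTRYPOINT itself requires an explicit flag
docker run --entrypoint /bin/sh myimage
```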

Key Best Practices for Optimized Dockerfiles

Writing a functional Dockerfile is just the start. Optimizing it brings significant benefits.

  • Leverage the Build Cache Effectively: Docker builds images in layers, and it tries to reuse layers from previous builds if possible (caching). To maximize cache hits:
    • Order instructions from least frequently changing to most frequently changing. For instance, install dependencies (which change less often) before copying your application source code (which changes frequently).

Dockerfile

# Good Caching Example for a Node.js app
FROM node:18-alpine
WORKDIR /app
COPY package.json package-lock.json ./ # Dependencies change less often
RUN npm ci --omit=dev # This layer is cached if package files don’t change
COPY . . # Source code changes often, so it’s last
CMD ["node", "server.js"]

  • Keep Your Images Small: Smaller images are faster to pull, push, and deploy, and have a reduced attack surface.
    • Use Minimal Base Images: Alpine, slim, or distroless variants are significantly smaller than full OS images.
    • Clean Up in the Same RUN Layer: Remove unnecessary caches or temporary files using the same RUN instruction after installing packages. For example:
      • Debian/Ubuntu: apt-get clean && rm -rf /var/lib/apt/lists/*
      • CentOS/RHEL: yum clean all or dnf clean all
      • Alpine: rm -rf /var/cache/apk/*
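On Alpine, the cleanup step can often be avoided entirely: apk’s --no-cache flag skips writing the package index in the first place (packages shown are illustrative):

```Dockerfile
# Nothing is written to /var/cache/apk, so there is nothing to clean up
RUN apk add --no-cache curl nginx
```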
  • Embrace Multi-Stage Builds: This is one of the most effective ways to reduce image size, especially for compiled languages or applications with build steps (like JavaScript frontends).
    • Concept: Use one stage (a FROM block) with all your build tools and development dependencies to compile/build your application. Then, start a new stage from a minimal runtime base image and use COPY --from=<build_stage_name_or_index> … to copy only the necessary compiled artifacts into this final, clean stage.
    • Example (Simplified Go Application):

Dockerfile

# Stage 1: Build
FROM golang:1.20 AS builder
WORKDIR /app
COPY . .
RUN go build -o myapp

# Stage 2: Runtime
FROM debian:bullseye-slim
WORKDIR /app
COPY --from=builder /app/myapp .
CMD ["./myapp"]

  • The builder stage has the Go SDK, but the final image only contains the compiled binary on a slim Debian base.
  • Prioritize Security:
    • Run as a Non-Root User: By default, containers run as root. This is a security risk. Create a dedicated, unprivileged user and group in your Dockerfile and use the USER instruction to switch to it.

Dockerfile

RUN addgroup -S appgroup && adduser -S appuser -G appgroup
# … COPY files and chown them to appuser:appgroup …
USER appuser
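The snippet above uses Alpine’s BusyBox tools; a rough Debian/Ubuntu equivalent (user and group names are illustrative) would be:

```Dockerfile
RUN groupadd --system appgroup \
    && useradd --system --gid appgroup --no-create-home appuser
USER appuser
```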

  • Install Only Necessary Packages: Every package is a potential vulnerability. Avoid installing debugging tools or development utilities in production images (use multi-stage builds for this).
  • Don’t Hardcode Secrets: Avoid putting passwords, API keys, or other secrets in your Dockerfile (e.g., in ENV variables). Use runtime injection methods like Docker secrets, Kubernetes secrets, or environment variables provided securely at runtime.
  • Use .dockerignore Effectively: Create a .dockerignore file at the root of your build context (at the same level as your Dockerfile). List files and directories to exclude from being sent to the Docker daemon (e.g., .git, node_modules if installed in the image, local IDE configs, *.log).
  • Benefits: Speeds up the Docker build by shrinking the build context, and keeps sensitive or unnecessary files out of your image.
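A minimal .dockerignore for a Node.js project might look like this (entries are illustrative; adjust to your stack):

```
.git
node_modules
npm-debug.log
*.log
.env
.vscode/
```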

Beyond the Basics: Further Enhancements

Once you’re comfortable with the above, consider these for even better Dockerfiles:

  • BuildKit: Docker’s newer build engine, often enabled by default. It offers better performance (parallel builds), improved caching, and advanced features like build secrets (RUN --mount=type=secret,…) and cache mounts (RUN --mount=type=cache,…) for package managers. Ensure it’s active or enable it with DOCKER_BUILDKIT=1.
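As a sketch of a BuildKit cache mount, the example below persists npm’s download cache across builds without baking it into a layer (the target path follows npm’s default cache location):

```Dockerfile
# syntax=docker/dockerfile:1
FROM node:18-alpine
WORKDIR /app
COPY package.json package-lock.json ./
# The cache mount survives between builds but never ends up in the image
RUN --mount=type=cache,target=/root/.npm npm ci --omit=dev
```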
  • Linting Dockerfiles: Use tools like Hadolint (hadolint Dockerfile) to analyze your Dockerfile statically for errors, style violations, and adherence to best practices before you build.

Common Troubleshooting Quick Tips

  • “File not found” during COPY or ADD: Double-check the source path (it’s relative to the build context root) and ensure the file isn’t excluded by .dockerignore.
  • Slow Builds: Review instruction order for cache optimization. Combine RUN commands where possible. Ensure your .dockerignore is comprehensive.
  • Large Images: Use multi-stage builds! Clean up in RUN layers. Choose minimal base images.

Dockerfiles in Your DevOps Workflow

A Dockerfile is a key piece of “Infrastructure as Code.”

  • Version Control: Always commit your Dockerfile to your Git repository alongside your application code.
  • CI/CD Integration: Automate your Docker build and image pushing process within your Continuous Integration/Continuous Delivery pipelines (e.g., GitHub Actions, Jenkins, GitLab CI). This ensures consistent, repeatable builds and deployments.
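As one hedged example, a minimal GitHub Actions job using Docker’s official actions might look like this (the image name is a placeholder; add registry login and push: true when you are ready to publish):

```yaml
name: docker-build
on: [push]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: docker/setup-buildx-action@v3
      - uses: docker/build-push-action@v5
        with:
          context: .
          tags: myorg/myapp:latest
          push: false
```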

Conclusion: Building a Foundation for Success

Creating effective Dockerfiles is an investment that yields significant benefits in development speed, operational reliability, and application security. By mastering fundamental instructions, implementing best practices such as multi-stage builds and careful management of image layers, and prioritizing security, you can produce Docker images that are lean, efficient, and robust.

This guide provides a solid foundation. As you continue your Docker journey, continue exploring, experimenting, and refining your Dockerfiles.

Ready to elevate your Docker and DevOps practices?

At Seahawk Media, we specialize in helping businesses leverage the full potential of containerization, cloud technologies, and streamlined CI/CD pipelines. Our expert team is here to assist if you want to optimize your application deployments, improve security, or accelerate your development lifecycle.
