Recomputing the same work

Docker builds that never use the cache

By Keith Mazanec, Founder, CostOps · Updated January 30, 2026

A developer pushes a one-line fix. The CI workflow runs docker build, and the entire image rebuilds from scratch: base image pull, dependency install, compilation, everything. Eight minutes later, the job finishes. The same eight minutes it took yesterday. And the day before. GitHub Actions runners are ephemeral. When the job ends, the runner is destroyed, and Docker's local layer cache goes with it. Without an external cache strategy, every build starts cold. This is one of the highest-impact areas to speed up your builds.

Symptoms

How to tell if your Docker builds are rebuilding from scratch

Check your workflow logs for the docker build or docker/build-push-action step. If you see these patterns, you're paying for cold builds:

  • Stable, long build times. Docker build duration is nearly identical across runs, whether the commit changed one file or fifty. A cached build should be significantly faster when only application code changed. If your builds are consistently 6–10 minutes regardless of what changed, the cache is not being used.

  • No “CACHED” lines in build output. BuildKit logs a layer as CACHED when it reuses it from cache. If every layer shows DONE with a non-zero duration, nothing is being reused. A line like [2/6] RUN apt-get install ... finishing in 45.2s on every run is a layer that should show CACHED on most builds (see the sample trace after this list).

  • Dependency install runs every time. The npm install, bundle install, or pip install step executes on every build, even when the lockfile hasn't changed. This is the most expensive symptom because dependency installation often accounts for 40–70% of total build time.
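Here's what a warm run looks like when the cache is working. The step numbering and timings below are illustrative; exact output varies by BuildKit version:

Warm build: dependency layers reused, source layers rebuilt
#4 [1/6] FROM docker.io/library/node:20-alpine
#4 CACHED
#5 [2/6] WORKDIR /app
#5 CACHED
#6 [3/6] COPY package.json package-lock.json ./
#6 CACHED
#7 [4/6] RUN npm ci
#7 CACHED
#8 [5/6] COPY . .
#8 DONE 0.4s
#9 [6/6] RUN npm run build
#9 DONE 31.2s

Only the layers after the first changed input rebuild — copying source and the build itself. Everything above them comes from cache.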

Metrics

What uncached Docker builds cost

A typical application Docker build takes 6–10 minutes without cache. With a properly configured cache backend and Dockerfile ordering, most runs complete in 1–3 minutes. Here's the math for a team running 20 Docker builds per day on Linux runners:

                  Before optimization   After (75% faster builds)
Builds/day        20                    20
Minutes/build     8                     2
Monthly minutes   3,520                 880
Monthly cost      $21/mo                $5/mo

At $0.006/min (Linux 2-core) · 22 working days
Save $16/mo · $192/year · per workflow

That's one workflow building one image. If you build multiple images (frontend, backend, worker) or use a build matrix, multiply accordingly. Consider whether you can build once and reuse the artifact instead of rebuilding in every workflow. And if those builds run on macOS runners at $0.062/min, the uncached scenario costs $218/mo, and cutting build time 75% saves $163/mo from Dockerfile and cache configuration alone.


Fix 1

Reorder your Dockerfile for layer caching

Docker builds layers sequentially. When a layer's inputs change, that layer and every layer after it rebuild from scratch. This cascade effect means a single misplaced COPY instruction can invalidate the entire cache.

The fix is to copy dependency files first, install dependencies, then copy application code last. This way, the expensive dependency installation layer is only invalidated when the lockfile actually changes, not on every code commit.

Cache invalidated every push
FROM node:20-alpine
WORKDIR /app
COPY . .              # all files, including source
RUN npm ci            # reinstalls every time
RUN npm run build
CMD ["node", "dist/server.js"]
Dependencies cached across builds
FROM node:20-alpine
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci            # cached unless lockfile changes
COPY . .              # source code last
RUN npm run build
CMD ["node", "dist/server.js"]

The same pattern applies to any language. For Ruby, copy Gemfile and Gemfile.lock before bundle install. For Python, copy requirements.txt before pip install. The principle is always the same: dependency manifests first, source code last.
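Here's a minimal Python sketch of the same idea, assuming a requirements.txt-based project (filenames and the start command are illustrative):

Python: dependency manifest first, source last
FROM python:3.12-slim
WORKDIR /app
COPY requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt    # cached unless requirements.txt changes
COPY . .                                              # source code last
CMD ["python", "app.py"]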

One caveat: this only helps when you have an external cache backend (see Fix 2). On GitHub Actions, the runner's local Docker cache is destroyed after every job. Layer ordering determines which layers can be reused, but you still need a cache backend to persist them between runs.

Fix 2

Use the GitHub Actions cache backend for Docker builds

The docker/build-push-action supports several cache backends. The simplest for GitHub Actions is type=gha, which stores build layers in the GitHub Actions cache. It requires no external registry or storage, so you only need to add cache-from and cache-to to your build step.

Set mode=max to cache all layers, including intermediate stages from multi-stage builds. The default mode=min only caches layers in the final image, which means your builder stage layers rebuild from scratch every time.
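If you haven't seen the difference in practice, here's a minimal multi-stage sketch. With mode=min, only the final stage's layers are cached; the builder stage reruns npm ci and npm run build on every build:

Multi-stage build: mode=min skips the builder stage
FROM node:20-alpine AS builder
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci                             # only cached with mode=max
COPY . .
RUN npm run build

FROM node:20-alpine
WORKDIR /app
COPY --from=builder /app/dist ./dist   # final-stage layers: cached even with mode=min
CMD ["node", "dist/server.js"]         # sketch: a real image may also need production deps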

.github/workflows/build.yml
name: Build

on:
  push:
    branches: [main]
  pull_request:

jobs:
  docker:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - uses: docker/setup-buildx-action@v3

      - uses: docker/build-push-action@v6
        with:
          push: false
          tags: myapp:${{ github.sha }}
          cache-from: type=gha
          cache-to: type=gha,mode=max

The docker/setup-buildx-action step is required because it creates a BuildKit builder with the docker-container driver, which is what supports external cache backends. Without it, the default docker driver can't export cache, and cache-from and cache-to won't take effect.

One caveat: the GitHub Actions cache is limited to 10 GB per repository, and Docker layer caches share that quota with your other actions/cache entries (dependency caches and the like). Large images with many layers can exceed it. When the cache is full, GitHub evicts the least recently used entries. If your images are large, consider the registry cache backend (Fix 3) instead, which has no size limit.

Branch scoping matters: GitHub Actions cache is scoped by branch. A PR branch can read cache from its base branch (usually main) but not from other PR branches. This means the first build on a new PR branch pulls cache from main, and subsequent pushes to that branch use its own cache.

Fix 3

Use registry-based cache for large images

If your images exceed the 10 GB GitHub Actions cache limit, or if you want cache shared across repositories, push cache layers to a container registry. The type=registry backend stores cache as a separate image tag, accessible from any runner that can pull from that registry.

.github/workflows/build.yml
- uses: docker/build-push-action@v6
  with:
    push: true
    tags: ghcr.io/myorg/myapp:latest
    cache-from: type=registry,ref=ghcr.io/myorg/myapp:cache
    cache-to: type=registry,ref=ghcr.io/myorg/myapp:cache,mode=max

The cache is stored at a separate tag (:cache) from your actual image. This keeps cache metadata out of your production image. Use mode=max to cache intermediate multi-stage layers.
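Pushing the image and its cache tag to GHCR requires registry authentication. A typical login step uses docker/login-action; note that the GITHUB_TOKEN needs the packages: write permission at the job level:

Registry login for ghcr.io
# At the job level
permissions:
  contents: read
  packages: write

# Before the build step
- uses: docker/login-action@v3
  with:
    registry: ghcr.io
    username: ${{ github.actor }}
    password: ${{ secrets.GITHUB_TOKEN }}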

For workflows building multiple images, use the scope parameter with the GHA backend, or separate cache tags with the registry backend, to avoid one image's cache evicting another's:

Scoped caching for multiple images
# Image 1: frontend
- uses: docker/build-push-action@v6
  with:
    context: ./frontend
    tags: myorg/frontend:latest
    cache-from: type=gha,scope=frontend
    cache-to: type=gha,mode=max,scope=frontend

# Image 2: backend
- uses: docker/build-push-action@v6
  with:
    context: ./backend
    tags: myorg/backend:latest
    cache-from: type=gha,scope=backend
    cache-to: type=gha,mode=max,scope=backend

Fix 4

Reduce build context with .dockerignore

Before Docker builds any layer, it sends the entire build context (usually your repository root) to the build engine. Every file in that context is hashed to determine cache validity for COPY instructions. Files you never use in the image, such as the .git directory, node_modules, test fixtures, and documentation, still get sent and hashed, slowing down the build and causing unnecessary cache invalidation.

A .dockerignore file excludes irrelevant files from the build context. This has two effects: the context transfer is faster, and COPY . . instructions are less likely to invalidate the cache when unrelated files change.

.dockerignore
.git
.github
node_modules
tmp
log
coverage
.env*
*.md
docs
test
spec
.dockerignore
Dockerfile*
docker-compose*
.vscode
.DS_Store

The .git directory alone can add hundreds of megabytes to the build context. Excluding it is the single highest-impact line in your .dockerignore. If your repository has a full Git history, this can shave seconds off context transfer on every build. For a deeper dive on context optimization, see our guide on reducing huge Docker build contexts.
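You can see the effect directly at the top of the build output, where BuildKit reports how much data it transferred to the builder. The sizes below are illustrative:

Context transfer lines in BuildKit output
Before .dockerignore:
 => [internal] load build context
 => => transferring context: 412.30MB

After .dockerignore:
 => [internal] load build context
 => => transferring context: 3.41MB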


Reference

Cache backend comparison

BuildKit supports several cache storage backends. Here's how they compare for GitHub Actions workflows:

Backend         Setup                     Size limit    Multi-stage support
type=gha        No config needed          10 GB/repo    mode=max
type=registry   Registry login required   Unlimited     mode=max
type=inline     No config needed          Image size    mode=min only
type=local      actions/cache step        10 GB/repo    mode=max

For most teams, type=gha with mode=max is the right starting point. It requires no external services and works out of the box. Switch to type=registry when you hit the 10 GB cache limit or need cross-repository cache sharing.

Avoid type=inline for multi-stage builds because it only supports mode=min, which means only the final stage's layers are cached. Your builder stage (where dependencies are installed) won't be cached, and that's usually where most of the build time is.
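If you do need type=local (last row of the table), the cache directory must be saved and restored explicitly with actions/cache, and it's worth using the common swap workaround so the cache doesn't grow without bound. A minimal sketch; the /tmp paths and cache key are illustrative:

type=local persisted with actions/cache
- uses: actions/cache@v4
  with:
    path: /tmp/.buildx-cache
    key: buildx-${{ github.sha }}
    restore-keys: buildx-

- uses: docker/build-push-action@v6
  with:
    push: false
    tags: myapp:${{ github.sha }}
    cache-from: type=local,src=/tmp/.buildx-cache
    cache-to: type=local,dest=/tmp/.buildx-cache-new,mode=max

# Write to a fresh directory, then swap it in, so stale layers
# don't accumulate across runs
- run: |
    rm -rf /tmp/.buildx-cache
    mv /tmp/.buildx-cache-new /tmp/.buildx-cache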


See which builds waste the most minutes

CostOps tracks per-workflow build times and cost trends so you can spot uncached Docker builds before they dominate your bill.

Free for 1 repo. No credit card. No code access.

Built by engineers who've managed CI spend at scale.