Recomputing the same work
Hidden cost of huge Docker build contexts
By Keith Mazanec, Founder, CostOps · Updated January 30, 2026
A developer runs docker build . in CI. Docker packages the entire working directory, including node_modules, .git history, test fixtures, and local binaries, then sends it all to the daemon before a single layer is built. On a typical Node.js project, that context is 200–500 MB. On a monorepo, it can exceed a gigabyte. Every CI run pays for that transfer in wall-clock time and billed minutes, and the files are never used in the image.
Symptoms
How to tell if your build context is too large
Docker prints the context size at the start of every build. If you're not watching for it, the waste hides in plain sight.
- Slow “Sending build context” step. Docker logs Sending build context to Docker daemon X MB before any build work starts. If X is above 10 MB for a typical app, you're sending files the build doesn't need. Context over 100 MB adds 10–30 seconds of pure transfer overhead per build.
- No .dockerignore file in the repository. Without a .dockerignore, Docker sends everything in the build path. The .git directory alone can be 50–500 MB on repos with long histories. Add node_modules (100–400 MB), vendor/, and test data, and you're routinely sending a gigabyte.
- Build time is stable but slow. If your Docker build takes the same 3–5 minutes every run regardless of what changed, the bottleneck is likely context transfer and full rebuilds rather than your application code. Cached layers can't help if the context itself takes 30 seconds to transmit.
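Before changing anything, it helps to see where the bytes are. A rough sketch using plain du (it doesn't apply .dockerignore rules, so it shows the worst case, which is exactly what a repo without one sends):

```shell
# Rough size of everything `docker build .` would consider sending
du -sh .

# Largest top-level offenders, including hidden directories like .git
du -sh ./* ./.git 2>/dev/null | sort -rh | head
```

Run this from the directory you pass to docker build; anything large in the output that your Dockerfile never COPYs is a candidate for exclusion.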
Metrics
What a bloated build context costs
Context transfer is dead time where the runner is billed but no build work happens. A 500 MB context on a standard GitHub Actions runner adds roughly 15–30 seconds per build. That time multiplies across every Docker build step in every workflow run.
At $0.006/min (Linux 2-core), shrinking the context by 99% saves roughly $2.64/mo, or $31.68/year, per workflow in transfer time alone.
Context transfer cost alone looks small on Linux. But the real impact compounds: a bloated context also prevents effective layer caching (because Docker checksums the context to detect changes), inflates the checkout step, and forces unnecessary file I/O. When you factor in full rebuild time caused by context-triggered cache busts, the total waste is often 2–5 minutes per build, which turns that $2.64/mo into $53–$132/mo on Linux, or $550–$1,364/mo on macOS runners at $0.062/min.
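As a back-of-envelope check on the numbers above, you can plug your own figures into a one-liner. The build count below is an assumption chosen to reproduce the $2.64/mo example; substitute your own transfer time, build volume, and runner rate:

```shell
# secs   = transfer seconds saved per build
# builds = Docker builds per month across all workflow runs
# rate   = $/min for the runner (Linux 2-core GitHub-hosted)
awk 'BEGIN {
  secs = 30; builds = 880; rate = 0.006
  printf "$%.2f/mo\n", secs / 60 * builds * rate
}'
# → $2.64/mo
```

Swap in the macOS rate of $0.062/min and the same waste costs roughly ten times as much.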
Fix 1
Add a .dockerignore file
The .dockerignore file works like .gitignore and tells Docker which files to exclude before sending the context to the daemon. Without one, Docker sends everything in the build path. The Dockerfile itself reinstalls dependencies from lockfiles, so your local node_modules, vendor/, and build artifacts are redundant baggage.
A well-configured .dockerignore typically reduces context by 90–99%. One real-world example: a Vue.js project went from 118 MB to 0.38 MB (a 99.7% reduction) by excluding node_modules and .git.
```
# Version control
.git
.gitignore

# Dependencies (reinstalled in Dockerfile)
node_modules
vendor/
.bundle

# Build output
dist/
build/
tmp/
coverage/

# CI/CD and tooling
.github/
.vscode/
.idea/

# Documentation and tests
docs/
*.md
spec/
test/
__tests__/

# Environment and secrets
.env*
*.pem
*.key

# Docker files (prevent recursive builds)
Dockerfile*
docker-compose*
```
One caveat: if your Dockerfile COPYs test files or docs into the image (some CI builds do this), you'll need to remove those patterns from .dockerignore. Audit your Dockerfile's COPY and ADD instructions first, because every path they reference must be present in the context.
Fix 2
Use shallow checkouts to shrink .git
Even with a .dockerignore that excludes .git, the checkout step itself still downloads the full repository history by default. On repositories with thousands of commits, the .git directory can be 50–500 MB. That data hits the runner's disk and slows down the checkout step, consuming billed minutes before the Docker build even starts.
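To see what full history costs on a given repository, git can report it directly; a quick check:

```shell
# How big is the history you'd clone on every run?
du -sh .git

# 'size-pack' is the bulk of it: compressed commits and blobs
git count-objects -vH
```

If size-pack is tens of megabytes or more, shallow checkouts will pay off immediately.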
The actions/checkout action defaults to fetch-depth: 1 (a shallow, single-commit clone). But if your workflow overrides this to fetch-depth: 0 for full history, or if you need git history for versioning, the .git directory balloons. For Docker builds you almost never need history, so keep fetch-depth at 1.
```yaml
steps:
  - uses: actions/checkout@v4
    with:
      fetch-depth: 0   # full history
                       # .git = 200+ MB on large repos
                       # Checkout step: 30-90 sec
```
```yaml
steps:
  - uses: actions/checkout@v4
    with:
      fetch-depth: 1   # single commit
                       # .git = 1-5 MB
                       # Checkout step: 2-5 sec
```
If you need a version number derived from git tags, use fetch-depth: 0 only for the job that computes the version, and pass it as an output to downstream Docker build jobs. Don't pay for full history in every job.
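The split can be sketched like this (job names, the git describe versioning scheme, and the output wiring are illustrative assumptions, not a fixed recipe):

```yaml
jobs:
  version:
    runs-on: ubuntu-latest
    outputs:
      version: ${{ steps.ver.outputs.version }}
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0    # full history, but only in this one job
      - id: ver
        run: echo "version=$(git describe --tags)" >> "$GITHUB_OUTPUT"

  build:
    needs: version
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4   # default shallow clone
      - uses: docker/build-push-action@v6
        with:
          context: .
          push: true
          tags: myapp:${{ needs.version.outputs.version }}
```

Only the small version job pays the full-history cost; every Docker build job stays shallow.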
Fix 3
Pass a targeted build context path
docker build . sends the entire working directory as context. In a monorepo with multiple services, this means every service's Docker build receives every other service's code. If you have 5 services averaging 100 MB each, each build sends 500 MB when it only needs 100 MB.
Instead, pass the specific subdirectory as context and point to the Dockerfile with -f. This gives Docker only the files relevant to that service.
```shell
# Sends everything to the daemon
docker build -t myapp .
# Context: 500 MB (full monorepo)
```
```shell
# Sends only the api/ directory
docker build -f api/Dockerfile -t myapp api/
# Context: 80 MB (just this service)
```
When using docker/build-push-action in GitHub Actions, set the context input to the subdirectory:
```yaml
steps:
  - uses: actions/checkout@v4
  - uses: docker/build-push-action@v6
    with:
      context: ./api          # only send api/ as context
      file: ./api/Dockerfile
      push: true
      tags: myapp:latest
```
One caveat: if your Dockerfile references shared files outside the service directory (common in monorepos, such as shared proto files or common libraries), you'll need to either copy those into the service directory before the build, or use a parent directory as context with a tighter .dockerignore. Docker cannot access files outside the context path. For broader strategies, see our guide on monorepo CI optimization.
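The copy-in approach can be a small staging step before the build; in this sketch, shared/proto and api/vendor-proto are hypothetical paths standing in for your layout:

```shell
# Stage shared files into the service directory so a narrow
# context still contains everything the Dockerfile COPYs
rm -rf api/vendor-proto
cp -r shared/proto api/vendor-proto

# Then build with just the service directory as context:
# docker build -f api/Dockerfile -t myapp api/
```

Add the staged path to .gitignore so the copy never gets committed, and rerun the staging step on every build so it can't go stale.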
Fix 4
Use a Dockerfile-specific ignore file
With BuildKit (the default builder since Docker Engine 23.0), Docker supports per-Dockerfile ignore files. If your Dockerfile is named Dockerfile.ci, Docker looks for Dockerfile.ci.dockerignore first, falling back to .dockerignore if the specific file doesn't exist. This lets you maintain aggressive exclusions for CI builds without affecting local development builds.
```
# CI-specific: exclude everything not needed for production build
*

# Allow only what the Dockerfile needs
!src/
!package.json
!package-lock.json
!tsconfig.json
!Gemfile
!Gemfile.lock
```
The * pattern excludes everything, then the ! prefix re-includes only what the build actually needs. This inverted approach is safer than maintaining a deny-list because new files are excluded by default, so context size can't silently grow.
Use this pattern in your GitHub Actions workflow by specifying the CI Dockerfile:
```yaml
steps:
  - uses: docker/build-push-action@v6
    with:
      file: Dockerfile.ci   # uses Dockerfile.ci.dockerignore
      context: .
      push: true
      tags: myapp:latest
```
Reference
Common build context bloat sources
To estimate your own savings, check which of these are present in your build context. The sizes below are typical for a mid-size project.
| Source | Typical size | Needed in build? |
|---|---|---|
| node_modules/ | 100–400 MB | No |
| .git/ | 50–500 MB | No |
| vendor/ (Ruby, Go) | 50–200 MB | No |
| dist/ build/ tmp/ | 10–100 MB | No |
| test/ spec/ __tests__/ | 5–50 MB | Rarely |
| .env* *.pem *.key | <1 MB | Never |
Dependencies are the most common offender. Your Dockerfile runs npm install or bundle install from lockfiles, so the local node_modules or vendor/ directory is never used in the image. The .git directory is the second largest source, and it's easy to miss because it's hidden. Secret files (.env, *.pem) are small but dangerous because they can end up baked into image layers if not excluded.
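A quick scan for the sources in the table (the directory names are the common conventions; adjust the list for your stack):

```shell
# Report which common bloat sources exist in the current
# directory and how big each one is
for d in node_modules .git vendor dist build tmp coverage test spec __tests__; do
  if [ -e "$d" ]; then du -sh "$d"; fi
done
```

Anything this prints that isn't referenced by a COPY or ADD in your Dockerfile belongs in .dockerignore.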
For more on optimizing what's cached between builds, see our guides on caching too much and speeding up builds.
Related guides
Docker Builds Never Cached
Enable BuildKit layer caching to stop rebuilding every layer on every CI run.
Caching Too Much
Split and prune oversized caches that slow down restore and save steps.
Speed Up Builds
Optimize build caching, compilation, and artifact sharing to cut build times.
Monorepo CI Runs Everything
Path-based filtering and dependency-graph tools for monorepo CI optimization.