Artifacts & storage
GitHub Actions artifacts uploaded every run are burning your storage quota
By Keith Mazanec, Founder, CostOps · Updated February 17, 2026
A workflow finishes. It uploads a 200 MB build artifact, a 15 MB test report, and a 50 MB coverage bundle. All of this happens regardless of whether the run passed or failed, and regardless of whether anyone will ever download them. Multiply that across 30 runs per day and 90 days of default retention and you're holding over 700 GB of artifacts at steady state, most of which nobody looks at. This is fixable with five changes: conditional uploads, short per-artifact retention, context-aware retention, an org-level retention cap, and proper compression.
Symptoms
How to tell if artifact uploads are costing you money
Go to your organization's Settings → Billing → Plans and usage page and check the storage usage. Artifact and cache storage share a single quota across your entire account. Then look at your Actions tab for these patterns:
- Storage quota warnings or errors. GitHub shows "Artifact storage quota has been hit" and blocks new uploads. This quota is shared across every repository in your account, so one busy repo's PR artifacts can block deploys in a completely unrelated repo. On the Free plan, that's just 500 MB. On Team, it's 2 GB. Once you hit the limit, workflows that depend on artifact uploads start failing.
- Artifacts on every successful run. Open a few passing workflow runs. If each one has an "Artifacts" section with test reports, coverage data, or build outputs attached, you're uploading on every run. Most of these are never downloaded. Test reports are only useful when something fails. Build artifacts are only useful for deployment or downstream jobs.
- Long upload steps in your workflow logs. Check the timing breakdown in your workflow runs. If the upload-artifact step consistently takes 30–90 seconds, you're uploading large files on every run. That upload time itself costs billable minutes, on top of the storage cost.
- No retention-days in your workflow YAML. If your upload-artifact steps don't specify retention-days, you're keeping every artifact for 90 days. A test report from a PR that merged two months ago is consuming storage for no reason.
- PR artifacts dominate your storage. If your team merges 10+ PRs per day and each uploads artifacts, PR builds likely account for the majority of your stored artifact bytes. These artifacts have near-zero value after the PR merges, but they accumulate for 90 days by default.
Metrics
Quantify GitHub Actions storage and compute waste
Artifact costs hit you twice: storage fees for keeping the files, and compute minutes spent uploading them. Here's a typical scenario for a team running 30 CI jobs per day on Linux, each uploading a 200 MB build artifact unconditionally:
- Before optimization: 540 GB at steady state, roughly $135/mo at $0.25/GB per month (GitHub's storage overage rate)
- After optimization: about 4 GB at steady state, roughly $1/mo
- Savings: $134/mo · $1,608/year · per workflow
The math: 30 uploads/day × 200 MB × 90 days = 540 GB at steady state. At $0.25/GB per month for storage overage, that's $135/mo. Uploading only on failure (assume 10% failure rate) with 7-day retention: 3 uploads/day × 200 MB × 7 days = 4.2 GB, costing roughly $1/mo. You also skip 27 upload steps per day at 45 seconds each on Linux, which eliminates roughly 20 additional billable minutes per day. Note that flaky tests compound this problem: every rerun re-uploads artifacts, so fixing flakiness reduces both compute and storage waste.
These numbers are per workflow. Multiply across 10-20 active repos in an organization and storage overage alone can reach $300-700/mo. On the Free plan (500 MB shared quota), even moderate artifact upload patterns fill your entire allowance in a single day, blocking all artifact uploads org-wide.
Gathering these numbers manually across many repositories is tedious. CI cost tracking tools like CostOps can surface per-workflow artifact upload frequency, size, and storage accumulation automatically from your GitHub Actions data, making it easier to identify which workflows to optimize first.
Fix 1
Upload GitHub Actions artifacts only on failure
The most impactful change is the simplest. Test reports, screenshots, and debug logs are only useful when something goes wrong. Use the if: failure() expression to skip the upload-artifact step entirely when the run succeeds. No artifact is created, no storage is consumed, and no upload time is billed.
The failure() function returns true when any previous step in the job has failed. Apply it directly to the upload-artifact step:
```yaml
steps:
  - uses: actions/checkout@v4
  - run: npm test
  # Uploads 200 MB on every run
  # 30 runs/day × 90 days = 540 GB
  - uses: actions/upload-artifact@v4
    with:
      name: test-report
      path: reports/
```

```yaml
steps:
  - uses: actions/checkout@v4
  - run: npm test
  # Uploads only when tests fail
  # 3 failures/day × 90 days = 54 GB
  - uses: actions/upload-artifact@v4
    if: failure()
    with:
      name: test-report
      path: reports/
```
If your failure rate is around 10%, this single line cuts artifact volume by 90%. For build artifacts that downstream jobs or deployments need, use if: success() explicitly. This is the default behavior, but being explicit documents the intent. For artifacts you need regardless of outcome (like Playwright traces), use if: always() instead. If your E2E suite runs on every push, the artifact volume compounds quickly, so see our guide on E2E tests running too often.
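A sketch of the two explicit conditions side by side (artifact names and paths are illustrative):

```yaml
# Build output consumed by a downstream deploy job - upload only on success.
# success() is the default, but stating it documents the intent.
- uses: actions/upload-artifact@v4
  if: success()
  with:
    name: build-output
    path: dist/

# Playwright traces - useful whether the run passed or failed
- uses: actions/upload-artifact@v4
  if: always()
  with:
    name: playwright-traces
    path: test-results/
    retention-days: 3
```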
One caveat: failure() only triggers if a previous step in the same job failed. If you need to capture artifacts from a failed job in a multi-job workflow, use if: always() on the upload step and check the outcome of the upstream job via needs.<job_id>.result == 'failure'.
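A sketch of that multi-job pattern, with hypothetical job names and a hypothetical debug-collection script:

```yaml
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm test

  collect-debug:
    needs: [test]
    if: always()   # run this job even when `test` fails
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: ./scripts/collect-debug.sh   # hypothetical helper that writes debug/
      - uses: actions/upload-artifact@v4
        if: needs.test.result == 'failure'   # upload only when the upstream job failed
        with:
          name: debug-${{ github.run_id }}
          path: debug/
          retention-days: 3
```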
Fix 2
Set short retention periods per artifact
GitHub retains artifacts for 90 days by default. For most CI artifacts, that's far too long. A test report from a merged PR has zero diagnostic value after a few days. Coverage reports are stale within hours. Even build artifacts for deployment are typically consumed within minutes and never needed again.
The retention-days input on actions/upload-artifact@v4 sets a per-artifact expiration. Use it aggressively:
```yaml
# Test reports - only useful for debugging recent failures
- uses: actions/upload-artifact@v4
  if: failure()
  with:
    name: test-report-${{ github.run_id }}
    path: reports/
    retention-days: 3

# Build artifact - consumed by deploy job, then useless
- uses: actions/upload-artifact@v4
  with:
    name: build-output
    path: dist/
    retention-days: 1

# Coverage data - only needed for the current PR review
- uses: actions/upload-artifact@v4
  with:
    name: coverage-${{ github.run_id }}
    path: coverage/
    retention-days: 5
```
Reducing retention from 90 days to 3 days cuts steady-state storage by 97% for that artifact type. The minimum is 1 day. You can also set a repository-wide default in Settings → Actions → General → Artifact and log retention, but per-artifact overrides in the workflow give you more granular control.
One caveat: the retention-days value cannot exceed the maximum set at the repository or organization level. If your org admin has capped retention at 30 days, setting retention-days: 90 in the workflow will silently use 30. The workflow won't error; it just uses the lower value.
Fix 3
Use different retention for PR builds vs. main branch
Not all builds are equal. A PR build artifact is consumed during the workflow run and stale the moment the PR merges. A main branch build artifact may be needed for rollback investigation days later. A release artifact may be re-downloaded for weeks. Treating all of these the same wastes storage on the short-lived ones and risks losing the long-lived ones.
Use a GitHub Actions expression to set context-aware retention in a single line:
```yaml
# PR builds: 1 day (consumed by downstream jobs, then useless)
# Main/release builds: 30 days (may need for rollback investigation)
- uses: actions/upload-artifact@v4
  with:
    name: build-output-${{ github.run_id }}
    path: dist/
    retention-days: ${{ github.event_name == 'pull_request' && 1 || 30 }}
```
The expression ${{ github.event_name == 'pull_request' && 1 || 30 }} evaluates to 1 for PR runs and 30 for everything else (push to main, scheduled, manual dispatch). This cuts PR artifact storage by 97% compared to the 90-day default while preserving longer retention where it matters.
For teams merging 15 PRs per day, this pattern alone drops PR artifact storage from 135 GB at steady state (15 uploads × 100 MB × 90 days) to 1.5 GB (15 × 100 MB × 1 day). Combined with if: failure(), it drops further to about 150 MB.
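GitHub expressions also chain && and ||, so you can add a third tier for tagged releases. A sketch, with example retention values:

```yaml
# 1 day for PRs, 30 days for tagged releases, 7 days for everything else
- uses: actions/upload-artifact@v4
  with:
    name: build-output-${{ github.run_id }}
    path: dist/
    retention-days: ${{ github.event_name == 'pull_request' && 1 || startsWith(github.ref, 'refs/tags/') && 30 || 7 }}
```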
Fix 4
Set a repository or organization-level retention cap
Per-artifact retention-days in workflow YAML works well, but it requires every workflow author to remember to set it. A more reliable approach is to set a blanket maximum at the repository or organization level. This acts as a ceiling: workflows can request shorter retention, but not longer.
Navigate to Organization Settings → Actions → General → Artifact and log retention (or the equivalent at the repository level). Set it to 7 days. Workflows that explicitly set retention-days: 1 still get 1-day retention. Workflows that don't specify anything get 7 days instead of 90.
```text
# Organization Settings → Actions → General
# "Artifact and log retention"
#
# Set to: 7 days
#
# Effect:
# - Workflows without retention-days: 7-day retention (not 90)
# - Workflows with retention-days: 1 → 1 day (still honored)
# - Workflows with retention-days: 30 → 7 days (capped)
#
# Only applies to new artifacts. Existing artifacts keep their original retention.
```
This is the most reliable way to enforce storage hygiene across all repositories in an organization. It catches workflows that don't set explicit retention and prevents any workflow from accidentally storing artifacts for 90 days. For release artifacts that genuinely need long retention, upload them to a dedicated storage service (S3, GCS, or GitHub Releases) rather than relying on Actions artifact retention.
One caveat: this setting only applies to new artifacts. Changing it does not retroactively delete or shorten retention on existing artifacts. If you're already at your storage limit, you'll need to manually delete old artifacts (or wait for them to expire) before the new policy takes full effect. GitHub's REST API for deleting artifacts can help automate cleanup. Note that GitHub recalculates freed storage every 6-24 hours, so quota relief is not immediate.
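One way to automate that cleanup is a scheduled workflow that calls the artifacts API through the gh CLI. A sketch, assuming a 7-day cutoff and the default GITHUB_TOKEN (the cutoff value and job name are illustrative):

```yaml
name: artifact-cleanup
on:
  schedule:
    - cron: '0 3 * * *'   # daily at 03:00 UTC
permissions:
  actions: write           # required to delete artifacts
jobs:
  cleanup:
    runs-on: ubuntu-latest
    steps:
      - name: Delete artifacts older than 7 days
        env:
          GH_TOKEN: ${{ github.token }}
        run: |
          cutoff=$(date -u -d '7 days ago' +%Y-%m-%dT%H:%M:%SZ)
          gh api "repos/${{ github.repository }}/actions/artifacts" --paginate \
            --jq ".artifacts[] | select(.created_at < \"$cutoff\") | .id" |
          while read -r id; do
            gh api -X DELETE "repos/${{ github.repository }}/actions/artifacts/$id"
          done
```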
Fix 5
Compress GitHub Actions artifacts before upload
The upload-artifact action applies Zlib compression automatically, but the default compression level (6) is a middle ground. For large artifacts that compress well (HTML reports, JSON coverage data, text logs), increasing the compression level reduces both storage footprint and upload time. For binary artifacts that don't compress well (compiled binaries, images), dropping compression to 0 speeds up the upload significantly.
```yaml
# High compression for text-heavy artifacts (HTML, JSON, logs)
- uses: actions/upload-artifact@v4
  if: failure()
  with:
    name: test-report
    path: reports/
    compression-level: 9   # Max compression, smaller file
    retention-days: 3

# No compression for pre-compressed or binary artifacts
- uses: actions/upload-artifact@v4
  with:
    name: docker-image
    path: image.tar.gz
    compression-level: 0   # Already compressed, skip re-compression
    retention-days: 1
```
The compression level accepts values from 0 (no compression) to 9 (maximum compression). For text-heavy artifacts like HTML test reports, level 9 can reduce file size by 70–80%. For already-compressed files (tarballs, zip files, Docker images), level 0 avoids wasting CPU time on re-compression and can significantly speed up the upload step.
You can also pre-compress artifacts yourself before uploading. If you're uploading a large directory of test results, tar czf it first, then upload the single archive with compression-level: 0. This gives you full control over what's included and how it's compressed.
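A sketch of that pattern (archive name and path are illustrative):

```yaml
# Pre-compress: full control over what's included and how it's compressed
- run: tar czf test-results.tar.gz reports/
  if: failure()
- uses: actions/upload-artifact@v4
  if: failure()
  with:
    name: test-results
    path: test-results.tar.gz
    compression-level: 0   # archive is already gzipped
    retention-days: 3
```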
Reference
Complete optimized GitHub Actions workflow
Here's a real-world workflow with all fixes applied. Build artifacts upload on success for the deploy job with context-aware retention. See our guide on building once and reusing everywhere for more on this pattern. Test reports upload only on failure with short retention. Everything specifies explicit compression.
```yaml
name: CI

on:
  push:
    branches: [main]
  pull_request:

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci
      - run: npm test
      # Fix 1: Upload test report only on failure
      # Fix 2+3: Short retention, PR-aware
      - uses: actions/upload-artifact@v4
        if: failure()
        with:
          name: test-report-${{ github.run_id }}
          path: reports/
          retention-days: 3
          compression-level: 9   # Fix 5: Max compression for HTML

  build:
    needs: [test]
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci
      - run: npm run build
      # Fix 3: PR gets 1-day retention, main gets 30
      - uses: actions/upload-artifact@v4
        with:
          name: build-output
          path: dist/
          retention-days: ${{ github.event_name == 'pull_request' && 1 || 30 }}
          compression-level: 6   # Default, good balance

  deploy:
    needs: [build]
    if: github.ref == 'refs/heads/main'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4   # needed for deploy.sh; the artifact only contains dist/
      - uses: actions/download-artifact@v4
        with:
          name: build-output
      - run: ./deploy.sh
```
Reference
GitHub Actions storage quotas by plan
Artifact storage is shared with GitHub Actions cache storage and GitHub Packages across your entire account or organization. A single busy repo's artifacts can exhaust the quota for all repos. Public repositories are exempt from storage billing entirely.
| Plan | Included storage | Overage rate |
|---|---|---|
| Free | 500 MB | $0.25/GB/mo |
| Pro | 1 GB | $0.25/GB/mo |
| Team | 2 GB | $0.25/GB/mo |
| Enterprise Cloud | 50 GB | $0.25/GB/mo |
The retention-days input on upload-artifact is available on all plans, including Free. There is no plan restriction on setting short retention. The minimum is 1 day. On github.com, the maximum is 90 days for public repositories and 400 days for private repositories; GitHub Enterprise Server is configurable up to 400 days. Per GitHub Actions billing, storage usage is calculated hourly based on actual usage and billed in GB-months. Deleting artifacts reduces future charges, but recalculation of freed storage can take 6-24 hours, so quota relief is not immediate. Artifacts and caches share the same storage quota, so bloated artifacts can also prevent your dependency caches from being stored. If you suspect your caches are also oversized, see our guide on caching too much.
Reference
Recommended retention by build context
Match retention to the artifact's useful life. PR artifacts are disposable within days. Release artifacts may need to persist for weeks. Use the table below to set appropriate values per artifact type and build context:
| Build context / artifact type | Retention | Upload when |
|---|---|---|
| PR build output (job-to-job) | 1 day | always |
| PR test report / screenshots | 3 days | failure() |
| PR coverage report | 3 days | always |
| E2E screenshots / traces | 3 days | failure() |
| Main branch build output | 7 days | always |
| Binary installers | 7 days | success() |
| Release / tag artifacts | 30 days | success() |
For release artifacts that need to persist indefinitely, consider uploading them to a dedicated storage service (S3, GCS, or GitHub Releases) rather than relying on Actions artifact retention. GitHub Actions artifacts are designed for ephemeral CI data, not long-term storage.
FAQ
Frequently asked questions about GitHub Actions artifact storage
How much storage do GitHub Actions artifacts use by default?
Without any optimization, artifacts are retained for 90 days. A team uploading 200 MB per run across 30 daily CI runs accumulates 540 GB at steady state, costing $135/month at GitHub's $0.25/GB overage rate. Artifacts and caches share the same storage quota, so bloated artifacts can also prevent dependency caches from being stored.
What is the GitHub Actions artifact storage limit by plan?
Free plans include 500 MB, Pro includes 1 GB, Team includes 2 GB, and Enterprise Cloud includes 50 GB. Storage is shared across artifacts, caches, and GitHub Packages for your entire account or organization. Overage is billed at $0.25/GB per month on all plans. Public repositories are exempt from storage billing.
How do I upload GitHub Actions artifacts only when tests fail?
Add if: failure() to your upload-artifact step. The failure() function returns true when any previous step in the same job has failed. This single line typically cuts artifact volume by 90% since most runs pass. For multi-job workflows, use if: always() on the upload step and check the upstream job outcome via needs.<job_id>.result == 'failure'.
What retention-days should I set for GitHub Actions artifacts?
Match retention to the artifact's useful life. PR build outputs passed between jobs need 1 day. Test reports and screenshots need 3 days. Coverage reports need 3-5 days. Main branch build outputs need 7 days for rollback investigation. Release artifacts need 30 days. The default 90-day retention is almost always too long.
Can I set different artifact retention for PR builds vs. main branch builds?
Yes. Use a GitHub Actions expression like ${{ github.event_name == 'pull_request' && 1 || 30 }} in your retention-days input. This evaluates to 1 day for PR runs and 30 days for pushes to main, scheduled runs, and manual dispatches. This single expression cuts PR artifact storage by 97% compared to the 90-day default.
How do I set an organization-wide artifact retention cap?
Navigate to Organization Settings → Actions → General → Artifact and log retention. Set it to 7 days. Workflows that explicitly set retention-days: 1 still get 1-day retention. Workflows that don't specify anything get 7 days instead of 90. This setting only applies to new artifacts; existing artifacts keep their original retention.
Does the upload-artifact compression level affect storage costs?
Yes. The default compression level is 6. For text-heavy artifacts like HTML test reports and JSON coverage data, level 9 can reduce file size by 70-80%. For already-compressed files like Docker images or tarballs, set compression-level to 0 to avoid wasting CPU time on re-compression while speeding up the upload step.