Scaling Monorepo CI/CD: Patterns that Saved Us 40% in GitHub Actions Costs
Building a monorepo doesn't have to mean 60-minute CI runs. Learn how to implement dynamic matrices, path-based filtering, and incremental builds using GitHub Actions.

The 45-Minute PR Nightmare
Last month, our main monorepo hit a breaking point. Every time a developer pushed a 2-line CSS fix for the marketing landing page, GitHub Actions triggered the entire test suite for 42 microservices, 3 mobile apps, and the shared UI library. We were burning 1,200 CI minutes per PR, and developers were staring at spinning wheels for nearly an hour. This wasn't just a cost issue; it was a velocity killer. If you are managing more than five projects in a single repository, the 'naive' CI approach—where you run everything on every push—is your biggest technical debt.
In 2026, monorepos are the standard for high-performing teams using tools like Turborepo 2.x or Nx 21. However, the glue that holds them together—the CI/CD pipeline—is often the weakest link. To fix this, we need to move away from static workflows and embrace dynamic, path-aware pipelines that only execute what is strictly necessary.
Pattern 1: Dynamic Matrix Generation
The most common mistake is hardcoding service names in your .github/workflows/ci.yml. This leads to constant maintenance as you add new services. Instead, you should use a 'discovery' job that outputs a JSON array of changed projects, which then feeds into a dynamic GitHub Actions matrix.
We use a custom script or a tool like tj-actions/changed-files to identify which directories have changed relative to the base branch. Here is how we implement the discovery phase:
```yaml
name: CI

on:
  pull_request:
    branches: [main]

jobs:
  detect-changes:
    runs-on: ubuntu-latest
    outputs:
      projects: ${{ steps.filter.outputs.changes }}
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0
      - name: Get changed files
        id: filter
        uses: dorny/paths-filter@v3
        with:
          filters: |
            auth-service: 'services/auth/**'
            order-service: 'services/order/**'
            shared-ui: 'libs/shared-ui/**'

  test:
    needs: detect-changes
    if: ${{ needs.detect-changes.outputs.projects != '[]' }}
    runs-on: ubuntu-latest
    strategy:
      matrix:
        project: ${{ fromJson(needs.detect-changes.outputs.projects) }}
    steps:
      - uses: actions/checkout@v4
      - name: Run Tests for ${{ matrix.project }}
        # Resolve the package by its workspace name rather than cd-ing
        # into services/: shared-ui lives under libs/, so a hardcoded
        # path would break. Assumes each package.json name matches its
        # filter key above.
        run: |
          npm ci
          npm test --workspace=${{ matrix.project }}
```
> **Pro Tip:** If you're using Turborepo, don't manually map paths. Use `turbo run test --filter=...[origin/main]` to let the build system calculate the dependency graph for you.
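As a minimal sketch of that approach, here is a job that delegates change detection to Turborepo entirely (the job name `test-affected` is ours, and we assume an npm-based workspace):

```yaml
  test-affected:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0   # turbo needs history to diff against origin/main
      - run: npm ci
      # Run tests only for packages changed since main, plus everything
      # that depends on them (that's what the `...` prefix means)
      - run: npx turbo run test --filter='...[origin/main]'
```

The trade-off versus the matrix approach: you lose per-project jobs in the GitHub UI, but you never have to maintain a path filter again.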
Pattern 2: The 'Check-In' Job for Required Statuses
GitHub's 'Required Status Checks' feature has a major flaw when combined with path filtering: if a job is skipped because of a path filter, the PR cannot be merged because the status check never reports success.
To solve this, we implement a 'Gatekeeper' job. This job depends on all potential matrix items but always runs. It evaluates whether the necessary tests passed or were correctly skipped.
```yaml
  ci-gatekeeper:
    needs: [test]
    runs-on: ubuntu-latest
    if: always()
    steps:
      - name: Check CI Status
        if: contains(needs.*.result, 'failure') || contains(needs.*.result, 'cancelled')
        run: exit 1
```
You then set ci-gatekeeper as your required status check in GitHub settings. This pattern ensures that if test was skipped because no code changed, the gatekeeper still completes successfully, unblocking the PR.
Pattern 3: Remote Caching and Content-Addressable Storage
Even with path filtering, you'll eventually run into shared dependencies. If libs/shared-ui changes, you still have to rebuild everything. This is where remote caching becomes non-negotiable. In 2026, we've moved beyond the standard actions/cache. We now use specialized build-kit storage or S3-backed caches for Turborepo and Nx.
When we integrated a remote cache (using an S3 bucket in our VPC), our 'worst-case' build time dropped from 45 minutes to 8 minutes. The CI runner doesn't actually perform the work; it just verifies the hash of the local files, checks the remote cache, and downloads the build artifacts if they exist.
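Wiring this up is mostly environment configuration. Here is a sketch for Turborepo's self-hosted remote cache; the endpoint URL, team name, and secret name are placeholders for your own setup:

```yaml
  build:
    runs-on: ubuntu-latest
    env:
      # Point turbo at a self-hosted cache API in front of S3.
      # URL, team, and secret name below are placeholders.
      TURBO_API: https://turbo-cache.internal.example.com
      TURBO_TOKEN: ${{ secrets.TURBO_CACHE_TOKEN }}
      TURBO_TEAM: platform
    steps:
      - uses: actions/checkout@v4
      - run: npm ci
      # On a cache hit, turbo restores build outputs instead of rebuilding
      - run: npx turbo run build
```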
Gotchas: What the Docs Don't Tell You
- The 256-Job Limit: GitHub Actions has a hard limit of 256 jobs in a matrix. In a massive monorepo with 300+ packages, a dynamic matrix will eventually crash. In these cases, you must group your packages into 'buckets' or batches during the discovery phase.
- Shallow Clones: By default, `actions/checkout` performs a shallow clone (`fetch-depth: 1`). Path-based detection scripts usually need the full history to compare the current HEAD against the target branch. Always set `fetch-depth: 0` in your discovery job.
- Docker Layer Caching: If your CI builds Docker images, the GHA cache is notoriously slow for large layers. Use `type=gha` in your `docker/build-push-action` cache settings, but be aware that for multi-stage builds, you might need `mode=max` to cache intermediate stages.
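The bucketing workaround from the first gotcha can be sketched as an extra step in the discovery job. The bucket size of 50 is arbitrary, and we lean on `jq`, which is preinstalled on `ubuntu-latest` runners:

```yaml
      - name: Bucket projects to stay under the 256-job limit
        id: bucket
        run: |
          # Group the flat project list into arrays of at most 50,
          # e.g. ["a","b","c"] -> [["a","b"],["c"]] with size 2
          echo "buckets=$(echo '${{ steps.filter.outputs.changes }}' \
            | jq -c '[range(0; length; 50) as $i | .[$i:$i + 50]]')" \
            >> "$GITHUB_OUTPUT"
```

The downstream job then builds its matrix from `steps.bucket.outputs.buckets` and iterates over the projects within each bucket inside a single job.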
Takeaway
Stop paying for wasted compute. Today, audit your GitHub Actions usage. If your CI runs for more than 10 minutes on a project that wasn't touched in a PR, implement a dynamic discovery job using paths-filter and move your required status checks to a unified gatekeeper job.