Argo ApplicationSets Are Easy. Making Your App Deployable Is Not.

Per-branch preview environments on every PR. The wiring is trivial. The reason most teams never get there has nothing to do with Argo.

I recently set up per-PR preview environments using Argo CD's ApplicationSets, and the result is magical: open a pull request, slap a preview label on it, and 5 minutes later there's a fully isolated copy of the app running at https://pr-42.<repo>.preview.org.dev, behind SSO, with TLS, on its own namespace. PR closes, environment vanishes. No leftover resources, no manual cleanup, no Slack message asking who's still using staging-3.

The pipeline itself? Not that hard. An ApplicationSet with a matrix of a Git generator and a PullRequest generator. A generic preview-overlay chart that every preview reuses for ingress and OAuth2Proxy. A two-line YAML file per onboarded app. That's the whole setup. And honestly, in 2026, you don't even have to write most of it yourself. Point Claude Code or your agent of choice at the Argo docs and it'll happily produce a working ApplicationSet manifest in a single session.

But here's the thing: the Argo part is the easy 20%. The other 80% is making your app actually deployable on its own. And most apps aren't. That part (the one that requires knowing every implicit assumption your codebase makes about its environment) is exactly the kind of work an agent can't do for you in one shot.

What ApplicationSets Actually Give You

Quick context if you haven't used them. An ApplicationSet is a resource, driven by its own controller, that generates Argo CD Application resources from a template plus one or more generators. The generators we care about here:

  • PullRequest generator: queries GitHub (or GitLab) for open PRs, optionally filtered by label. One parameter set per PR.
  • Git generator: reads files or directories from a repo. Useful for "list of opted-in apps".

Combine them in a matrix and you get the cross-product: one Application per (app × labeled PR). The template uses the parameters to set the namespace, image tag, host, whatever you need.

generators:
  - matrix:
      generators:
        - git:
            repoURL: https://github.com/org/argo-apps.git
            files:
              - path: apps/repos/*.yaml
        - pullRequest:
            github:
              owner: org
              repo: '{{ .name }}'
              labels: [preview]

Each Application generated from the template is multi-source: Source 1 is the app's own Helm chart (Deployment, Service, etc.); Source 2 is a generic preview-overlay chart that every preview shares: Istio Gateway, VirtualService, OAuth2Proxy, sealed credentials. Both render into the same per-PR namespace.
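
For concreteness, the template half looks roughly like this. The chart paths, project name, and hostname scheme are placeholders, and it assumes goTemplate: true is set on the ApplicationSet so parameters use the {{ .name }} / {{ .number }} syntax:

template:
  metadata:
    name: '{{ .name }}-pr-{{ .number }}'
  spec:
    project: previews
    sources:
      # Source 1: the app's own chart, pinned to the PR's head commit
      - repoURL: 'https://github.com/org/{{ .name }}.git'
        targetRevision: '{{ .head_sha }}'
        path: chart
        helm:
          parameters:
            - name: image.tag
              value: '{{ .head_short_sha }}'
      # Source 2: the shared preview overlay (Gateway, VirtualService, OAuth2Proxy, sealed creds)
      - repoURL: https://github.com/org/argo-apps.git
        targetRevision: main
        path: charts/preview-overlay
        helm:
          parameters:
            - name: host
              value: 'pr-{{ .number }}.{{ .name }}.preview.org.dev'
    destination:
      server: https://kubernetes.default.svc
      namespace: '{{ .name }}-pr-{{ .number }}'
    syncPolicy:
      automated:
        prune: true
      syncOptions:
        - CreateNamespace=true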

The contract for an app to participate is brutally simple: expose a Service named entrypoint on port 80, ship a Docker image with a predictable tag, and add one tiny YAML file to the platform repo. The platform handles the rest.
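
In practice the contract looks something like this (file path and names are illustrative):

# apps/repos/myapp.yaml — the opt-in file the Git generator picks up
name: myapp

# And in the app's own chart: the Service the preview overlay routes to
apiVersion: v1
kind: Service
metadata:
  name: entrypoint
spec:
  selector:
    app: myapp
  ports:
    - port: 80
      targetPort: 8080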

When this works, it's beautiful. App teams don't write a single line of Istio, cert-manager, or OAuth config. They get auth, TLS, ingress, and isolation for free.

One annoying gotcha: namespace cleanup

While we're being honest about the wiring: Argo's CreateNamespace=true sync option does what it says. It creates the namespace if it doesn't exist. What it does not do is delete the namespace when the Application is removed. So when a PR closes and the ApplicationSet drops its Application, you're left with an empty namespace lingering forever.

The fix is mildly annoying: manage the Namespace as an ordinary Argo-tracked resource by putting it in a Helm chart, with Prune=true so it actually gets cleaned up. And because Argo refuses to prune a namespace that still has resources in it, that means a second chart that owns nothing but the namespace, synced separately and pruned last. It's the kind of detail that takes an afternoon to figure out and 30 seconds to write down. Now you know.
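
A minimal sketch of that namespace-only chart's single template, assuming the namespace name is passed in as a value:

# charts/preview-namespace/templates/namespace.yaml (illustrative)
apiVersion: v1
kind: Namespace
metadata:
  name: {{ .Values.namespace }}
  labels:
    # label it so preview namespaces are easy to find (and bulk-delete if needed)
    preview: "true"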

And Yet Most Apps Can't Use It

Here's where the post gets honest. I've onboarded apps that took 20 minutes. I've onboarded apps that took much longer. The difference had nothing to do with Argo, Helm, or Kubernetes.

It had everything to do with whether the app could boot in isolation.

A preview environment, by definition, is an isolated copy. New namespace, new database, no cached state, no shared anything. If the first request to your app needs:

  • a row that only exists in your team's shared dev database,
  • an OAuth callback registered on a fixed staging URL,
  • a webhook from Stripe pointing somewhere,
  • a file in S3 that someone uploaded six months ago,
  • or a friendly "ask James for the API key" handshake

…then your preview env is going to come up looking great and immediately 500 the moment anyone touches it.

The Real Work Is the Dev Setup

Per-PR previews are a forcing function. They expose every implicit assumption your app makes about its environment. And there are usually a lot.

This is the work that actually matters:

Bundle the database in the chart. Not "use the shared dev RDS instance". An actual Postgres pod, in the namespace, with a PVC if you need persistence (you usually don't, since preview envs are ephemeral and ephemeral storage is fine). It boots when the namespace boots. It dies when the PR closes. No coordination required.
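A rough sketch of what "bundled" means here, with the image, credentials, and names all placeholders, and deliberately no PVC since the environment dies with the PR:

# templates/postgres.yaml (illustrative)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: db
spec:
  replicas: 1
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
        - name: postgres
          image: postgres:16
          env:
            - name: POSTGRES_DB
              value: app
            - name: POSTGRES_USER
              value: app
            - name: POSTGRES_PASSWORD
              value: preview-only   # throwaway credential; this environment dies with the PR
          ports:
            - containerPort: 5432
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
      # emptyDir instead of a PVC: ephemeral storage for an ephemeral environment
      volumes:
        - name: data
          emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
  name: db
spec:
  selector:
    app: db
  ports:
    - port: 5432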

Seed it on startup. A Job, an init container, a one-shot migrator, whatever fits your stack. It needs to run automatically, and the data needs to be enough that someone can open the app and click around without immediately hitting "no records found". This is non-negotiable. A preview env that requires manual seeding is a preview env nobody uses.
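One shape this can take is a Helm hook Job in the same chart, rerun on every sync; the migrate-and-seed command is a stand-in for whatever your stack actually uses:

# templates/seed-job.yaml (illustrative)
apiVersion: batch/v1
kind: Job
metadata:
  name: seed
  annotations:
    "helm.sh/hook": post-install,post-upgrade
    "helm.sh/hook-delete-policy": before-hook-creation   # replace the old Job on each sync
spec:
  backoffLimit: 3
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: seed
          image: org/myapp:{{ .Values.image.tag }}
          command: ["./manage", "migrate-and-seed"]   # placeholder: run migrations, then load demo data
          env:
            - name: DATABASE_URL
              value: postgres://app:preview-only@db:5432/app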

Mock the external APIs. Every external integration needs a mode where it doesn't actually call out. Some options, from worst to best:

  • A feature flag that returns canned responses. Fine for read paths, awkward for anything stateful.
  • A local mock server (WireMock, Mockoon, or a small Express app) deployed alongside the app in the same chart. Better, since it exercises the actual HTTP layer (see the sketch after this list).
  • A contract-test-driven mock that's known to match the real API's behavior. Best, since mocks rot otherwise.
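
As a sketch of that middle option: a WireMock container and its stub mappings deployed from the same chart. The service name and the stubbed endpoint are made up for illustration:

# templates/mock-payments.yaml (illustrative)
apiVersion: v1
kind: ConfigMap
metadata:
  name: mock-payments-stubs
data:
  charge.json: |
    {
      "request":  { "method": "POST", "urlPath": "/v1/charges" },
      "response": { "status": 200, "jsonBody": { "id": "ch_preview", "status": "succeeded" } }
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mock-payments
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mock-payments
  template:
    metadata:
      labels:
        app: mock-payments
    spec:
      containers:
        - name: wiremock
          image: wiremock/wiremock:latest
          ports:
            - containerPort: 8080
          volumeMounts:
            - name: stubs
              mountPath: /home/wiremock/mappings   # where the WireMock image reads stub files
      volumes:
        - name: stubs
          configMap:
            name: mock-payments-stubs
---
apiVersion: v1
kind: Service
metadata:
  name: mock-payments
spec:
  selector:
    app: mock-payments
  ports:
    - port: 8080

The app's payments base URL (whatever env var that is in your stack) then points at http://mock-payments inside the namespace instead of the real thing.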

Stop hardcoding URLs. OAuth callback URLs, webhook receivers, "go back to" links. If they're hardcoded to staging.example.com, you're done before you started. They have to be templated from host or constructed from the request.
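Concretely, the host arrives as a value from the preview overlay and everything URL-shaped derives from it. A tiny excerpt, with the variable names as placeholders:

# values injected per preview
host: pr-42.myapp.preview.org.dev

# templates/deployment.yaml (excerpt)
env:
  - name: PUBLIC_BASE_URL
    value: https://{{ .Values.host }}
  - name: OAUTH_CALLBACK_URL
    value: https://{{ .Values.host }}/auth/callback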

Make secrets boring. Preview envs need some secrets (JWT signing keys, mock OAuth client IDs, a database password). Generate them per-namespace, or seal them once cluster-wide and reuse. Do not require an engineer to manually copy-paste anything to spin up a preview.
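The seal-once-cluster-wide option, using Sealed Secrets as an example, hinges on the cluster-wide scope so the same encrypted blob decrypts in any preview namespace (names and ciphertexts are placeholders):

# sealed once with: kubeseal --scope cluster-wide
apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
  name: preview-oauth
  annotations:
    sealedsecrets.bitnami.com/cluster-wide: "true"
spec:
  encryptedData:
    client-id: AgB3...       # ciphertext, safe to commit
    client-secret: AgC7...   # ciphertext, safe to commit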

None of this is glamorous. None of it is in a screenshot of an Argo dashboard. But it's the work that determines whether your fancy preview pipeline is a productivity multiplier or a haunted demo.

The Underrated Win: Local Development Gets Fixed Too

Here's the part I didn't expect when I started this. Every single thing on the list above (bundled database, seed data, mocked APIs, no hardcoded URLs) is also exactly what you need to make docker compose up just work for someone joining the team.

A new dev clones the repo, runs one command, and 90 seconds later they have the whole system running on their laptop. Not "running, except for the auth bit, you'll need to bug James for credentials". Not "running, but you have to point it at the shared dev DB or it'll crash". Just running. Click around, break things, learn the system, ship a PR by lunch.
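
Which is also why the same pieces translate almost one-to-one into a compose file; a sketch, reusing the placeholder names from above:

# docker-compose.yml (illustrative)
services:
  app:
    build: .
    ports: ["8080:8080"]
    environment:
      DATABASE_URL: postgres://app:local@db:5432/app
      PAYMENTS_API_URL: http://mock-payments:8080
      PUBLIC_BASE_URL: http://localhost:8080
    depends_on: [db, mock-payments]
  db:
    image: postgres:16
    environment:
      POSTGRES_DB: app
      POSTGRES_USER: app
      POSTGRES_PASSWORD: local
  seed:
    build: .
    command: ./manage migrate-and-seed   # same one-shot seeder the preview env runs
    environment:
      DATABASE_URL: postgres://app:local@db:5432/app
    depends_on: [db]
  mock-payments:
    image: wiremock/wiremock:latest
    volumes:
      - ./mocks:/home/wiremock/mappings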

This is the bar. And in a tech landscape where the cycle time on new tools, frameworks, and entire paradigms keeps compressing, the team that can stand up its system in 90 seconds runs circles around the team that needs a half-day onboarding ritual. Fast iteration isn't a luxury. It's how you stay competitive when the ground keeps shifting under you.

Preview environments and great local development are the same problem. Solve it once, get both. Skip it, and you're stuck explaining to every new hire why "it works in prod" while they wait for someone to remember which Vault path holds the staging credentials.

Why It's Worth Doing

Once an app is preview-deployable, the value compounds in ways that surprised me:

  • PR review actually involves clicking the thing. Reviewers stop guessing what a UI change looks like. Designers can leave comments on the actual deployed URL.
  • Stakeholder demos move from "let me set up a meeting" to "here, click this link".
  • Bugs get reproduced in the bug report. "It breaks when X" comes with a live URL where it breaks.
  • Onboarding gets faster. New devs don't need a working local setup on day one, but they probably have one anyway, because the work to get there was the same work.

The Bottom Line

ApplicationSets with the PullRequest generator are great. The pattern is solid, the tooling is mature, and the result feels like the future. Set it up. Get an agent to set it up. The wiring is no longer where the work is.

But don't kid yourself about where the work now is. The hard part (the part that determines whether this actually works) is making your application boot, seed, and run in a brand new namespace with no human in the loop. Bundled database. Seeded data. Mocked external APIs. No hardcoded URLs.

Do that work. The Argo part will take an afternoon. The dev-setup part might take a quarter. But on the other side of it, every PR is its own URL, every new hire is productive on day one, and you'll wonder how you ever shipped anything without it.