No Easy Answers: A Conversation About Professional DB Migration Workflows

We came across a GitHub post asking whether a Supabase CI/CD migration workflow was up to professional standards. Rather than answer with a quick checklist, we sat down and talked it through honestly — and what emerged was less a definitive answer and more a map of the tradeoffs.

Author: Jenny Berglund

Published on: April 1, 2026

The following reflects a real conversation between engineers grappling honestly with a question that doesn't have a clean answer — and that's kind of the point.

Someone posted a question on GitHub recently that prompted a discussion among our engineers. They were asking whether their Supabase CI/CD migration workflow — make changes in the UI, diff against local, generate a migration file, reset local, push to prod — was professional-grade. It's a thoughtful question, and one without an easy answer, so we discussed it.

What followed wasn't a tidy checklist of best practices. It was more like an honest audit of tools we've tried, instincts we've developed, and sharp edges we've all bled on. Here's how it went.

DB Diff-ing

The first thing that came up was the db diff approach at the heart of the GitHub poster's workflow. The reaction was immediate: don't rely on it.

The concern isn't just accuracy, though that's real. It's philosophical. Diffing encourages a mental model where you mutate state and then retroactively capture what changed, rather than treating every change as an intentional, authored migration from the start. The diff becomes the migration, and that inversion matters.

The ideal, as one of us put it, is that migrations are "an append-only, immutable record of every change." They should live next to the code. Tests should exercise them. Every environment should apply the full history from scratch. That's the discipline the workflow should enforce — not work around.

That said, the Supabase pattern does have a certain elegance to it. You make changes in their excellent UI (or SQL console), then run db reset locally to verify the full migration history replays cleanly. You're still arriving at the right destination — clean, reproducible migrations — just via a different starting point. If you're comfortable in the Supabase UI, that's not nothing.
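For concreteness, the loop described above maps onto a handful of Supabase CLI commands. The migration name here is invented, and flags can shift between CLI versions, so treat this as a sketch rather than gospel:

```shell
# 1. Prototype changes in Supabase Studio (or the SQL console)
#    against the local stack.
supabase start

# 2. Capture the delta as an authored migration file
#    (writes supabase/migrations/<timestamp>_add_user_flags.sql).
supabase db diff -f add_user_flags

# 3. Reset the local database and replay the FULL migration
#    history from scratch. This is the reproducibility check.
supabase db reset

# 4. Only after a clean replay, apply to the linked project.
supabase db push
```

The key discipline is step 3: if the full history doesn't replay cleanly on an empty database, the diff you captured isn't a trustworthy migration yet.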

Atlas

If you've thought about this problem for long, you've probably come across Atlas, which bills itself as something like Terraform for database migrations. The pitch is compelling: give it a desired end state, and it figures out how to get there — in the right order, handling the drop-and-recreate dependencies that make PostgreSQL so fiddly.

In theory, this is exactly what you want. Changing a column's data type on a table fronted by a view? You need to drop the view, alter the table, recreate the view, and regenerate any functions that depend on it. Getting that sequence right manually is error-prone. Atlas could, in principle, own that complexity.
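A hand-written migration for that scenario might look like the following. The table, column, and view names are invented for illustration; the ordering constraint is the point:

```sql
-- Migration: widen orders.amount from integer to bigint.
-- The view must be dropped first, because PostgreSQL will not
-- alter a column's type while a view depends on it.
BEGIN;

DROP VIEW order_totals;

ALTER TABLE orders
  ALTER COLUMN amount TYPE bigint;

-- Recreate the dependent view exactly as it was before.
CREATE VIEW order_totals AS
  SELECT customer_id, sum(amount) AS total
  FROM orders
  GROUP BY customer_id;

COMMIT;
```

Atlas's promise is that you only declare the final state of `orders` and `order_totals`, and it derives this drop/alter/recreate sequence for you.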

In practice, though... it's been a recurring exercise in almost-but-not-quite. We've tried the tool more than once, and each time something specific has broken it — a row-level security policy dependent on a view, a circular dependency it can't express in its syntax. The Atlas team is responsive and has addressed past blockers, but new ones keep emerging, often at the edges of PostgreSQL-specific functionality.

And that's the deeper critique: Atlas supports every major SQL database, which means it supports all of them incompletely. If you're doing anything "weird" with your database — and if you're using PostgreSQL seriously, you probably are — you'll eventually hit a wall. The tool would be most useful precisely in the cases it struggles most with.

A more conceptual objection also came up. A database table isn't just infrastructure — it's a communication interface. When you manipulate it declaratively, you may be obscuring the cascading effects of that change on everything consuming that interface. A stored procedure that breaks after a table alteration won't fail at migration time; it'll fail silently at runtime, possibly deep in a conditional branch you rarely hit. The migrations-as-serial-log model at least makes you confront each change explicitly and gives tests something concrete to exercise.
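A minimal illustration of that failure mode, with hypothetical names. (This applies to functions with string bodies, which PostgreSQL does not dependency-track; SQL-standard function bodies introduced in PostgreSQL 14 behave differently.)

```sql
-- A function whose body references a column...
CREATE FUNCTION latest_login(uid int) RETURNS timestamptz AS $$
  SELECT last_login FROM users WHERE id = uid;
$$ LANGUAGE sql;

-- ...does not stop this migration from succeeding:
ALTER TABLE users DROP COLUMN last_login;

-- The break only surfaces at call time, possibly much later:
SELECT latest_login(1);
-- ERROR:  column "last_login" does not exist
```

Nothing in the migration step warned you; the interface consumer simply broke.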

The Schema Snapshot as a Secondary Artifact

One practical pain point with stacked migrations: you never have a clean, current picture of your database in the codebase. You'd have to mentally compose 50 (or 500) migration files to understand the present state. That makes it hard to reason about changes, and it makes working with LLMs genuinely frustrating — you don't want an AI reading every migration file trying to reconstruct your schema; you want it to read one authoritative file and leave the migrations folder alone.

A pattern that came up: after applying migrations, run pg_dump with schema-only flags in a canonicalized, ordered format and commit (or surface) the result as a derived artifact. This gives you a diffable snapshot of the full schema state after each migration. It won't catch everything — but it will catch subtle traps like using CREATE IF NOT EXISTS when you thought you were creating something fresh, only to discover in the diff that nothing changed because the object already existed.
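As a sketch of that step: the flags are real pg_dump options, but the surrounding script is an assumption about your setup. Some teams also post-process the dump, since pg_dump's output ordering is stable but not perfectly canonical.

```shell
#!/usr/bin/env sh
# Regenerate the derived schema snapshot after running migrations.
#   --schema-only    structure only, no data
#   --no-owner       strip role ownership so diffs stay stable
#   --no-privileges  likewise for GRANTs
pg_dump --schema-only --no-owner --no-privileges \
  "$DATABASE_URL" > schema.sql

# Review the diff: did the migration change what you intended?
git diff -- schema.sql
```

This is where the CREATE IF NOT EXISTS trap shows up: an empty diff after a migration you expected to add something is a loud signal.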

There was no consensus on whether this artifact should be committed to the repo or generated in CI (surfaced in a PR comment, Terraform-plan-style). The committed version is useful for local LLM tooling; the CI artifact is cleaner from a repo-hygiene perspective. Neither is obviously correct.

pgroll

One of the more interesting tangents was around a tool called pgroll, from Xata. It implements an expand-contract (or blue-green) schema migration pattern: both the old and new schema versions coexist simultaneously, existing callers keep working against version N, and new callers migrate to version N+1. Eventually you deprecate the old version and move on.

The appeal is real, especially for mobile apps or distributed systems where you can't atomically update all consumers at once. The tooling abstracts away a lot of the sharp edges — table locks, constraint validation, data backfills — and tries to ensure migrations are applied safely.

But the group was cautious. The tool takes on a much bigger role than "SQL files in a directory." You're handing it significant control over how your schema evolves, expressed in a constrained JSON vocabulary rather than raw SQL. That indirection is a tradeoff: you get safer migrations, but you lose the full expressiveness of the SQL language and take on a meaningful vetting obligation for the tool itself.
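To make that concrete: a pgroll migration is declared as JSON operations rather than raw SQL. This sketch follows the shape of pgroll's published examples, but the exact vocabulary may differ across versions, so check the current docs before relying on it:

```json
{
  "name": "02_add_description",
  "operations": [
    {
      "add_column": {
        "table": "users",
        "column": {
          "name": "description",
          "type": "text",
          "nullable": true
        }
      }
    }
  ]
}
```

Everything you can express is bounded by that operation vocabulary; anything PostgreSQL can do that pgroll hasn't modeled is simply out of reach.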

There's also a philosophical concern about tooling that makes complexity easier. The pitch "you can make breaking schema changes whenever you want" sounds good, but it might just be hiding complexity, not eliminating it. What happens during a hotfix when you're mid-rollout? What does N-version support look like when N gets large? These questions don't disappear — they just become someone else's problem, until they aren't.

The verdict: genuinely interesting, worth watching, but not ready to recommend without kicking the tires seriously.

Data Migrations: The Underappreciated Problem

The GitHub poster asked about data migrations specifically, and it's a question that deserves more than a footnote. Schema migrations and data migrations have different characteristics — a DDL change might be an atomic rename with near-zero cost, while a DML change that rewrites millions of rows can bring a table to its knees.

The pattern that came up for handling them in tests: maintain a parallel set of local-only seed files that interleave with the migration history. Not just a dump of current state, but data that represents what would exist at specific points in the migration timeline — so when migration 21 transforms a column's values, there's data in place from migration 6 that it can actually operate on. It's more bookkeeping, but it's the only way to genuinely test that a data migration works against realistic historical state.
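A toy version of that interleaving, with invented file names and schema. The point is that the seed pins data to a moment in the timeline, so the later data migration has realistic rows to operate on:

```sql
-- seed/0006_users.sql (local-only): data as it would exist right
-- after migration 6, when `status` was still free-form text.
INSERT INTO users (id, email, status) VALUES
  (1, 'a@example.com', 'Active'),
  (2, 'b@example.com', 'disabled');

-- migrations/0021_normalize_status.sql: the data migration under
-- test. It only proves anything if the seed above ran first.
UPDATE users
SET status = lower(status)
WHERE status <> lower(status);
```

Run against an empty database, migration 21 "succeeds" trivially and tests nothing; the interleaved seed is what makes the test meaningful.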

For production-scale validation — especially for expensive migrations like adding an index to a massive table — the honest answer is: you need a prod-scale dataset to test against. The workflow of cloning a snapshot, running the migration, observing, and tearing it down is manual and heavy. Ideas were floated about anonymized read replicas, EBS snapshots, and copy-on-write semantics that could make prod-scale dev environments cheap and fast to spin up — but that's a longer conversation for another day.

So, What Should the Person on GitHub Actually Do?

If we were to distill something actionable out of all of this, it would probably look like:

If you like the Supabase UI and want to stay in that workflow, stick with the diff-and-reset pattern — but add a step where you generate a canonicalized schema snapshot after each migration. Use your Git diff viewer to confirm that what changed is what you intended.

If you prefer a code-first approach (and the poster clearly does — they said as much), write migrations by hand from the start. Don't make changes in the dashboard and pull them down. Treat the migration files as the source of truth.

In either case: write tests. Not in pgTAP if you can avoid it — the PostgreSQL testing story is, charitably, underdeveloped. Test a layer higher, in the language you're already using, against a local Supabase instance. It's more work, but the tooling is better and the feedback loop is faster.

And for data migrations: keep them in a separate migration history interleaved with the schema changes, run them only when testing, and back them with enough seed data to test them meaningfully.

There's no single right answer here — only a set of tradeoffs you choose to live with. The best workflow is probably the one your team will actually follow consistently, instrumented well enough that you catch mistakes before they reach production.

The conversation continues.
