
One Schema, Ten Backends: Cross-Database Migration

// · 5 min read

AgenticMail Enterprise supports ten database backends: SQLite, PostgreSQL, MySQL, MariaDB, MongoDB, DynamoDB, Turso, Neon, Supabase, and PlanetScale. All of them implement the same adapter interface, and migrations run against every one of them from a single schema definition. Getting here was an adventure.

The Adapter Interface

Every database backend implements the same contract: connect, disconnect, query, execute, migrate, and a set of CRUD operations for each domain entity (agents, emails, credentials, audit logs, etc.). The interface is strict enough that swapping backends is a configuration change, not a code change.

This matters because enterprise customers have strong opinions about databases. Some run Postgres and nothing else. Some are locked into AWS and want DynamoDB. Some need the simplicity of SQLite for small deployments. Telling a customer “you must use Postgres” is a great way to lose a deal.

Writing Migrations Once

The migration system starts with SQLite DDL as the canonical format. SQLite’s CREATE TABLE syntax is simple and unambiguous, which makes it a good source of truth. From there, a dialect converter transforms each statement into the target database’s syntax.

For relational databases (Postgres, MySQL, MariaDB, Neon, Supabase, PlanetScale), the conversion is mostly straightforward. INTEGER PRIMARY KEY AUTOINCREMENT becomes SERIAL PRIMARY KEY in Postgres or INT AUTO_INCREMENT PRIMARY KEY in MySQL. TEXT stays TEXT almost everywhere. DATETIME maps to TIMESTAMP WITH TIME ZONE in Postgres.
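The conversion step can be sketched with simple string rewrites. This is a hypothetical illustration of the mappings above, not the real converter (which presumably parses the DDL properly); `convertDDL` and `Dialect` are made-up names:

```typescript
type Dialect = 'postgres' | 'mysql';

// Rewrite canonical SQLite DDL into the target dialect's syntax.
function convertDDL(sqliteDDL: string, dialect: Dialect): string {
  if (dialect === 'postgres') {
    return sqliteDDL
      .replace(/INTEGER PRIMARY KEY AUTOINCREMENT/g, 'SERIAL PRIMARY KEY')
      .replace(/\bDATETIME\b/g, 'TIMESTAMP WITH TIME ZONE');
  }
  // MySQL / MariaDB: only the autoincrement spelling changes;
  // TEXT and DATETIME carry over unchanged.
  return sqliteDDL.replace(
    /INTEGER PRIMARY KEY AUTOINCREMENT/g,
    'INT AUTO_INCREMENT PRIMARY KEY',
  );
}
```

The same pattern extends to the Postgres-compatible hosted backends (Neon, Supabase) and the MySQL-compatible ones (PlanetScale), which is why the relational half of the matrix stays manageable.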

The interesting cases are the non-relational backends. MongoDB doesn’t have tables or columns, so the migration converter transforms CREATE TABLE statements into collection creation with JSON Schema validation. Each column becomes a field in the validation schema, with type mappings from SQL types to BSON types. DynamoDB diverges even further: table creation maps to CreateTable calls with explicit key schema definitions, and secondary indexes are derived from any UNIQUE constraints in the original DDL.
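For the MongoDB side, the column-to-validator mapping might look like this. It’s a sketch under assumed names (`Column`, `bsonTypeFor`, `buildValidator` are illustrative, and column parsing is taken as already done); the output shape is MongoDB’s standard `$jsonSchema` validator document:

```typescript
interface Column { name: string; sqlType: string; notNull: boolean }

// Map canonical SQLite column types to BSON types.
function bsonTypeFor(sqlType: string): string {
  const map: Record<string, string> = {
    TEXT: 'string', INTEGER: 'long', REAL: 'double',
    DATETIME: 'date', BLOB: 'binData',
  };
  return map[sqlType.toUpperCase()] ?? 'string';
}

// Build the validator passed to createCollection({ validator: ... }).
function buildValidator(columns: Column[]) {
  const properties: Record<string, { bsonType: string }> = {};
  for (const col of columns) {
    properties[col.name] = { bsonType: bsonTypeFor(col.sqlType) };
  }
  return {
    $jsonSchema: {
      bsonType: 'object',
      // NOT NULL columns become required fields.
      required: columns.filter(c => c.notNull).map(c => c.name),
      properties,
    },
  };
}
```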

Versioned Migration Tracking

Each backend maintains a migrations tracking table (or collection, or item, depending on the database). The tracker stores a version number, a hash of the migration content, a timestamp, and a status flag. Before running any migration, the system checks what’s already been applied and only runs what’s new.

The version numbering is sequential but the system also verifies content hashes. If someone modifies a previously applied migration file, the system catches the mismatch and refuses to proceed. This prevents subtle schema drift where the migration history says version 5 was applied but the actual DDL that ran was different from what’s in the current codebase.
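The hash check described above can be sketched as follows. `pendingMigrations` and the record shapes are assumptions for illustration, not the actual tracker:

```typescript
import { createHash } from 'node:crypto';

interface MigrationFile { version: number; content: string }
interface AppliedMigration { version: number; hash: string }

function sha256(content: string): string {
  return createHash('sha256').update(content).digest('hex');
}

// Return migrations that still need to run. If a previously applied
// file no longer matches its recorded hash, refuse to proceed.
function pendingMigrations(
  files: MigrationFile[],
  applied: AppliedMigration[],
): MigrationFile[] {
  const recorded = new Map(applied.map(m => [m.version, m.hash]));
  const pending: MigrationFile[] = [];
  for (const file of files) {
    const hash = recorded.get(file.version);
    if (hash === undefined) {
      pending.push(file); // never applied: run it
    } else if (hash !== sha256(file.content)) {
      throw new Error(
        `migration ${file.version} was modified after being applied`,
      );
    }
  }
  return pending;
}
```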

Rollbacks exist but are intentionally manual. Automatic rollbacks in production databases are dangerous. Instead, the system generates reverse migration scripts that an operator can review and apply explicitly.

Smart Auto Configuration

This is one of my favorite features. When you provide a connection string, the system detects what you’re connecting to and configures itself accordingly. A postgresql:// URL with a Neon hostname automatically enables Neon’s serverless driver with connection pooling optimized for their infrastructure. A Supabase connection string triggers the Supabase adapter with their specific auth header requirements.

The detection logic examines hostnames, port numbers, and connection string parameters. db.xxxx.supabase.co on port 5432 means Supabase. *.neon.tech means Neon. *.turso.io means Turso. PlanetScale is identified by its aws.connect.psdb.cloud hostname pattern.

This means customers can paste a connection string from their provider’s dashboard and the system just works. No manual adapter selection, no reading documentation to figure out which driver to use.
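The detection logic boils down to a handful of hostname and protocol checks. A minimal sketch, using the hostname patterns from above (the function name and return values are assumptions):

```typescript
// Identify the backend from a provider connection string.
// Hosted providers are matched by hostname first, then we fall
// back to the URL scheme for self-hosted databases.
function detectDatabaseType(connectionString: string): string {
  const url = new URL(connectionString);
  const host = url.hostname;
  if (host.endsWith('.supabase.co')) return 'supabase';
  if (host.endsWith('.neon.tech')) return 'neon';
  if (host.endsWith('.turso.io')) return 'turso';
  if (host.endsWith('aws.connect.psdb.cloud')) return 'planetscale';
  if (url.protocol.startsWith('postgres')) return 'postgres';
  if (url.protocol.startsWith('mysql')) return 'mysql';
  if (url.protocol.startsWith('mongodb')) return 'mongodb';
  return 'sqlite';
}
```

Ordering matters: the hosted-provider checks must run before the generic scheme checks, because a Neon or Supabase URL is also a valid postgresql:// URL.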

Testing Ten Backends

Testing is the expensive part. The CI pipeline spins up containers for Postgres, MySQL, and MariaDB. SQLite runs in memory. MongoDB gets its own container. DynamoDB uses a local emulator. The cloud hosted backends (Turso, Neon, Supabase, PlanetScale) run against test instances with cleanup after each run.

Every migration is tested against every backend on every commit. The full matrix takes about four minutes, which is acceptable given what it validates. The alternative, finding out in production that a migration breaks on MySQL but works on Postgres, is significantly more expensive.

Lessons Learned

SQL dialects are 90% the same, and the remaining 10% will consume 90% of your time. MongoDB’s schema validation is more capable than most people realize. DynamoDB’s single-table design patterns are elegant once you internalize them, but the mental-model shift from relational thinking is real.

The adapter pattern works if you keep the interface honest. Don’t leak backend specific features into the interface. Accept that some operations will be slightly less optimal on some backends. Consistency across ten backends beats perfection on one.

Source Code

The DatabaseAdapter abstract class defines the contract every backend must fulfill, from connection lifecycle to agent CRUD and audit logging:

export type DatabaseType =
  | 'postgres' | 'mysql' | 'mongodb' | 'sqlite'
  | 'turso' | 'dynamodb' | 'mariadb'
  | 'planetscale' | 'supabase' | 'neon';

export abstract class DatabaseAdapter {
  abstract readonly type: DatabaseType;
  abstract connect(config: DatabaseConfig): Promise<void>;
  abstract disconnect(): Promise<void>;
  abstract migrate(): Promise<void>;
  abstract createAgent(input: AgentInput): Promise<Agent>;
  abstract getAgent(id: string): Promise<Agent | null>;
  abstract listAgents(options?: { status?: Agent['status'] }): Promise<Agent[]>;
  abstract logEvent(event: Omit<AuditEvent, 'id' | 'timestamp'>): Promise<void>;
}

View the full source on GitHub
