# Migration Strategy
## Approach: parallel run

Both systems (old Supabase + new CF/PlanetScale) run simultaneously during the migration. Cutover happens only after parity validation passes.
```
┌─────────────────┐      ┌─────────────────┐
│ Old system      │      │ New system      │
│ (Supabase +     │      │ (CF Workers +   │
│  Express)       │      │  PlanetScale)   │
└────────┬────────┘      └────────┬────────┘
         │                        │
         ▼                        ▼
    ┌─────────┐             ┌─────────┐
    │ Compare │◄──────────► │ Compare │
    └────┬────┘             └────┬────┘
         │                       │
         ▼                       ▼
     Parity gate: do outputs match?
                 │
   YES: cutover to new system
   NO:  investigate + fix
```

## Phase sequence

### Phase 0: Infrastructure + data layer (Days 1–2)

- Provision PlanetScale database + dev branch
- Create Cloudflare Workers (dashboard-api, ingest)
- Configure Hyperdrive, R2, CF Access, Queue, Cron Triggers
- Run dbmate migrations (ontologically-named schema)
- Implement Drizzle PostgreSQL schema with ontological naming conventions
- Port Shopify client module (consolidated)
- Implement bulk sync ProcedureExecution + mart refresh
- Validate: raw data matches Supabase source
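The Phase 0 validation step ("raw data matches Supabase source") can be sketched as a row-count diff between the two databases. This is a minimal sketch under assumptions: the table names are illustrative, and in practice each count would come from a `COUNT(*)` query against the respective database.

```typescript
// Sketch: compare per-table row counts from the old and new databases.
// Counts are assumed to have been collected via COUNT(*) queries.
type CountMap = Record<string, number>;

interface Mismatch {
  table: string;
  oldCount: number;
  newCount: number;
}

function diffCounts(oldCounts: CountMap, newCounts: CountMap): Mismatch[] {
  const mismatches: Mismatch[] = [];
  for (const [table, oldCount] of Object.entries(oldCounts)) {
    const newCount = newCounts[table] ?? 0; // missing table counts as 0 rows
    if (newCount !== oldCount) mismatches.push({ table, oldCount, newCount });
  }
  return mismatches;
}

// Example: one table drifted during sync (hypothetical numbers).
const drift = diffCounts(
  { material_artifact: 1200, denotation_relation: 3400 },
  { material_artifact: 1200, denotation_relation: 3395 },
);
```

Any non-empty result blocks progression to Phase 1 for the affected tables.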
### Phase 1: API surface + background jobs (Days 3–7)

- Port all CRUD routes to Hono (all 14 scenarios)
- Implement CF Access middleware
- Port all RPC logic from Supabase stored procs to TypeScript
- Implement queue consumers + producers (ProcedureExecution dispatch)
- Wire Cron Triggers
- Validate: API responses and ProcedureExecution outputs match old system
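The queue consumer/producer step above can be sketched as a dispatch table keyed by procedure name. Everything here is a hypothetical shape, not the project's actual types: the message fields, the procedure names, and the handler return values are assumptions.

```typescript
// Hypothetical ProcedureExecution queue message and consumer dispatch.
interface ProcedureExecutionMessage {
  procedure: string; // e.g. "bulk_sync", "mart_refresh" (illustrative names)
  params: Record<string, unknown>;
}

type ProcedureHandler = (params: Record<string, unknown>) => Promise<string>;

// The consumer looks up a handler by the procedure named in the message.
const handlers: Record<string, ProcedureHandler> = {
  bulk_sync: async () => "bulk sync started",
  mart_refresh: async () => "mart refresh started",
};

async function dispatch(msg: ProcedureExecutionMessage): Promise<string> {
  const handler = handlers[msg.procedure];
  if (!handler) throw new Error(`unknown procedure: ${msg.procedure}`);
  return handler(msg.params);
}
```

In the real Worker, `dispatch` would be called once per message inside the queue handler, with failures surfaced so the batch can be retried.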
### Phase 2: Frontend swap + parity run (Days 8–10)

- Swap API base URL to Worker
- Remove Supabase auth + Passport auth code
- Deploy to CF Pages
- Run parity checks
- Staff validates key workflows
### Phase 3: Cutover (Days 11–14)

- DNS switch to new system
- Monitor for 48 hours
- Verify observability alerts
### Buffer week (Days 15–21)

- Fix edge cases from production usage
- Tune performance
- Decommission old infrastructure (after 2-week holding period)
## Parity checks

During the parallel run, compare key aggregates using the new ontological table names:
| Check | Method |
|---|---|
| Performance Measurement Dataset count | COUNT(*) on both marts (performance_measurement_dataset vs old mart_performance_metrics) |
| Top 100 MaterialArtifacts by sales | Compare aggregate_measurement_value values |
| Nominal Classification distribution | Count per nominal_classification bucket |
| Denotation relation coverage | Compare denotation_relation counts vs old product_asset_mappings |
| Tag ICE count | Compare tag_content_entity rows vs old tag_classifications |
Acceptable tolerance: < 1% variance, to allow for timing differences between sync ProcedureExecution runs.
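The tolerance rule can be sketched as a relative-variance check (the 1% threshold is the one stated above; the function name is illustrative):

```typescript
// Accept a parity check when the relative variance between the old
// and new aggregates is under the threshold (default 1%).
function withinTolerance(
  oldValue: number,
  newValue: number,
  tolerance = 0.01,
): boolean {
  if (oldValue === 0) return newValue === 0; // avoid division by zero
  return Math.abs(newValue - oldValue) / Math.abs(oldValue) < tolerance;
}

withinTolerance(10_000, 10_050); // → true  (0.5% variance)
withinTolerance(10_000, 10_200); // → false (2% variance)
```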
## Rollback plan

| Trigger | Action |
|---|---|
| Parity check fails | Stay on old system; investigate discrepancy |
| New system error rate > threshold | DNS revert to old system (< 5 min) |
| Data corruption detected | Replay from R2-archived JSONL (IBEs); restore PlanetScale from branch |
| Post-cutover regression | PlanetScale branch restore; DNS revert |
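The "error rate > threshold" trigger can be sketched as a simple ratio check. The 1% threshold below is an illustrative assumption; the plan does not specify a value.

```typescript
// Sketch: decide whether the error-rate rollback trigger fires over
// a monitoring window. Threshold of 1% is assumed, not from the plan.
function shouldRollback(
  errors: number,
  requests: number,
  threshold = 0.01,
): boolean {
  if (requests === 0) return false; // no traffic, no signal
  return errors / requests > threshold;
}

shouldRollback(5, 1_000);  // → false (0.5% error rate)
shouldRollback(50, 1_000); // → true  (5% error rate)
```

When the check fires, the action is the DNS revert from the table above, which keeps recovery under 5 minutes.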
Key enabler: the old Supabase stack remains running until the parity acceptance gate is met. Do not decommission it until 2 weeks post-cutover with no rollback triggers fired.
## Data migration

One-time migration of existing Supabase data to PlanetScale:

- Export via `pg_dump` or the Supabase CLI
- Align schema conventions (ontological naming, index strategy, column defaults)
- Import via dbmate seed scripts or direct `INSERT` batches (old names → new names)
- Validate row counts and spot-check data integrity
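The "old names → new names" step of the `INSERT` batches can be sketched as a rename map plus a batch builder. The three renames below come from the parity-check table; the rest of the map, and the simplified SQL quoting, are assumptions for illustration.

```typescript
// Hypothetical old → new table-name map; extend per schema.
const TABLE_RENAMES: Record<string, string> = {
  mart_performance_metrics: "performance_measurement_dataset",
  product_asset_mappings: "denotation_relation",
  tag_classifications: "tag_content_entity",
};

// Quote a JS value as a SQL literal (simplified: NULL, numbers, strings).
function sqlLiteral(v: unknown): string {
  if (v === null || v === undefined) return "NULL";
  if (typeof v === "number") return String(v);
  return `'${String(v).replace(/'/g, "''")}'`;
}

// Build one INSERT batch targeting the new table name, from rows
// exported out of the old table.
function insertBatch(
  oldTable: string,
  rows: Record<string, unknown>[],
): string {
  const newTable = TABLE_RENAMES[oldTable];
  if (!newTable) throw new Error(`no rename mapping for ${oldTable}`);
  const cols = Object.keys(rows[0]);
  const values = rows
    .map((r) => `(${cols.map((c) => sqlLiteral(r[c])).join(", ")})`)
    .join(",\n");
  return `INSERT INTO ${newTable} (${cols.join(", ")}) VALUES\n${values};`;
}
```

Generated batches can then be fed through dbmate seed scripts, and the row-count validation in the last bullet closes the loop.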