Migration Strategy

Both systems (old Supabase + new CF/PlanetScale) run simultaneously during migration. Cutover happens only after parity validation passes.

┌─────────────────┐        ┌─────────────────┐
│   Old system    │        │   New system    │
│  (Supabase +    │        │  (CF Workers +  │
│   Express)      │        │   PlanetScale)  │
└────────┬────────┘        └────────┬────────┘
         │                          │
         ▼                          ▼
    ┌─────────┐            ┌─────────┐
    │ Compare │◄──────────────►│ Compare │
    └────┬────┘                └────┬────┘
         │                          │
         ▼                          ▼
       Parity gate: do outputs match?
         YES: cutover to new system
         NO:  investigate + fix
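
The parity gate can be sketched as a plain comparison step: run the same named checks against both systems and report which ones diverge. The helper names and result shapes below are illustrative, not the actual implementation.

```typescript
// Minimal parity-gate sketch. Each key is a check name; each value is the
// output of running that check against one system.
type CheckResults = Record<string, unknown>;

interface ParityReport {
  match: boolean;
  mismatches: string[]; // names of checks whose outputs differ
}

function parityGate(oldOut: CheckResults, newOut: CheckResults): ParityReport {
  const names = new Set([...Object.keys(oldOut), ...Object.keys(newOut)]);
  const mismatches: string[] = [];
  for (const name of names) {
    // Serialize both sides so nested aggregates compare structurally.
    if (JSON.stringify(oldOut[name]) !== JSON.stringify(newOut[name])) {
      mismatches.push(name);
    }
  }
  return { match: mismatches.length === 0, mismatches };
}
```

If `match` is false, stay on the old system and investigate; only a clean report permits cutover.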

Phase 0: Infrastructure + data layer (Days 1–2)

  • Provision PlanetScale database + dev branch
  • Create Cloudflare Workers (dashboard-api, ingest)
  • Configure Hyperdrive, R2, CF Access, Queue, Cron Triggers
  • Run dbmate migrations (ontologically-named schema)
  • Implement Drizzle PostgreSQL schema with ontological naming conventions
  • Port Shopify client module (consolidated)
  • Implement bulk sync ProcedureExecution + mart refresh
  • Validate: raw data matches Supabase source
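
One way to approach the last step (raw data matches the Supabase source) is to fingerprint each table on both sides and compare digests. This is a sketch, assuming flat rows already fetched into memory; the function names are made up:

```typescript
import { createHash } from "node:crypto";

// Canonicalize one flat row: sort keys so column-order differences
// between the two systems don't affect the serialization.
function canonicalRow(row: Record<string, unknown>): string {
  const sorted = Object.fromEntries(
    Object.entries(row).sort(([a], [b]) => (a < b ? -1 : a > b ? 1 : 0))
  );
  return JSON.stringify(sorted);
}

// Order-insensitive fingerprint of a row set: sort canonical rows,
// then hash the concatenation. Equal digests => identical raw data.
function rowSetDigest(rows: Record<string, unknown>[]): string {
  const canonical = rows.map(canonicalRow).sort();
  const hash = createHash("sha256");
  for (const line of canonical) hash.update(line + "\n");
  return hash.digest("hex");
}

function rawDataMatches(
  supabaseRows: Record<string, unknown>[],
  planetscaleRows: Record<string, unknown>[]
): boolean {
  return rowSetDigest(supabaseRows) === rowSetDigest(planetscaleRows);
}
```

For large tables you would stream and digest in chunks rather than load everything into memory, but the comparison idea is the same.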

Phase 1: API surface + background jobs (Days 3–7)

  • Port all CRUD routes to Hono (all 14 scenarios)
  • Implement CF Access middleware
  • Port all RPC logic from Supabase stored procs to TypeScript
  • Implement queue consumers + producers (ProcedureExecution dispatch)
  • Wire Cron Triggers
  • Validate: API responses and ProcedureExecution outputs match old system
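
At its core, the CF Access middleware rejects requests that lack a valid `Cf-Access-Jwt-Assertion` header. A stripped-down sketch of that gate follows; a real deployment must also verify the JWT signature against the team's Cloudflare Access certs endpoint and check the `aud` claim, which this sketch deliberately does not do:

```typescript
// Illustrative only: checks presence and basic shape of the CF Access JWT.
const ACCESS_HEADER = "cf-access-jwt-assertion";

type Verdict = { ok: true } | { ok: false; status: number; reason: string };

function checkAccessHeader(headers: Record<string, string>): Verdict {
  // Normalize header names, since HTTP header keys are case-insensitive.
  const norm: Record<string, string> = {};
  for (const [k, v] of Object.entries(headers)) norm[k.toLowerCase()] = v;

  const token = norm[ACCESS_HEADER];
  if (!token) {
    return { ok: false, status: 403, reason: "missing Cf-Access-Jwt-Assertion" };
  }
  // A JWT has exactly three dot-separated base64url segments.
  if (token.split(".").length !== 3) {
    return { ok: false, status: 403, reason: "malformed token" };
  }
  return { ok: true }; // real middleware verifies signature + aud here
}
```

In Hono this logic would live in a middleware that short-circuits with a 403 response before any route handler runs.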

Phase 2: Frontend swap + parity run (Days 8–10)

  • Swap API base URL to Worker
  • Remove Supabase auth + Passport auth code
  • Deploy to CF Pages
  • Run parity checks
  • Staff validates key workflows
  • DNS switch to new system
  • Monitor for 48 hours
  • Verify observability alerts
  • Fix edge cases from production usage
  • Tune performance
  • Decommission old infrastructure (after 2-week holding period)

During the parallel run, compare key aggregates using the new ontological table names:

| Check | Method |
| --- | --- |
| Performance Measurement Dataset count | COUNT(*) on both marts (performance_measurement_dataset vs old mart_performance_metrics) |
| Top 100 MaterialArtifacts by sales | Compare aggregate_measurement_value values |
| Nominal Classification distribution | Count per nominal_classification bucket |
| Denotation relation coverage | Compare denotation_relation counts vs old product_asset_mappings |
| Tag ICE count | Compare tag_content_entity rows vs old tag_classifications |

Acceptable tolerance: < 1% variance (due to timing differences in sync ProcedureExecutions).
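
The < 1% tolerance can be encoded as a relative-variance check over the compared aggregates. A sketch, with hypothetical check names taken from the comparison list above:

```typescript
// Relative variance between an old and new aggregate; 0 when both are 0.
function relativeVariance(oldVal: number, newVal: number): number {
  if (oldVal === 0 && newVal === 0) return 0;
  return Math.abs(newVal - oldVal) / Math.max(Math.abs(oldVal), Math.abs(newVal));
}

// Accept only if every compared aggregate is within the tolerance band.
function withinTolerance(
  pairs: Array<{ check: string; oldVal: number; newVal: number }>,
  tolerance = 0.01 // < 1% variance
): { pass: boolean; failures: string[] } {
  const failures = pairs
    .filter((p) => relativeVariance(p.oldVal, p.newVal) >= tolerance)
    .map((p) => p.check);
  return { pass: failures.length === 0, failures };
}
```

A failing check does not automatically mean the new system is wrong; first rule out a sync ProcedureExecution that ran at a different time on each side.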

Rollback triggers and responses:

| Trigger | Action |
| --- | --- |
| Parity check fails | Stay on old system; investigate discrepancy |
| New system error rate > threshold | DNS revert to old system (< 5 min) |
| Data corruption detected | Replay from R2-archived JSONL (IBEs); restore PlanetScale from branch |
| Post-cutover regression | PlanetScale branch restore; DNS revert |

Key enabler: the old Supabase stack keeps running until the parity acceptance gate is met. Do not decommission it until 2 weeks post-cutover with no rollback triggers fired.

One-time migration of existing Supabase data to PlanetScale:

  1. Export via pg_dump or Supabase CLI
  2. Align schema conventions (ontological naming, index strategy, column defaults)
  3. Import via dbmate seed scripts or direct INSERT batches (old names → new names)
  4. Validate row counts and spot-check data integrity
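
Step 3's old-name → new-name rewrite can be driven by a simple mapping table. The pairs below are the ones mentioned in the comparison checks earlier; the mapping shape and SQL helper are illustrative, and column renames would work the same way:

```typescript
// Old Supabase table names mapped to the new ontological names.
// Only pairs mentioned elsewhere in this plan; extend as needed.
const TABLE_RENAMES: Record<string, string> = {
  mart_performance_metrics: "performance_measurement_dataset",
  product_asset_mappings: "denotation_relation",
  tag_classifications: "tag_content_entity",
};

// Build a parameterized INSERT targeting the new schema. Unmapped tables
// pass through unchanged so the importer can migrate them name-for-name.
function rewriteInsertTarget(oldTable: string, columns: string[]): string {
  const newTable = TABLE_RENAMES[oldTable] ?? oldTable;
  const cols = columns.join(", ");
  const params = columns.map((_, i) => `$${i + 1}`).join(", ");
  return `INSERT INTO ${newTable} (${cols}) VALUES (${params})`;
}
```

Keeping the rename map in one place also gives the step-4 validation a single source of truth for which old table to count against which new one.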