
PlanetScale Constraints

PlanetScale enforces system-level safety limits that directly shape how the pipeline's ProcedureExecutions and scenarios are implemented.

| Constraint | Limit | Impact |
| --- | --- | --- |
| Rows returned per query | 100,000 | Paginate large reads; chunk SELECTs for the performance_measurement_dataset rebuild |
| Rows affected per statement | 100,000 | Batch INSERT/UPDATE/DELETE operations |
| Result set size | 64 MiB | Limit column selection; avoid SELECT * on JSON-heavy tables |
| Transaction timeout | 20 seconds | Keep transactions short; no multi-stage transactions |
| Autocommit timeout | 900 seconds | Single statements can run longer, but should still be bounded |
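
These limits are worth capturing in one place so pipeline code does not scatter magic numbers. A minimal sketch (the module shape and constant names are illustrative, not part of any PlanetScale SDK):

```typescript
// Hypothetical constants mirroring the limits in the table above.
const PLANETSCALE_LIMITS = {
  maxRowsReturnedPerQuery: 100_000,
  maxRowsAffectedPerStatement: 100_000,
  maxResultSetBytes: 64 * 1024 * 1024, // 64 MiB
  transactionTimeoutMs: 20_000,        // 20 seconds
  autocommitTimeoutMs: 900_000,        // 900 seconds
} as const;

// Derive how many chunks a write of a given size needs.
function batchCount(totalRows: number, batchSize: number): number {
  return Math.ceil(totalRows / batchSize);
}
```

Keeping the limits in one constant also makes it easy to assert at startup that configured batch sizes stay under them.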

All pipeline ProcedureExecution stages that write multiple rows must chunk to stay under limits:

// Assumes Drizzle ORM with: import { sql } from "drizzle-orm";
const BATCH_SIZE = 5000; // well under the 100k rows-affected limit
for (let i = 0; i < rows.length; i += BATCH_SIZE) {
  const batch = rows.slice(i, i + BATCH_SIZE);
  await db
    .insert(commercialTransactionRecord)
    .values(batch)
    .onConflictDoUpdate({
      target: commercialTransactionRecord.orderId,
      set: { lastModifiedAt: sql`EXCLUDED.last_modified_at` },
    });
}

Because transactions are capped at 20 seconds, an entire ProcedureExecution is never wrapped in a single transaction; each chunk commits independently. If a chunk fails, the execution records its last successful position and can resume:

// Each chunk is its own autocommit statement
await insertChunk(batch);
await updateExecutionProgress(runId, { processedCount: i + batch.length });
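
The resume path can be sketched end to end with an in-memory progress store standing in for the real execution-progress table (all helper names here are illustrative, not the pipeline's actual APIs):

```typescript
// Sketch of resume-on-restart for chunked writes.
const BATCH = 3; // tiny for illustration; production would use ~5000

// runId -> processedCount; stands in for the execution-progress table.
const progressStore = new Map<string, number>();

async function insertChunk(batch: number[], sink: number[]): Promise<void> {
  sink.push(...batch); // stands in for the idempotent upsert
}

async function runChunked(runId: string, rows: number[], sink: number[]) {
  // On restart, resume from the last successfully committed position.
  const start = progressStore.get(runId) ?? 0;
  for (let i = start; i < rows.length; i += BATCH) {
    await insertChunk(rows.slice(i, i + BATCH), sink);
    progressStore.set(runId, Math.min(i + BATCH, rows.length));
  }
}
```

A rerun with the same runId starts at the recorded position and skips rows that were already committed, which is safe because every chunk write is an idempotent upsert.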

Every pipeline Step (PKO) is designed to be idempotent and restartable:

  • Raw upserts use ON CONFLICT DO UPDATE on commercial_transaction_record
  • Sales measurement rebuilds are keyed by (artifact_identifier, day) composite primary key
  • performance_measurement_dataset refresh overwrites entire rows by primary key
  • Watermark advances only after successful ProcedureExecution completion
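
The keyed-overwrite property behind these bullets can be illustrated with a toy in-memory version of the rebuild (the row type and names are illustrative; the real table is performance_measurement_dataset with a (artifact_identifier, day) primary key):

```typescript
// Toy illustration: rebuilding rows keyed by (artifact_identifier, day)
// converges to the same state no matter how many times it replays.
type MetricRow = { artifactId: string; day: string; value: number };

const dataset = new Map<string, MetricRow>(); // keyed by composite PK

function rebuild(rows: MetricRow[]): void {
  for (const r of rows) {
    dataset.set(`${r.artifactId}:${r.day}`, r); // overwrite whole row by key
  }
}
```

Because each row is fully overwritten by its key, replaying a rebuild after a partial failure cannot duplicate or corrupt data.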

Large queries use cursor-based pagination rather than OFFSET:

// Assumes Drizzle ORM with: import { gt } from "drizzle-orm";
let cursor: string | null = null;
while (true) {
  const rows = await db
    .select()
    .from(commercialTransactionRecord)
    .where(cursor ? gt(commercialTransactionRecord.orderId, cursor) : undefined)
    .orderBy(commercialTransactionRecord.orderId)
    .limit(10000); // under the 100k rows-returned limit
  if (rows.length === 0) break;
  cursor = rows[rows.length - 1].orderId;
  // process rows...
}

PlanetScale’s branching workflow supports safe schema changes:

  1. Create a development branch from production
  2. Apply migrations via dbmate on the dev branch
  3. Create a deploy request (like a PR for schemas)
  4. Review and merge — PlanetScale applies the change online, without blocking traffic
  5. Rollback available if needed (undo deployment without data loss)

Safe migrations should be enabled on production branches to prevent destructive changes.

Cloudflare Hyperdrive sits between Workers and PlanetScale to provide:

  • Connection pooling — avoids per-request TCP/TLS handshake overhead
  • Query caching — optional, for read-heavy dashboard queries
  • Edge proximity — connections are maintained closer to Worker execution

Hyperdrive is the recommended way to connect Workers to PlanetScale.
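
Wiring this up is mostly configuration. A hedged sketch of the wrangler.toml binding (the binding name and config id are placeholders for your own Hyperdrive configuration):

```toml
# Hypothetical wrangler.toml fragment for a Hyperdrive binding.
[[hyperdrive]]
binding = "HYPERDRIVE"
id = "<your-hyperdrive-config-id>"
```

The Worker then reads `env.HYPERDRIVE.connectionString` and passes it to its database driver, so connections go through the pooled Hyperdrive endpoint instead of directly to PlanetScale.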