# PlanetScale Constraints
PlanetScale enforces system-level safety limits that directly affect how pipeline ProcedureExecutions and scenarios are implemented.
## Key limits

| Constraint | Limit | Impact |
|---|---|---|
| Rows returned per query | 100,000 | Paginate large reads; chunk SELECTs for the `performance_measurement_dataset` rebuild |
| Rows affected per statement | 100,000 | Batch INSERT/UPDATE/DELETE operations |
| Result set size | 64 MiB | Limit column selection; avoid `SELECT *` on JSON-heavy tables |
| Transaction timeout | 20 seconds | Keep transactions short; no multi-stage transactions |
| Autocommit timeout | 900 seconds | Single statements can run longer, but should still be bounded |
## Design rules

### Chunked batch writes

All pipeline ProcedureExecution stages that write multiple rows must chunk to stay under the limits:

```ts
const BATCH_SIZE = 5000; // well under the 100k rows-affected limit

for (let i = 0; i < rows.length; i += BATCH_SIZE) {
  const batch = rows.slice(i, i + BATCH_SIZE);
  await db
    .insert(commercialTransactionRecord)
    .values(batch)
    .onConflictDoUpdate({
      target: commercialTransactionRecord.orderId,
      set: { lastModifiedAt: sql`EXCLUDED.last_modified_at` },
    });
}
```
### No long transactions

Instead of wrapping an entire ProcedureExecution in a transaction, each chunk commits independently. If a chunk fails, the execution records its last successful position and can resume:

```ts
// Each chunk is its own autocommit statement
await insertChunk(batch);
await updateExecutionProgress(runId, { processedCount: i + batch.length });
```
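The resume logic can be sketched as a small helper. Here `insertChunk` and `saveProgress` are hypothetical stand-ins for the real database calls; the names and signatures are assumptions, not the pipeline's actual API:

```typescript
type Progress = { processedCount: number };

// Minimal sketch of a restartable chunked write loop. Each chunk is its own
// autocommit statement; progress is recorded after every successful chunk so
// a failed execution can resume from the last committed position.
async function runChunked<T>(
  rows: T[],
  batchSize: number,
  startAt: number, // processedCount recorded by the previous attempt, or 0
  insertChunk: (batch: T[]) => Promise<void>,
  saveProgress: (p: Progress) => Promise<void>,
): Promise<void> {
  for (let i = startAt; i < rows.length; i += batchSize) {
    const batch = rows.slice(i, i + batchSize);
    await insertChunk(batch); // autocommit: no surrounding transaction
    await saveProgress({ processedCount: i + batch.length });
  }
}
```

Because the underlying writes are upserts, replaying a chunk whose progress update was lost in a crash is harmless.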
### Restartable operations

Every pipeline Step (PKO) is designed to be idempotent and restartable:
- Raw upserts use `ON CONFLICT DO UPDATE` on `commercial_transaction_record`
- Sales measurement rebuilds are keyed by the `(artifact_identifier, day)` composite primary key
- `performance_measurement_dataset` refresh overwrites entire rows by primary key
- Watermark advances only after successful ProcedureExecution completion
### Read pagination

Large queries use cursor-based pagination rather than OFFSET:

```ts
let cursor: string | null = null;

while (true) {
  const rows = await db
    .select()
    .from(commercialTransactionRecord)
    .where(cursor ? gt(commercialTransactionRecord.orderId, cursor) : undefined)
    .orderBy(commercialTransactionRecord.orderId)
    .limit(10000);

  if (rows.length === 0) break;
  cursor = rows[rows.length - 1].orderId;
  // process rows...
}
```
## Schema migration

PlanetScale's branching workflow supports safe schema changes:
1. Create a development branch from production
2. Apply migrations via dbmate on the dev branch
3. Create a deploy request (like a PR for schemas)
4. Review and merge; PlanetScale applies the changes without blocking traffic
5. Roll back if needed (undo the deployment without data loss)
Safe migrations should be enabled on production branches to prevent destructive changes.
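With the pscale CLI, the workflow above looks roughly like this. This is a sketch: the database name `my-db`, the branch name `dev`, and the deploy request number are placeholders, and exact flags may vary by CLI version:

```shell
# 1. Create a development branch from production
pscale branch create my-db dev

# 2. Apply migrations via dbmate against the dev branch
#    (e.g. tunnel with `pscale connect my-db dev`, then point
#    dbmate's DATABASE_URL at the local tunnel and run `dbmate up`)

# 3. Open a deploy request (a PR for the schema)
pscale deploy-request create my-db dev

# 4. After review, merge; PlanetScale applies the change non-blockingly
pscale deploy-request deploy my-db <number>
```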
## Hyperdrive

Cloudflare Hyperdrive sits between Workers and PlanetScale to provide:
- Connection pooling — avoids per-request TCP/TLS handshake overhead
- Query caching — optional, for read-heavy dashboard queries
- Edge proximity — connections are maintained closer to Worker execution
Hyperdrive is the recommended way to connect Workers to PlanetScale.
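As a sketch, a Worker reaches the database through a Hyperdrive binding's connection string. The binding name `HYPERDRIVE` and the postgres.js driver are assumptions here; the binding itself is declared in the Worker's wrangler configuration, and the same connection string could instead be handed to Drizzle's postgres-js adapter:

```typescript
import postgres from "postgres";

export interface Env {
  // Hyperdrive binding from wrangler configuration (name is an assumption)
  HYPERDRIVE: Hyperdrive;
}

export default {
  async fetch(_req: Request, env: Env): Promise<Response> {
    // Hyperdrive exposes a pooled connection string; the driver talks to the
    // nearby Hyperdrive endpoint instead of dialing the origin database,
    // skipping the per-request TCP/TLS handshake.
    const sql = postgres(env.HYPERDRIVE.connectionString, { max: 1 });
    const rows = await sql`select 1 as ok`;
    return Response.json(rows);
  },
} satisfies ExportedHandler<Env>;
```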