# Cloudflare Workers

## Workers as compute

Both the dashboard-api and ingest Workers run on Cloudflare’s edge network. Key execution constraints:
| Constraint | Paid plan limit |
|---|---|
| CPU time per invocation | Default 30s, configurable up to 5 minutes |
| Memory | 128 MB |
| Subrequest limit | 1,000 per invocation |
| Script size | 10 MB (after compression) |
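Where a background task needs more than the 30s default, the ceiling can be raised in the Worker’s configuration. A minimal sketch, assuming the stanza lives in that Worker’s own wrangler.toml:

```toml
# wrangler.toml (sketch): raise the CPU-time ceiling for long-running work.
# 300000 ms = 5 minutes, the documented maximum on the paid plan.
[limits]
cpu_ms = 300000
```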
Design implication: Background tasks must be chunkable and restartable — database transaction timeouts (20s) can become the bottleneck even when Worker CPU limits are higher.
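The chunkable-and-restartable pattern can be sketched as a cursor-driven loop — all names below are illustrative, not from the codebase. Each invocation processes one bounded chunk and returns a cursor; the caller persists it (e.g. in a job-state row) and re-enqueues, so a job killed by a CPU or transaction timeout resumes where it left off:

```ts
// Sketch: process a bounded chunk per invocation, returning a resumable cursor.
type Cursor = { offset: number; done: boolean };

async function runChunk(
  items: number[],
  cursor: Cursor,
  chunkSize: number,
  process: (item: number) => Promise<void>,
): Promise<Cursor> {
  // Keep each chunk small enough to finish well inside the 20s DB transaction timeout.
  const end = Math.min(cursor.offset + chunkSize, items.length);
  for (let i = cursor.offset; i < end; i++) {
    await process(items[i]);
  }
  // Caller persists this cursor and re-enqueues until done is true.
  return { offset: end, done: end >= items.length };
}
```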
## Hono framework

Both Workers use Hono as the web framework:

```ts
import { Hono } from 'hono';

type Bindings = {
  R2_BUCKET: R2Bucket;
  INGEST_QUEUE: Queue;
  HYPERDRIVE: Hyperdrive;
  SHOPIFY_ACCESS_TOKEN: string;
};

const app = new Hono<{ Bindings: Bindings }>();

app.use('*', cfAccessMiddleware()); // validates JWT

app.route('/api/performance-metrics', performanceRoutes);
app.route('/api/combo-logs', comboRoutes);
// ...

export default app;
```

## Queues
Cloudflare Queues provide the job dispatch mechanism between Workers:
```toml
# wrangler.toml for dashboard-api (producer)
[[queues.producers]]
queue = "ingest-queue"
binding = "INGEST_QUEUE"
```

```toml
# wrangler.toml for ingest (consumer)
[[queues.consumers]]
queue = "ingest-queue"
max_batch_size = 10
max_batch_timeout = 5
```

### Semantics
- Batch delivery: messages arrive in batches (configurable size/timeout)
- Explicit ack: `msg.ack()` prevents redelivery even if later messages fail
- Retry: default 3 retries; messages exceeding max retries go to the DLQ if configured
- Delayed retry: `msg.retry({ delaySeconds })` for rate-limit backoff (up to 12 hours)
- Dead Letter Queue (DLQ): recommended for poison-message handling
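The semantics above can be sketched as a consumer loop — the handler and the `rate_limited` error discriminator are illustrative, not from the codebase:

```ts
// Sketch: per-message ack/retry handling with rate-limit backoff.
type QueueMessage = {
  body: unknown;
  ack: () => void;
  retry: (opts?: { delaySeconds?: number }) => void;
};

async function consume(
  messages: QueueMessage[],
  handle: (body: unknown) => Promise<void>,
): Promise<void> {
  for (const msg of messages) {
    try {
      await handle(msg.body);
      msg.ack(); // explicit ack: no redelivery even if a later message throws
    } catch (err) {
      if ((err as Error).message === 'rate_limited') {
        msg.retry({ delaySeconds: 60 }); // delayed retry for backoff
      } else {
        msg.retry(); // default retry; after max retries the message goes to the DLQ if configured
      }
    }
  }
}
```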
### Message dispatch

```ts
export default {
  async queue(batch: MessageBatch, env: Bindings) {
    for (const msg of batch.messages) {
      const job = IngestJobSchema.parse(msg.body);
      switch (job.job_type) {
        case 'SHOPIFY_BULK_SYNC_START':
          await handleBulkSyncStart(job, env);
          break;
        case 'ASSET_INGEST':
          await handleAssetIngest(job, env);
          break;
        // ...
      }
      msg.ack();
    }
  },
};
```

## Cron Triggers
Scheduled handlers that enqueue jobs on a fixed schedule:

```ts
export default {
  async scheduled(event: ScheduledEvent, env: Bindings) {
    switch (event.cron) {
      case '0 6 * * *': // daily 6am UTC
        await env.INGEST_QUEUE.send({ job_type: 'SHOPIFY_BULK_SYNC_START', ... });
        break;
      case '0 * * * *': // hourly
        await env.INGEST_QUEUE.send({ job_type: 'MART_REFRESH', ... });
        break;
    }
  },
};
```

Cron Triggers execute on UTC time. See Physical Architecture for the full schedule.
## Bindings and secrets

| Binding | Type | Purpose |
|---|---|---|
| `R2_BUCKET` | R2 Bucket | Asset storage, JSONL archives, CSV exports |
| `INGEST_QUEUE` | Queue | Job dispatch |
| `HYPERDRIVE` | Hyperdrive | PlanetScale connection pooling |
| `INGEST` | Service Binding | dashboard-api → ingest direct calls |
| `SHOPIFY_ACCESS_TOKEN` | Secret | Shopify API auth |
| `CF_ACCESS_AUD` | Variable | Access application audience tag for JWT validation |
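As an illustration of how `CF_ACCESS_AUD` feeds JWT validation: Cloudflare Access delivers the token in the `Cf-Access-Jwt-Assertion` header, and the middleware checks (among other things) that the token’s `aud` claim matches the configured audience tag. The sketch below shows only that comparison — a real middleware must also verify the token signature against the team’s public keys:

```ts
// Sketch: check a JWT's aud claim against the configured audience tag.
// Signature verification is deliberately omitted here.
function audienceMatches(jwt: string, expectedAud: string): boolean {
  const parts = jwt.split('.');
  if (parts.length !== 3) return false;
  // Decode the base64url payload segment.
  const payload = JSON.parse(
    atob(parts[1].replace(/-/g, '+').replace(/_/g, '/')),
  );
  const aud: string[] = Array.isArray(payload.aud) ? payload.aud : [payload.aud];
  return aud.includes(expectedAud);
}
```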
## Service bindings

For direct Worker-to-Worker calls without network hops:

```toml
[[services]]
binding = "INGEST"
service = "ingest"
```

The call runs on the same thread of the same server, with zero network overhead. Used for health aggregation and internal coordination.
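A call through the binding looks like an ordinary `fetch`, just dispatched in-process. A minimal sketch of a health-aggregation call — the `/health` path and the helper name are assumptions, not from the codebase:

```ts
// Sketch: call the ingest Worker through the INGEST service binding.
// The URL's host is ignored for routing; only the bound Worker receives the request.
async function ingestHealth(
  env: { INGEST: { fetch: (req: Request) => Promise<Response> } },
): Promise<{ ok: boolean; body: string }> {
  const res = await env.INGEST.fetch(new Request('https://ingest.internal/health'));
  return { ok: res.ok, body: await res.text() };
}
```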