Performance Metrics & Classification
Overview
The performance metrics system is the primary analytical surface of the platform. It retrieves metrics derived from the daily sync pipeline, applies performance-tier classification based on configurable thresholds, and serves a rich dashboard with multi-dimensional filtering.
Goal: Enable Ops and Creative teams to quickly identify which products are performing (winners, top risers) and which need action (bananas, archive candidates).
API endpoints
The scenario touches two API surfaces: the commerce analytics spec (read-only R2/DuckDB queries) and the internal performance-metrics endpoints (CRUD + classification settings, not yet in a formal OpenAPI spec).
Commerce analytics (specced)
| Method | Path | Spec | Description |
|---|---|---|---|
| GET | /analytics/commerce/overview | analytics.yaml | Headline KPIs: revenue, order count, AOV |
| GET | /analytics/commerce/products | analytics.yaml | Top products by revenue/units (ranked) |
Performance metrics (internal, not yet specced)
| Method | Path | Description |
|---|---|---|
| GET | /api/performance-metrics | List metrics with filters, pagination, sorting |
| GET | /api/performance-metrics/:id | Single metric by artifact identifier |
| POST | /api/performance-metrics | Create (rare — typically populated by sync pipeline) |
| PUT | /api/performance-metrics/:id | Update metric |
| DELETE | /api/performance-metrics/:id | Delete metric |
| GET | /api/performance-metrics/stats/summary | Aggregate counts by classification tier + totals |
| GET | /api/performance-metrics/settings/classification | Current classification thresholds |
| PUT | /api/performance-metrics/settings/classification | Update thresholds (takes effect on next mart refresh) |
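As a sketch, a client-side threshold update against the internal settings endpoint might look like the following. The payload field names match the configurable thresholds listed under Classification logic; the non-negativity check is an assumption added for safety, not part of the endpoint contract.

```typescript
// Threshold payload shape; field names match the configurable thresholds
// stored in macrodata_artifact (kind: classification_spec).
interface ClassificationThresholds {
  top_riser_7_day_threshold: number;
  top_riser_14_day_threshold: number;
  winner_30_day_threshold: number;
  banana_count: number;
}

// Assumption: thresholds are non-negative finite numbers. Reject anything
// else before issuing the PUT so a typo can't poison the next mart refresh.
function validateThresholds(body: ClassificationThresholds): ClassificationThresholds {
  for (const [key, value] of Object.entries(body)) {
    if (!Number.isFinite(value) || value < 0) {
      throw new Error(`invalid threshold ${key}: ${value}`);
    }
  }
  return body;
}

// PUT the new thresholds; per the table above, they take effect
// on the next mart refresh, not immediately.
async function updateThresholds(baseUrl: string, body: ClassificationThresholds) {
  const res = await fetch(`${baseUrl}/api/performance-metrics/settings/classification`, {
    method: "PUT",
    headers: { "content-type": "application/json" },
    body: JSON.stringify(validateThresholds(body)),
  });
  if (!res.ok) throw new Error(`threshold update failed: ${res.status}`);
  return res.json();
}
```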
Schema tables
| Table | Module | Role |
|---|---|---|
| measurement.performance (R2 Iceberg) | Analytics | Read: serving dataset rebuilt by mart refresh |
| macrodata_artifact (kind: classification_spec) | Macrodata | Read/Write: configurable thresholds governing classification tiers |
| sync.telemetry (R2) | Analytics | Read: audit trail for sync execution history |
Infrastructure
| Resource | Type | Purpose |
|---|---|---|
| dashboard-api | CF Worker (internal) | API surface for metrics + threshold management |
| measurement.performance | R2 Iceberg table | Read-optimized metric store (rebuilt by mart refresh) |
| sync.telemetry | R2 object store | Sync procedure audit trail |
| Mart refresh job | CF Cron Trigger | Rebuilds measurement.performance with fresh classification tiers |
| Daily sync | CF Workflow | Feeds raw sales data that powers the metric dataset |
Participants
| Actor | Role |
|---|---|
| Staff user (Ops/Creative) | Browses metrics, adjusts classification thresholds, triggers sync |
| Dashboard API Worker | Serves CRUD endpoints, summary stats, threshold management |
| Mart refresh job | Rebuilds measurement.performance on R2 with fresh classification tiers |
| Daily sync | Feeds raw sales data that powers the metric dataset |
Classification logic
Products are auto-classified during mart refresh based on configurable thresholds stored in macrodata_artifact (kind: classification_spec):
| Performance tier | Rule |
|---|---|
| top_riser | sales_first_7_days >= top_riser_7_day_threshold OR sales_first_14_days >= top_riser_14_day_threshold |
| winner | sales_last_30_days >= winner_30_day_threshold |
| banana | Product is in the bottom N by aggregate_measurement_value (where N = banana_count) |
| archive_candidate | is_active = false AND sales_last_30_days = 0 |
| none | Does not match any rule |
Products can carry multiple classification flags simultaneously (is_banana, is_winner, is_top_riser, is_archive) in addition to the primary nominal_classification column.
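The rules above can be sketched as a pure function. The row and spec shapes here are illustrative (the actual column set lives in measurement.performance), and banana membership is passed in as a flag because the bottom-N ranking is computed over the whole dataset during mart refresh, not per row.

```typescript
// Illustrative row shape; mirrors the columns named in the rules table.
interface MetricRow {
  sales_first_7_days: number;
  sales_first_14_days: number;
  sales_last_30_days: number;
  aggregate_measurement_value: number;
  is_active: boolean;
}

// Thresholds from macrodata_artifact (kind: classification_spec).
interface ClassificationSpec {
  top_riser_7_day_threshold: number;
  top_riser_14_day_threshold: number;
  winner_30_day_threshold: number;
  banana_count: number;
}

type Tier = "top_riser" | "winner" | "banana" | "archive_candidate" | "none";

// Returns every tier a row matches (products can carry multiple flags);
// isBanana is precomputed from the bottom-N ranking across all rows.
function classify(row: MetricRow, spec: ClassificationSpec, isBanana: boolean): Tier[] {
  const tiers: Tier[] = [];
  if (
    row.sales_first_7_days >= spec.top_riser_7_day_threshold ||
    row.sales_first_14_days >= spec.top_riser_14_day_threshold
  ) {
    tiers.push("top_riser");
  }
  if (row.sales_last_30_days >= spec.winner_30_day_threshold) tiers.push("winner");
  if (isBanana) tiers.push("banana");
  if (!row.is_active && row.sales_last_30_days === 0) tiers.push("archive_candidate");
  return tiers.length > 0 ? tiers : ["none"];
}
```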
Query parameters (list endpoint)
| Param | Type | Description |
|---|---|---|
| classification | string | Filter by performance tier: top_riser, winner, banana, archive_candidate, all |
| search | string | Case-insensitive search on product name and artifact identifier |
| page | number | Page number (default: 1) |
| pageSize | number | Items per page (default: 50) |
Default sort: nominal_classification DESC, then aggregate_measurement_value DESC.
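As a sketch of how a client might assemble the list-endpoint query string from these parameters: buildMetricsUrl is a hypothetical helper, and the defaults mirror the table above (page 1, 50 items per page, with `all` treated as no classification filter).

```typescript
// Filter set accepted by GET /api/performance-metrics.
interface ListQuery {
  classification?: "top_riser" | "winner" | "banana" | "archive_candidate" | "all";
  search?: string;
  page?: number;
  pageSize?: number;
}

// Builds the list-endpoint URL; "all" (or absent) means no tier filter.
function buildMetricsUrl(base: string, q: ListQuery): string {
  const params = new URLSearchParams();
  if (q.classification && q.classification !== "all") {
    params.set("classification", q.classification);
  }
  if (q.search) params.set("search", q.search);
  params.set("page", String(q.page ?? 1));
  params.set("pageSize", String(q.pageSize ?? 50));
  return `${base}/api/performance-metrics?${params.toString()}`;
}
```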
Summary response DTO
```ts
interface PerformanceMetricsSummary {
  total_artifacts: number;              // Product count
  top_risers: number;                   // Classification tier counts
  winners: number;
  bananas: number;
  aggregate_measurement_total: number;  // Sum of aggregate_measurement_value
  sales_last_30_days: number;
}
```

Dashboard features
The performance tracking page provides:
- Stat cards — clickable cards filtering by classification tier
- Multi-dimensional filters — classification, search, product type, vendor, tags (by category), date range, active status
- Sortable columns — click column headers to toggle sort direction
- Pagination — first/prev/page-input/next/last controls
- Classification badges — color-coded: banana (yellow), winner (blue), top riser (green), archive (red)
- Threshold dialog — edit thresholds for top_riser_7_day_threshold, top_riser_14_day_threshold, winner_30_day_threshold, banana_count
- Sync history — collapsible table showing last 10 sync.telemetry records from R2 with duration and counts
Acceptance criteria
- List endpoint returns paginated metrics with correct filter application
- Threshold update persists and takes effect on next mart refresh
- Summary metrics match actual data (counts and sums are consistent)
- Search works case-insensitively across product name and artifact identifier
- Multiple classification badges display correctly when a product matches multiple rules
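The summary-consistency criterion can be checked mechanically. A minimal sketch, assuming rows expose the per-tier boolean flags named under Classification logic and the summary follows the DTO above:

```typescript
// Row flags and measures needed for the consistency check.
interface Row {
  is_top_riser: boolean;
  is_winner: boolean;
  is_banana: boolean;
  aggregate_measurement_value: number;
  sales_last_30_days: number;
}

// Mirrors the PerformanceMetricsSummary DTO.
interface Summary {
  total_artifacts: number;
  top_risers: number;
  winners: number;
  bananas: number;
  aggregate_measurement_total: number;
  sales_last_30_days: number;
}

// True when every summary count and sum agrees with the underlying rows.
function summaryMatches(rows: Row[], s: Summary): boolean {
  const count = (f: (r: Row) => boolean) => rows.filter(f).length;
  const sum = (f: (r: Row) => number) => rows.reduce((acc, r) => acc + f(r), 0);
  return (
    s.total_artifacts === rows.length &&
    s.top_risers === count(r => r.is_top_riser) &&
    s.winners === count(r => r.is_winner) &&
    s.bananas === count(r => r.is_banana) &&
    s.aggregate_measurement_total === sum(r => r.aggregate_measurement_value) &&
    s.sales_last_30_days === sum(r => r.sales_last_30_days)
  );
}
```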