Write and query high-cardinality event data at scale with SQL. Load when tracking user events, billing metrics, per-tenant analytics, A/B testing, API usage, or custom telemetry.
No separate create step is needed: a dataset is created automatically the first time your Worker writes to it. Just add a binding in `wrangler.jsonc`:

```jsonc
{
  "analytics_engine_datasets": [
    {
      "binding": "USER_EVENTS",
      "dataset": "my-dataset"
    }
  ]
}
```
| Use Case | Why Analytics Engine |
|---|---|
| User behavior tracking | High-cardinality data (userId, sessionId, etc.) |
| Billing/usage metrics | Per-tenant aggregation with doubles |
| Custom telemetry | Non-blocking writes, queryable with SQL |
| A/B test metrics | Index by experiment ID, query results |
| API usage tracking | Count requests per customer/endpoint |
| Operation | API | Notes |
|---|---|---|
| Write event | `env.DATASET.writeDataPoint({ ... })` | Non-blocking, do NOT await |
| Metrics | `doubles: [value1, value2]` | Up to 20 numeric values |
| Labels | `blobs: [label1, label2]` | Up to 20 text values |
| Grouping | `indexes: [userId]` | 1 index per datapoint (max 96 bytes) |
| Query data | SQL API via REST | GraphQL also available |
Analytics Engine stores datapoints with three types of fields:
| Field Type | Purpose | Limit | Example |
|---|---|---|---|
| doubles | Numeric metrics (counters, gauges, latency) | 20 per datapoint | [response_time, bytes_sent] |
| blobs | Text labels (URLs, names, IDs) | 20 per datapoint | [path, event_name] |
| indexes | Grouping key (userId, tenantId, etc.) | 1 per datapoint | [userId] |
Key concept: The index is the primary key that represents your app, customer, merchant, or tenant. Use it to group and filter data efficiently in SQL queries. For multiple dimensions, use blobs or create a composite index.
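The composite-index option mentioned above can be sketched as a small helper. This is our own illustrative pattern (the function and delimiter are assumptions, not part of the API), built around the documented single-index, 96-byte limit:

```typescript
// Pack multiple dimensions into the single allowed index value.
// The ":" delimiter is an arbitrary choice; pick one that cannot
// appear inside the parts themselves.
function compositeIndex(...parts: string[]): string {
  const key = parts.join(":");
  // Enforce the documented 96-byte limit on index values.
  if (new TextEncoder().encode(key).length > 96) {
    throw new Error("composite index exceeds the 96-byte limit");
  }
  return key;
}

// Example: group by tenant AND region in one index value,
// e.g. indexes: [compositeIndex("tenant-42", "eu-west")]
```

In SQL you can then filter with a prefix match on `index1` (e.g. `WHERE index1 LIKE 'tenant-42:%'`) to recover the per-tenant grouping.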
```typescript
interface Env {
  USER_EVENTS: AnalyticsEngineDataset;
}

export default {
  async fetch(req: Request, env: Env): Promise<Response> {
    const url = new URL(req.url);
    const path = url.pathname;
    // Fall back to a fixed value so the index is never null
    const userId = url.searchParams.get("userId") ?? "anonymous";

    // Write a datapoint for this visit, associating the data with
    // the userId as our Analytics Engine 'index'
    env.USER_EVENTS.writeDataPoint({
      // Write metrics data: counters, gauges or latency statistics
      doubles: [],
      // Write text labels - URLs, app names, event names, etc.
      blobs: [path],
      // Provide an index that groups your data correctly.
      indexes: [userId],
    });

    return Response.json({ hello: "world" });
  },
};
```
```typescript
interface Env {
  API_METRICS: AnalyticsEngineDataset;
}

export default {
  async fetch(req: Request, env: Env): Promise<Response> {
    const start = Date.now();
    const url = new URL(req.url);
    const apiKey = req.headers.get("x-api-key") || "anonymous";
    const endpoint = url.pathname;

    try {
      // Handle API request...
      const response = await handleApiRequest(req);
      const duration = Date.now() - start;

      // Track successful request
      env.API_METRICS.writeDataPoint({
        // content-length is a string header; coerce it to a number
        doubles: [duration, Number(response.headers.get("content-length")) || 0],
        blobs: [endpoint, "success", response.status.toString()],
        indexes: [apiKey],
      });
      return response;
    } catch (error) {
      const duration = Date.now() - start;

      // Track failed request
      env.API_METRICS.writeDataPoint({
        doubles: [duration, 0],
        blobs: [endpoint, "error", error instanceof Error ? error.message : String(error)],
        indexes: [apiKey],
      });
      return new Response("Error", { status: 500 });
    }
  },
};
```
IMPORTANT: Do NOT await calls to writeDataPoint(). It is non-blocking and returns immediately.
```typescript
// ❌ WRONG - Do not await
await env.USER_EVENTS.writeDataPoint({ ... });

// ✅ CORRECT - Fire and forget
env.USER_EVENTS.writeDataPoint({ ... });
```
This allows your Worker to respond quickly without waiting for the write to complete.
Analytics Engine data is accessible via REST API with SQL queries:
Endpoint: `https://api.cloudflare.com/client/v4/accounts/{account_id}/analytics_engine/sql`

```sql
SELECT
  timestamp,
  blob1 AS path,
  index1 AS userId
FROM USER_EVENTS
WHERE timestamp > NOW() - INTERVAL '1' DAY
ORDER BY timestamp DESC
LIMIT 100
```
```sql
SELECT
  index1 AS apiKey,
  COUNT(*) AS request_count,
  AVG(double1) AS avg_duration_ms,
  SUM(double2) AS total_bytes
FROM API_METRICS
WHERE timestamp > NOW() - INTERVAL '7' DAY
GROUP BY apiKey
ORDER BY request_count DESC
```
```sh
curl "https://api.cloudflare.com/client/v4/accounts/{account_id}/analytics_engine/sql" \
  --header "Authorization: Bearer <API_TOKEN>" \
  --data "SHOW TABLES"
```
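The same query can be issued programmatically from any HTTP client. In this sketch, only the endpoint URL and the bearer-token header come from the section above; the helper names and the split between request construction and the network call are our own:

```typescript
// Build the request pieces for the SQL API endpoint shown above.
// accountId and token are placeholders the caller must supply.
function buildSqlRequest(accountId: string, token: string, sql: string) {
  return {
    url: `https://api.cloudflare.com/client/v4/accounts/${accountId}/analytics_engine/sql`,
    init: {
      method: "POST",
      headers: { Authorization: `Bearer ${token}` },
      body: sql,
    },
  };
}

// Thin wrapper around fetch; throws on non-2xx responses.
async function querySql(accountId: string, token: string, sql: string): Promise<string> {
  const { url, init } = buildSqlRequest(accountId, token, sql);
  const res = await fetch(url, init);
  if (!res.ok) throw new Error(`SQL API error: ${res.status}`);
  return res.text();
}
```

Keeping `buildSqlRequest` pure makes the URL and headers easy to unit-test without touching the network.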
Fields are automatically numbered based on write order:

- `double1`, `double2`, ... `double20`
- `blob1`, `blob2`, ... `blob20`
- `index1`

Use `AS` aliases to make queries readable:
```sql
SELECT
  double1 AS response_time,
  blob1 AS endpoint,
  index1 AS user_id
FROM my_dataset
```
```jsonc
{
  "name": "analytics-engine-example",
  "main": "src/index.ts",
  "compatibility_date": "2025-02-11",
  "analytics_engine_datasets": [
    {
      "binding": "USER_EVENTS",
      "dataset": "user-events"
    },
    {
      "binding": "API_METRICS",
      "dataset": "api-metrics"
    }
  ]
}
```
```typescript
interface Env {
  // Analytics Engine dataset binding
  USER_EVENTS: AnalyticsEngineDataset;
}

// Datapoint structure
interface AnalyticsEngineDataPoint {
  doubles?: number[]; // Up to 20 numeric values
  blobs?: string[];   // Up to 20 text values
  indexes?: string[]; // Max 1 value per datapoint (up to 96 bytes)
}
```
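Because out-of-range fields are easy to introduce silently, a small guard can enforce these limits before writing. The helper below is our own sketch (not part of the Workers runtime), built around the documented limits:

```typescript
interface AnalyticsEngineDataPoint {
  doubles?: number[];
  blobs?: string[];
  indexes?: string[];
}

// Throw before writeDataPoint() if the documented limits are exceeded:
// up to 20 doubles, up to 20 blobs, one index of at most 96 bytes.
function assertWithinLimits(p: AnalyticsEngineDataPoint): AnalyticsEngineDataPoint {
  if ((p.doubles?.length ?? 0) > 20) throw new Error("too many doubles (max 20)");
  if ((p.blobs?.length ?? 0) > 20) throw new Error("too many blobs (max 20)");
  if ((p.indexes?.length ?? 0) > 1) throw new Error("only 1 index per datapoint");
  const index = p.indexes?.[0];
  if (index && new TextEncoder().encode(index).length > 96) {
    throw new Error("index exceeds 96 bytes");
  }
  return p;
}

// Usage:
// env.USER_EVENTS.writeDataPoint(assertWithinLimits({ blobs: [path], indexes: [userId] }));
```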
`writeDataPoint()` is non-blocking for maximum performance. Note that each datapoint takes a single index; extra dimensions such as session or app version go in `blobs`:

```typescript
env.SESSIONS.writeDataPoint({
  doubles: [sessionDuration, pageViews, eventsCount],
  blobs: [browser, country, deviceType, sessionId],
  indexes: [userId],
});
```

```typescript
env.ERRORS.writeDataPoint({
  doubles: [1], // Error count
  blobs: [errorType, errorMessage.slice(0, 256), endpoint, appVersion],
  indexes: [userId],
});
```

```typescript
env.REVENUE.writeDataPoint({
  doubles: [amountCents, taxCents, discountCents],
  blobs: [productId, currency, paymentMethod, merchantId],
  indexes: [customerId],
});
```
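The revenue example stores money as integer cents in `doubles`. A tiny conversion helper (our own, not part of the API) avoids floating-point drift when the source amount is in decimal currency units:

```typescript
// Convert a decimal currency amount to integer cents before writing.
// Math.round guards against binary floating-point artifacts
// (19.99 * 100 evaluates to 1998.9999999999998 in IEEE 754 doubles).
function toCents(amount: number): number {
  return Math.round(amount * 100);
}
```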
See references/limits.md for complete details.
```typescript
// Before: D1
await env.DB.prepare("INSERT INTO events (userId, event) VALUES (?, ?)")
  .bind(userId, event)
  .run();

// After: Analytics Engine
env.EVENTS.writeDataPoint({
  blobs: [event],
  indexes: [userId],
}); // Non-blocking, no await
```
Analytics Engine provides: