Query API

Execute SPL2 queries against LynxDB. Supports synchronous, asynchronous, and hybrid execution modes, NDJSON streaming export, query explain, and full async job management with progress tracking.

POST /query

Core search endpoint. Executes any SPL2 pipeline including search, aggregation, and management commands.

Request Body

| Field | Type | Required | Default | Description |
|-------|------|----------|---------|-------------|
| `q` | string | Yes | -- | SPL2 query string |
| `from` | string | No | `"-15m"` | Start time: relative (`-1h`, `-7d`) or ISO 8601 |
| `to` | string | No | `"now"` | End time: relative or ISO 8601 |
| `limit` | integer | No | 1000 | Max events to return (max 50,000) |
| `offset` | integer | No | 0 | Offset for pagination (tabular results only) |
| `format` | string | No | `"json"` | Output format: `json`, `csv`, `raw` |
| `wait` | number | No | null | Execution mode control (see below) |

Execution Modes

The wait parameter controls sync/async behavior:

| `wait` value | Mode | Behavior | Response |
|--------------|------|----------|----------|
| null (default) | Sync | Block until complete or server timeout (30s) | 200 with results, or 408 on timeout |
| 0 | Async | Return immediately with a job handle | 202 with job handle |
| N (seconds) | Hybrid | Wait up to N seconds | 200 if done in time, 202 + job handle otherwise |

Hybrid mode is ideal for Web UI -- fast queries return instantly, slow queries degrade gracefully to async with progress tracking.
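A hybrid-mode client only has to branch on the status code. A minimal sketch, with the HTTP call itself elided (`status` and `body` stand in for the response code and decoded JSON):

```python
def handle_query_response(status: int, body: dict):
    """Branch on the hybrid-mode outcomes described above: 200 means the
    query finished in time, 202 means the server handed back a job."""
    if status == 200:
        return ("results", body["data"])
    if status == 202:
        return ("job", body["data"]["job_id"])
    # 400/408/429 and friends -- surface the error body.
    raise RuntimeError(f"query failed with HTTP {status}: {body}")
```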

Sync Query (Default)

curl -s localhost:3100/api/v1/query \
  -d '{
    "q": "level=error",
    "from": "-1h",
    "limit": 100
  }' | jq .

Response -- events result (200):

{
  "data": {
    "type": "events",
    "events": [
      {
        "_id": "01JKNM3VXQP...",
        "_timestamp": "2026-02-14T14:52:01.234Z",
        "_source": "nginx",
        "level": "error",
        "status": 502,
        "uri": "/api/v1/users",
        "method": "GET",
        "duration_ms": 12
      }
    ],
    "total": 1247,
    "has_more": true
  },
  "meta": {
    "took_ms": 89,
    "scanned": 12400000,
    "query_id": "qry_7f3a..."
  }
}
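For tabular results, `offset` plus `limit` give simple pagination. A client-side pager might look like this sketch, where `run_query` is a hypothetical stand-in for your HTTP client (POST /query request body in, decoded JSON out):

```python
def fetch_all_rows(run_query, q, page_size=1000):
    """Page through a tabular result using limit/offset. Per the request
    table above, offset pagination applies to tabular results only."""
    rows, offset = [], 0
    while True:
        resp = run_query({"q": q, "limit": page_size, "offset": offset})
        data = resp["data"]
        rows.extend(data["rows"])
        offset += page_size
        if offset >= data["total_rows"]:
            return rows
```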

Aggregation Query

curl -s localhost:3100/api/v1/query \
  -d '{
    "q": "source=nginx status>=500 | stats count by uri | sort -count | head 10",
    "from": "-1h",
    "to": "now"
  }' | jq .

Response -- aggregate result (200):

{
  "data": {
    "type": "aggregate",
    "columns": ["uri", "count"],
    "rows": [
      ["/api/v1/users", 1247],
      ["/api/v1/orders", 893],
      ["/health", 412]
    ],
    "total_rows": 42
  },
  "meta": {
    "took_ms": 34,
    "scanned": 12400000,
    "query_id": "qry_8b2c..."
  }
}

Timechart Query

curl -s localhost:3100/api/v1/query \
  -d '{
    "q": "level=error | timechart count span=5m",
    "from": "-6h"
  }' | jq .

Response -- timechart result (200):

{
  "data": {
    "type": "timechart",
    "interval": "5m",
    "columns": ["_time", "count"],
    "rows": [
      ["2026-02-14T14:00:00Z", 42],
      ["2026-02-14T14:05:00Z", 87],
      ["2026-02-14T14:10:00Z", 156]
    ]
  },
  "meta": {
    "took_ms": 45,
    "scanned": 12400000,
    "query_id": "qry_9c1d..."
  }
}

MV-Accelerated Query

When a materialized view covers the query, LynxDB automatically uses it. The meta.accelerated_by field indicates acceleration:

{
  "data": {
    "type": "aggregate",
    "columns": ["source", "count"],
    "rows": [
      ["nginx", 142847],
      ["api-gw", 89234]
    ],
    "total_rows": 5
  },
  "meta": {
    "took_ms": 3,
    "scanned": 142847,
    "query_id": "qry_d4e1...",
    "accelerated_by": {
      "view": "mv_errors_5m",
      "original_scan": 12400000,
      "speedup": "~400x"
    }
  }
}

If the MV is still backfilling, you get partial results with coverage info:

{
  "meta": {
    "accelerated_by": {
      "view": "mv_errors_5m",
      "status": "backfilling",
      "coverage_percent": 66.7
    }
  }
}

Hybrid Mode

Wait up to 5 seconds, then fall back to async:

curl -s localhost:3100/api/v1/query \
  -d '{
    "q": "* | stats count by source",
    "from": "-30d",
    "wait": 5
  }' | jq .

If the query finishes within 5 seconds, you get 200 with results. If not, you get 202 with a job handle:

Response -- hybrid fallback (202):

{
  "data": {
    "type": "job",
    "job_id": "qry_7f3a2b",
    "status": "running",
    "query": "* | stats count by source",
    "from": "-30d",
    "to": "now",
    "progress": {
      "phase": "scanning",
      "scanned": 2100000000,
      "total_estimate": 10400000000,
      "percent": 20.2,
      "events_matched": 847291,
      "elapsed_ms": 5000,
      "eta_ms": 19700
    },
    "partial_results": {
      "type": "aggregate",
      "columns": ["source", "count"],
      "rows": [
        ["nginx", 142000],
        ["api-gw", 71000]
      ],
      "note": "Based on 20% of data. Final values will change."
    }
  }
}

Async Mode

Return a job handle immediately:

curl -s localhost:3100/api/v1/query \
  -d '{
    "q": "* | stats dc(user_id) by source, status",
    "from": "-90d",
    "wait": 0
  }' | jq .

Response (202):

{
  "data": {
    "type": "job",
    "job_id": "qry_9c1d4e",
    "status": "running",
    "query": "* | stats dc(user_id) by source, status",
    "from": "-90d",
    "to": "now",
    "progress": {
      "phase": "scanning",
      "scanned": 0,
      "total_estimate": 84700000000,
      "percent": 0,
      "events_matched": 0,
      "elapsed_ms": 0,
      "eta_ms": null
    }
  }
}

Poll progress with GET /query/jobs/{job_id} or subscribe to SSE with GET /query/jobs/{job_id}/stream.
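A polling loop over that endpoint might look like the sketch below, where `get_job` is a hypothetical stand-in for your HTTP client (job ID in, decoded JSON out):

```python
import time

def wait_for_job(get_job, job_id, poll_interval=1.0, timeout=300):
    """Poll GET /query/jobs/{job_id} until the job reaches a terminal state."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        job = get_job(job_id)["data"]
        if job["status"] == "complete":
            return job["results"]
        if job["status"] in ("failed", "cancelled"):
            raise RuntimeError(f"job {job_id} {job['status']}: {job.get('error')}")
        time.sleep(poll_interval)
    raise TimeoutError(f"job {job_id} still running after {timeout}s")
```

For interactive UIs, prefer the SSE stream over polling; the loop above suits scripts and batch jobs.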

SPL2 Management Commands

Management commands also flow through this endpoint:

# Create a materialized view
curl -s localhost:3100/api/v1/query \
-d '{
"q": "level=error | stats count, avg(duration) by source, time_bucket(timestamp, '\''5m'\'') AS bucket | materialize \"mv_errors_5m\" retention=90d"
}'

# Query a materialized view
curl -s localhost:3100/api/v1/query \
-d '{"q": "| from mv_errors_5m | where source=\"nginx\" | sort -count | head 10"}'

Error Responses

| Status | Code | Description |
|--------|------|-------------|
| 400 | `INVALID_QUERY` | SPL2 syntax error (includes `suggestion` field) |
| 408 | `QUERY_TIMEOUT` | Query exceeded server timeout |
| 429 | `RATE_LIMITED` | Too many concurrent queries |

GET /query

GET convenience variant for simple queries. Use POST for complex or long queries.

Query Parameters

| Parameter | Required | Default | Description |
|-----------|----------|---------|-------------|
| `q` | Yes | -- | SPL2 query string |
| `from` | No | `"-15m"` | Start time |
| `to` | No | `"now"` | End time |
| `limit` | No | 1000 | Max events (max 50,000) |
| `format` | No | `"json"` | Output format: `json`, `csv`, `raw` |

curl -s "localhost:3100/api/v1/query?q=level%3Derror&from=-1h&limit=10" | jq .

POST /query/stream

NDJSON streaming export. Same input as POST /query, but returns one JSON object per line with Transfer-Encoding: chunked. Designed for large exports, piping to files, and data pipelines.

curl -s localhost:3100/api/v1/query/stream \
  -d '{"q": "level=error", "from": "-24h"}'

Response (200, application/x-ndjson):

{"_id":"01JKN...","_timestamp":"2026-02-14T14:52:01Z","level":"error","message":"timeout"}
{"_id":"01JKN...","_timestamp":"2026-02-14T14:51:58Z","level":"error","message":"refused"}
{"__meta":{"total":8432,"scanned":12400000,"took_ms":342}}

Behavior

  • No default limit -- streaming is for full export. Client disconnect cancels the query.
  • The wait parameter is ignored -- streaming always blocks until complete.
  • The last line is always {"__meta": {...}} with stream summary stats.
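Given those rules, consuming an export reduces to parsing each line and peeling off the final `__meta` trailer. A sketch:

```python
import json

def parse_ndjson_export(lines):
    """Split an NDJSON export into (events, meta). The final line is
    always a {"__meta": ...} trailer; everything before it is one
    event per line."""
    events, meta = [], None
    for line in lines:
        line = line.strip()
        if not line:
            continue
        obj = json.loads(line)
        if "__meta" in obj:
            meta = obj["__meta"]
        else:
            events.append(obj)
    return events, meta
```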

Piping to a File

curl -s localhost:3100/api/v1/query/stream \
  -d '{"q": "source=nginx", "from": "-7d"}' > nginx_export.ndjson

Streaming vs. Job SSE

POST /query/stream produces an NDJSON data export (one event per line, for curl | jq and pipelines). GET /query/jobs/{id}/stream produces SSE progress events (for real-time Web UI updates). They serve different purposes.


GET /query/explain

Parse and explain a query without executing it. Returns the parsed pipeline, estimated cost, fields involved, and materialized view acceleration availability.

curl -s "localhost:3100/api/v1/query/explain?q=source%3Dnginx+%7C+stats+count+by+uri" | jq .

Response -- valid query with MV acceleration (200):

{
  "data": {
    "parsed": {
      "pipeline": [
        {
          "type": "search",
          "filters": [{"field": "source", "op": "=", "value": "nginx"}]
        },
        {
          "type": "stats",
          "aggregations": [{"fn": "count"}],
          "group_by": ["uri"]
        }
      ],
      "result_type": "aggregate",
      "estimated_cost": "low",
      "uses_full_scan": false,
      "fields_read": ["source", "uri"],
      "fields_produced": ["uri", "count"]
    },
    "acceleration": {
      "available": true,
      "view": "mv_nginx_parsed",
      "reason": "MV covers filter (source=nginx) and GROUP BY (uri) with count aggregate",
      "estimated_speedup": "~200x"
    },
    "is_valid": true
  }
}

Response -- invalid query (200):

{
  "data": {
    "is_valid": false,
    "errors": [
      {
        "position": 24,
        "length": 6,
        "message": "Unknown command 'staats'",
        "suggestion": "stats"
      }
    ]
  }
}

This endpoint powers autocomplete, red-underline validation, and the query planner in the Web UI.
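The `position` and `length` fields are enough to drive an underline renderer. A plain-text sketch, assuming `position` is a 0-based character offset into the query string (the response above does not pin this down):

```python
def underline(query: str, errors: list[dict]) -> str:
    """Render a caret line under each error span reported by /query/explain."""
    marks = [" "] * len(query)
    for err in errors:
        start = err["position"]
        # Mark each character covered by the error span.
        for i in range(start, min(len(query), start + err["length"])):
            marks[i] = "^"
    return query + "\n" + "".join(marks).rstrip()
```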


GET /query/jobs

List active and recently completed query jobs.

Query Parameters

| Parameter | Required | Default | Description |
|-----------|----------|---------|-------------|
| `status` | No | -- | Filter by status: `running`, `complete`, `failed`, `cancelled` |

curl -s localhost:3100/api/v1/query/jobs | jq .

Response (200):

{
  "data": {
    "jobs": [
      {
        "job_id": "qry_9c1d4e",
        "status": "running",
        "query": "* | stats dc(user_id) by source, status",
        "from": "-90d",
        "to": "now",
        "created_at": "2026-02-14T14:50:00Z",
        "progress": {
          "phase": "scanning",
          "percent": 45.2,
          "elapsed_ms": 18000,
          "eta_ms": 21800
        }
      },
      {
        "job_id": "qry_7f3a2b",
        "status": "complete",
        "query": "level=error | stats count by source",
        "from": "-7d",
        "to": "now",
        "created_at": "2026-02-14T14:48:12Z",
        "completed_at": "2026-02-14T14:48:49Z",
        "expires_at": "2026-02-14T14:53:49Z",
        "progress": {
          "phase": "complete",
          "percent": 100,
          "elapsed_ms": 37200
        }
      }
    ],
    "meta": {
      "max_concurrent": 10,
      "active": 1
    }
  }
}

Recently completed jobs are kept for job_ttl (default 5 minutes) before garbage collection.


GET /query/jobs/{jobId}

Poll a specific job for status, progress, and results.

The response shape depends on the job status:

| Status | Contains |
|--------|----------|
| `running` | `progress` + optional `partial_results` |
| `complete` | `progress` + `results` (final) |
| `failed` | `progress` + `error` |
| `cancelled` | `progress` at time of cancellation |

curl -s localhost:3100/api/v1/query/jobs/qry_9c1d4e | jq .

Response -- running with partial results (200):

{
  "data": {
    "type": "job",
    "job_id": "qry_9c1d4e",
    "status": "running",
    "query": "* | stats dc(user_id) by source, status",
    "from": "-90d",
    "to": "now",
    "created_at": "2026-02-14T14:50:00Z",
    "progress": {
      "phase": "scanning",
      "scanned": 42350000000,
      "total_estimate": 84700000000,
      "percent": 50.0,
      "events_matched": 423000000,
      "elapsed_ms": 18400,
      "eta_ms": 18400
    },
    "partial_results": {
      "type": "aggregate",
      "columns": ["source", "status", "dc(user_id)"],
      "rows": [
        ["nginx", 200, 892341],
        ["nginx", 404, 42891],
        ["api-gw", 200, 612044]
      ],
      "note": "Based on 50% of data. Final values will change."
    }
  }
}

Response -- completed (200):

{
  "data": {
    "type": "job",
    "job_id": "qry_9c1d4e",
    "status": "complete",
    "query": "* | stats dc(user_id) by source, status",
    "from": "-90d",
    "to": "now",
    "created_at": "2026-02-14T14:50:00Z",
    "completed_at": "2026-02-14T14:50:37Z",
    "expires_at": "2026-02-14T14:55:37Z",
    "progress": {
      "phase": "complete",
      "scanned": 84700000000,
      "total_estimate": 84700000000,
      "percent": 100,
      "events_matched": 847000000,
      "elapsed_ms": 37200,
      "eta_ms": 0
    },
    "results": {
      "type": "aggregate",
      "columns": ["source", "status", "dc(user_id)"],
      "rows": [
        ["nginx", 200, 1784682],
        ["nginx", 404, 85762],
        ["api-gw", 200, 1224088],
        ["api-gw", 500, 12044]
      ],
      "total_rows": 14
    }
  },
  "meta": {
    "took_ms": 37200,
    "scanned": 84700000000
  }
}

Response -- failed (200):

{
  "data": {
    "type": "job",
    "job_id": "qry_d4e1f2",
    "status": "failed",
    "query": "* | stats count by uri",
    "from": "-365d",
    "to": "now",
    "created_at": "2026-02-14T14:50:00Z",
    "failed_at": "2026-02-14T14:50:42Z",
    "progress": {
      "phase": "scanning",
      "scanned": 62100000000,
      "total_estimate": 310000000000,
      "percent": 20.0,
      "events_matched": 62100000,
      "elapsed_ms": 42000,
      "eta_ms": null
    },
    "error": {
      "code": "QUERY_MEMORY_EXCEEDED",
      "message": "Query exceeded 512 MB memory limit at 20% scan. Too many unique 'uri' values (>2M) for GROUP BY.",
      "suggestion": "Add a filter to reduce cardinality, or increase max_query_memory_mb in /config."
    }
  }
}

Error Responses

| Status | Code | Description |
|--------|------|-------------|
| 404 | `NOT_FOUND` | Job ID not found |
| 410 | `JOB_EXPIRED` | Job completed but results have expired past TTL |

DELETE /query/jobs/{jobId}

Cancel a running query job. If the job is already complete, returns the completed status. Partial results scanned up to the cancellation point are preserved.

curl -X DELETE localhost:3100/api/v1/query/jobs/qry_9c1d4e | jq .

Response -- cancelled (200):

{
  "data": {
    "type": "job",
    "job_id": "qry_9c1d4e",
    "status": "cancelled",
    "progress": {
      "phase": "scanning",
      "scanned": 42350000000,
      "total_estimate": 84700000000,
      "percent": 50.0,
      "elapsed_ms": 18400
    },
    "partial_results": {
      "type": "aggregate",
      "columns": ["source", "count"],
      "rows": [
        ["nginx", 284000],
        ["api-gw", 139000]
      ],
      "note": "Partial results at cancellation (50% scanned)."
    }
  }
}

GET /query/jobs/{jobId}/stream

Server-Sent Events (SSE) stream for real-time job progress tracking. Preferred over polling for Web UI.

curl -N localhost:3100/api/v1/query/jobs/qry_9c1d4e/stream

SSE event stream:

event: progress
data: {"phase":"scanning","scanned":2100000000,"total_estimate":10400000000,"percent":20.2,"events_matched":847291,"elapsed_ms":5000,"eta_ms":19700}

event: partial
data: {"type":"aggregate","columns":["source","count"],"rows":[["nginx",142000],["api-gw",71000]],"note":"Based on 20% of data. Final values will change."}

event: progress
data: {"phase":"scanning","scanned":5200000000,"total_estimate":10400000000,"percent":50.0,"events_matched":2100000,"elapsed_ms":12500,"eta_ms":12500}

event: partial
data: {"type":"aggregate","columns":["source","count"],"rows":[["nginx",355000],["api-gw",178000]],"note":"Based on 50% of data. Final values will change."}

event: progress
data: {"phase":"aggregating","scanned":10400000000,"total_estimate":10400000000,"percent":92.0,"events_matched":4200000,"elapsed_ms":23100,"eta_ms":2000}

event: complete
data: {"type":"aggregate","columns":["source","count"],"rows":[["nginx",712345],["api-gw",356789]],"total_rows":5}

Event Types

| Event | When | Data |
|-------|------|------|
| `progress` | Every ~1s while running | Phase, percent, scanned, events matched, ETA |
| `partial` | Periodically (~every 10% progress) | Intermediate results (same shape as final) |
| `complete` | Query finished | Final results |
| `failed` | Query errored | Error object |
| `cancelled` | Job was cancelled | Progress at cancellation |

JavaScript Example

const es = new EventSource("/api/v1/query/jobs/qry_xxx/stream");

es.addEventListener("progress", (e) => {
  updateProgressBar(JSON.parse(e.data));
});

es.addEventListener("partial", (e) => {
  updateTable(JSON.parse(e.data));
});

es.addEventListener("complete", (e) => {
  showFinalResults(JSON.parse(e.data));
  es.close();
});

es.addEventListener("failed", (e) => {
  showError(JSON.parse(e.data));
  es.close();
});

Progress Phases

| Phase | Description |
|-------|-------------|
| `scanning` | Reading raw segments, matching filters |
| `aggregating` | Computing stats/group-by after scan |
| `sorting` | Sorting results (for `sort`) |
| `complete` | Done |

Response Data Types

The data.type field in query responses determines how results should be rendered:

| Type | Description | Rendering |
|------|-------------|-----------|
| `events` | Raw log events | Log viewer |
| `aggregate` | Stats/group-by results | Table |
| `timechart` | Time-series data | Line/area chart |
| `view_created` | Materialized view confirmation | Status message |
| `job` | Async job handle | Progress bar + polling |
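A renderer can switch on that field. A minimal sketch of the dispatch (a real UI would hand off to a log viewer, table, or chart component; here each branch just returns a summary string):

```python
def render(result: dict) -> str:
    """Dispatch on the data.type field of a query response."""
    kind = result["type"]
    if kind == "events":
        return f"{len(result['events'])} events"
    if kind in ("aggregate", "timechart"):
        # Both are tabular: columns + rows.
        return f"{len(result['rows'])} rows x {len(result['columns'])} cols"
    if kind == "job":
        return f"job {result['job_id']}: {result['status']}"
    if kind == "view_created":
        return "view created"
    raise ValueError(f"unknown result type: {kind}")
```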