Ingest API

LynxDB exposes multiple ingest endpoints. They are not interchangeable:

| Endpoint | Use it for | Request format |
| --- | --- | --- |
| POST /api/v1/ingest | Structured event payloads | JSON top-level array |
| POST /api/v1/ingest/raw | Raw log lines | Newline-delimited text |
| POST /api/v1/ingest/hec | Splunk HEC senders | One HEC event per line |
| POST /api/v1/ingest/bulk | Elasticsearch bulk producers | Elasticsearch _bulk NDJSON |

If you are sending arbitrary JSON documents from Elasticsearch- or OTLP-style pipelines, use the compatibility endpoints instead of POST /ingest.

POST /ingest

Primary structured ingest endpoint.

POST /ingest accepts a JSON array of event payloads. It does not accept:

  • a single JSON object
  • NDJSON
  • plain text log lines

Request Schema

Each array element maps to this payload shape:

| Field | Type | Required | Description |
| --- | --- | --- | --- |
| event | string | Yes | Raw event text stored in _raw |
| time | number | No | Unix timestamp in seconds; fractional seconds are accepted |
| source | string | No | Event source label |
| sourcetype | string | No | Event format/source type label |
| host | string | No | Host label |
| index | string | No | Target index name |
| fields | object | No | Additional typed fields copied onto the event |

fields currently preserves string, numeric, and boolean values.

Headers

| Header | Required | Description | Default |
| --- | --- | --- | --- |
| Content-Type | No | Must be JSON if provided | application/json |

Behavior

  • The request body must start with [ and end with ].
  • time is optional. If omitted, LynxDB assigns the server receive time.
  • event becomes the _raw field.
  • fields values are added as queryable fields on the event.
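
The rules above can be checked client-side before sending. A minimal sketch in Python; the `validate_batch` helper is hypothetical, not part of LynxDB, and only mirrors the documented payload rules:

```python
import json

# Value types the docs say "fields" currently preserves.
ALLOWED_FIELD_TYPES = (str, int, float, bool)

def validate_batch(body: str) -> list:
    """Check a request body against the documented POST /ingest rules."""
    events = json.loads(body)
    if not isinstance(events, list):
        raise ValueError("POST /ingest requires a top-level JSON array")
    for event in events:
        if not isinstance(event.get("event"), str):
            raise ValueError("each element needs a string 'event' field")
        # time is optional; when present it is a Unix timestamp in seconds
        # (fractional seconds allowed).
        if "time" in event and not isinstance(event["time"], (int, float)):
            raise ValueError("'time' must be a number")
        for key, value in event.get("fields", {}).items():
            if not isinstance(value, ALLOWED_FIELD_TYPES):
                raise ValueError(f"unsupported value type for field {key!r}")
    return events
```

Running this locally catches the most common mistake, a single object instead of an array, before the server returns a 400.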

Structured Batch Example

```shell
curl -X POST localhost:3100/api/v1/ingest \
  -H "Content-Type: application/json" \
  -d '[
    {
      "time": 1760000000,
      "event": "user login",
      "source": "auth-api",
      "sourcetype": "app",
      "host": "web-01",
      "fields": {
        "user_id": 42,
        "level": "info",
        "ip": "10.0.1.5"
      }
    }
  ]'
```

Response (200):

```json
{
  "data": {
    "accepted": 1,
    "failed": 0
  }
}
```

Multiple Events

```shell
curl -X POST localhost:3100/api/v1/ingest \
  -H "Content-Type: application/json" \
  -d '[
    {
      "event": "request started",
      "source": "api",
      "fields": {"trace_id": "abc123", "level": "info"}
    },
    {
      "event": "request completed",
      "source": "api",
      "fields": {"trace_id": "abc123", "duration_ms": 45, "level": "info"}
    }
  ]'
```

Common Validation Error

```shell
curl -X POST localhost:3100/api/v1/ingest \
  -H "Content-Type: application/json" \
  -d '{"event":"not wrapped in an array"}'
```

This fails because POST /ingest requires a top-level JSON array.

POST /ingest/raw

Use POST /ingest/raw for raw log files, stdout streams, and other newline-delimited text.

Each non-empty line becomes a separate event. The line is stored in _raw and then passed through the configured ingest pipeline.
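
As a mental model of that line handling, a short Python sketch (not LynxDB's actual code) that splits a raw body the way the docs describe:

```python
def split_raw_body(body: str) -> list:
    """Model the documented /ingest/raw behavior: each non-empty line
    becomes one event (stored in _raw); blank lines are skipped."""
    return [line for line in body.splitlines() if line.strip()]
```

So a body of `"line one\n\nline two\n"` yields two events, not three.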

Headers

| Header | Required | Description | Default |
| --- | --- | --- | --- |
| Content-Type | No | Raw body type | text/plain |
| X-Source | No | Source label | http |
| X-Source-Type | No | Source type label | raw |
| X-Index | No | Target index | main |

Raw Text Example

```shell
curl -X POST localhost:3100/api/v1/ingest/raw \
  -H "Content-Type: text/plain" \
  -H "X-Source: nginx" \
  -H "X-Source-Type: combined" \
  -d '192.168.1.1 - - [14/Feb/2026:14:23:01 +0000] "GET /api/users HTTP/1.1" 200 1234
192.168.1.2 - - [14/Feb/2026:14:23:02 +0000] "POST /api/orders HTTP/1.1" 500 89'
```

Piping from Files or Commands

```shell
# Send a log file
curl -X POST localhost:3100/api/v1/ingest/raw \
  -H "Content-Type: text/plain" \
  -H "X-Source: web-01" \
  --data-binary @/var/log/nginx/access.log

# Pipe from a command
kubectl logs deploy/api --since=1h | \
  curl -X POST localhost:3100/api/v1/ingest/raw \
    -H "Content-Type: text/plain" \
    -H "X-Source: api-gateway" \
    --data-binary @-
```

If the request body is truncated by ingest.max_body_size, the response still returns 200 with truncated: true and a warning message.
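
To surface that condition client-side, a minimal sketch; the "warning" key name is an assumption, since the docs only say the 200 response carries truncated: true plus a warning message:

```python
def truncation_warning(response):
    """Return a client-side warning when a 200 /ingest/raw response
    reports truncation, else None. The "warning" field name is an
    assumption about the response shape."""
    if response.get("truncated"):
        return response.get("warning", "ingest body was truncated")
    return None
```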

POST /ingest/hec

Splunk HEC-compatible ingest path. The request body is read line by line, and each line is parsed as a Splunk HEC event object.

```shell
curl -X POST localhost:3100/api/v1/ingest/hec \
  -d '{"event":"user login","source":"auth-api","sourcetype":"json","fields":{"level":"info"}}'
```
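
Because the body is read line by line, multiple events go in one request as newline-delimited JSON objects. A small Python sketch (the `hec_body` helper is illustrative, not a LynxDB client API) for building such a body:

```python
import json

def hec_body(events):
    """Serialize HEC event objects one per line, matching how
    /ingest/hec reads the request body."""
    return "\n".join(json.dumps(event) for event in events)
```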

See Compatibility for migration-oriented examples.

POST /ingest/bulk

Elasticsearch bulk ingest alias. This is the same handler as the Elasticsearch compatibility endpoints.

```shell
curl -X POST localhost:3100/api/v1/ingest/bulk \
  -H "Content-Type: application/x-ndjson" \
  -d '{"index": {"_index": "logs"}}
{"message": "hello from filebeat", "@timestamp": "2026-02-14T12:00:00Z"}'
```
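
Bulk bodies pair an action line with a document line. A hedged Python sketch for generating that NDJSON (the `bulk_body` helper is illustrative, not part of any client library):

```python
import json

def bulk_body(index, docs):
    """Build Elasticsearch _bulk NDJSON: one action line, then the
    document itself, for each doc. A trailing newline is included, as
    _bulk-style parsers conventionally expect one."""
    lines = []
    for doc in docs:
        lines.append(json.dumps({"index": {"_index": index}}))
        lines.append(json.dumps(doc))
    return "\n".join(lines) + "\n"
```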

For Filebeat, Logstash, Vector, Fluentd, and other Elasticsearch-compatible clients, prefer the /api/v1/es compatibility base documented in Compatibility. /api/v1/ingest/bulk remains available as an alias.

Error Responses

| Status | Code | Description |
| --- | --- | --- |
| 401 | AUTH_REQUIRED | Authentication enabled but no token provided |
| 400 | INVALID_JSON / INVALID_REQUEST | Malformed JSON array or unreadable request body |
| 404 | NOT_FOUND | Wrong endpoint path |
| 413 | PAYLOAD_TOO_LARGE | Request exceeds configured body limit |
| 429 | RATE_LIMITED | Ingest rate limit exceeded |
| 503 | BACKPRESSURE / SHUTTING_DOWN | Server is under ingest pressure or shutting down |
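
Of these, 429 and 503 are transient and worth retrying. A minimal exponential-backoff sketch; the delay schedule and attempt cap are assumptions, not LynxDB recommendations:

```python
# Transient statuses from the table above: RATE_LIMITED,
# BACKPRESSURE / SHUTTING_DOWN.
RETRYABLE = {429, 503}

def backoff_delay(attempt, base=0.5, cap=30.0):
    """Exponential backoff: base * 2^attempt seconds, capped at `cap`."""
    return min(cap, base * (2 ** attempt))

def should_retry(status, attempt, max_attempts=5):
    """Retry only transient ingest failures, up to max_attempts."""
    return status in RETRYABLE and attempt < max_attempts
```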