Ingest Data
LynxDB accepts data through multiple paths: the lynxdb ingest CLI command, the REST API, the lynxdb import command for structured files, and drop-in compatibility endpoints for existing log pipelines. This guide covers each method with practical examples.
Prerequisites
Start a LynxDB server (or use pipe mode for local-only workflows):
lynxdb server
See the server mode guide for persistent storage options.
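Before ingesting, you can verify the server is reachable with the status command (covered in more detail under Monitoring ingestion below):
lynxdb status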
Ingest from the CLI
The lynxdb ingest command sends log files or stdin to a running server.
Ingest a file
lynxdb ingest access.log
Ingest with metadata
Tag events with --source and --sourcetype so you can filter on them later; use --index to route events to a named index:
lynxdb ingest access.log --source web-01 --sourcetype nginx
lynxdb ingest app.log --source api-server --index production
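Assuming these tags surface as queryable fields (the compatibility notes below refer to a _source tag; the exact field name may differ), you could later narrow a search to a single host:
lynxdb query '| where _source="web-01"'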
Ingest from stdin
Pipe any output directly into LynxDB:
cat events.json | lynxdb ingest
kubectl logs deploy/api --since=1h | lynxdb ingest --source k8s-api
docker logs myapp 2>&1 | lynxdb ingest --source docker-myapp
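The same pattern works for systemd journals; for example, to ingest the last hour of a unit's logs (the journalctl flags are standard, the tag values are illustrative):
journalctl -u nginx --since "1 hour ago" | lynxdb ingest --source systemd-nginx --sourcetype nginx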
Tune batch size
For large files, increase the batch size to reduce HTTP round-trips:
lynxdb ingest huge.log --batch-size 10000
The default batch size is 5000 lines per request.
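As a rough sense of scale: at the default 5,000 lines per request, a 10-million-line file costs 2,000 HTTP round-trips; --batch-size 10000 halves that to 1,000.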
Ingest via the REST API
The POST /api/v1/ingest endpoint accepts JSON, NDJSON, or plain text. No Content-Type header is required (defaults to application/json).
Send a single JSON event
curl -X POST localhost:3100/api/v1/ingest \
-d '{"message": "user login", "user_id": 42, "ip": "10.0.1.5"}'
Send multiple events as NDJSON
Newline-delimited JSON is the most efficient format for batch ingestion:
curl -X POST localhost:3100/api/v1/ingest \
-H "Content-Type: application/x-ndjson" \
--data-binary @- <<'EOF'
{"level": "info", "message": "request started", "path": "/api/users"}
{"level": "error", "message": "connection refused", "service": "redis"}
{"level": "info", "message": "request completed", "duration_ms": 45}
EOF
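If your events are stored as a single JSON array rather than NDJSON, jq can flatten the array into one object per line before posting (jq is assumed to be installed; the filename is illustrative):
jq -c '.[]' events_array.json \
  | curl -X POST localhost:3100/api/v1/ingest \
    -H "Content-Type: application/x-ndjson" \
    --data-binary @-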
Send raw text
For unstructured log lines, post the raw text:
echo '192.168.1.1 - - [14/Feb/2026:14:23:01 +0000] "GET /api HTTP/1.1" 200 1234' \
| curl -X POST localhost:3100/api/v1/ingest --data-binary @-
Or send an entire file:
curl -X POST localhost:3100/api/v1/ingest \
-H "Content-Type: text/plain" \
--data-binary @access.log
Structured import
The lynxdb import command handles structured formats (NDJSON, CSV, Elasticsearch bulk exports) and preserves field types and timestamps.
Import NDJSON
lynxdb import events.json
lynxdb import events.ndjson
Import CSV
lynxdb import splunk_export.csv
lynxdb import data.csv --source web-01 --index nginx
Import Elasticsearch bulk export
lynxdb import es_dump.json --format esbulk
Validate before importing
Use --dry-run to check the file without writing any data:
lynxdb import events.json --dry-run
Transform during import
Apply an SPL2 pipeline to filter or reshape data as it is imported:
lynxdb import events.json --transform '| where level!="DEBUG"'
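The transform accepts the same query syntax used elsewhere in this guide, so any filter works here too; for example, to import only server errors from an access-log export (the status field name is assumed from the pipe-mode example below):
lynxdb import access_export.json --transform '| where status>=500'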
Timestamp auto-detection
LynxDB automatically detects timestamps from these commonly used field names:
- _timestamp
- timestamp
- @timestamp
- time
- ts
- datetime
If none of these fields are present, LynxDB assigns the server receive time. You do not need to configure timestamp parsing.
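For example, in the NDJSON below the first event keeps its own time while the second is stamped with the server receive time (ISO 8601 is shown as a plausible accepted format; the exact formats LynxDB parses are not listed in this guide):
{"timestamp": "2026-02-14T14:23:01Z", "message": "uses the event's own timestamp"}
{"message": "no timestamp field, stamped with receive time"}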
Drop-in compatibility endpoints
LynxDB provides compatibility endpoints so you can migrate existing log pipelines without changing your shipper configuration.
Filebeat / Logstash / Vector (Elasticsearch _bulk API)
Point any tool that speaks the Elasticsearch _bulk protocol at LynxDB:
# filebeat.yml
output.elasticsearch:
  hosts: ["http://lynxdb:3100/api/v1/es"]
# vector.toml
[sinks.lynxdb]
type = "elasticsearch"
endpoints = ["http://lynxdb:3100/api/v1/es"]
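Logstash's elasticsearch output follows the same pattern; a minimal sketch (untested against LynxDB, and depending on your Logstash version you may need to move the path into the plugin's path option instead):
# logstash.conf
output {
  elasticsearch {
    hosts => ["http://lynxdb:3100/api/v1/es"]
  }
}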
The _index field from the bulk request is mapped to the _source tag in LynxDB. No other configuration is needed. See the compatibility API reference for details.
OpenTelemetry Collector (OTLP)
Send logs from an OpenTelemetry Collector using the OTLP/HTTP exporter:
# otel-collector-config.yaml
exporters:
  otlphttp:
    endpoint: http://lynxdb:3100/api/v1/otlp
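The exporter on its own is not a complete collector configuration; a minimal end-to-end logs pipeline, assuming an OTLP receiver feeding the exporter above, looks like this in standard collector syntax:
receivers:
  otlp:
    protocols:
      http:
exporters:
  otlphttp:
    endpoint: http://lynxdb:3100/api/v1/otlp
service:
  pipelines:
    logs:
      receivers: [otlp]
      exporters: [otlphttp]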
Splunk HEC (HTTP Event Collector)
If you have existing Splunk forwarders, point them at the HEC-compatible endpoint:
http://lynxdb:3100/api/v1/hec
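For a quick smoke test without a forwarder, you can hand-roll an event in the standard HEC JSON envelope (whether LynxDB validates the Splunk token is not covered in this guide; <token> is a placeholder):
curl -X POST http://lynxdb:3100/api/v1/hec \
  -H "Authorization: Splunk <token>" \
  -d '{"event": "user login", "sourcetype": "app", "host": "web-01"}'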
Pipe mode (no server)
You do not need a running server to analyze logs. LynxDB can ingest data into an ephemeral in-memory engine and query it in one step:
cat app.log | lynxdb query '| stats count by level'
lynxdb query --file '/var/log/nginx/*.log' '| where status>=500 | top 10 uri'
Data is not persisted. The engine starts, ingests, queries, prints results, and exits. See the pipe mode guide for more details.
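Pipe mode composes with the same sources shown earlier; for example, a one-off breakdown of the last hour of a Kubernetes deployment's logs:
kubectl logs deploy/api --since=1h | lynxdb query '| stats count by level'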
Monitoring ingestion
After ingesting data, verify it landed correctly:
# Check server stats
lynxdb status
# Count recently ingested events
lynxdb count --since 5m
# Peek at a sample of events
lynxdb sample 5
# See all discovered fields
lynxdb fields
See the lynxdb status and lynxdb fields commands for more options.
Next steps
- Search and filter logs -- query the data you just ingested
- Run aggregations -- compute statistics across your logs
- REST API: Ingest -- full API reference for the ingest endpoint
- CLI: ingest -- complete flag reference for the ingest command