
Ingest Settings

The ingest section controls how the HTTP ingest endpoint accepts and processes incoming data.

Max Body Size

The maximum size of a single HTTP request body for ingest endpoints.

| Config Key | Env Var | Default |
|---|---|---|
| `ingest.max_body_size` | `LYNXDB_INGEST_MAX_BODY_SIZE` | `10mb` |
```yaml
ingest:
  max_body_size: "10mb"
```
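Size values like `10mb` are human-readable byte counts. How LynxDB parses them internally isn't documented here, but a minimal sketch of such a parser (unit names and the 1024-based multipliers are assumptions) might look like:

```python
import re

# Assumed unit suffixes and byte multipliers (whether LynxDB treats
# "mb" as 1000**2 or 1024**2 is an assumption; 1024 is used here).
UNITS = {"b": 1, "kb": 1024, "mb": 1024**2, "gb": 1024**3}

def parse_size(value: str) -> int:
    """Parse a human-readable size like '10mb' into a byte count."""
    m = re.fullmatch(r"(\d+)\s*([a-z]*)", value.strip().lower())
    if not m:
        raise ValueError(f"invalid size: {value!r}")
    number, unit = int(m.group(1)), m.group(2) or "b"
    if unit not in UNITS:
        raise ValueError(f"unknown unit in size: {value!r}")
    return number * UNITS[unit]

print(parse_size("10mb"))  # 10485760
```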

Requests exceeding this limit receive a `413 Payload Too Large` response. Increase it for bulk imports or if your log lines are very large:

```shell
LYNXDB_INGEST_MAX_BODY_SIZE=50mb lynxdb server
```
**Tip:** For large bulk imports, use the CLI `lynxdb ingest` or `lynxdb import` commands, which automatically chunk data into batches, rather than sending one huge HTTP request.

Max Batch Size

The maximum number of events in a single ingest batch.

| Config Key | Env Var | Default |
|---|---|---|
| `ingest.max_batch_size` | `LYNXDB_INGEST_MAX_BATCH_SIZE` | `1000` |
```yaml
ingest:
  max_batch_size: 1000
```

This limits the number of events accepted in a single HTTP request. The CLI `--batch-size` flag controls the client-side batch size:

```shell
# Client-side batching (default: 5000 lines per batch)
lynxdb ingest access.log --batch-size 10000
```
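The client-side batching described above can be sketched as follows. This is an illustrative model of the chunking idea, not the actual CLI implementation:

```python
from typing import Iterable, Iterator

def batches(lines: Iterable[str], batch_size: int = 5000) -> Iterator[list[str]]:
    """Group log lines into batches of at most batch_size lines,
    mirroring the CLI's --batch-size behavior (default 5000)."""
    batch: list[str] = []
    for line in lines:
        batch.append(line)
        if len(batch) >= batch_size:
            yield batch
            batch = []
    if batch:  # flush the final partial batch
        yield batch

# Each yielded batch would be sent as one HTTP request to the ingest endpoint.
for chunk in batches((f"line {i}" for i in range(12)), batch_size=5):
    print(len(chunk))  # 5, 5, 2
```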

WAL Settings

The WAL (Write-Ahead Log) ensures data durability. Every event is written to the WAL before entering the memtable.

WAL settings are in the storage section but directly affect ingest behavior:

```yaml
storage:
  wal_sync_mode: "write"          # none, write, or fsync
  wal_sync_interval: "100ms"      # Batch sync interval
  wal_sync_bytes: "0"             # Sync after N bytes (0 = interval-only)
  wal_max_segment_size: "256mb"   # WAL segment rotation size
```

Choosing a Sync Mode

| Mode | Data at Risk | Throughput | Use Case |
|---|---|---|---|
| `none` | All data since last OS flush | Highest | Development, ephemeral data |
| `write` | Up to `wal_sync_interval` (100ms default) | High | Production (default) |
| `fsync` | None | Lowest | Mission-critical compliance workloads |

For most production workloads, `write` mode with the default 100ms interval offers a good balance of durability and performance: at worst, you lose the last 100ms of data on a crash.

```shell
# Maximum durability
LYNXDB_STORAGE_WAL_SYNC_MODE=fsync lynxdb server

# Maximum throughput (development)
LYNXDB_STORAGE_WAL_SYNC_MODE=none lynxdb server
```
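The interval-based behavior of `write` mode can be illustrated with a simplified model: appends reach the OS immediately, while an fsync happens at most once per sync interval. This is a conceptual sketch, not LynxDB's actual WAL code:

```python
import time

class IntervalSyncWAL:
    """Simplified model of wal_sync_mode: write.

    Appends are written (buffered to the OS page cache) immediately;
    fsync runs at most once per sync interval, so at most one interval's
    worth of events is at risk on a crash.
    """

    def __init__(self, sync_interval: float = 0.1):  # 100ms default
        self.sync_interval = sync_interval
        self.last_sync = time.monotonic()
        self.unsynced = 0  # events written since the last fsync

    def append(self, event: bytes) -> None:
        # A real WAL would write() the event to the segment file here.
        self.unsynced += 1
        self._maybe_sync()

    def _maybe_sync(self) -> None:
        now = time.monotonic()
        if now - self.last_sync >= self.sync_interval:
            # A real WAL would os.fsync() the file descriptor here,
            # making every buffered event durable.
            self.unsynced = 0
            self.last_sync = now
```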

Ingest Endpoints

LynxDB accepts data through multiple HTTP endpoints:

| Endpoint | Format | Description |
|---|---|---|
| `POST /api/v1/ingest` | JSON, NDJSON, plain text | Primary ingest endpoint |
| `POST /api/v1/ingest/bulk` | Elasticsearch `_bulk` format | Drop-in Elasticsearch compatibility |
| OTLP/HTTP | OpenTelemetry protobuf | Native OTLP receiver |
| Splunk HEC | Splunk HEC JSON | Splunk forwarder compatibility |
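As an example, a client posting to the primary ingest endpoint might serialize a batch of events as NDJSON, one JSON object per line. This is a sketch; the event field names are illustrative:

```python
import json

def to_ndjson(events: list[dict]) -> bytes:
    """Serialize events as NDJSON: one compact JSON object per line."""
    return "\n".join(
        json.dumps(e, separators=(",", ":")) for e in events
    ).encode()

events = [
    {"timestamp": "2024-01-01T00:00:00Z", "level": "info", "message": "started"},
    {"timestamp": "2024-01-01T00:00:01Z", "level": "error", "message": "oops"},
]
body = to_ndjson(events)
# body would be POSTed to /api/v1/ingest; it must stay under
# ingest.max_body_size or the server responds 413 Payload Too Large.
```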

Timestamp Auto-Detection

LynxDB automatically detects timestamps from these fields, checked in order: `_timestamp`, `timestamp`, `@timestamp`, `time`, `ts`, `datetime`. If no timestamp field is found, the current server time is used.
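The detection order above can be sketched as a simple first-match lookup with a current-time fallback (a conceptual model, not LynxDB's implementation):

```python
import time

# Candidate fields, checked in the documented order.
TIMESTAMP_FIELDS = ("_timestamp", "timestamp", "@timestamp", "time", "ts", "datetime")

def detect_timestamp(event: dict):
    """Return the first matching timestamp field's value,
    falling back to the current server time."""
    for field in TIMESTAMP_FIELDS:
        if field in event:
            return event[field]
    return time.time()  # no timestamp field found

print(detect_timestamp({"ts": 1700000000, "msg": "hi"}))  # 1700000000
```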

Ingest via CLI

The CLI provides two commands for sending data to a running server:

`lynxdb ingest` -- Raw Log Lines

```shell
# From file
lynxdb ingest access.log
lynxdb ingest access.log --source web-01 --sourcetype nginx

# From stdin
cat events.json | lynxdb ingest

# Custom batch size
lynxdb ingest huge.log --batch-size 10000
```

`lynxdb import` -- Structured Data

```shell
# NDJSON
lynxdb import events.ndjson

# CSV with headers
lynxdb import splunk_export.csv

# Elasticsearch _bulk export
lynxdb import es_dump.json --format esbulk

# Dry run (validate without importing)
lynxdb import events.json --dry-run

# Apply transform during import
lynxdb import events.json --transform '| where level!="DEBUG"'
```

Complete Example

```yaml
ingest:
  max_body_size: "50mb"
  max_batch_size: 5000

storage:
  wal_sync_mode: "write"
  wal_sync_interval: "100ms"
  wal_max_segment_size: "256mb"
```

Tuning Guidelines

| Scenario | Recommendation |
|---|---|
| High-throughput ingest (>100K events/sec) | Increase `max_body_size` to `50mb`; use `wal_sync_mode: write` |
| Bulk import from files | Use `lynxdb import` with `--batch-size 10000` |
| Mission-critical audit logs | Use `wal_sync_mode: fsync` |
| Development/testing | Use `wal_sync_mode: none` for maximum speed |
| Large log lines (>100KB each) | Increase `max_body_size` accordingly |

Next Steps