Compatibility API

Drop-in compatibility endpoints for migrating from existing log pipelines. No code changes are required in your log shippers: just point them at LynxDB.

Elasticsearch Compatibility

POST /es/_bulk

Elasticsearch _bulk API compatible endpoint. Works out of the box with Filebeat, Logstash, Vector, Fluentd, and any Elasticsearch client library.

Mapping

  • _index is accepted but mapped to _source tag (LynxDB is single-index by design).
  • _type is ignored.
  • Timestamp aliases are auto-detected: @timestamp, timestamp, time, ts, datetime.
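The alias detection above can be modeled in a few lines of Python. This is an illustrative sketch of the documented behavior (first matching alias wins, checked in the order listed), not LynxDB's actual implementation:

```python
# Illustrative model of timestamp-alias auto-detection (not LynxDB source code).
TIMESTAMP_ALIASES = ("@timestamp", "timestamp", "time", "ts", "datetime")

def extract_timestamp(doc: dict):
    """Return the value of the first recognized timestamp field, or None."""
    for alias in TIMESTAMP_ALIASES:
        if alias in doc:
            return doc[alias]
    return None
```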

Example

curl -X POST localhost:3100/api/v1/es/_bulk \
  -H "Content-Type: application/x-ndjson" \
  -d '{"index": {"_index": "logs"}}
{"message": "hello from filebeat", "@timestamp": "2026-02-14T12:00:00Z", "level": "info"}
{"index": {"_index": "metrics"}}
{"message": "cpu usage high", "@timestamp": "2026-02-14T12:00:01Z", "host": "web-01"}'

Response (200):

{
  "took": 12,
  "errors": false,
  "items": [
    {
      "index": {
        "_id": "01JKNM3VXQP...",
        "status": 201
      }
    },
    {
      "index": {
        "_id": "01JKNM4ABCD...",
        "status": 201
      }
    }
  ]
}
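The bulk body alternates an action line with a document line, newline-delimited. A minimal Python helper to assemble such a payload (a sketch for illustration; field names follow the examples in this section):

```python
import json

# Builds an Elasticsearch-style _bulk body: each document is preceded by
# an action line, and the payload is NDJSON with a trailing newline.
def build_bulk_body(index: str, docs: list) -> str:
    lines = []
    for doc in docs:
        lines.append(json.dumps({"index": {"_index": index}}))
        lines.append(json.dumps(doc))
    return "\n".join(lines) + "\n"
```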

Filebeat Configuration

Point Filebeat at LynxDB with zero changes to your existing configuration:

# filebeat.yml
output.elasticsearch:
  hosts: ["http://lynxdb:3100/api/v1/es"]

Logstash Configuration

# logstash.conf
output {
  elasticsearch {
    hosts => ["http://lynxdb:3100/api/v1/es"]
  }
}

Vector Configuration

# vector.toml
[sinks.lynxdb]
type = "elasticsearch"
endpoints = ["http://lynxdb:3100/api/v1/es"]

Fluentd Configuration

# fluentd.conf
<match **>
  @type elasticsearch
  host lynxdb
  port 3100
  path /api/v1/es
</match>

POST /es/{index}/_doc

Elasticsearch single-document ingest endpoint.

Path Parameters

| Parameter | Required | Description |
| --- | --- | --- |
| index | Yes | Index name (mapped to _source tag) |

curl -X POST localhost:3100/api/v1/es/logs/_doc \
  -H "Content-Type: application/json" \
  -d '{"message": "single event", "@timestamp": "2026-02-14T12:00:00Z", "level": "info"}'

Response (201):

{
  "_id": "01JKNM3VXQP...",
  "result": "created"
}

GET /es/

Minimal Elasticsearch cluster info for client handshake compatibility. Filebeat and other ES clients query this endpoint on startup to verify connectivity.

curl -s localhost:3100/api/v1/es/ | jq .

Response (200):

{
  "name": "lynxdb",
  "cluster_name": "lynxdb",
  "version": {
    "number": "8.0.0",
    "build_flavor": "default"
  },
  "tagline": "LynxDB \u2014 Splunk-power log analytics in a single binary"
}

This response satisfies the Elasticsearch client handshake protocol. The version number 8.0.0 ensures compatibility with modern Elasticsearch clients.


OpenTelemetry OTLP

POST /otlp/v1/logs

Native OTLP/HTTP receiver for logs from OpenTelemetry collectors. Accepts both protobuf and JSON encodings.

Content Types

| Content-Type | Format |
| --- | --- |
| application/x-protobuf | OTLP protobuf encoding (default, most efficient) |
| application/json | OTLP JSON encoding |

OpenTelemetry Collector Configuration

# otel-collector-config.yaml
exporters:
  otlphttp:
    endpoint: http://lynxdb:3100/api/v1/otlp

service:
  pipelines:
    logs:
      receivers: [filelog, otlp]
      processors: [batch]
      exporters: [otlphttp]

Example with JSON Encoding

curl -X POST localhost:3100/api/v1/otlp/v1/logs \
  -H "Content-Type: application/json" \
  -d '{
    "resourceLogs": [
      {
        "resource": {
          "attributes": [
            {"key": "service.name", "value": {"stringValue": "api-gateway"}}
          ]
        },
        "scopeLogs": [
          {
            "logRecords": [
              {
                "timeUnixNano": "1707912000000000000",
                "body": {"stringValue": "GET /api/users 200 12ms"},
                "severityText": "INFO",
                "attributes": [
                  {"key": "http.method", "value": {"stringValue": "GET"}},
                  {"key": "http.status_code", "value": {"intValue": "200"}}
                ]
              }
            ]
          }
        ]
      }
    ]
  }'

Response (200): Empty response body (per OTLP specification).

Field Mapping

OTLP fields are mapped to LynxDB fields as follows:

| OTLP Field | LynxDB Field |
| --- | --- |
| timeUnixNano | _timestamp |
| body.stringValue | _raw |
| severityText | level |
| resource.attributes["service.name"] | _source |
| Other attributes | Flattened as top-level fields |
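The mapping above can be sketched as a small Python function. This models the documented server behavior for a single log record; it is not LynxDB source code, and the simplified `resource_attrs` dict stands in for the OTLP attribute list:

```python
# Illustrative model of the OTLP -> LynxDB field mapping (not LynxDB source).
# resource_attrs: resource attributes pre-flattened into a plain dict.
# record: one OTLP logRecord as decoded JSON.
def map_otlp_record(resource_attrs: dict, record: dict) -> dict:
    event = {
        "_timestamp": int(record["timeUnixNano"]),
        "_raw": record["body"]["stringValue"],
        "level": record.get("severityText"),
        "_source": resource_attrs.get("service.name"),
    }
    # Remaining record attributes are flattened into top-level fields.
    for attr in record.get("attributes", []):
        event[attr["key"]] = next(iter(attr["value"].values()))
    return event
```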

Splunk HEC

LynxDB also supports the Splunk HTTP Event Collector (HEC) protocol for existing Splunk forwarders. Configure your Splunk forwarders to send to:

http://lynxdb:3100/api/v1/hec

Splunk Universal Forwarder Configuration

# outputs.conf
[httpout]
httpEventCollectorToken = any-token-here
uri = http://lynxdb:3100/api/v1/hec

HEC Event Format

curl -X POST localhost:3100/api/v1/hec \
  -H "Authorization: Splunk any-token-here" \
  -d '{
    "event": "user login succeeded",
    "time": 1707912000,
    "host": "web-01",
    "source": "auth-service",
    "sourcetype": "json"
  }'

Field Mapping

| HEC Field | LynxDB Field |
| --- | --- |
| time | _timestamp |
| event | _raw (or parsed as JSON if object) |
| host | host |
| source | _source |
| sourcetype | _sourcetype |
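As with the OTLP mapping, the HEC table can be sketched in Python. This is an illustrative model of the documented behavior, including the object-valued `event` case; the exact serialization LynxDB stores in `_raw` for object events is an assumption here:

```python
import json

# Illustrative model of the HEC -> LynxDB field mapping (not LynxDB source).
def map_hec_event(payload: dict) -> dict:
    event = payload["event"]
    mapped = {
        "_timestamp": payload.get("time"),
        "host": payload.get("host"),
        "_source": payload.get("source"),
        "_sourcetype": payload.get("sourcetype"),
    }
    if isinstance(event, dict):
        # Object events are parsed: keys become top-level fields.
        # Assumption: _raw holds the serialized JSON form.
        mapped["_raw"] = json.dumps(event)
        mapped.update(event)
    else:
        mapped["_raw"] = event
    return mapped
```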

Migration Quick Reference

| Source | Protocol | LynxDB Endpoint | Config Change |
| --- | --- | --- | --- |
| Filebeat | ES bulk | /api/v1/es | Change hosts to LynxDB |
| Logstash | ES bulk | /api/v1/es | Change hosts to LynxDB |
| Vector | ES bulk | /api/v1/es | Change endpoints to LynxDB |
| Fluentd | ES bulk | /api/v1/es | Change host/port/path |
| OTEL Collector | OTLP/HTTP | /api/v1/otlp | Change endpoint to LynxDB |
| Splunk Forwarder | HEC | /api/v1/hec | Change uri to LynxDB |
| Splunk HEC client | HEC | /api/v1/hec | Change URL to LynxDB |
| Any ES client | ES API | /api/v1/es | Change base URL |