
Storage Settings

The storage section controls how LynxDB lays out data on disk, compacts immutable parts, and caches query results. The current write path is a direct-to-part model, so older WAL-specific settings do not apply.

Compression

Config Key: storage.compression
Env Var:    LYNXDB_STORAGE_COMPRESSION
Default:    lz4

storage:
  compression: "lz4"

Valid values:

  • lz4: fast ingest and compaction (default)
  • zstd: better compression ratio at higher CPU cost

Max Columns Per Part

Limit how many user-defined fields are materialized as columns in a part.

Config Key: storage.max_columns_per_part
Env Var:    LYNXDB_STORAGE_MAX_COLUMNS_PER_PART
Default:    256

storage:
  max_columns_per_part: 256

Fields beyond the cap remain searchable through _raw, but they are not stored as dedicated columns.
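One way to model the cap is to materialize the first max_columns_per_part distinct field names seen in a part and leave the rest _raw-only. The first-seen policy here is an assumption for the sketch, not a statement of LynxDB's actual selection order:

```python
def assign_columns(field_names, max_columns=256):
    """Split field names into materialized columns vs. _raw-only fields.

    Assumption: columns are assigned on a first-seen basis up to the cap;
    LynxDB's real selection policy may differ.
    """
    columns, raw_only, seen = [], [], set()
    for name in field_names:
        if name in seen:
            continue
        seen.add(name)
        if len(columns) < max_columns:
            columns.append(name)
        else:
            raw_only.append(name)
    return columns, raw_only
```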

Partitioning

Choose how LynxDB groups part files on disk.

Config Key: storage.partition_by
Env Var:    LYNXDB_STORAGE_PARTITION_BY
Default:    daily

storage:
  partition_by: "daily"

Valid values: daily, hourly, weekly, monthly, none.
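To make the options concrete, here is a sketch of how an event timestamp might map to a partition label under each setting. The label formats are assumptions for illustration; LynxDB's on-disk naming may differ:

```python
from datetime import datetime, timezone

def partition_name(ts: datetime, partition_by: str) -> str:
    """Map an event timestamp to a partition label (illustrative naming)."""
    if partition_by == "none":
        return "all"
    if partition_by == "weekly":
        return ts.strftime("%G-W%V")  # ISO year-week, e.g. 2024-W07
    fmt = {
        "hourly": "%Y-%m-%d-%H",
        "daily": "%Y-%m-%d",
        "monthly": "%Y-%m",
    }.get(partition_by)
    if fmt is None:
        raise ValueError(f"unknown partition_by: {partition_by}")
    return ts.strftime(fmt)

ts = datetime(2024, 2, 14, 9, 30, tzinfo=timezone.utc)
print(partition_name(ts, "daily"))   # 2024-02-14
print(partition_name(ts, "hourly"))  # 2024-02-14-09
```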

Ingest Buffering

LynxDB no longer exposes WAL tuning under storage.*.

The write path is:

  1. accept events into an in-memory AsyncBatcher
  2. flush a batch to a temporary .lsg file
  3. optionally fsync
  4. atomically rename the part into place

Operator-facing knobs for that path now live in Ingest Settings, especially:

  • ingest.max_body_size
  • ingest.max_batch_size
  • ingest.fsync
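The four steps above can be sketched as follows. File naming, directory layout, and the AsyncBatcher itself are simplified away; this only models the flush, optional fsync, and atomic-rename sequence:

```python
import os
import tempfile

def flush_batch(data: bytes, parts_dir: str, part_name: str, fsync: bool = True) -> str:
    """Flush one batch: temp .lsg file -> optional fsync -> atomic rename."""
    os.makedirs(parts_dir, exist_ok=True)
    fd, tmp_path = tempfile.mkstemp(dir=parts_dir, suffix=".lsg.tmp")
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(data)
            if fsync:
                f.flush()
                os.fsync(f.fileno())  # make the batch durable before publishing
        final_path = os.path.join(parts_dir, part_name + ".lsg")
        # Atomic within one filesystem: readers see either no part or a complete one.
        os.rename(tmp_path, final_path)
        return final_path
    except Exception:
        if os.path.exists(tmp_path):
            os.unlink(tmp_path)
        raise
```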

Compaction

Compaction merges smaller parts into larger ones to reduce query fan-out.

Scheduler

Config Key: storage.compaction_interval
Env Var:    LYNXDB_STORAGE_COMPACTION_INTERVAL
Default:    30s

storage:
  compaction_interval: "30s"

Workers

Config Key: storage.compaction_workers
Env Var:    LYNXDB_STORAGE_COMPACTION_WORKERS
Default:    2

storage:
  compaction_workers: 2

Rate Limit

Config Key: storage.compaction_rate_limit_mb
Env Var:    LYNXDB_STORAGE_COMPACTION_RATE_LIMIT_MB
Default:    100

storage:
  compaction_rate_limit_mb: 100
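A minimal token-bucket sketch of such a throttle, assuming megabytes-per-second semantics (the per-second window is an assumption of this sketch, and LynxDB's internal limiter may work differently):

```python
import time

class TokenBucket:
    """Byte-budget throttle for compaction I/O (illustrative)."""

    def __init__(self, rate_mb_per_s: float):
        self.rate = rate_mb_per_s * 1024 * 1024  # refill rate in bytes/second
        self.tokens = self.rate                   # start with a full bucket
        self.last = time.monotonic()

    def consume(self, nbytes: int) -> None:
        """Block until nbytes of I/O budget is available."""
        while True:
            now = time.monotonic()
            self.tokens = min(self.rate, self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= nbytes:
                self.tokens -= nbytes
                return
            time.sleep((nbytes - self.tokens) / self.rate)
```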

Level Thresholds

Config Key              Env Var                          Default  Description
storage.l0_threshold    LYNXDB_STORAGE_L0_THRESHOLD      4        L0 parts before L0-to-L1 compaction
storage.l1_threshold    LYNXDB_STORAGE_L1_THRESHOLD      4        L1 parts before L1-to-L2 compaction
storage.l2_target_size  LYNXDB_STORAGE_L2_TARGET_SIZE    1gb      Target size for L2 parts

storage:
  l0_threshold: 4
  l1_threshold: 4
  l2_target_size: "1gb"
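A minimal sketch of the scheduling decision these thresholds drive, plus a parser for size strings like "1gb" (the priority order and parsing rules are assumptions of this sketch):

```python
UNITS = {"kb": 1 << 10, "mb": 1 << 20, "gb": 1 << 30}

def parse_size(s: str) -> int:
    """Parse a size string such as "1gb" into bytes (assumed format)."""
    s = s.strip().lower()
    for unit, factor in UNITS.items():
        if s.endswith(unit):
            return int(float(s[: -len(unit)]) * factor)
    return int(s)  # bare number: already bytes

def next_compaction(l0_count: int, l1_count: int,
                    l0_threshold: int = 4, l1_threshold: int = 4):
    """Pick the next compaction; an L0 backlog is assumed to take priority."""
    if l0_count >= l0_threshold:
        return "L0->L1"
    if l1_count >= l1_threshold:
        return "L1->L2"
    return None
```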

Query Cache

Config Key               Env Var                          Default  Description
storage.cache_max_bytes  LYNXDB_STORAGE_CACHE_MAX_BYTES   1gb      Maximum on-disk query cache size
storage.cache_ttl        LYNXDB_STORAGE_CACHE_TTL         5m       Cache entry TTL

storage:
  cache_max_bytes: "1gb"
  cache_ttl: "5m"
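These two settings can be modeled as a size-capped cache with a per-entry TTL. A sketch, assuming oldest-first eviction once the byte cap is exceeded (LynxDB's actual eviction policy is not specified here):

```python
import time
from collections import OrderedDict

class QueryCache:
    """Size-capped TTL cache (illustrative model of the eviction behavior)."""

    def __init__(self, max_bytes: int, ttl_s: float):
        self.max_bytes, self.ttl = max_bytes, ttl_s
        self.entries = OrderedDict()  # key -> (insert_time, value)
        self.size = 0

    def put(self, key: str, value: bytes) -> None:
        if key in self.entries:
            self.size -= len(self.entries.pop(key)[1])
        self.entries[key] = (time.monotonic(), value)
        self.size += len(value)
        while self.size > self.max_bytes:  # evict oldest entries first
            _, (_, old) = self.entries.popitem(last=False)
            self.size -= len(old)

    def get(self, key: str):
        item = self.entries.get(key)
        if item is None:
            return None
        inserted, value = item
        if time.monotonic() - inserted > self.ttl:  # expired entry
            self.entries.pop(key)
            self.size -= len(value)
            return None
        return value
```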

Complete Example

storage:
  compression: "lz4"
  max_columns_per_part: 256
  partition_by: "daily"
  compaction_interval: "30s"
  compaction_workers: 2
  compaction_rate_limit_mb: 100
  l0_threshold: 4
  l1_threshold: 4
  l2_target_size: "1gb"
  cache_max_bytes: "1gb"
  cache_ttl: "5m"

Next Steps