Storage Settings

The storage section controls how LynxDB writes, compresses, and compacts data on disk. These settings affect write throughput, disk usage, and query performance.

Compression

Config Key: storage.compression
Env Var: LYNXDB_STORAGE_COMPRESSION
Default: lz4

storage:
  compression: "lz4"

Valid values:

  • lz4 -- Fast compression, good for high-throughput ingest. Default.
  • zstd -- Higher compression ratio, slightly more CPU. Good for long-term storage.

LZ4 is recommended for most workloads. Switch to zstd if disk space is a primary concern and you can afford slightly higher CPU usage during compaction.
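Each setting on this page can also be supplied through its environment variable. For example, to switch a long-term storage node to zstd without editing the config file:

```shell
# Set the compression codec via environment variable instead of the config file
export LYNXDB_STORAGE_COMPRESSION=zstd
```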

Row Group Size

Controls how many rows are stored per column chunk in segments.

Config Key: storage.row_group_size
Env Var: LYNXDB_STORAGE_ROW_GROUP_SIZE
Default: 65536

storage:
  row_group_size: 65536

Larger values improve compression but increase memory usage during reads. The default of 65536 is a good balance for most workloads.
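As a rough illustration of the read-side memory cost (the figures below are assumptions for the sketch, not LynxDB internals), decode memory per row group scales with the row count times the columns scanned times the bytes per value:

```python
# Back-of-envelope estimate of per-row-group read memory.
# All figures here are illustrative assumptions, not LynxDB internals.
row_group_size = 65536   # rows per column chunk (the default)
columns_scanned = 8      # columns touched by the query
bytes_per_value = 8      # e.g. 64-bit integers or floats

approx_bytes = row_group_size * columns_scanned * bytes_per_value
print(approx_bytes // (1024 * 1024), "MiB decoded per row group")  # 4 MiB
```

Doubling row_group_size roughly doubles this figure, which is the memory/compression trade-off described above.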

Flush Threshold

The memtable is flushed to a segment on disk when it reaches this size.

Config Key: storage.flush_threshold
Env Var: LYNXDB_STORAGE_FLUSH_THRESHOLD
Default: 512mb

storage:
  flush_threshold: "512mb"

A larger threshold means fewer, larger segments (better for queries) but higher memory usage and more data at risk during crashes. A smaller threshold means more frequent flushes with smaller segments.
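To get a feel for the trade-off, a back-of-envelope calculation of flush frequency at a steady ingest rate (the ingest rate is an assumed figure for illustration):

```python
# Back-of-envelope flush frequency at a steady ingest rate.
# The ingest rate is an assumption for illustration, not a LynxDB figure.
ingest_mb_per_s = 16
flush_threshold_mb = 512  # flush_threshold: "512mb"

seconds_between_flushes = flush_threshold_mb / ingest_mb_per_s
print(f"one flush roughly every {seconds_between_flushes:.0f}s")  # every 32s
```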

Memtable Shards

Number of concurrent memtable shards for parallel ingestion.

Config Key: storage.memtable_shards
Env Var: LYNXDB_STORAGE_MEMTABLE_SHARDS
Default: 0 (auto = number of CPUs)

storage:
  memtable_shards: 0

Set to 0 for auto-detection (one shard per CPU core). Each shard accepts writes independently, enabling lock-free concurrent ingestion.
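The "0 means auto" rule amounts to the following (an illustrative sketch, not LynxDB source):

```python
# Sketch of the "0 means auto" rule: a zero in the config resolves to
# the machine's CPU count at startup. Names here are illustrative.
import os

memtable_shards = 0  # value from the config
effective_shards = memtable_shards if memtable_shards > 0 else (os.cpu_count() or 1)
print(effective_shards)
```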

Max Immutable Memtables

Maximum number of immutable memtables waiting to be flushed before backpressure is applied to ingestion.

Config Key: storage.max_immutable
Env Var: LYNXDB_STORAGE_MAX_IMMUTABLE
Default: 2

storage:
  max_immutable: 2

WAL (Write-Ahead Log)

The WAL ensures durability by recording every write before it enters the memtable.

Sync Mode

Config Key: storage.wal_sync_mode
Env Var: LYNXDB_STORAGE_WAL_SYNC_MODE
Default: write

storage:
  wal_sync_mode: "write"

  • none -- No explicit sync; the OS decides when to flush. Lowest durability (data loss on power failure), highest throughput.
  • write -- Batch sync every wal_sync_interval. Default. Good durability (at most wal_sync_interval of data at risk), good throughput.
  • fsync -- fsync after every write. Highest durability (no data loss), lowest throughput.

Sync Interval

How often the WAL is synced to disk (when wal_sync_mode is write).

Config Key: storage.wal_sync_interval
Env Var: LYNXDB_STORAGE_WAL_SYNC_INTERVAL
Default: 100ms

storage:
  wal_sync_interval: "100ms"
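In write mode, the interval bounds how much data is at risk on power failure; the arithmetic below uses an assumed ingest rate for illustration:

```python
# Worst-case data at risk with wal_sync_mode "write": writes accepted since
# the last sync can be lost on power failure. The ingest rate below is an
# assumed figure for illustration.
ingest_mb_per_s = 50
sync_interval_s = 0.1  # wal_sync_interval: "100ms"

at_risk_mb = ingest_mb_per_s * sync_interval_s
print(at_risk_mb, "MB at risk in the worst case")  # 5.0 MB
```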

Sync Bytes

Sync the WAL after this many bytes have been written, in addition to the time-based interval.

Config Key: storage.wal_sync_bytes
Env Var: LYNXDB_STORAGE_WAL_SYNC_BYTES
Default: 0 (interval-only)

storage:
  wal_sync_bytes: "0"
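The combined rule can be sketched as follows (the function and its names are illustrative, not LynxDB's API): sync when the interval elapses, or, if wal_sync_bytes is non-zero, when that many bytes have accumulated since the last sync.

```python
# Sketch of the combined WAL sync trigger. Names are illustrative,
# not LynxDB's API.
def should_sync(elapsed_ms, bytes_since_sync, interval_ms=100, sync_bytes=0):
    if elapsed_ms >= interval_ms:
        return True
    return sync_bytes > 0 and bytes_since_sync >= sync_bytes

print(should_sync(40, 2_000_000, sync_bytes=1_000_000))  # True (byte trigger)
print(should_sync(40, 2_000_000))                        # False (interval-only)
```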

Max Segment Size

WAL segment rotation size. When a WAL segment reaches this size, a new segment is created.

Config Key: storage.wal_max_segment_size
Env Var: LYNXDB_STORAGE_WAL_MAX_SEGMENT_SIZE
Default: 256mb

storage:
  wal_max_segment_size: "256mb"

Compaction

Compaction merges small segments into larger ones, improving query performance and reclaiming space.

LynxDB uses size-tiered compaction with three levels:

  • L0 -- Recently flushed segments (may overlap in time range)
  • L1 -- Merged, non-overlapping segments
  • L2 -- Fully compacted segments (~1GB each)

Compaction Interval

How often the compaction scheduler checks for work.

Config Key: storage.compaction_interval
Env Var: LYNXDB_STORAGE_COMPACTION_INTERVAL
Default: 30s

storage:
  compaction_interval: "30s"

Compaction Workers

Number of concurrent compaction threads.

Config Key: storage.compaction_workers
Env Var: LYNXDB_STORAGE_COMPACTION_WORKERS
Default: 2

storage:
  compaction_workers: 2

Increase for faster compaction at the cost of more CPU and I/O. Decrease to reduce resource contention on busy servers.

Compaction Rate Limit

Maximum disk write speed for compaction (in MB/s). Prevents compaction from starving queries.

Config Key: storage.compaction_rate_limit_mb
Env Var: LYNXDB_STORAGE_COMPACTION_RATE_LIMIT_MB
Default: 0 (unlimited)

storage:
  compaction_rate_limit_mb: 100
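For intuition, the effect of a cap (illustrative arithmetic only): rewriting a 1 GB L2 segment at 100 MB/s takes roughly ten seconds of wall time, spreading the I/O out instead of bursting it.

```python
# Effect of the compaction rate limit -- illustrative arithmetic only.
segment_mb = 1024        # one ~1 GB L2 segment
rate_limit_mb_s = 100    # compaction_rate_limit_mb: 100

print(segment_mb / rate_limit_mb_s, "seconds per segment")  # 10.24 seconds
```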

Level Thresholds

  • storage.l0_threshold (LYNXDB_STORAGE_L0_THRESHOLD, default 4) -- L0 files before compaction triggers
  • storage.l1_threshold (LYNXDB_STORAGE_L1_THRESHOLD, default 10) -- L1 files before L1-to-L2 compaction
  • storage.l2_target_size (LYNXDB_STORAGE_L2_TARGET_SIZE, default 1gb) -- Target size for L2 segments

storage:
  l0_threshold: 4
  l1_threshold: 10
  l2_target_size: "1gb"
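A toy sketch of the threshold rule above (function name and return values are illustrative, not LynxDB internals):

```python
# Toy sketch of the level-threshold rule: a level is compacted once it
# accumulates enough files. Illustrative only, not LynxDB internals.
def needs_compaction(l0_files, l1_files, l0_threshold=4, l1_threshold=10):
    if l0_files >= l0_threshold:
        return "L0->L1"
    if l1_files >= l1_threshold:
        return "L1->L2"
    return None

print(needs_compaction(5, 0))   # L0->L1
print(needs_compaction(2, 12))  # L1->L2
print(needs_compaction(2, 3))   # None
```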

Query Cache

The filesystem-based segment query cache reduces repeated query costs.

  • storage.cache_max_bytes (LYNXDB_STORAGE_CACHE_MAX_BYTES, default 1gb) -- Maximum cache size
  • storage.cache_ttl (LYNXDB_STORAGE_CACHE_TTL, default 5m) -- Cache entry TTL

storage:
  cache_max_bytes: "4gb"
  cache_ttl: "5m"

The cache is persistent across restarts and uses TTL + LRU eviction. Cache keys are based on (segment_id, CRC32, query_hash, time_range).
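An illustrative construction of a key from the fields listed above (the hashing scheme here is an assumption, not LynxDB's actual on-disk format); the point is that identical inputs always map to the same entry:

```python
# Illustrative cache-key construction from (segment_id, CRC32, query_hash,
# time_range). The hashing scheme is an assumption, not LynxDB's format.
import hashlib

def cache_key(segment_id, crc32, query_hash, time_range):
    raw = f"{segment_id}:{crc32:08x}:{query_hash}:{time_range[0]}-{time_range[1]}"
    return hashlib.sha256(raw.encode()).hexdigest()[:16]

k1 = cache_key("seg-0001", 0xDEADBEEF, "q-abc123", (1700000000, 1700003600))
k2 = cache_key("seg-0001", 0xDEADBEEF, "q-abc123", (1700000000, 1700003600))
print(k1 == k2)  # True -- identical inputs hit the same cache entry
```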

Complete Example

storage:
  compression: "lz4"
  row_group_size: 65536
  flush_threshold: "512mb"
  memtable_shards: 0
  max_immutable: 2
  wal_sync_mode: "write"
  wal_sync_interval: "100ms"
  wal_max_segment_size: "256mb"
  compaction_interval: "30s"
  compaction_workers: 2
  compaction_rate_limit_mb: 0
  l0_threshold: 4
  l1_threshold: 10
  l2_target_size: "1gb"
  cache_max_bytes: "1gb"
  cache_ttl: "5m"

Next Steps