
S3 Tiering

LynxDB supports automatic tiered storage with S3-compatible object stores. Segments are promoted from hot (local SSD) to warm (S3) to cold (Glacier) based on age policies. S3 acts as the source of truth for segments in cluster deployments.

How Tiering Works

Hot (local SSD)  -->  Warm (S3)  -->  Cold (Glacier)
    < 7 days          < 30 days        < 90 days
  1. Fresh data lives on local SSD for fast query access
  2. Segments older than the hot tier threshold are uploaded to S3
  3. Segments older than the warm tier threshold are transitioned to Glacier (via S3 lifecycle rules)
  4. A local segment cache keeps frequently queried warm-tier segments on disk for performance
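The age-based promotion above can be sketched in Python. The threshold names and the `tier_for` function are illustrative only, not LynxDB internals; the real thresholds come from your configured age policies:

```python
from datetime import datetime, timedelta, timezone

# Illustrative thresholds mirroring the defaults in the diagram above.
HOT_MAX_AGE = timedelta(days=7)    # hot  -> warm after 7 days
WARM_MAX_AGE = timedelta(days=30)  # warm -> cold after 30 days

def tier_for(segment_created_at: datetime, now: datetime) -> str:
    """Classify a segment into hot/warm/cold by age."""
    age = now - segment_created_at
    if age < HOT_MAX_AGE:
        return "hot"    # stays on local SSD
    if age < WARM_MAX_AGE:
        return "warm"   # uploaded to S3
    return "cold"       # transitioned to Glacier via lifecycle rules

now = datetime(2024, 6, 30, tzinfo=timezone.utc)
print(tier_for(now - timedelta(days=2), now))   # hot
print(tier_for(now - timedelta(days=10), now))  # warm
print(tier_for(now - timedelta(days=45), now))  # cold
```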

S3 Bucket Configuration

Bucket Name

Config Key: storage.s3_bucket
CLI Flag: --s3-bucket
Env Var: LYNXDB_STORAGE_S3_BUCKET
Default: "" (tiering disabled)

storage:
  s3_bucket: "my-lynxdb-logs"

When s3_bucket is empty, tiering is disabled and all data stays on local disk.

Region

Config Key: storage.s3_region
CLI Flag: --s3-region
Env Var: LYNXDB_STORAGE_S3_REGION
Default: us-east-1

storage:
  s3_region: "eu-west-1"

Key Prefix

Optional prefix for all keys in the S3 bucket. Useful for sharing a bucket across environments.

Config Key: storage.s3_prefix
CLI Flag: --s3-prefix
Env Var: LYNXDB_STORAGE_S3_PREFIX
Default: ""

storage:
  s3_prefix: "production/"
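The effect of a prefix is simply that it is prepended to every object key written to the bucket. The `segments/` layout in this sketch is an illustrative assumption, not LynxDB's actual key scheme:

```python
def s3_key(prefix: str, segment_id: str) -> str:
    # Hypothetical key layout: prefix + a per-segment object path.
    return f"{prefix}segments/{segment_id}.seg"

print(s3_key("production/", "0000012345"))  # production/segments/0000012345.seg
print(s3_key("", "0000012345"))             # segments/0000012345.seg
```

This is why a trailing slash in the prefix matters: without it, the prefix fuses into the first path component of every key.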

Custom Endpoint (MinIO)

Override the S3 endpoint URL for S3-compatible stores like MinIO, Ceph, or R2.

Config Key: storage.s3_endpoint
Env Var: LYNXDB_STORAGE_S3_ENDPOINT
Default: "" (AWS S3)

storage:
  s3_endpoint: "http://minio.local:9000"
  s3_force_path_style: true

Force Path Style

Use path-style S3 URLs (http://host/bucket/key) instead of virtual-hosted style (http://bucket.host/key). Required for MinIO and some S3-compatible stores.

Config Key: storage.s3_force_path_style
Env Var: LYNXDB_STORAGE_S3_FORCE_PATH_STYLE
Default: false

storage:
  s3_force_path_style: true

Tiering Interval

How often LynxDB checks for segments eligible for tier promotion.

Config Key: storage.tiering_interval
CLI Flag: --tiering-interval
Env Var: LYNXDB_STORAGE_TIERING_INTERVAL
Default: 5m

storage:
  tiering_interval: "5m"

Tiering Parallelism

Number of concurrent segment uploads to S3.

Config Key: storage.tiering_parallelism
Env Var: LYNXDB_STORAGE_TIERING_PARALLELISM
Default: 2

storage:
  tiering_parallelism: 4

Increase this to move large volumes of data to S3 faster. Be aware that higher parallelism also consumes more network bandwidth.
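The effect of the parallelism setting can be sketched with a bounded thread pool. The `upload` function below is a stand-in for a real S3 PUT (e.g. via an S3 client library), and the function names are illustrative:

```python
from concurrent.futures import ThreadPoolExecutor

def upload(segment: str) -> str:
    # Placeholder for a real S3 PUT of one segment file.
    return f"uploaded {segment}"

def tier_segments(segments: list[str], parallelism: int = 2) -> list[str]:
    # At most `parallelism` uploads run concurrently,
    # mirroring the tiering_parallelism setting.
    with ThreadPoolExecutor(max_workers=parallelism) as pool:
        return list(pool.map(upload, segments))

results = tier_segments(["seg-001", "seg-002", "seg-003"], parallelism=4)
print(results)
```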

Segment Cache

Local disk cache for warm-tier segments fetched from S3. This avoids repeated S3 downloads for frequently queried data.

Config Key: storage.segment_cache_size
Env Var: LYNXDB_STORAGE_SEGMENT_CACHE_SIZE
Default: 1gb

storage:
  segment_cache_size: "10gb"

The cache uses LRU eviction. Set this to a size that fits your most frequently queried warm-tier data.

AWS Credentials

LynxDB uses the standard AWS SDK credential chain:

  1. Environment variables (AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY)
  2. Shared credentials file (~/.aws/credentials)
  3. IAM instance profile (EC2, ECS, EKS)
  4. IAM role via STS (IRSA for EKS)
# Option 1: Environment variables
export AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
export AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
lynxdb server --s3-bucket my-logs --s3-region us-east-1

# Option 2: IAM instance profile (no credentials needed on EC2)
lynxdb server --s3-bucket my-logs --s3-region us-east-1

MinIO Setup

For self-hosted S3-compatible storage:

storage:
  s3_bucket: "lynxdb"
  s3_region: "us-east-1"
  s3_endpoint: "http://minio.local:9000"
  s3_force_path_style: true

# MinIO credentials via environment
export AWS_ACCESS_KEY_ID=minioadmin
export AWS_SECRET_ACCESS_KEY=minioadmin

lynxdb server \
  --s3-bucket lynxdb \
  --s3-region us-east-1 \
  --data-dir /var/lib/lynxdb

See S3/MinIO Storage Backend Setup for a detailed setup guide.

Complete Example

storage:
  s3_bucket: "company-lynxdb-logs"
  s3_region: "us-west-2"
  s3_prefix: "production/"
  tiering_interval: "5m"
  tiering_parallelism: 4
  segment_cache_size: "20gb"
  cache_max_bytes: "4gb"

lynxdb server \
  --data-dir /var/lib/lynxdb \
  --s3-bucket company-lynxdb-logs \
  --s3-region us-west-2 \
  --cache-max-mb 4096

Next Steps