# FsyncSchedule

The `FsyncSchedule` enum controls when write operations are synchronized to disk, allowing you to balance durability guarantees against write performance.
## Enum Definition
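The enum definition code block did not survive extraction. Below is a sketch of its likely shape, reconstructed from the variants documented on this page; the exact field types and derives are assumptions, so check the crate source for the authoritative definition.

```rust
// Hypothetical sketch of FsyncSchedule; the real definition lives in the
// walrus crate and may differ in field types and derives.
#[derive(Debug, Clone, PartialEq)]
pub enum FsyncSchedule {
    /// Background fsync every N milliseconds (default: 200).
    Milliseconds(u64),
    /// fsync after every single write (maximum durability).
    SyncEach,
    /// Never fsync; rely entirely on the OS buffer cache.
    NoFsync,
}

fn main() {
    let schedule = FsyncSchedule::Milliseconds(200);
    println!("{:?}", schedule); // prints: Milliseconds(200)
}
```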
## Variants
### Milliseconds
Flushes data to disk at regular time intervals. The value is the time interval in milliseconds between fsync operations; the default is 200ms.
**Behavior:**
- A background worker performs fsync every N milliseconds
- Batches multiple writes into a single fsync operation
- Balanced durability and performance

**Durability:**
- Data written in the last N milliseconds may be lost on crash
- Recovery window is bounded by the interval

**Performance:**
- Good throughput with periodic disk flushes
- Amortizes fsync cost across multiple writes

**Use cases:**
- Most production workloads
- Applications with acceptable recovery windows
- Event streaming and logging systems
### SyncEach
Flushes data to disk after every single write operation.

**Behavior:**
- Calls fsync immediately after each write
- On the FD backend: opens files with the `O_SYNC` flag for synchronous writes
- Maximum durability guarantee

**Durability:**
- Every write is guaranteed to be on disk before returning
- No data loss on crash (within consistency model limits)
- Highest durability guarantee available

**Performance:**
- Lowest throughput due to synchronous disk operations
- Each write operation waits for disk confirmation
- Not suitable for high-frequency writes

**Use cases:**
- Financial transactions
- Critical configuration updates
- Systems with zero data loss tolerance
- Compliance requirements for durability
### NoFsync
Disables fsync entirely, relying on the OS buffer cache.

**Behavior:**
- Never calls fsync
- Data is written to the OS buffer cache only
- The OS controls when data is flushed to disk

**Durability:**
- No durability guarantees
- All unflushed data may be lost on crash or power failure
- Suitable only for ephemeral or easily reproducible data

**Performance:**
- Maximum write throughput
- No waiting for disk operations
- Best performance for write-heavy workloads

**Use cases:**
- Development and testing
- Caches and temporary data
- Data that can be rebuilt from other sources
- Performance benchmarking
## Default Configuration
When using `Walrus::new()` or `Walrus::with_consistency()`, the default fsync schedule is `Milliseconds(200)`.
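The code block showing the default was lost in extraction. The sketch below encodes the documented default of flushing every 200ms; whether the crate actually expresses this through a `Default` impl is an assumption.

```rust
// Sketch only: the documented default schedule is Milliseconds(200).
// Whether the walrus crate implements Default this way is an assumption.
#[derive(Debug, PartialEq)]
pub enum FsyncSchedule {
    Milliseconds(u64),
    SyncEach,
    NoFsync,
}

impl Default for FsyncSchedule {
    fn default() -> Self {
        // Walrus::new() flushes every 200 ms unless configured otherwise.
        FsyncSchedule::Milliseconds(200)
    }
}

fn main() {
    assert_eq!(FsyncSchedule::default(), FsyncSchedule::Milliseconds(200));
}
```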
## Choosing a Schedule
| Requirement | Recommended Schedule |
|---|---|
| Zero data loss tolerance | `SyncEach` |
| Balanced durability/performance | `Milliseconds(200)` (default) |
| High write throughput | `Milliseconds(1000)` or higher |
| Maximum performance | `NoFsync` (testing/cache only) |
| Sub-second recovery window | `Milliseconds(100)` or lower |
| Financial transactions | `SyncEach` |
| Event logging | `Milliseconds(500)` |
## Data Loss Window
The maximum data loss window with different schedules:

| Schedule | Max Data Loss Window |
|---|---|
| `SyncEach` | 0 (no loss within consistency model) |
| `Milliseconds(100)` | Last 100ms of writes |
| `Milliseconds(200)` | Last 200ms of writes |
| `Milliseconds(1000)` | Last 1 second of writes |
| `NoFsync` | All unflushed data (potentially unbounded) |
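The table above maps directly onto the schedule variants. As an illustration, the mapping can be expressed as a small helper; this function is not part of the walrus API, only a self-contained restatement of the table.

```rust
// Illustrative helper mirroring the data-loss table; not a walrus API.
#[derive(Debug)]
pub enum FsyncSchedule {
    Milliseconds(u64),
    SyncEach,
    NoFsync,
}

/// Maximum data-loss window in milliseconds; None means unbounded.
pub fn max_loss_window_ms(schedule: &FsyncSchedule) -> Option<u64> {
    match schedule {
        FsyncSchedule::SyncEach => Some(0),           // every write is on disk
        FsyncSchedule::Milliseconds(n) => Some(*n),   // bounded by the interval
        FsyncSchedule::NoFsync => None,               // all unflushed data at risk
    }
}

fn main() {
    assert_eq!(max_loss_window_ms(&FsyncSchedule::Milliseconds(1000)), Some(1000));
    assert_eq!(max_loss_window_ms(&FsyncSchedule::NoFsync), None);
}
```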
## Interaction with ReadConsistency
The fsync schedule applies to write operations, while `ReadConsistency` applies to read checkpoint persistence.
### Maximum Durability (Both Reads and Writes)
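The example for this configuration was lost in extraction. A sketch, assuming `Walrus::with_consistency` accepts both a `ReadConsistency` and an `FsyncSchedule`; the signature and the `ReadConsistency::StrictlyAtOnce` variant name are assumptions, not verified API.

```rust
// Sketch only: maximum durability pairs per-write fsync with eagerly
// persisted read checkpoints. Variant name is a placeholder.
let wal = Walrus::with_consistency(
    ReadConsistency::StrictlyAtOnce, // assumed variant: persist checkpoint on every read
    FsyncSchedule::SyncEach,         // fsync after every write
)?;
```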
### Balanced Configuration
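The balanced setup is simply the default described above; a sketch using the documented constructor:

```rust
// Sketch: Walrus::new() uses the balanced default,
// FsyncSchedule::Milliseconds(200).
let wal = Walrus::new()?;
```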
### Maximum Throughput (Testing Only)
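A sketch of the throughput-first configuration; as above, the `with_consistency` signature and the `ReadConsistency::InMemoryOnly` variant name are assumptions, labeled as such.

```rust
// Sketch only: no durability at all -- suitable for testing and
// benchmarking, never for data you cannot rebuild.
let wal = Walrus::with_consistency(
    ReadConsistency::InMemoryOnly, // placeholder: don't persist read checkpoints
    FsyncSchedule::NoFsync,        // never fsync; OS decides when to flush
)?;
```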
## Backend-Specific Behavior
### FD Backend with SyncEach
When using the FD (file descriptor) backend with `SyncEach`:
- Files are opened with the `O_SYNC` flag
- Every write operation is synchronous at the kernel level
- No separate fsync call is needed
### Mmap Backend
The mmap backend handles all schedules but may have different performance characteristics.

## Configuration Examples
### Production Default
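The original example code is missing here; a minimal sketch of the production default, which needs no explicit schedule:

```rust
// Sketch: the production default, flushing every 200 ms.
let wal = Walrus::new()?; // equivalent to FsyncSchedule::Milliseconds(200)
```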
### Low-Latency Durability
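The original example is missing; a sketch based on the "Sub-second recovery window" row of the table above (the choice of 100 ms is an assumption about what this example showed):

```rust
// Sketch: a 100 ms flush interval bounds the recovery window to the
// last 100 ms of writes.
let schedule = FsyncSchedule::Milliseconds(100);
```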
### High-Throughput Logging
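The original example is missing; a sketch based on the "High write throughput" row of the table above (the choice of 1000 ms is an assumption about what this example showed):

```rust
// Sketch: a longer interval amortizes fsync cost across more writes,
// at the cost of a wider data-loss window.
let schedule = FsyncSchedule::Milliseconds(1000);
```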
## Related Types
- `Walrus` - Main WAL instance
- `ReadConsistency` - Controls read checkpoint persistence
- `Entry` (`WalEntry`) - Structure returned by read operations