
Walrus delivers exceptional write throughput and competitive latency characteristics. This page presents comprehensive benchmark results comparing Walrus against industry-standard systems.

Storage Engine Performance

The underlying Walrus storage engine has been benchmarked against Apache Kafka (single broker) and RocksDB’s Write-Ahead Log to measure raw write performance under various durability configurations.

Benchmark Configuration

All benchmarks compare:
  • Walrus: Legacy append_for_topic() endpoint using pwrite() syscalls (without io_uring batching)
  • Kafka: Single broker with no replication and no networking overhead
  • RocksDB: Write-Ahead Log implementation
These benchmarks measure single-node storage performance. The distributed Walrus system adds latency only for consensus on metadata changes, not on the data path.
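The write path being measured can be sketched as a positioned-write loop; on Unix, `write_all_at` maps to the `pwrite(2)` syscall the benchmark exercises. The function and file names below are illustrative, not the actual `append_for_topic()` implementation:

```rust
use std::fs::OpenOptions;
use std::io;
use std::os::unix::fs::FileExt; // write_all_at() issues pwrite(2) under the hood

// Illustrative sketch of a pwrite-style append path;
// not the actual Walrus append_for_topic() implementation.
fn append_at(file: &std::fs::File, offset: u64, payload: &[u8]) -> io::Result<u64> {
    file.write_all_at(payload, offset)?;   // one positioned write, no seek
    Ok(offset + payload.len() as u64)      // offset for the next append
}

fn main() -> io::Result<()> {
    let path = std::env::temp_dir().join("pwrite_demo.log");
    let file = OpenOptions::new().create(true).read(true).write(true).open(&path)?;
    let mut offset = 0;
    for i in 0..4u32 {
        offset = append_at(&file, offset, format!("entry-{i}\n").as_bytes())?;
    }
    println!("appended up to offset {offset}");
    std::fs::remove_file(&path)
}
```

Because each append is a single syscall with no seek, throughput is bounded mainly by syscall overhead, which is what the planned io_uring batching targets.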

Results Without Fsync

When fsync is disabled (writes are acknowledged before they are guaranteed to be persisted to disk), Walrus demonstrates superior throughput:

Walrus vs RocksDB vs Kafka - No Fsync

| System | Avg Throughput (writes/s) | Avg Bandwidth (MB/s) | Max Throughput (writes/s) | Max Bandwidth (MB/s) |
| --- | --- | --- | --- | --- |
| Walrus | 1,205,762 | 876.22 | 1,593,984 | 1,158.62 |
| Kafka | 1,112,120 | 808.33 | 1,424,073 | 1,035.74 |
| RocksDB | 432,821 | 314.53 | 1,000,000 | 726.53 |
  • Walrus outperforms Kafka by 8.4% on average throughput
  • Walrus outperforms RocksDB by 178% on average throughput
  • Peak bandwidth reaches 1,158.62 MB/s with Walrus
  • Walrus maintains consistent high throughput under sustained load

Results With Fsync

When fsync is enabled on each write (ensuring data is flushed to disk before acknowledging), all systems show similar performance characteristics because disk I/O becomes the bottleneck:

Walrus vs RocksDB vs Kafka - With Fsync

| System | Avg Throughput (writes/s) | Avg Bandwidth (MB/s) | Max Throughput (writes/s) | Max Bandwidth (MB/s) |
| --- | --- | --- | --- | --- |
| RocksDB | 5,222 | 3.79 | 10,486 | 7.63 |
| Walrus | 4,980 | 3.60 | 11,389 | 8.19 |
| Kafka | 4,921 | 3.57 | 11,224 | 8.34 |
  • With fsync enabled, disk I/O becomes the primary bottleneck
  • All systems show comparable average throughput (within 6% of each other)
  • Walrus achieves the highest peak throughput at 11,389 writes/s
  • Performance is heavily dependent on underlying storage hardware
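The bottleneck shift is easy to see in a minimal append loop: the only difference between the two modes above is one `fsync` call per write. This is an illustrative sketch, not the Walrus write path:

```rust
use std::fs::File;
use std::io::{self, Write};

/// Append one payload; when `fsync` is true, block until the kernel
/// confirms the data is on stable storage (fsync(2) via sync_all()).
/// Illustrative sketch, not Walrus source.
fn append(file: &mut File, payload: &[u8], fsync: bool) -> io::Result<()> {
    file.write_all(payload)?;
    if fsync {
        file.sync_all()?; // per-write disk flush dominates latency
    }
    Ok(())
}

fn main() -> io::Result<()> {
    let path = std::env::temp_dir().join("fsync_demo.log");
    let mut file = File::create(&path)?;
    for fsync in [false, true] {
        let start = std::time::Instant::now();
        for _ in 0..100 {
            append(&mut file, b"payload\n", fsync)?;
        }
        println!("fsync={fsync}: {:?} for 100 writes", start.elapsed());
    }
    std::fs::remove_file(&path)
}
```

Running this on typical hardware shows the fsync loop taking orders of magnitude longer, consistent with the roughly 1.2M-to-5K writes/s drop in the tables above.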

Distributed System Performance

The distributed Walrus platform adds the following characteristics:

Write Throughput

Single writer per segment due to lease-based write fencing. Scales horizontally across segments.
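Lease-based fencing reduces to an admission check on every write. The struct and field names below are hypothetical, sketching the idea rather than the Walrus implementation:

```rust
/// Hypothetical lease record; names are illustrative, not Walrus types.
struct Lease {
    holder_id: u64,     // the single writer currently fenced in
    expires_at_ms: u64, // deadline; the holder must renew before this
}

/// A write is admitted only if the caller holds an unexpired lease.
/// This check is what enforces "single writer per segment".
fn may_write(lease: &Lease, writer_id: u64, now_ms: u64) -> bool {
    lease.holder_id == writer_id && now_ms < lease.expires_at_ms
}
```

A stale or non-holding writer fails the check and must re-acquire the lease, so horizontal scaling comes from adding segments, not from adding writers to one segment.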

Read Throughput

Scales with replicas for sealed segments. Multiple readers can access historical data simultaneously.

Latency

Approximately 1-2 RTT for forwarded operations plus underlying storage latency.

Consensus Overhead

Metadata operations only. Data writes do not require consensus, reducing latency.

Segment Rollover Performance

  • Default threshold: 1,000,000 entries per segment
  • Typical segment size: ~100MB (varies with payload size)
  • Rollover latency: Metadata-only operation via Raft consensus
  • Data movement: None required (sealed segments remain on original leader)
Write throughput per segment is limited by the single-leader constraint. For higher aggregate throughput, distribute writes across multiple topics or wait for automatic segment rollover to balance load.
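Because each segment has a single leader, aggregate throughput grows by spreading writes over topics. A minimal round-robin chooser, with hypothetical topic names, might look like:

```rust
/// Round-robin a write across topics to work around the
/// single-writer-per-segment limit. Topic names are hypothetical.
/// Panics if `topics` is empty.
fn pick_topic<'a>(topics: &'a [&'a str], seq: u64) -> &'a str {
    topics[(seq % topics.len() as u64) as usize]
}

fn main() {
    let topics = ["events-0", "events-1", "events-2"];
    for seq in 0..6u64 {
        println!("write {seq} -> {}", pick_topic(&topics, seq));
    }
}
```

Consecutive writes land on different topics (and hence different segment leaders), so per-segment leader bandwidth stops being the aggregate ceiling.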

Future Optimizations

The following optimizations are planned for future releases:
  1. io_uring batching: Current benchmarks use legacy pwrite(). Enabling io_uring batch operations will significantly improve throughput.
  2. Multi-segment writes: Allow concurrent writes to multiple segments within a topic.
  3. Compression: Optional compression for sealed segments to reduce storage overhead.
  4. Read-ahead caching: Predictive caching for sequential read workloads.

Running Your Own Benchmarks

To reproduce these benchmarks or test performance on your hardware:

```shell
cd walrus-rust
cargo bench
```

For distributed system benchmarks:

```shell
cd distributed-walrus
make cluster-test-stress
```

See the contributing guide for details on running the full test suite.