Documentation Index
Fetch the complete documentation index at: https://mintlify.com/nubskr/walrus/llms.txt
Use this file to discover all available pages before exploring further.
Choose Your Mode
Walrus can be used in two ways:
Library Mode
Embed Walrus in your Rust application for local, high-performance WAL
Cluster Mode
Deploy a distributed cluster for fault-tolerant streaming
Library Mode Quickstart
Use Walrus as a standalone library for single-node, high-performance write-ahead logging.
Add Dependency
Add Walrus to your Cargo.toml:

```toml
[dependencies]
walrus-rust = "0.2.0"
```
Create a WAL Instance
```rust
use walrus_rust::Walrus;

fn main() -> std::io::Result<()> {
    // Create a new WAL instance
    let wal = Walrus::new()?;
    Ok(())
}
```
Write Data
Write messages to topics:

```rust
// Append to a topic
wal.append_for_topic("logs", b"Application started")?;
wal.append_for_topic("logs", b"User logged in")?;
wal.append_for_topic("metrics", b"CPU: 45%")?;
```
Read Data
Read messages back from topics:

```rust
// Read with checkpoint (consumes the entry)
if let Some(entry) = wal.read_next("logs", true)? {
    println!("Read: {:?}", String::from_utf8_lossy(&entry.data));
    // Output: "Application started"
}

// Read the next entry
if let Some(entry) = wal.read_next("logs", true)? {
    println!("Read: {:?}", String::from_utf8_lossy(&entry.data));
    // Output: "User logged in"
}
```
Batch Operations
For higher throughput, use batch operations:
```rust
use walrus_rust::Walrus;

fn main() -> std::io::Result<()> {
    let wal = Walrus::new()?;

    // Atomic batch write (all-or-nothing)
    let batch = vec![
        b"entry 1".as_slice(),
        b"entry 2".as_slice(),
        b"entry 3".as_slice(),
    ];
    wal.batch_append_for_topic("events", &batch)?;

    // Batch read with a byte limit
    let max_bytes = 1024 * 1024; // 1 MB
    let entries = wal.batch_read_for_topic("events", max_bytes, true)?;
    for entry in entries {
        println!("Read: {} bytes", entry.data.len());
    }
    Ok(())
}
```
On Linux, batch operations automatically use io_uring for parallel I/O, which significantly improves throughput. Batches are capped at 2,000 entries and roughly 10 GB.
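If a workload exceeds those caps, it has to be split client-side before calling the batch API. A std-only sketch of that splitting; the constants mirror the limits stated above, and the helper name is illustrative, not part of the walrus-rust API:

```rust
/// Illustrative caps matching the documented batch limits.
const MAX_ENTRIES_PER_BATCH: usize = 2_000;
const MAX_BYTES_PER_BATCH: usize = 10 * 1024 * 1024 * 1024; // ~10 GB

/// Split entries into sub-batches that respect both the entry-count
/// and total-byte caps, preserving order.
fn chunk_batches<'a>(entries: &'a [&'a [u8]]) -> Vec<Vec<&'a [u8]>> {
    let mut batches = Vec::new();
    let mut current: Vec<&[u8]> = Vec::new();
    let mut bytes = 0usize;
    for &e in entries {
        if !current.is_empty()
            && (current.len() == MAX_ENTRIES_PER_BATCH || bytes + e.len() > MAX_BYTES_PER_BATCH)
        {
            batches.push(std::mem::take(&mut current));
            bytes = 0;
        }
        current.push(e);
        bytes += e.len();
    }
    if !current.is_empty() {
        batches.push(current);
    }
    batches
}

fn main() {
    let entries: Vec<Vec<u8>> = (0..4_500).map(|i| format!("e{i}").into_bytes()).collect();
    let refs: Vec<&[u8]> = entries.iter().map(|v| v.as_slice()).collect();
    // 4,500 entries under a 2,000-entry cap -> 3 sub-batches
    println!("{}", chunk_batches(&refs).len());
}
```

Each sub-batch can then be passed to `batch_append_for_topic` in turn; note that atomicity only holds within a sub-batch, not across them.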
Consistency Models
Choose between different consistency models:
```rust
use walrus_rust::{Walrus, ReadConsistency};

// Strict consistency: every checkpoint persisted immediately
let wal = Walrus::with_consistency(ReadConsistency::StrictlyAtOnce)?;

// At-least-once delivery: persist every N reads (higher throughput)
let wal = Walrus::with_consistency(
    ReadConsistency::AtLeastOnce { persist_every: 1000 },
)?;
```
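The trade-off behind `persist_every` is that a crash between checkpoint persists can redeliver up to N-1 already-read entries. A std-only simulation of that idea (this models the concept, not walrus-rust internals):

```rust
/// Toy reader: the cursor is checkpointed durably only every
/// `persist_every` reads, trading redelivery risk for fewer syncs.
struct CheckpointSim {
    cursor: usize,              // in-memory read position
    durable: usize,             // last position persisted to disk
    reads_since_persist: usize,
    persist_every: usize,
}

impl CheckpointSim {
    fn new(persist_every: usize) -> Self {
        Self { cursor: 0, durable: 0, reads_since_persist: 0, persist_every }
    }

    fn read(&mut self) {
        self.cursor += 1;
        self.reads_since_persist += 1;
        if self.reads_since_persist >= self.persist_every {
            self.durable = self.cursor; // one persist per N reads
            self.reads_since_persist = 0;
        }
    }

    /// After a crash, reading resumes from the durable checkpoint,
    /// so entries in (durable, cursor] are delivered again.
    fn redelivered_after_crash(&self) -> usize {
        self.cursor - self.durable
    }
}

fn main() {
    let mut sim = CheckpointSim::new(1000);
    for _ in 0..2_500 {
        sim.read();
    }
    // 2,500 reads, checkpoints persisted at 2,000: a crash here
    // would redeliver 500 entries.
    println!("{}", sim.redelivered_after_crash());
}
```

With `StrictlyAtOnce`, `durable` tracks `cursor` on every read, so nothing is redelivered, at the cost of a persist per read.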
Cluster Mode Quickstart
Deploy a fault-tolerant distributed cluster with automatic load balancing.
Clone the Repository
```shell
git clone https://github.com/nubskr/walrus.git
cd walrus/distributed-walrus
```
Bootstrap the Cluster
Start a 3-node cluster using Docker Compose. The bootstrap:
- Builds the distributed Walrus image
- Starts 3 nodes (ports 9091-9093)
- Initializes Raft consensus
- Waits for all nodes to be ready
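Assuming the repository's docker-compose.yml defines the three nodes, the bootstrap typically looks like the following (the exact flags are illustrative; check the repository for the canonical command):

```shell
# Build the image and start all three nodes in the background (illustrative)
docker compose -f docker-compose.yml up -d --build
```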
Connect with CLI
Launch the interactive CLI:

```shell
cargo run --bin walrus-cli -- --addr 127.0.0.1:9091
```
You’ll see the Walrus banner and prompt.

Write Messages
Append messages to the topic:

```text
🦭 > PUT logs "Application started"
OK
🦭 > PUT logs "User authentication successful"
OK
🦭 > PUT logs "Database connection established"
OK
```
Read Messages
Consume messages from the topic:

```text
🦭 > GET logs
OK Application started
🦭 > GET logs
OK User authentication successful
🦭 > GET logs
OK Database connection established
🦭 > GET logs
EMPTY
```
Inspect Cluster State
Use the built-in commands to inspect the cluster:
```text
🦭 > STATE logs
{
  "current_segment": 1,
  "leader_node": 1,
  "sealed_segments": {},
  "segment_leaders": {
    "1": 1
  }
}
```
Client Protocol
The cluster exposes a simple length-prefixed text protocol over TCP:
```text
[4 bytes: length (little-endian)] [UTF-8 command]
```
Commands
| Command | Description | Response |
|---|---|---|
| `REGISTER <topic>` | Create topic if missing | `OK` |
| `PUT <topic> <payload>` | Append to topic | `OK` |
| `GET <topic>` | Read next entry (shared cursor) | `OK <data>` or `EMPTY` |
| `STATE <topic>` | Get topic metadata | `OK <json>` |
| `METRICS` | Get Raft metrics | `OK <json>` |
Responses
- Success: `OK` or `OK <payload>`
- Empty read: `EMPTY` (no data available)
- Error: `ERR <message>`
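Because the framing is so simple, clients are easy to write in any language. A std-only Rust sketch that encodes a command frame and classifies a response line; the helper names are illustrative, but the wire format matches the 4-byte little-endian length prefix described above:

```rust
/// Encode a command as [4-byte little-endian length][UTF-8 payload].
fn encode_frame(cmd: &str) -> Vec<u8> {
    let mut frame = Vec::with_capacity(4 + cmd.len());
    frame.extend_from_slice(&(cmd.len() as u32).to_le_bytes());
    frame.extend_from_slice(cmd.as_bytes());
    frame
}

/// Classify a response line per the protocol: OK [payload] | EMPTY | ERR <msg>.
#[derive(Debug, PartialEq)]
enum Response {
    Ok(Option<String>),
    Empty,
    Err(String),
}

fn parse_response(line: &str) -> Response {
    if line == "EMPTY" {
        Response::Empty
    } else if let Some(rest) = line.strip_prefix("ERR ") {
        Response::Err(rest.to_string())
    } else if let Some(rest) = line.strip_prefix("OK ") {
        Response::Ok(Some(rest.to_string()))
    } else {
        Response::Ok(None) // bare "OK"
    }
}

fn main() {
    let frame = encode_frame("PUT logs hello");
    // "PUT logs hello" is 14 bytes -> length prefix [14, 0, 0, 0]
    println!("{:?}", &frame[..4]);
    println!("{:?}", parse_response("OK Application started"));
}
```

The frame bytes would be written to a `TcpStream` connected to any node; reading responses back would use the same framing in reverse.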
Using the CLI (Non-Interactive)
For scripting, use one-off commands:

```shell
cargo run --bin walrus-cli -- --addr 127.0.0.1:9091 register logs
```
Testing Fault Tolerance
Write Some Data
```text
🦭 > PUT logs "Before node failure"
OK
```
Stop a Node
```shell
docker compose -f docker-compose.yml stop walrus-node-2
```
Continue Writing
The cluster automatically handles the failure:

```text
🦭 > PUT logs "After node failure"
OK
```

Writes continue as long as you have quorum (2 of 3 nodes).

Restart the Node
```shell
docker compose -f docker-compose.yml start walrus-node-2
```
The node rejoins and catches up automatically.
You need at least 2 of 3 nodes running to maintain quorum and accept writes. Losing quorum makes the cluster read-only until nodes recover.
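The quorum rule is just a strict majority of the cluster size, which is why 3 nodes tolerate exactly one failure. A one-line sketch for reference (std-only, illustrative):

```rust
/// Smallest strict majority of `n` nodes: the quorum required
/// to accept writes in Raft-style consensus.
fn quorum(n: usize) -> usize {
    n / 2 + 1
}

fn main() {
    // 3 nodes -> quorum 2 (tolerates 1 failure); 5 nodes -> quorum 3 (tolerates 2).
    println!("{} {}", quorum(3), quorum(5));
}
```

Note that growing from 3 to 4 nodes raises the quorum to 3 without increasing fault tolerance, which is why odd cluster sizes are the norm.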
Observing Segment Rollover
Watch automatic load balancing in action:
```shell
# Write enough data to trigger rollover (default: 1M entries)
for i in {1..1000000}; do
  echo "PUT logs \"Message $i\""
done | cargo run --bin walrus-cli -- --addr 127.0.0.1:9091
```
Then check the topic state:

```text
🦭 > STATE logs
{
  "current_segment": 2,
  "leader_node": 2,
  "sealed_segments": {
    "1": 1000000
  },
  "segment_leaders": {
    "1": 1,
    "2": 2
  }
}
```
Notice:
- Segment 1 is sealed with 1M entries
- Segment 2 is active with leadership transferred to node 2
- Load is now distributed across nodes
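Since rollover happens at a fixed entry count, mapping a global entry index to its segment is simple arithmetic. A sketch assuming 0-based entry indices, 1-based segments, and the 1M default (illustrative, not the walrus API):

```rust
/// Matches the WALRUS_MAX_SEGMENT_ENTRIES default described below.
const MAX_SEGMENT_ENTRIES: u64 = 1_000_000;

/// Segment holding the i-th entry (0-based index, 1-based segments).
fn segment_for(entry_index: u64) -> u64 {
    entry_index / MAX_SEGMENT_ENTRIES + 1
}

fn main() {
    // Entries 0..=999,999 fill segment 1; entry 1,000,000 opens segment 2.
    println!("{} {} {}", segment_for(0), segment_for(999_999), segment_for(1_000_000));
}
```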
Configuration
Environment variables for cluster tuning:
| Variable | Default | Description |
|---|---|---|
| `WALRUS_MAX_SEGMENT_ENTRIES` | `1000000` | Entries before segment rollover |
| `WALRUS_MONITOR_CHECK_MS` | `10000` | Monitor loop interval (ms) |
| `WALRUS_DISABLE_IO_URING` | - | Use mmap instead of io_uring |
| `RUST_LOG` | `info` | Log level (`debug`, `info`, `warn`) |
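Reading numeric variables like these with a fallback default follows the usual `std::env` pattern; for example (a sketch, with an illustrative helper name):

```rust
use std::env;

/// Read a numeric env var, falling back to a default when unset or malformed.
fn env_u64(name: &str, default: u64) -> u64 {
    env::var(name).ok().and_then(|v| v.parse().ok()).unwrap_or(default)
}

fn main() {
    let max_entries = env_u64("WALRUS_MAX_SEGMENT_ENTRIES", 1_000_000);
    let check_ms = env_u64("WALRUS_MONITOR_CHECK_MS", 10_000);
    println!("rollover at {max_entries} entries, monitor every {check_ms} ms");
}
```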
Cleaning Up
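To tear the cluster down, a typical Docker Compose cleanup looks like this (illustrative; `-v` also removes the data volumes, so omit it to keep state):

```shell
# Stop and remove the cluster containers and their volumes (illustrative)
docker compose -f docker-compose.yml down -v
```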
Next Steps
Installation
Detailed installation and deployment guide
Architecture
Understand the distributed system design
Configuration
Advanced configuration options
API Reference
Complete API documentation