A lightweight Raft implementation designed for embedding into Rust applications — the consensus layer for building reliable distributed systems.
Built with a simple vision: make distributed coordination accessible, cheap to run, and simple to use. The core philosophy: choose simple architectures over complex ones.
Why I Built This
In my experience, adding distributed coordination to an application has always been expensive: existing solutions like etcd are either too slow when embedded (gRPC overhead) or require running a separate 3-node cluster.
d-engine aims to change this: make coordination cheap enough that you don't hesitate to add it when you need it.
Architecture
Single-threaded event loop: No race conditions, strict ordering. CPU-bound on a single core; scale horizontally.
Role separation (SRP): Leader/Follower/Candidate/Learner each handle their own logic. Main loop just routes events.
Standard design for Raft implementations.
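The routing pattern, as a minimal illustrative sketch (the event and role types below are placeholders, not d-engine's internals):

use tokio::sync::mpsc;

// Placeholder event and role types; d-engine's real types differ.
enum Event { AppendEntries, RequestVote, ElectionTimeout }
enum Role { Leader, Follower, Candidate, Learner }

// One task owns all Raft state: no locks, no data races, and events
// are handled in strict arrival order.
async fn event_loop(mut events: mpsc::Receiver<Event>) {
    let mut role = Role::Follower;
    while let Some(event) = events.recv().await {
        // The loop only routes; each role handles its own logic (SRP).
        role = match role {
            Role::Leader => on_leader(event),
            Role::Follower => on_follower(event),
            Role::Candidate => on_candidate(event),
            Role::Learner => on_learner(event),
        };
    }
}

fn on_leader(_: Event) -> Role { Role::Leader } // replicate, commit
fn on_follower(event: Event) -> Role {
    match event {
        Event::ElectionTimeout => Role::Candidate, // start an election
        _ => Role::Follower,
    }
}
fn on_candidate(_: Event) -> Role { Role::Candidate } // tally votes
fn on_learner(_: Event) -> Role { Role::Learner } // catch up, no vote

#[tokio::main]
async fn main() {
    let (tx, rx) = mpsc::channel(64);
    tx.send(Event::ElectionTimeout).await.unwrap();
    drop(tx); // closing the channel ends the loop
    event_loop(rx).await;
}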
Two Integration Modes
Embedded mode - runs inside your Rust process:
// Start the engine in-process and wait until it is ready to serve.
let engine = EmbeddedEngine::start().await?;
engine.wait_ready(Duration::from_secs(5)).await?;
// Writes go through Raft but never leave the process (mpsc channels).
let client = engine.client();
client.put(b"key".to_vec(), b"value".to_vec()).await?; // <0.1ms
- Direct memory access via mpsc channels (no serialization)
- <0.1ms latency for local operations
- Single binary deployment
- Benchmark: 203K writes/s, 279K reads/s (M2 Mac)
Standalone mode - separate server process with gRPC:
d-engine = { version = "0.2", features = ["client"], default-features = false }
- Language-agnostic (Go, Python, Java, Rust)
- Independent scaling
- Benchmark: 64K writes/s, 12K reads/s
Choose based on your language and latency requirements.
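For the standalone path, connecting from Rust might look roughly like this. This is a hypothetical sketch: the `d_engine::Client` path, `connect` method, and address are assumptions, not the documented API; see docs.rs/d-engine for the real client interface.

// Hypothetical sketch; type path, method names, and address are assumptions.
use d_engine::Client;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Connect to a standalone d-engine server over gRPC.
    let client = Client::connect("http://127.0.0.1:9081").await?;
    client.put(b"key".to_vec(), b"value".to_vec()).await?;
    Ok(())
}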
Benchmarks
Embedded mode (3-node cluster, M2 Mac localhost):
- 203K writes/s, 279K reads/s (linearizable)
Standalone mode (gRPC):
- 64K writes/s, 12K reads/s (linearizable)
Lab numbers only; production performance varies by workload and hardware.
Full details in benches/.
Design: Traits for Extension
d-engine provides working implementations (RocksDB storage, KV operations, gRPC transport). When defaults don't fit, implement the traits:
// Storage is split into a log store and a metadata store, each pluggable.
pub trait StorageEngine {
    type LogStore: LogStore;
    type MetaStore: MetaStore;
}

// Your state machine receives committed entries, in order.
pub trait StateMachine {
    async fn apply_chunk(&self, entries: Vec<Entry>) -> Result<()>;
}
Examples in the repo show a Sled storage backend and custom HTTP handlers, with HAProxy for HA deployments.
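To give a feel for the extension point, here is a sketch of a toy in-memory state machine written against the trait shape above; the `Entry` fields and `Result` alias are assumptions for illustration.

use std::collections::HashMap;
use std::sync::Mutex;

// Assumed shapes for illustration; d-engine's real types differ in detail.
pub struct Entry { pub key: Vec<u8>, pub value: Vec<u8> }
pub type Result<T> = std::result::Result<T, Box<dyn std::error::Error>>;

pub trait StateMachine {
    async fn apply_chunk(&self, entries: Vec<Entry>) -> Result<()>;
}

// Toy in-memory KV: apply committed entries in order.
pub struct MemKv { map: Mutex<HashMap<Vec<u8>, Vec<u8>>> }

impl StateMachine for MemKv {
    async fn apply_chunk(&self, entries: Vec<Entry>) -> Result<()> {
        let mut map = self.map.lock().unwrap();
        for e in entries {
            map.insert(e.key, e.value); // replay-safe: inserts are idempotent
        }
        Ok(())
    }
}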
Consistency Model
Three read policies (configurable per-operation or server-wide):
- LinearizableRead: Strongest guarantee, verifies quorum (~2ms)
- LeaseRead: Leader lease-based, fast path (~0.3ms, requires NTP)
- EventualConsistency: Local read, may be stale (~0.1ms)
Trade-offs documented. Defaults are sane.
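A conceptual sketch of the cost model behind each policy (illustrative only; the function names are placeholders, not d-engine's API):

// Conceptual sketch of what each policy pays; not d-engine's API.
enum ReadPolicy { Linearizable, Lease, Eventual }

async fn read(policy: ReadPolicy, key: &[u8]) -> Option<Vec<u8>> {
    match policy {
        // One quorum round-trip to confirm leadership, then read (~2ms).
        ReadPolicy::Linearizable => {
            confirm_leadership_with_quorum().await;
            read_local(key)
        }
        // Skip the round-trip while the leader's lease holds (~0.3ms);
        // correctness depends on bounded clock drift, hence NTP.
        ReadPolicy::Lease if lease_still_valid() => read_local(key),
        ReadPolicy::Lease => {
            confirm_leadership_with_quorum().await; // lease expired: fall back
            read_local(key)
        }
        // Read whatever this node has (~0.1ms); may lag the leader.
        ReadPolicy::Eventual => read_local(key),
    }
}

async fn confirm_leadership_with_quorum() { /* heartbeat a majority */ }
fn lease_still_valid() -> bool { true } // placeholder
fn read_local(_key: &[u8]) -> Option<Vec<u8>> { None } // placeholder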
Start Simple, Scale When Needed
// Start with 1 node (auto-elected leader)
let engine = EmbeddedEngine::start().await?;
// Scale to 3 nodes later: update config, zero code changes
// See examples/single-node-expansion
No Kubernetes, no complex setup. Just Rust + config file.
What It Is
- Raft consensus implementation
- Pluggable storage (RocksDB, Sled, custom)
- Flexible consistency (linearizable/lease/eventual)
- Production-ready core, API stabilizing toward v1.0
Current State & Direction
Version 0.2
The core Raft engine is solid (1,000+ tests; d-engine 0.1.x was Jepsen-validated).
APIs are stabilizing toward v1.0. Pre-1.0 means breaking changes are possible if they improve the design.
No production users yet – looking for early adopters to validate real-world use cases.
Future Direction:
- Cloud-native storage backends (Cloudflare, AWS, GCP)
- Exploring: etcd-compatible API layer
- Timeline depends on community feedback
Try It
[dependencies]
d-engine = "0.2"
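Putting the earlier snippets together, a minimal embedded-mode program might look like this (the import paths are assumptions; check docs.rs for the exact ones):

use std::time::Duration;
use d_engine::EmbeddedEngine; // import path assumed

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Single node: it elects itself leader, then serves reads and writes.
    let engine = EmbeddedEngine::start().await?;
    engine.wait_ready(Duration::from_secs(5)).await?;

    let client = engine.client();
    client.put(b"hello".to_vec(), b"world".to_vec()).await?;
    Ok(())
}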
- GitHub: https://github.com/deventlab/d-engine
- Docs: https://docs.rs/d-engine
- Examples: https://github.com/deventlab/d-engine/tree/main/examples
If you're building distributed systems in Rust and need consensus, the code is there. Examples show embedded mode, standalone mode, custom storage, HA deployments with HAProxy.
Looking for Early Adopters
I'm looking for Rust developers building distributed systems who have real problems to solve:
- Coordination bottlenecks in your architecture (slow consensus, expensive etcd clusters)
- Specific use cases where strong consistency matters (leader election, distributed locks, metadata stores)
- Production deployments where you need cheap, simple, and reliable coordination
Particularly interested in:
- Teams with real coordination problems (not just kicking the tires)
- Cloud-native scenarios (Cloudflare Workers, AWS Lambda, serverless)
- Cost-sensitive use cases where etcd feels like overkill
What I'm offering:
- Free architecture review for your use case
- Direct access to roadmap planning
- Quick response to GitHub issues/questions
If you have a specific problem: Open an issue with your use case, or reach out directly.
Show me what's broken, what's expensive, what's too complex. Let's see if d-engine can solve it cheaply.
License: MIT or Apache-2.0 | Platforms: Linux, macOS | MSRV: Rust 1.88

Top comments (2)
I am sure that being able to run in single node mode is useful for development, evaluation, and disaster recovery, but do you have benchmarks of 3 node embedded cluster at realistic non-virtual on-prem network latencies?
If standalone beats etcd for all operations, that's a great replacement, but embedded is more important for me.
Also, do you (or plan to) support multi-Raft for leader load balancing?
Thanks for the interest! I ran a quick test with the same configuration (only difference: AWS vs GCE): imgur.com/a/c5uiZGP
This is 3-node embedded cluster across real network (not localhost).
Haven't had time to run standalone mode on the same setup yet.
Re: Multi-Raft for load balancing:
Not currently supported, but I'm interested in exploring it. d-engine is still young and I'm actively listening to community feedback to shape the roadmap - would love to hear your use case and requirements. :-)