# Partitioning & Sharding
LoomCache distributes large datasets automatically using an optimized consistent hash ring, so no single node becomes a bottleneck.
## Consistent Hash Ring
Keys are hashed to one of 16,384 slots, which are then mapped onto physical nodes, decoupling data placement from individual servers.
### Slot Management
Rather than hashing keys directly to nodes, LoomCache hashes them to one of 16,384 virtual slots (same as Redis Cluster). Slots are evenly distributed among the physical nodes in the cluster.
[!NOTE] Because we map to slots instead of nodes, adding or removing a node from the cluster only requires migrating the data for specific slots, drastically minimizing network impact compared to a traditional mod-N hash ring.
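To make the contrast concrete, here is an illustrative sketch (not LoomCache source) comparing how much data moves when a 3-node cluster grows to 4 nodes under naive mod-N placement versus slot-based placement:

```java
// Illustrative sketch, not LoomCache source: when a 3-node cluster grows to
// 4 nodes, naive mod-N placement remaps most keys, while slot-based
// placement only migrates the slots handed to the new node.
public class RemapCost {
    static final int SLOTS = 16384;

    // Fraction of keys whose owner changes under key -> hash % N placement.
    static double modNMoved(int oldN, int newN, int sampleKeys) {
        int moved = 0;
        for (int k = 0; k < sampleKeys; k++)
            if (k % oldN != k % newN) moved++;
        return (double) moved / sampleKeys;
    }

    // Slot-based: a minimal rebalance hands the new node an equal share, so
    // only SLOTS / newN slots (the new node's share) change owner.
    static double slotsMoved(int newN) {
        return (double) (SLOTS / newN) / SLOTS;
    }

    public static void main(String[] args) {
        System.out.printf("mod-N 3->4: %.0f%% of keys move%n",
                100 * modNMoved(3, 4, 120_000));
        System.out.printf("slots 3->4: %.0f%% of slots move%n",
                100 * slotsMoved(4));
    }
}
```

With mod-N, three quarters of all keys change owner during the scale-out; with slot indirection, only the quarter of slots handed to the new node migrates.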
### How Keys Map to Slots
```
slot = MurmurHash3(key) % 16384
```

The ConsistentHashRing maintains the mapping from slots to physical nodes. The client SDK uses the same hash function for smart routing, sending requests directly to the owning node without an extra redirect hop.
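As a runnable illustration of the mapping, the sketch below substitutes `String.hashCode` for MurmurHash3 (the real hash function is not reproduced here) and uses `Math.floorMod` to keep the slot non-negative:

```java
// Sketch of the key -> slot mapping. String.hashCode stands in for
// MurmurHash3 (which LoomCache actually uses); Math.floorMod keeps the
// result non-negative even when the hash value is negative.
public class SlotMapping {
    static final int SLOT_COUNT = 16384;

    static int slotFor(String key) {
        return Math.floorMod(key.hashCode(), SLOT_COUNT);
    }

    public static void main(String[] args) {
        System.out.println("user:42 -> slot " + slotFor("user:42"));
    }
}
```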
### Slot Distribution
| Cluster Size | Slots per Node | Quorum |
|---|---|---|
| 3 nodes | ~5,461 each | 2 |
| 5 nodes | ~3,276 each | 3 |
| 7 nodes | ~2,340 each | 4 |
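The table's values follow directly from an even split of 16,384 slots and a majority quorum. A quick sketch to reproduce them (when the slot count doesn't divide evenly, some nodes take one extra slot, hence the ~ in the table):

```java
// Reproduces the table above: even slot split and majority quorum
// for each cluster size.
public class SlotDistribution {
    static final int SLOTS = 16384;

    // Floor of the even split; some nodes hold one extra slot.
    static int slotsPerNode(int nodes) { return SLOTS / nodes; }

    // Majority quorum: more than half the nodes.
    static int quorum(int nodes) { return nodes / 2 + 1; }

    public static void main(String[] args) {
        for (int n : new int[]{3, 5, 7})
            System.out.printf("%d nodes: ~%,d slots each, quorum %d%n",
                    n, slotsPerNode(n), quorum(n));
    }
}
```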
## Partition Migration (Dynamic Scaling)
Non-blocking state transfer when topologies change
When a cluster scales out, the PartitionMigrationManager negotiates a new layout via Raft and seamlessly streams the slot data in the background, pausing writes to individual slots only for the split-second transfer of ownership.
### Migration Sequence
- New node joins and announces itself via discovery
- PartitionTable computes the optimal slot redistribution
- Migration plan is committed through Raft consensus (all nodes agree)
- Data streams in the background: the source node sends slot data to the target
- Individual slots are briefly paused during the ownership handoff
- Client routing tables are updated; subsequent requests go to the new owner
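The brief per-slot pause during the handoff step could be modeled with a per-slot read/write gate. The sketch below is hypothetical (names and structure are assumptions, not the LoomCache implementation):

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Hypothetical sketch of the per-slot write pause: normal writers share a
// read lock and proceed concurrently, while the migration's final ownership
// handoff briefly takes the write lock, so the slot is quiesced only for the
// handoff, not for the whole background stream.
public class SlotGate {
    private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();

    // Normal writes to the slot proceed concurrently.
    public void write(Runnable mutation) {
        lock.readLock().lock();
        try { mutation.run(); } finally { lock.readLock().unlock(); }
    }

    // Handoff: block new writes, flush the last delta, flip ownership, release.
    public void handoff(Runnable transferOwnership) {
        lock.writeLock().lock();
        try { transferOwnership.run(); } finally { lock.writeLock().unlock(); }
    }
}
```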
## Per-Slot Metrics
LoomCache tracks fine-grained metrics for each hash slot via SlotMetrics:
- Access count — requests hitting this slot
- Key count — number of keys stored in this slot
- Memory usage — bytes consumed by this slot’s data
Use SlotMetrics.getTopSlots(n) to identify hot partitions and rebalance proactively.
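A hypothetical sketch of how such per-slot accounting and a getTopSlots(n) query might look (the real SlotMetrics API may differ; only access counts are modeled here):

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.concurrent.atomic.LongAdder;

// Hypothetical sketch, not the real SlotMetrics: accesses are counted per
// slot with a contention-friendly LongAdder, and the n hottest slots are
// selected by access count.
public class SlotMetricsSketch {
    static final int SLOTS = 16384;
    private final LongAdder[] accesses = new LongAdder[SLOTS];

    public SlotMetricsSketch() {
        for (int i = 0; i < SLOTS; i++) accesses[i] = new LongAdder();
    }

    public void recordAccess(int slot) { accesses[slot].increment(); }

    // Returns the n slots with the highest access counts, hottest first.
    public List<Integer> getTopSlots(int n) {
        List<Integer> slots = new ArrayList<>(SLOTS);
        for (int i = 0; i < SLOTS; i++) slots.add(i);
        slots.sort(Comparator
                .comparingLong((Integer s) -> accesses[s].sum())
                .reversed());
        return slots.subList(0, n);
    }
}
```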
## Client-Side Smart Routing
The RequestRouter in the client SDK uses the same MurmurHash3 function and partition table to route requests directly to the correct node:
```java
// Client internally computes:
int slot = MurmurHash3.hash(key) % 16384;
NodeInfo owner = partitionTable.getOwner(slot);
// The request goes directly to the owner: zero redirects
```

This eliminates the redirect hop that simpler clients require, roughly halving per-request latency for well-distributed workloads.
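Putting the pieces together, a simplified client-side router might look like the following; the type names and constructor are assumptions for illustration (not the SDK's exact API), with `String.hashCode` standing in for MurmurHash3:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of client-side smart routing: the client holds a
// slot -> node table and resolves the owner locally, with no server-side
// redirect. Names here are assumptions, not the real SDK API.
public class SmartRouter {
    static final int SLOTS = 16384;
    private final Map<Integer, String> partitionTable = new HashMap<>();

    // Populate a contiguous, even slot assignment for the given node names.
    public SmartRouter(String... nodes) {
        for (int slot = 0; slot < SLOTS; slot++)
            partitionTable.put(slot, nodes[slot * nodes.length / SLOTS]);
    }

    // String.hashCode stands in for MurmurHash3 here.
    static int slotFor(String key) {
        return Math.floorMod(key.hashCode(), SLOTS);
    }

    // Resolve the owning node locally; the request would go straight there.
    public String ownerOf(String key) {
        return partitionTable.get(slotFor(key));
    }
}
```

In a real deployment the table would be refreshed whenever the cluster commits a new layout, so stale clients converge after a topology change.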