
Partitioning & Sharding

LoomCache distributes large datasets automatically using a consistent hash ring, ensuring no single node becomes a bottleneck.

Consistent Hash Ring

Keys are hashed to 16,384 slots mapped cleanly across virtual nodes, decoupling data from physical servers.

[Diagram: Hash(Key) routes to an active slot on virtual nodes N-1, N-2, N-3, each mapped to a physical node]

Rather than hashing keys directly to nodes, LoomCache hashes them to one of 16,384 virtual slots (same as Redis Cluster). Slots are evenly distributed among the physical nodes in the cluster.

[!NOTE] Because we map to slots instead of nodes, adding or removing a node from the cluster only requires migrating the data for specific slots, minimizing network impact compared to traditional mod-N hashing, where nearly every key can change owner when N changes.

slot = MurmurHash3(key) % 16384

The ConsistentHashRing maintains the mapping from slots to physical nodes. The client SDK uses this same hash function for smart routing — sending requests directly to the owning node without an extra redirect hop.
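The lookup path can be sketched as follows. This is an illustrative stand-in, not LoomCache's actual `ConsistentHashRing` internals: MurmurHash3 is not in the JDK, so a placeholder hash function is substituted, and the round-robin slot layout is assumed.

```java
public class SlotRouting {
    static final int SLOTS = 16384;

    // Stand-in for MurmurHash3, which is not in the Java standard library.
    static int hash(String key) { return key.hashCode(); }

    // floorMod keeps the slot non-negative even when the hash is negative.
    static int slotFor(String key) { return Math.floorMod(hash(key), SLOTS); }

    // Illustrative layout: slots assigned round-robin across physical nodes.
    static String ownerOf(int slot, String[] nodes) {
        return nodes[slot % nodes.length];
    }

    public static void main(String[] args) {
        String[] nodes = {"node-1", "node-2", "node-3"};
        int slot = slotFor("user:1001");
        System.out.println("slot=" + slot + " owner=" + ownerOf(slot, nodes));
    }
}
```

Because both the server and the client SDK run this same deterministic computation, they agree on slot ownership without any coordination per request.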

| Cluster Size | Slots per Node | Quorum |
| ------------ | -------------- | ------ |
| 3 nodes      | ~5,461 each    | 2      |
| 5 nodes      | ~3,276 each    | 3      |
| 7 nodes      | ~2,340 each    | 4      |
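The table values follow directly from the fixed slot count: slots per node is 16,384 divided by the cluster size, and quorum is a simple majority. A minimal sketch of that arithmetic:

```java
public class SlotMath {
    static final int SLOTS = 16384;

    // Even split of the fixed slot space across physical nodes.
    static int slotsPerNode(int nodes) { return SLOTS / nodes; }

    // Majority quorum: more than half the nodes must agree.
    static int quorum(int nodes) { return nodes / 2 + 1; }

    public static void main(String[] args) {
        for (int n : new int[]{3, 5, 7}) {
            System.out.printf("%d nodes: ~%d slots each, quorum %d%n",
                    n, slotsPerNode(n), quorum(n));
        }
    }
}
```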

Partition Migration (Dynamic Scaling)

Non-blocking state transfer when topologies change

[Diagram: stable cluster — Node 1 owns Slots 1–2, Node 2 owns Slots 3–4, Node 3 owns Slots 5–6]

When a cluster scales out, the PartitionMigrationManager negotiates a new layout via Raft and seamlessly streams the slot data in the background, pausing writes to individual slots only for the split-second transfer of ownership.

  1. New node joins and announces itself via discovery
  2. PartitionTable computes the optimal slot redistribution
  3. Migration plan is committed through Raft consensus (all nodes agree)
  4. Data streams in the background — source node sends slot data to target
  5. Individual slots are briefly paused during the ownership handoff
  6. Client routing tables are updated — subsequent requests go to the new owner
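Step 5 is the only moment a slot blocks writes. A hedged sketch of that handoff, using illustrative names rather than the real PartitionMigrationManager API:

```java
// Illustrative per-slot ownership handoff; not LoomCache's actual classes.
public class SlotHandoff {
    private volatile String owner;
    private volatile boolean paused = false;

    SlotHandoff(String initialOwner) { this.owner = initialOwner; }

    String owner() { return owner; }

    boolean acceptsWrites() { return !paused; }

    // Pause writes only for the duration of the ownership swap itself;
    // the bulk data transfer has already completed in the background.
    synchronized void transferTo(String newOwner) {
        paused = true;       // writes to this slot briefly rejected
        owner = newOwner;    // the actual handoff
        paused = false;      // writes resume, now routed to the new owner
    }

    public static void main(String[] args) {
        SlotHandoff slot = new SlotHandoff("node-1");
        slot.transferTo("node-4");
        System.out.println(slot.owner()); // node-4
    }
}
```

The key property is that only the slot being handed off is affected; every other slot in the cluster serves reads and writes throughout the migration.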

LoomCache tracks fine-grained metrics for each hash slot via SlotMetrics:

  • Access count — requests hitting this slot
  • Key count — number of keys stored in this slot
  • Memory usage — bytes consumed by this slot’s data

Use SlotMetrics.getTopSlots(n) to identify hot partitions and rebalance proactively.
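A hot-slot query along the lines of getTopSlots(n) could be sketched like this; the counter layout and method bodies below are assumptions for illustration, not SlotMetrics' documented internals:

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.stream.Collectors;

// Illustrative per-slot access counters, modeled on the metrics listed above.
public class SlotMetricsSketch {
    private final Map<Integer, Long> accessCount = new ConcurrentHashMap<>();

    // Called on every request that hits a slot.
    void recordAccess(int slot) { accessCount.merge(slot, 1L, Long::sum); }

    // Return the n hottest slots by access count, mirroring getTopSlots(n).
    List<Integer> getTopSlots(int n) {
        return accessCount.entrySet().stream()
                .sorted(Map.Entry.<Integer, Long>comparingByValue().reversed())
                .limit(n)
                .map(Map.Entry::getKey)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        SlotMetricsSketch metrics = new SlotMetricsSketch();
        metrics.recordAccess(7);
        metrics.recordAccess(7);
        metrics.recordAccess(42);
        System.out.println(metrics.getTopSlots(1)); // [7]
    }
}
```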

The RequestRouter in the client SDK uses the same MurmurHash3 function and partition table to route requests directly to the correct node:

// Client internally computes:
int slot = Math.floorMod(MurmurHash3.hash(key), 16384); // floorMod avoids negative slots
NodeInfo owner = partitionTable.getOwner(slot);
// Sends request directly to owner — zero redirects

This eliminates the redirect hop that simpler clients require, saving a full network round trip per request for well-distributed workloads.