Data Structures
LoomCache natively implements 7 distinct distributed data structures on top of Raft consensus. You aren't just getting a key-value store; you get a toolkit for state coordination, locking, and stream processing. Every operation is linearizable: replicated through the Raft log and persisted to the WAL before acknowledgment.
Map (LoomMap)
A highly optimized distributed key-value store with atomic operations, TTL eviction (LRU/LFU/Random), and entry listeners. Supports cursor-based scanning for large datasets and near-cache integration for sub-millisecond reads.
```java
LoomMap<String, String> users = client.getMap("users");
users.put("alice", "Alice Smith");
String name = users.get("alice"); // near cache → 0.01ms
users.putIfAbsent("bob", "Bob Jones"); // atomic
users.addEntryListener("users", (k, old, v) -> System.out.println("Changed: " + k));
```

Key features: `get`, `put`, `delete`, `contains`, `putIfAbsent`, `replace`, `scan`, `clear`, `size`, `keys`, `values`, entry listeners.
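The near-cache read path can be pictured as a local lookaside map consulted before the replicated store. The sketch below is plain Java (not the LoomCache API): `remote` stands in for the cluster and `near` for the per-client copy.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch of a near cache: reads check a local map first,
// so repeated reads of the same key avoid a network round trip.
public class NearCacheSketch {
    private final Map<String, String> remote = new ConcurrentHashMap<>(); // stands in for the cluster
    private final Map<String, String> near = new ConcurrentHashMap<>();   // local, per-client copy

    public void put(String k, String v) {
        remote.put(k, v);
        near.remove(k); // invalidate the local copy on write
    }

    public String get(String k) {
        String v = near.get(k);
        if (v != null) return v;        // near-cache hit: served locally
        v = remote.get(k);              // miss: fetch from the cluster
        if (v != null) near.put(k, v);  // populate for subsequent reads
        return v;
    }
}
```

The first `get` after a `put` pays the remote round trip; every read after that is served from the local copy until the next invalidation.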
Queue (LoomQueue)
A strictly ordered, multi-consumer FIFO queue. Designed for distributed task distribution with guaranteed ordering across all cluster nodes.
```java
LoomQueue<String> tasks = client.getQueue("tasks");
tasks.offer("process-order-123");
String next = tasks.poll(); // dequeue head
String peek = tasks.peek(); // view without removing
long size = tasks.size();
```

Key features: `offer`, `poll`, `peek`, `size`. Blocks natively on empty polls using virtual threads.
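The blocking-poll pattern can be illustrated with standard `java.util.concurrent` types and a virtual thread (Java 21+). This is a plain-Java sketch of the consumer side, not the LoomQueue API:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Sketch: a virtual thread parks cheaply on an empty queue instead of
// tying up an OS thread, which is why blocking polls scale well.
public class QueueConsumerSketch {
    public static String takeOne(BlockingQueue<String> tasks) {
        try {
            return tasks.take(); // blocks until an element arrives
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return null;
        }
    }

    public static void main(String[] args) throws Exception {
        BlockingQueue<String> tasks = new LinkedBlockingQueue<>();
        Thread consumer = Thread.ofVirtual().start(
            () -> System.out.println("got: " + takeOne(tasks))); // parks while empty
        tasks.offer("process-order-123"); // producer enqueues; consumer wakes
        consumer.join();
    }
}
```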
Topic (LoomTopic)
A pub/sub broadcast mechanism. Fire-and-forget message delivery to all active listeners. Multiple subscribers receive every published message, with filtering support.
```java
LoomTopic<String> events = client.getTopic("events");
events.subscribe("sub-1", msg -> System.out.println("Event: " + msg));
events.publish("user-login");
events.unsubscribe("sub-1");
```

Key features: `publish`, `subscribe`, `unsubscribe`. Messages are broadcast to all active subscribers cluster-wide.
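Broadcast semantics mean every active subscriber sees every published message. A minimal in-process sketch of that fan-out (plain Java, not the LoomTopic API):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Consumer;

// Sketch of pub/sub broadcast: publish fans out to every registered
// handler; unsubscribed IDs stop receiving messages.
public class TopicSketch {
    private final Map<String, Consumer<String>> subs = new ConcurrentHashMap<>();

    public void subscribe(String id, Consumer<String> handler) { subs.put(id, handler); }
    public void unsubscribe(String id) { subs.remove(id); }

    public void publish(String msg) {
        for (Consumer<String> h : subs.values()) h.accept(msg); // deliver to all
    }
}
```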
Set (LoomSet)
A distributed collection ensuring uniqueness. Supports cursor-based scanning for large sets and cardinality queries.
```java
LoomSet<String> tags = client.getSet("tags");
tags.add("java");
tags.add("cache");
boolean exists = tags.contains("java"); // true
long count = tags.card(); // 2
```

Key features: `add`, `remove`, `contains`, `card`, `scan`.
SortedSet (LoomSortedSet)
A scored, ranked collection with O(log N) range queries. Each member has a floating-point score for ordering. Ideal for leaderboards, priority queues, and time-series indexing.
```java
LoomSortedSet board = client.getSortedSet("leaderboard");
board.add("alice", 150.0);
board.add("bob", 200.0);
Long rank = board.rank("alice"); // 0 (ascending, 0-based)
Set<String> top = board.revRange(0, 1); // ["bob", "alice"]
board.incrScore("alice", 100.0); // now 250.0
```

Key features: `add`, `remove`, `score`, `rank`, `revRank`, `range`, `revRange`, `incrScore`, `card`.
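The rank and range semantics can be modeled with an ordered set of (score, member) pairs. The sketch below is plain Java, not the LoomSortedSet API, and assumes the common Redis-style convention of 0-based ascending ranks; LoomCache's exact numbering is an assumption here.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.TreeSet;

// Sketch of scored-set semantics: members ordered by ascending score,
// ties broken by member name; revRange walks from the highest score down.
public class ScoredSetSketch {
    record Entry(double score, String member) {}

    private final Map<String, Double> byMember = new HashMap<>();
    private final TreeSet<Entry> ordered = new TreeSet<>(
        Comparator.comparingDouble(Entry::score).thenComparing(Entry::member));

    public void add(String member, double score) {
        Double old = byMember.put(member, score);
        if (old != null) ordered.remove(new Entry(old, member)); // re-score: drop old position
        ordered.add(new Entry(score, member));
    }

    public long rank(String member) { // 0-based, ascending by score
        long r = 0;
        for (Entry e : ordered) {
            if (e.member().equals(member)) return r;
            r++;
        }
        return -1;
    }

    public List<String> revRange(int start, int stop) { // inclusive, highest score first
        List<String> out = new ArrayList<>();
        int i = 0;
        for (Entry e : ordered.descendingSet()) {
            if (i > stop) break;
            if (i >= start) out.add(e.member());
            i++;
        }
        return out;
    }
}
```

Note how re-adding a member with a new score moves it within the ordering, which is what makes `incrScore`-style updates reshuffle a leaderboard.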
Lock (LoomLock)
Fenced, reentrant distributed locks with monotonically increasing fence tokens. Safe against split-brain scenarios and clock drift. Downstream services can verify fence tokens to reject stale lock holders.
```java
LoomLock lock = client.getLock("critical-section");
long fenceToken = lock.tryLock("client-1", Duration.ofSeconds(10));
if (fenceToken > 0) {
    try {
        // Pass the fence token to downstream services for verification
    } finally {
        lock.unlock("client-1", fenceToken);
    }
}
```

Key features: `tryLock` with timeout and client ID, `unlock` with fence-token verification.
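Why fence tokens matter: a lock holder that stalls (GC pause, network partition) past its lease expiry may still believe it holds the lock. Because tokens increase monotonically, a downstream resource can reject any write carrying a token older than the newest one it has seen. A minimal in-process sketch of both sides (plain Java, not the LoomLock API):

```java
import java.util.concurrent.atomic.AtomicLong;

// Sketch of fence-token verification: the lock service hands out
// monotonically increasing tokens; the downstream resource rejects
// writes from stale holders.
public class FencingSketch {
    private final AtomicLong nextToken = new AtomicLong(); // lock-service side
    private long highestSeen = 0;                          // downstream-resource side

    public long acquire() { return nextToken.incrementAndGet(); }

    public synchronized boolean writeWithToken(long token) {
        if (token < highestSeen) return false; // stale holder: reject the write
        highestSeen = token;
        return true;
    }
}
```

A holder paused past its lease cannot clobber a newer holder's writes: once the resource has seen a later token, every earlier one is refused.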
Counter (LoomAtomicCounter)
A CAS-backed atomic counter ideal for rate limiting, sequence generation, and cluster-wide counting. All increments are linearizable through Raft consensus.
```java
LoomAtomicCounter counter = client.getCounter("request-count");
long val = counter.incrementAndGet(); // 1
counter.increment(5); // now 6
long current = counter.get(); // 6
counter.reset(); // 0
```

Key features: `incrementAndGet`, `increment(n)`, `get`, `reset`.
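The compare-and-set retry loop behind an atomic increment looks like this in plain Java (a local `AtomicLong` stand-in; in LoomCache the CAS is applied through the Raft log rather than in process memory):

```java
import java.util.concurrent.atomic.AtomicLong;

// Sketch of a CAS-backed increment: read the current value, compute the
// next one, and retry if another writer got in between.
public class CasCounterSketch {
    private final AtomicLong value = new AtomicLong();

    public long incrementAndGet(long delta) {
        while (true) {
            long current = value.get();
            long next = current + delta;
            if (value.compareAndSet(current, next)) return next; // lost the race: retry
        }
    }
}
```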
Consistency Guarantees
All 7 data structures share the same consistency model:
| Guarantee | Details |
|---|---|
| Writes | Linearizable — Raft leader replicates to majority, persists to WAL |
| Leader reads | Linearizable — served with valid leader lease |
| Follower reads | Eventual — stale by ≤100ms in default config |
| Durability | All writes fsync’d to disk before acknowledgment |
| Ordering | Strict — all operations ordered by Raft term + index |