
Data Structures

LoomCache natively implements 7 distinct distributed data structures on top of Raft consensus. You aren’t just getting a key-value store; you get a robust toolkit for state coordination, locking, and stream processing. Every operation is linearizable: replicated through the Raft log and persisted to the write-ahead log (WAL) before acknowledgment.

Distributed Data Structures

7 built-in primitives. Fast, durable, and strictly linearizable.

[Illustration: sample map entries such as "user:101" → { name: "Alice" }, "config:rateLimit" → 5000, and "session:temp" with a 30s TTL]

A highly optimized distributed key-value store with atomic operations, TTL eviction (LRU/LFU/Random), and entry listeners. Supports cursor-based scanning for large datasets and near-cache integration for sub-millisecond reads.

LoomMap<String, String> users = client.getMap("users");
users.put("alice", "Alice Smith");
String name = users.get("alice"); // near cache → 0.01ms
users.putIfAbsent("bob", "Bob Jones"); // atomic
users.addEntryListener("users", (k, old, v) ->
    System.out.println("Changed: " + k));

Key features: get, put, delete, contains, putIfAbsent, replace, scan, clear, size, keys, values, entry listeners.
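To make the TTL behavior above concrete, here is a minimal single-node sketch of TTL semantics: each entry carries an expiry deadline, and reads treat expired entries as absent (lazy eviction). This is an illustration of the idea, not the LoomCache engine; the class name and the explicit time parameter are assumptions made so the behavior is deterministic.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical single-node sketch of TTL semantics: each entry records an
// expiry deadline and reads treat expired entries as absent (lazy eviction).
// Time is passed in explicitly so the behavior is deterministic to test.
class TtlMapSketch {
    private record Entry(String value, long expiresAtMillis) {}
    private final Map<String, Entry> store = new HashMap<>();

    void put(String key, String value, long ttlMillis, long nowMillis) {
        store.put(key, new Entry(value, nowMillis + ttlMillis));
    }

    String get(String key, long nowMillis) {
        Entry e = store.get(key);
        if (e == null) return null;
        if (nowMillis >= e.expiresAtMillis) { // past the deadline: evict lazily
            store.remove(key);
            return null;
        }
        return e.value;
    }
}
```

A real cache would pair this lazy check with a background sweep (and an LRU/LFU/Random policy for capacity eviction), but the read-path rule is the same: an expired entry is indistinguishable from an absent one.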

A strictly ordered, multi-consumer FIFO queue. Designed for distributed task distribution with guaranteed ordering across all cluster nodes.

LoomQueue<String> tasks = client.getQueue("tasks");
tasks.offer("process-order-123");
String next = tasks.poll(); // dequeue head
String peek = tasks.peek(); // view without removing
long size = tasks.size();

Key features: offer, poll, peek, size. Blocks natively on empty polls using Virtual Threads.
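The blocking-poll behavior can be sketched locally: a consumer parked on an empty queue wakes as soon as an element arrives. This sketch runs the consumer on a virtual thread (Java 21+), which is cheap to block, the same property the claim above relies on. It is an illustration, not the LoomCache client.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Illustrative local sketch of a blocking poll: the consumer parks on an
// empty queue instead of spinning, and wakes when a producer hands off an
// element. The consumer runs on a virtual thread (Java 21+).
class BlockingPollSketch {
    static String pollBlocking() {
        BlockingQueue<String> tasks = new LinkedBlockingQueue<>();
        StringBuilder seen = new StringBuilder();
        Thread consumer = Thread.ofVirtual().start(() -> {
            try {
                seen.append(tasks.take()); // parks while the queue is empty
            } catch (InterruptedException ignored) { }
        });
        try {
            Thread.sleep(50);                 // consumer is parked, not spinning
            tasks.offer("process-order-123"); // hand off; the consumer wakes
            consumer.join();
        } catch (InterruptedException e) {
            return "";
        }
        return seen.toString();
    }
}
```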

A pub/sub broadcast mechanism. Fire-and-forget message delivery to all active listeners. Multiple subscribers receive every published message with filtering support.

LoomTopic<String> events = client.getTopic("events");
events.subscribe("sub-1", msg ->
    System.out.println("Event: " + msg));
events.publish("user-login");
events.unsubscribe("sub-1");

Key features: publish, subscribe, unsubscribe. Messages are broadcast to all active subscribers cluster-wide.
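The filtering support mentioned above can be sketched as a predicate attached to each subscription: every publish is offered to every active subscriber, and the predicate drops messages that subscriber is not interested in. The class, record, and `demo` method below are illustrative, not LoomCache API.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;
import java.util.function.Predicate;

// Hypothetical local sketch of topic semantics: every publish is broadcast to
// every active subscriber, and a per-subscriber predicate filters messages.
class TopicSketch {
    private record Sub(Predicate<String> filter, Consumer<String> handler) {}
    private final List<Sub> subs = new ArrayList<>();

    void subscribe(Predicate<String> filter, Consumer<String> handler) {
        subs.add(new Sub(filter, handler));
    }

    void publish(String msg) { // fire-and-forget broadcast
        for (Sub s : subs) {
            if (s.filter.test(msg)) s.handler.accept(msg);
        }
    }

    static String demo() {
        TopicSketch t = new TopicSketch();
        StringBuilder userEvents = new StringBuilder();
        t.subscribe(m -> m.startsWith("user-"), userEvents::append);
        t.publish("user-login");  // matches the filter, delivered
        t.publish("system-gc");   // filtered out
        return userEvents.toString();
    }
}
```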

A distributed collection that enforces element uniqueness. Supports cursor-based scanning for large sets and cardinality queries.

LoomSet<String> tags = client.getSet("tags");
tags.add("java");
tags.add("cache");
boolean exists = tags.contains("java"); // true
long count = tags.card(); // 2

Key features: add, remove, contains, card, scan.

Scored, ranked collection with O(log N) range queries. Each member has a floating-point score for ordering. Ideal for leaderboards, priority queues, and time-series indexing.

LoomSortedSet board = client.getSortedSet("leaderboard");
board.add("alice", 150.0);
board.add("bob", 200.0);
Long rank = board.rank("alice"); // 1
Set<String> top = board.revRange(0, 1); // ["bob", "alice"]
board.incrScore("alice", 100.0); // now 250.0

Key features: add, remove, score, rank, revRank, range, revRange, incrScore, card.
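The O(log N) claim follows from keeping members in a balanced tree keyed by (score, member), alongside a hash map from member to score. The sketch below shows that data-structure idea, not the LoomCache engine; all names are illustrative.

```java
import java.util.Comparator;
import java.util.HashMap;
import java.util.Map;
import java.util.TreeSet;

// Hypothetical sketch of sorted-set ordering: a balanced tree ordered by
// (score, member) gives O(log N) insert, update, and removal, while a hash
// map tracks each member's current score for updates.
class LeaderboardSketch {
    private record Entry(double score, String member) {}
    private final Map<String, Double> scores = new HashMap<>();
    private final TreeSet<Entry> order = new TreeSet<>(
        Comparator.comparingDouble(Entry::score).thenComparing(Entry::member));

    void add(String member, double score) { // O(log N) insert or update
        Double old = scores.put(member, score);
        if (old != null) order.remove(new Entry(old, member));
        order.add(new Entry(score, member));
    }

    void incrScore(String member, double delta) {
        add(member, scores.getOrDefault(member, 0.0) + delta);
    }

    String revRange() { // highest score first
        StringBuilder sb = new StringBuilder();
        for (Entry e : order.descendingSet()) {
            if (sb.length() > 0) sb.append(',');
            sb.append(e.member());
        }
        return sb.toString();
    }

    static String demo() {
        LeaderboardSketch b = new LeaderboardSketch();
        b.add("alice", 150.0);
        b.add("bob", 200.0);
        b.incrScore("alice", 100.0); // alice now 250.0, passes bob
        return b.revRange();
    }
}
```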

Fenced, reentrant distributed locks with monotonically increasing fence tokens. Safe against split-brain scenarios and clock drift. Downstream services can verify fence tokens to reject stale lock holders.

LoomLock lock = client.getLock("critical-section");
long fenceToken = lock.tryLock("client-1", Duration.ofSeconds(10));
if (fenceToken > 0) {
    try {
        // Pass fence token to downstream for verification
    } finally {
        lock.unlock("client-1", fenceToken);
    }
}

Key features: tryLock with timeout and client ID, unlock with fence token verification.
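The downstream verification works because fence tokens increase monotonically: a resource remembers the highest token it has seen and rejects requests carrying an older one, so a stale lock holder (paused by GC or a partition) cannot clobber newer writes. A minimal sketch of that check, with an illustrative resource class that is not LoomCache API:

```java
// Hypothetical downstream-side fencing check: reject any request whose fence
// token is older than the highest token already observed.
class FencedResource {
    private long highestSeen = 0;

    synchronized boolean write(long fenceToken, String data) {
        if (fenceToken < highestSeen) return false; // stale holder: reject
        highestSeen = fenceToken;
        // ... apply `data` to the underlying resource ...
        return true;
    }

    static String demo() {
        FencedResource r = new FencedResource();
        boolean a = r.write(33, "from holder A");     // accepted
        boolean b = r.write(34, "from holder B");     // newer token, accepted
        boolean c = r.write(33, "late write from A"); // stale token, rejected
        return a + "," + b + "," + c;
    }
}
```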

CAS-backed atomic counter ideal for rate limiting, sequence generation, and cluster-wide counting. All increments are linearizable through Raft consensus.

LoomCounter counter = client.getCounter("request-count");
long val = counter.incrementAndGet(); // 1
counter.increment(5); // now 6
long current = counter.get(); // 6
counter.reset(); // 0

Key features: incrementAndGet, increment(n), get, reset.
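The CAS retry loop behind an atomic counter looks like this sketch: read the current value, attempt to swap in the incremented value, and retry if another writer got there first. In LoomCache the same idea runs through the Raft log rather than a local `AtomicLong`; this class is a local illustration only.

```java
import java.util.concurrent.atomic.AtomicLong;

// Local sketch of a compare-and-set retry loop: the increment only succeeds
// if no concurrent writer changed the value between the read and the swap.
class CasCounterSketch {
    private final AtomicLong value = new AtomicLong();

    long incrementAndGet(long delta) {
        while (true) {
            long current = value.get();
            long next = current + delta;
            if (value.compareAndSet(current, next)) return next; // CAS won
            // else: lost the race, retry against the fresh value
        }
    }

    static long demo() {
        CasCounterSketch c = new CasCounterSketch();
        c.incrementAndGet(1); // 1
        c.incrementAndGet(5); // 6
        return c.incrementAndGet(0);
    }
}
```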

All 7 data structures share the same consistency model:

Writes: Linearizable — Raft leader replicates to majority, persists to WAL
Leader reads: Linearizable — served with valid leader lease
Follower reads: Eventual — stale by ≤100ms in default config
Durability: All writes fsync’d to disk before acknowledgment
Ordering: Strict — all operations ordered by Raft term + index