Client API Reference

The LoomCache client SDK provides type-safe access to all seven distributed data structures. It is thread-safe, supports automatic retry with exponential backoff, includes a built-in near cache, and tracks the cluster leader transparently.

Client SDK Architecture

Smart routing, near cache, pipelining, and auto-retry built in.

(Diagram: application thread → LIFO connection pool of 5–20 connections → cluster nodes node-1, node-2, node-3.)
```java
LoomClient client = LoomClient.builder()
    .addSeed("127.0.0.1:5701")
    .addSeed("127.0.0.1:5702")
    .requestTimeout(Duration.ofSeconds(10))
    .maxRetries(3)
    .retryBaseDelay(Duration.ofMillis(100))
    .nearCacheEnabled(true)
    .nearCacheTtlMs(60000)
    .nearCacheMaxSize(10000)
    .poolMinSize(5)
    .poolMaxSize(20)
    .poolIdleTimeoutMs(300000)
    .pipelineBatchSize(100)
    .pipelineFlushTimeoutMs(10)
    .build();
client.connect();
```

| Category | Method | Default | Description |
|---|---|---|---|
| Connection | addSeed(host:port) | — | Add cluster bootstrap node |
| Timeouts | requestTimeout(Duration) | 10s | Per-request timeout |
| Retry | maxRetries(int) | 3 | Exponential backoff attempts |
| Retry | retryBaseDelay(Duration) | 100ms | Base delay (5s max, ±25% jitter) |
| Near Cache | nearCacheEnabled(boolean) | false | Enable local caching |
| Near Cache | nearCacheTtlMs(long) | 60000 | TTL for cached entries |
| Near Cache | nearCacheMaxSize(int) | 10000 | Max locally cached keys |
| Pool | poolMinSize(int) | 5 | Min connections per node |
| Pool | poolMaxSize(int) | 20 | Max connections per node |
| Pool | poolIdleTimeoutMs(long) | 300000 | Idle connection eviction (5 min) |
| Pipeline | pipelineBatchSize(int) | 100 | Commands per pipeline flush |
| Pipeline | pipelineFlushTimeoutMs(long) | 10 | Max flush delay (ms) |
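As an illustration of the documented retry parameters (not the SDK's internal code), the per-attempt delay implied by the table above can be computed like this:

```java
import java.util.concurrent.ThreadLocalRandom;

// Sketch of the documented retry schedule: exponential backoff from the
// 100 ms base delay, with +/-25% jitter, capped at 5 seconds.
public class BackoffSketch {
    static final long BASE_DELAY_MS = 100;   // retryBaseDelay
    static final long MAX_DELAY_MS = 5_000;  // documented cap

    public static long delayForAttempt(int attempt) {
        long exp = BASE_DELAY_MS << Math.min(attempt, 20);  // 100, 200, 400, ...
        double jitter = 0.75 + ThreadLocalRandom.current().nextDouble() * 0.5;
        return Math.min((long) (exp * jitter), MAX_DELAY_MS);
    }

    public static void main(String[] args) {
        for (int attempt = 0; attempt < 4; attempt++) {
            System.out.println("attempt " + attempt + ": "
                    + delayForAttempt(attempt) + " ms");
        }
    }
}
```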
```java
try (LoomClient client = LoomClient.builder()
        .addSeed("127.0.0.1:5701")
        .build()) {
    client.connect();
    boolean connected = client.isConnected();
    int nodeCount = client.connectedNodes();
    // Use data structures...
} // close() called automatically
```

All client classes implement AutoCloseable.


LoomMap

A distributed key-value store with atomic operations, TTL, eviction, and entry listeners.

```java
LoomMap<String, String> map = client.getMap("users");
```

| Method | Parameters | Returns | Description |
|---|---|---|---|
| mapGet(K) | key | V \| null | Get value. Checks near cache first. |
| mapPut(K, V) | key, value | V \| null | Store entry. Invalidates near cache. Returns previous value. |
| mapDelete(K) | key | boolean | Remove entry. Returns true if the key existed. |
| mapContains(K) | key | boolean | Check existence. Uses near cache. |
| mapSize() | — | int | Total entries in the map. |
| mapKeys() | — | Set<K> | All keys as a set. |
| mapValues() | — | Collection<V> | All values. |
| mapPutIfAbsent(K, V) | key, value | V \| null | Atomic put if absent. Returns the existing value or null. |
| mapReplace(K, V) | key, newValue | V \| null | Replace if present. Returns the old value. |
| mapClear() | — | void | Remove all entries. |
| mapScan(ScanCursor) | cursor | ScanResult | Cursor-based iteration for large maps. |

```java
map.addEntryListener("users", (key, oldValue, newValue) -> {
    System.out.println("Changed: " + key + " -> " + newValue);
});
```

```java
LoomMap<String, String> users = client.getMap("users");
users.mapPut("user:1", "Alice");
users.mapPut("user:2", "Bob");
String name = users.mapGet("user:1");          // "Alice"
boolean exists = users.mapContains("user:1");  // true
users.mapPutIfAbsent("user:3", "Charlie");     // null (inserted)
users.mapPutIfAbsent("user:3", "Charlie2");    // "Charlie" (no-op)
int total = users.mapSize();                   // 3
```
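The mapScan pattern is cursor-based: each call returns one page plus the cursor for the next. The loop shape below is a hypothetical sketch (ScanCursor and ScanResult shapes are assumed), with an in-memory key list standing in for the server-side map:

```java
import java.util.*;

// Sketch of cursor-based iteration: each scan call returns up to `count`
// keys plus the cursor for the next page; a negative cursor signals the end.
public class ScanSketch {
    static final List<String> KEYS =
            List.of("user:1", "user:2", "user:3", "user:4", "user:5");

    // One "server round-trip": a page of keys and the next cursor.
    public static Map.Entry<Integer, List<String>> scan(int cursor, int count) {
        int end = Math.min(cursor + count, KEYS.size());
        int next = end >= KEYS.size() ? -1 : end;  // -1 = iteration complete
        return Map.entry(next, KEYS.subList(cursor, end));
    }

    public static void main(String[] args) {
        List<String> all = new ArrayList<>();
        int cursor = 0;
        while (cursor >= 0) {                      // client-side scan loop
            Map.Entry<Integer, List<String>> page = scan(cursor, 2);
            all.addAll(page.getValue());
            cursor = page.getKey();
        }
        System.out.println(all);                   // all five keys, paged by 2
    }
}
```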

LoomQueue

A distributed FIFO queue. Elements are inserted at the tail and removed from the head.

```java
LoomQueue<String> tasks = client.getQueue("tasks");
```

| Method | Parameters | Returns | Description |
|---|---|---|---|
| offer(E) | element | boolean | Enqueue at tail. Returns true on success. |
| poll() | — | E \| null | Dequeue from head. Returns null if empty. |
| peek() | — | E \| null | View head without removing. |
| size() | — | long | Current queue length. |

```java
LoomQueue<String> tasks = client.getQueue("tasks");
tasks.offer("process-order-123");
tasks.offer("send-email");
String next = tasks.poll();  // "process-order-123"
String peek = tasks.peek();  // "send-email"
long size = tasks.size();    // 1
```
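A common consumer pattern is to drain the queue until poll() returns null. The sketch below shows the loop shape with a local ArrayDeque standing in for LoomQueue:

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Sketch of a worker drain loop: poll() returns null when the queue is empty,
// so the loop terminates naturally. ArrayDeque stands in for LoomQueue.
public class QueueWorkerSketch {
    public static int drain(Queue<String> tasks) {
        int processed = 0;
        String task;
        while ((task = tasks.poll()) != null) {  // null => queue is empty
            System.out.println("processing " + task);
            processed++;
        }
        return processed;
    }

    public static void main(String[] args) {
        Queue<String> tasks = new ArrayDeque<>();
        tasks.offer("process-order-123");
        tasks.offer("send-email");
        System.out.println("processed " + drain(tasks) + " tasks");
    }
}
```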

LoomSet

A distributed set that guarantees element uniqueness.

```java
LoomSet<String> tags = client.getSet("tags");
```

| Method | Parameters | Returns | Description |
|---|---|---|---|
| add(E) | element | boolean | Add to set. Returns false if already present. |
| remove(E) | element | boolean | Remove from set. |
| contains(E) | element | boolean | Check membership. |
| card() | — | long | Cardinality (size). |
| scan(cursor, count) | cursor, count | ScanResult | Cursor-based iteration. |

```java
LoomSet<String> tags = client.getSet("tags");
tags.add("java");
tags.add("cache");
boolean exists = tags.contains("java");  // true
tags.remove("cache");
long count = tags.card();                // 1
```

LoomSortedSet

A scored, ranked collection with O(log N) range queries.

```java
LoomSortedSet leaderboard = client.getSortedSet("leaderboard");
```

| Method | Parameters | Returns | Description |
|---|---|---|---|
| add(member, score) | member, double | boolean | Add member with score. |
| remove(member) | member | boolean | Remove member. |
| score(member) | member | Double \| null | Get score. |
| rank(member) | member | Long \| null | Rank by ascending score (0-indexed). |
| revRank(member) | member | Long \| null | Rank by descending score. |
| range(start, stop) | long, long | Set<String> | Range by ascending rank. |
| revRange(start, stop) | long, long | Set<String> | Range by descending rank. |
| incrScore(member, delta) | member, double | double | Increment score atomically. |
| card() | — | long | Total members. |

```java
LoomSortedSet board = client.getSortedSet("leaderboard");
board.add("alice", 150.0);
board.add("bob", 200.0);
board.add("charlie", 100.0);
Long rank = board.rank("alice");          // 1 (charlie=0, alice=1)
Double score = board.score("alice");      // 150.0
Set<String> top2 = board.revRange(0, 1);  // [bob, alice]
board.incrScore("alice", 100.0);          // alice now 250.0
```

LoomTopic

Fire-and-forget broadcast messaging to all active subscribers.

```java
LoomTopic<String> events = client.getTopic("events");
```

| Method | Parameters | Returns | Description |
|---|---|---|---|
| publish(message) | T | void | Broadcast to all subscribers. |
| subscribe(id, listener) | subscriberId, Consumer<T> | void | Register a message handler. |
| unsubscribe(id) | subscriberId | void | Remove a handler. |

```java
LoomTopic<String> events = client.getTopic("events");
events.subscribe("sub-1", msg ->
    System.out.println("Received: " + msg)
);
events.publish("user-login");
events.publish("cache-hit");
events.unsubscribe("sub-1");
```

LoomLock

A distributed lock with monotonically increasing fence tokens for linearizable operations, safe against split-brain scenarios.

```java
LoomLock lock = client.getLock("critical-section");
```

| Method | Parameters | Returns | Description |
|---|---|---|---|
| tryLock(clientId, timeout) | String, Duration | long | Acquire the lock. Returns a fence token (> 0) on success, 0 on failure. |
| unlock(clientId, fenceToken) | String, long | void | Release the lock. Must pass the fence token returned by tryLock. |

```java
LoomLock lock = client.getLock("critical-section");
long fenceToken = lock.tryLock("client-1", Duration.ofSeconds(10));
if (fenceToken > 0) {
    try {
        // Pass the fence token to downstream services
        // for linearizable operation verification
    } finally {
        lock.unlock("client-1", fenceToken);
    }
}
```

Fence tokens are monotonically increasing. Downstream services can reject operations with stale fence tokens, preventing split-brain writes.
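On the receiving side, fencing can be enforced by remembering the highest token seen per resource and rejecting anything lower. This validator is an illustrative sketch, not part of the SDK:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of downstream fence-token validation: a write is accepted only if
// its token is at least the highest already seen for that resource, so a
// stale lock holder (e.g. one paused across a lock expiry) is rejected.
public class FencedStore {
    private final Map<String, Long> highestSeen = new ConcurrentHashMap<>();

    public boolean write(String resource, long fenceToken) {
        // merge() makes the compare-and-record atomic per resource key.
        long winner = highestSeen.merge(resource, fenceToken, Math::max);
        return winner == fenceToken;  // false => a newer holder got here first
    }

    public static void main(String[] args) {
        FencedStore store = new FencedStore();
        System.out.println(store.write("doc-1", 5));  // accepted
        System.out.println(store.write("doc-1", 3));  // rejected: stale token
        System.out.println(store.write("doc-1", 7));  // accepted: newer token
    }
}
```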


LoomCounter

A CAS-backed atomic counter for rate limiting and sequence generation.

```java
LoomCounter counter = client.getCounter("request-count");
```

| Method | Parameters | Returns | Description |
|---|---|---|---|
| incrementAndGet() | — | long | Atomic increment; returns the new value. |
| get() | — | long | Current value. |
| reset() | — | void | Reset to zero. |
| increment(n) | long | void | Increment by n. |

```java
LoomCounter counter = client.getCounter("request-count");
long val = counter.incrementAndGet();  // 1
counter.increment(5);                  // now 6
long current = counter.get();          // 6
counter.reset();                       // value is now 0
```
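As a sketch of the rate-limiting use case mentioned above, here is a fixed-window limiter built on an atomic counter. A local AtomicLong stands in for LoomCounter; with the real SDK, incrementAndGet() would hit the cluster and reset() would be called when the window rolls over:

```java
import java.util.concurrent.atomic.AtomicLong;

// Sketch of a fixed-window rate limiter on top of an atomic counter.
public class RateLimiterSketch {
    private final AtomicLong counter = new AtomicLong();  // stand-in for LoomCounter
    private final long limit;
    private final long windowMillis;
    private long windowStart = System.currentTimeMillis();

    public RateLimiterSketch(long limit, long windowMillis) {
        this.limit = limit;
        this.windowMillis = windowMillis;
    }

    public synchronized boolean tryAcquire() {
        long now = System.currentTimeMillis();
        if (now - windowStart >= windowMillis) {  // window rolled over
            counter.set(0);                       // LoomCounter: reset()
            windowStart = now;
        }
        return counter.incrementAndGet() <= limit;  // LoomCounter: incrementAndGet()
    }

    public static void main(String[] args) {
        RateLimiterSketch limiter = new RateLimiterSketch(2, 60_000);
        System.out.println(limiter.tryAcquire());  // true
        System.out.println(limiter.tryAcquire());  // true
        System.out.println(limiter.tryAcquire());  // false: limit reached
    }
}
```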

Asynchronous Operations

All operations are available asynchronously via AsyncLoomClient:

```java
AsyncLoomClient async = new AsyncLoomClient(syncClient);
CompletableFuture<Void> put = async.mapPut("cache", "key", "value");
CompletableFuture<Object> get = async.mapGet("cache", "key");

// Chain operations
async.mapPut("users", "alice", "Alice Smith")
        .thenCompose(v -> async.mapGet("users", "alice"))
        .thenAccept(value -> System.out.println("Got: " + value))
        .join();
```
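The same CompletableFuture machinery also supports fan-out: issue several gets concurrently and join them with allOf(). In this sketch, completedFuture() stands in for async.mapGet(...), which would resolve when its network response arrives:

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.stream.Collectors;

// Sketch of fanning out several async lookups and collecting the results.
public class AsyncFanOut {
    // Stand-in for async.mapGet(...): resolves immediately in this sketch.
    public static CompletableFuture<String> lookup(String key) {
        return CompletableFuture.completedFuture(key.toUpperCase());
    }

    public static void main(String[] args) {
        List<CompletableFuture<String>> futures =
                List.of(lookup("alice"), lookup("bob"), lookup("carol"));

        // allOf() completes when every future has; join() is then non-blocking.
        CompletableFuture<List<String>> all =
                CompletableFuture.allOf(futures.toArray(new CompletableFuture[0]))
                        .thenApply(v -> futures.stream()
                                .map(CompletableFuture::join)
                                .collect(Collectors.toList()));

        System.out.println(all.join());  // [ALICE, BOB, CAROL]
    }
}
```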

Batch Operations

Execute multiple commands atomically:

```java
LoomBatch batch = client.newBatch();
batch.mapPut("users", "alice", "Alice Smith");
batch.mapPut("users", "bob", "Bob Jones");
batch.queueOffer("tasks", "process-batch");
batch.execute(); // All-or-nothing
```

Pipelining

Batch multiple requests into a single network round-trip, reducing per-command latency by roughly 10–100x:

```java
Pipeline pipeline = client.newPipeline();
pipeline.mapPut("cache", "key1", "value1")
        .mapPut("cache", "key2", "value2")
        .mapPut("cache", "key3", "value3")
        .mapGet("cache", "key1");
List<Object> results = pipeline.flush();
// results.get(3) == "value1"
```
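To show how size-triggered flushing works, here is a minimal buffer sketch; the real pipeline additionally flushes after pipelineFlushTimeoutMs, which is omitted here:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of size-triggered pipeline flushing: commands accumulate in a
// buffer and are sent as one "round-trip" when the batch size is reached.
public class PipelineSketch {
    private final List<String> buffer = new ArrayList<>();
    private final int batchSize;
    public int flushes = 0;  // how many round-trips were made

    public PipelineSketch(int batchSize) { this.batchSize = batchSize; }

    public void enqueue(String command) {
        buffer.add(command);
        if (buffer.size() >= batchSize) flush();  // pipelineBatchSize trigger
    }

    public void flush() {
        if (buffer.isEmpty()) return;
        System.out.println("sending batch of " + buffer.size());
        buffer.clear();
        flushes++;
    }

    public static void main(String[] args) {
        PipelineSketch pipeline = new PipelineSketch(3);
        pipeline.enqueue("PUT key1");
        pipeline.enqueue("PUT key2");
        pipeline.enqueue("PUT key3");  // triggers a size-based flush
        pipeline.enqueue("GET key1");
        pipeline.flush();              // explicit flush for the remainder
    }
}
```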

Consistency Guarantees

All operations go through Raft consensus:

  • Writes are linearizable — leader replicates to majority, persists to WAL
  • Linearizable reads — served from leader with valid lease
  • Eventual reads — served from followers (stale by ≤100ms)
  • During elections — client retries every 100ms for up to 15 seconds
  • RESPONSE_REDIRECT — transparent leader re-routing with cached addresses