
Networking Layer

Speed is nothing without a frictionless delivery mechanism. LoomCache handles inter-node communication and client-server traffic through a highly optimized TCP layer built on Java NIO2 and Virtual Threads.

Rather than communicating over HTTP or REST with JSON, LoomCache uses a compact custom binary protocol with 101 distinct message types.

Wire Protocol Pipelining

Zero-copy serialization with batched pipelining over persistent TCP connections.

CLIENT ──[OP|BIN][OP|BIN][OP|BIN]──▶ SERVER
  1. Header Structure: Every packet begins with a fixed-length header declaring the payload size and operation code (OP-code). Because the receiver always knows exactly how many bytes follow, it can frame messages without scanning ahead in the stream.
  2. Kryo 5.6.2 Serialization: Map entries and keys are serialized via Kryo’s high-performance binary format with object pooling to minimize GC pressure.
  3. Pipelining: Multiple commands can be flushed rapidly on the same connection (batch size configurable, default 100), reducing round-trip latency by 10–100x.
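The length-prefixed framing described above can be sketched as follows. The field layout here (4-byte payload length, 2-byte OP-code) is illustrative only, not LoomCache's actual wire format:

```java
import java.nio.ByteBuffer;

// Hypothetical frame layout: 4-byte payload length, 2-byte OP-code, payload.
public class Frame {
    public static final int HEADER_SIZE = 6;

    public static ByteBuffer encode(short opCode, byte[] payload) {
        ByteBuffer buf = ByteBuffer.allocate(HEADER_SIZE + payload.length);
        buf.putInt(payload.length); // receiver knows exactly how many bytes follow
        buf.putShort(opCode);
        buf.put(payload);
        return buf.flip();
    }

    // Returns the payload once the buffer holds a complete frame, else null.
    public static byte[] tryDecode(ByteBuffer buf) {
        if (buf.remaining() < HEADER_SIZE) return null;
        buf.mark();
        int len = buf.getInt();
        short opCode = buf.getShort();           // dispatched on by the real server
        if (buf.remaining() < len) {
            buf.reset();                         // incomplete frame: wait for more bytes
            return null;
        }
        byte[] payload = new byte[len];
        buf.get(payload);
        return payload;
    }
}
```

Because the header is fixed-length, the decoder never has to guess where one pipelined frame ends and the next begins.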
Category          Count   Examples
Data operations   30+     MAP_PUT, QUEUE_OFFER, SET_ADD
Raft consensus    15+     APPEND_ENTRIES, REQUEST_VOTE, PRE_VOTE
Cluster mgmt      10+     JOIN, LEAVE, PARTITION_MIGRATE
Client control    10+     CONNECT, REDIRECT, HEARTBEAT
RESP compat       52      GET, SET, DEL, LPUSH, SUBSCRIBE

LoomCache also supports 52 RESP (Redis Serialization Protocol) commands, enabling compatibility with existing Redis clients and tools. The RespServer handles decoding via RespDecoder, command routing via RespCommandHandler, and response encoding via RespEncoder.
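RESP itself is a published format: a command is an array of bulk strings, each prefixed with its byte length. A minimal request-side encoder (illustrative only, not LoomCache's actual RespEncoder) looks like:

```java
public class Resp {
    // Encode a command as a RESP array of bulk strings, e.g.
    // GET key  ->  *2\r\n$3\r\nGET\r\n$3\r\nkey\r\n
    public static String encodeCommand(String... parts) {
        StringBuilder sb = new StringBuilder("*").append(parts.length).append("\r\n");
        for (String p : parts) {
            sb.append('$').append(p.getBytes().length).append("\r\n")
              .append(p).append("\r\n");
        }
        return sb.toString();
    }
}
```

Any client that speaks this framing (redis-cli, Jedis, Lettuce) can talk to the RespServer without modification.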

Networking logic is separated via specialized execution arenas:

  • TcpServer — Virtual thread per connection. Blocks freely on I/O; JVM unmounts carrier thread automatically.
  • CommandExecutorPool — Dedicated pool for state machine operations. Prevents slow clients from backing up consensus.
  • BackpressureController — Watermark-based flow control. When the command queue fills, new connections are back-pressured rather than dropped.
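The thread-per-connection model leans on the standard JDK 21 virtual-thread executor. A minimal demonstration of the generic pattern (not LoomCache's actual TcpServer) shows that submitted tasks run on virtual threads, so blocking I/O parks the virtual thread and frees its carrier:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class VirtualThreadDemo {
    // One virtual thread per task; blocking inside the task does not
    // pin a platform thread (the JVM unmounts the carrier).
    public static boolean runsOnVirtualThread() throws Exception {
        try (ExecutorService pool = Executors.newVirtualThreadPerTaskExecutor()) {
            Future<Boolean> f = pool.submit(() -> Thread.currentThread().isVirtual());
            return f.get();
        }
    }
}
```

In a server, each accepted socket would get one such task, and reads/writes can block freely without a hand-rolled async state machine.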

Each connection is tracked by a ConnectionContext containing:

  • Authentication state and RBAC roles
  • Command sequence numbers (FIFO verification)
  • Per-connection metrics
  • Circuit breaker state for the remote peer
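A minimal sketch of such a per-connection context follows; the field and method names here are assumptions for illustration, not LoomCache's actual ConnectionContext API:

```java
import java.util.HashSet;
import java.util.Set;

// Illustrative sketch only; names are hypothetical.
public class ConnectionContextSketch {
    public boolean authenticated;
    public Set<String> roles = new HashSet<>(); // RBAC roles granted after auth
    public long commandsProcessed;              // per-connection metric
    public boolean circuitOpen;                 // circuit breaker for the remote peer
    private long nextSequence;                  // expected next command (FIFO check)

    // Reject out-of-order commands; advance the counter on a match.
    public boolean acceptSequence(long seq) {
        if (seq != nextSequence) return false;
        nextSequence++;
        commandsProcessed++;
        return true;
    }
}
```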

LoomCache implements watermark-based backpressure via the BackpressureController:

Watermark          Behavior
Below low mark     Normal processing
Between marks      New connections throttled
Above high mark    New connections rejected with BackpressureException

This prevents a thundering herd of clients from overwhelming the command queue and starving Raft replication.
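The watermark logic above can be sketched as follows; the thresholds, class, and method names are illustrative rather than the actual BackpressureController API:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Minimal watermark sketch: decisions depend only on current queue depth.
public class Watermarks {
    public enum Action { ACCEPT, THROTTLE, REJECT }

    private final int low, high;
    private final AtomicInteger queued = new AtomicInteger();

    public Watermarks(int low, int high) {
        this.low = low;
        this.high = high;
    }

    public Action onNewConnection() {
        int depth = queued.get();
        if (depth >= high) return Action.REJECT;   // above high mark
        if (depth >= low)  return Action.THROTTLE; // between marks
        return Action.ACCEPT;                      // below low mark
    }

    public void enqueue() { queued.incrementAndGet(); }
    public void dequeue() { queued.decrementAndGet(); }
}
```

Two thresholds instead of one give the system hysteresis: load must climb well past "busy" before connections are refused outright.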

Production deployments demand zero-trust encryption. LoomCache natively supports mutual TLS (mTLS) for inter-node cluster communication. Each node validates the identity of its peers before participating in Raft elections or accepting writes.

  • TLSv1.3 by default with TLSv1.2 fallback
  • PKCS12 keystores for identity and trust
  • Per-connection TLS handshake via SslContextFactory
  • Client connections support optional mTLS (requireClientAuth toggle)
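Configuring the JDK's SSLParameters for this posture might look like the sketch below; SslContextFactory's real internals are not shown, and the PKCS12 key/trust store loading is elided (init(null, null, null) falls back to JVM defaults here):

```java
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLParameters;

public class TlsSketch {
    // TLSv1.3 preferred with TLSv1.2 fallback, client certificate required (mTLS).
    public static SSLParameters serverParameters() throws Exception {
        SSLContext ctx = SSLContext.getInstance("TLS");
        ctx.init(null, null, null); // real code loads PKCS12 key/trust stores here
        SSLParameters params = ctx.getDefaultSSLParameters();
        params.setProtocols(new String[] { "TLSv1.3", "TLSv1.2" });
        params.setNeedClientAuth(true); // require a peer certificate (mTLS)
        return params;
    }
}
```

For client-facing listeners, the same setNeedClientAuth flag would be driven by the requireClientAuth toggle instead of hard-coded.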