Technology Stack
LoomCache is built to be blisteringly fast and massively concurrent. To achieve this, it relies on modern Java capabilities rather than heavyweight external dependencies.
Zero Dependency Philosophy
LoomCache core explicitly avoids monolithic frameworks like Spring or heavy standard libraries. The core datastore and clustering engine are written entirely in vanilla Java.
[!TIP] By eliminating dependencies, we radically reduce our security attack surface and ensure a negligible cold-start time.
The minimal dependency footprint:
| Component | Technology | Purpose |
|---|---|---|
| Serialization | Kryo 5.6.2 | High-performance binary serialization |
| Hashing | MurmurHash3 | Consistent hashing for partition routing |
| Networking | Java NIO2 | Non-blocking I/O with virtual threads |
| Consensus | Custom Raft | No ZooKeeper, no etcd dependency |
| Metrics | Micrometer | Prometheus-compatible observability |
No Netty. No Spring (in core). No external cache libraries. The entire server is a single fat JAR with zero transitive dependency conflicts.
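The MurmurHash3-based partition routing from the table above follows the standard consistent-hashing pattern. The sketch below is illustrative only: the `PartitionRing` class is hypothetical, and it uses `String.hashCode()` as a stand-in where LoomCache would use MurmurHash3.

```java
import java.util.SortedMap;
import java.util.TreeMap;

// Illustrative consistent-hash ring for partition routing.
// NOTE: the class name and the use of String.hashCode() as a stand-in
// for MurmurHash3 are assumptions for this sketch, not LoomCache internals.
public class PartitionRing {
    private final TreeMap<Integer, String> ring = new TreeMap<>();
    private final int virtualNodesPerServer;

    public PartitionRing(int virtualNodesPerServer) {
        this.virtualNodesPerServer = virtualNodesPerServer;
    }

    // Place a server at several virtual-node positions so keys spread
    // evenly when servers join or leave the cluster.
    public void addServer(String server) {
        for (int i = 0; i < virtualNodesPerServer; i++) {
            ring.put(hash(server + "#" + i), server);
        }
    }

    // Route a key to the first server clockwise from the key's hash.
    public String route(String key) {
        SortedMap<Integer, String> tail = ring.tailMap(hash(key));
        return tail.isEmpty() ? ring.firstEntry().getValue()
                              : tail.get(tail.firstKey());
    }

    private static int hash(String s) {
        return s.hashCode(); // a real ring would use MurmurHash3 here
    }
}
```

Because the hash function is deterministic, every node in the cluster routes a given key to the same partition without coordination.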
Virtual Threads (Project Loom)
The backbone of LoomCache’s concurrency is Java 25 Virtual Threads.
(Diagram: millions of lightweight virtual threads multiplexed over a few OS carrier threads.)
Traditional OS threads are heavy, requiring roughly a megabyte of stack per thread plus context-switching overhead. Thread-per-request scaling has historically hit a wall at a few thousand concurrent connections.
LoomCache uses the Executors.newVirtualThreadPerTaskExecutor() model:
- Every incoming network connection gets its own extremely lightweight virtual thread.
- Blocking I/O operations (like writing to the TCP socket or reading from the WAL) do not block OS threads. They just yield the virtual thread.
- This allows LoomCache to cleanly handle millions of concurrent connections on standard hardware without complex reactive programming models.
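The per-connection model above can be sketched with a virtual-thread-per-task executor. This is a minimal illustration, not LoomCache code: the `handleAll` helper is hypothetical, and a `Thread.sleep` stands in for blocking socket I/O.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicInteger;

// Minimal sketch of the thread-per-connection model on virtual threads.
// Each submitted task gets its own virtual thread; blocking inside a task
// parks the virtual thread instead of tying up an OS carrier thread.
public class VirtualThreadDemo {
    public static int handleAll(int connections) {
        AtomicInteger completed = new AtomicInteger();
        try (ExecutorService pool = Executors.newVirtualThreadPerTaskExecutor()) {
            for (int i = 0; i < connections; i++) {
                pool.submit(() -> {
                    try {
                        Thread.sleep(10); // stands in for blocking socket I/O
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                    completed.incrementAndGet();
                });
            }
        } // close() waits for all submitted tasks to finish
        return completed.get();
    }
}
```

Note that the handler body is plain blocking code; the JVM scheduler, not the application, handles multiplexing the tasks over carrier threads.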
Why This Matters
| Model | Connections | Complexity | Memory per Connection |
|---|---|---|---|
| OS Threads (one per connection) | ~10,000 | Low (blocking code) | ~1 MB |
| Reactive (Netty, WebFlux) | ~100,000 | Very High (callbacks, reactive chains) | ~10 KB |
| Virtual Threads | ~1,000,000+ | Low (blocking code) | ~1 KB |
LoomCache achieves the scalability of reactive frameworks with the simplicity of traditional blocking code. The JVM handles the scheduling — when a virtual thread blocks on socket.read() or fsync(), the carrier OS thread is instantly freed for other work.
Java 25 Features
LoomCache leverages cutting-edge Java 25 LTS features:
- Virtual Threads — Per-connection lightweight threads
- StructuredTaskScope — 33 adoptions for structured concurrent operations
- ScopedValues — 19 usages for efficient thread-local context propagation
- ReentrantLock — Virtual-thread-compatible synchronization (not synchronized)
- JFR Custom Events — 5 custom flight recorder event types for production profiling
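The preference for ReentrantLock over synchronized in the list above reflects a known virtual-thread constraint: in JDKs prior to 24, a virtual thread that blocked inside a synchronized block pinned its carrier thread, while ReentrantLock parks only the virtual thread. A minimal sketch of the lock-based pattern (the `Counter` class is illustrative, not a LoomCache type):

```java
import java.util.concurrent.locks.ReentrantLock;

// Guarding shared state with ReentrantLock: a waiting virtual thread is
// parked and its carrier OS thread stays free for other work.
// Counter is an illustrative type, not part of LoomCache.
public class Counter {
    private final ReentrantLock lock = new ReentrantLock();
    private long value;

    public void increment() {
        lock.lock();
        try {
            value++; // keep the critical section short
        } finally {
            lock.unlock(); // always release, even on exceptions
        }
    }

    public long value() {
        lock.lock();
        try {
            return value;
        } finally {
            lock.unlock();
        }
    }
}
```

The lock/try/finally shape is the standard idiom; unlike synchronized, it also allows timed and interruptible acquisition via tryLock and lockInterruptibly.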
Project Stats
| Metric | Value |
|---|---|
| Production source files | 278 |
| Total lines of code | 304,000+ |
| Test methods | 11,600+ |
| Test files | 195 |
| Maven modules | 5 |
| Message types | 101 |
| RESP commands | 52 |