zerg is an experimental, low-level TCP server framework built in C# on top of Linux io_uring. It intentionally avoids "magic" abstraction layers and gives the developer direct control over sockets, buffers, queues, and scheduling.
- Author: Diogo Martins
- License: MIT
- Repository: https://github.com/MDA2AV/zerg
- NuGet: https://www.nuget.org/packages/zerg/
- Target Frameworks: .NET 8.0, .NET 9.0, .NET 10.0
Full documentation is available at https://mda2av.github.io/zerg/
| Page | Description |
|---|---|
| Getting Started | Installation, quick start, configuration |
| Architecture | Reactor pattern, io_uring, threading model, connection lifecycle, buffer rings |
| API Reference | Engine, Connection Read/Write, ConnectionPipeReader, ConnectionStream, Configuration |
| Guides | Zero-allocation patterns, buffer management, performance tuning |
| Internals | Memory management, native interop, MPSC/SPSC queues |
- Linux kernel 6.1+ (required for multishot accept/recv, buffer rings, and `DEFER_TASKRUN`)
- .NET 8.0, .NET 9.0, or .NET 10.0 SDK
- liburing (the native shim `liburingshim.so` is bundled in the NuGet package for `linux-x64` and `linux-musl-x64`)
Install from NuGet:

```bash
dotnet add package zerg
```

Or build from source:

```bash
git clone https://github.com/MDA2AV/zerg.git
cd zerg
dotnet build
```

Publish with Native AOT:

```bash
dotnet publish -f net10.0 -c Release /p:PublishAot=true /p:OptimizationPreference=Speed
```

zerg follows a split architecture with one acceptor thread and N reactor threads, each owning its own io_uring instance:
```
                      KERNEL SPACE
 +--------+  TCP   +---------------------------------------+
 |Client 1|------->|                                       |
 |Client 2|------->|  TCP/IP Stack --> Listening Socket    |
 |Client 3|------->|                                       |
 |  ...   |------->|                                       |
 +--------+        +------------------+--------------------+
                                      |
 =====================================|======================
                      USER SPACE      v
                   +--------------------------------------+
                   |           ACCEPTOR THREAD            |
                   |                                      |
                   |  io_uring -- multishot accept        |
                   |  (one SQE -> CQE per new connection) |
                   |                                      |
                   |  for each accepted fd:               |
                   |    setsockopt(fd, TCP_NODELAY)       |
                   |    enqueue to reactor[next++ % N]    |
                   +-----+-----------+-----------+--------+
                         |           |           |
                    lock-free   lock-free   lock-free
                   ConcurrentQ ConcurrentQ ConcurrentQ
                         |           |           |
                         v           v           v
                   +---------+ +---------+ +---------+
                   |REACTOR 0| |REACTOR 1| |REACTOR N|
                   |         | |         | |         |
                   | io_uring| | io_uring| | io_uring|
                   | buf_ring| | buf_ring| | buf_ring|
                   | conn_map| | conn_map| | conn_map|
                   | flush_Q | | flush_Q | | flush_Q |
                   | return_Q| | return_Q| | return_Q|
                   |         | |         | |         |
                   |multishot| |multishot| |multishot|
                   |recv+send| |recv+send| |recv+send|
                   +----+----+ +----+----+ +----+----+
                        |           |           |
                        +-----------+-----------+
                                    |
                                    v
                        Channel<ConnectionItem>
                                    |
                                    v
                         Engine.AcceptAsync()
                                    |
                                    v
                        Application Handlers
                  (ReadAsync --> Write + FlushAsync)
```
The acceptor thread:

- Listens on a TCP socket and accepts new connections via `io_uring` multishot accept
- Distributes accepted connections to reactor threads in round-robin order

Each reactor owns:

- Its own `io_uring` instance for recv/send operations
- A pre-allocated buffer ring for zero-copy receives
- A dictionary of active connections (fd → Connection)
- Lock-free MPSC queues for cross-thread coordination

Design principles:

- No thread contention: each connection belongs to exactly one reactor
- Explicit buffer lifetimes: consumers must return buffers to the kernel after processing
- Allocation-free hot paths: uses unmanaged memory, `ValueTask`, and object pooling
- Multishot operations: a single submission produces multiple completions
See the full Architecture docs for deep dives into the reactor pattern, threading model, connection lifecycle, and buffer rings.
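The acceptor-to-reactor handoff described above can be sketched in plain C#. This is illustrative scaffolding only (the `AcceptorSketch` class and its members are not zerg's internal code): each accepted fd is enqueued to exactly one reactor's lock-free queue in round-robin order, so after dispatch a connection is only ever touched by a single reactor thread.

```csharp
using System;
using System.Collections.Concurrent;

var acceptor = new AcceptorSketch(reactorCount: 3);
Console.WriteLine(acceptor.Dispatch(fd: 10)); // reactor 0
Console.WriteLine(acceptor.Dispatch(fd: 11)); // reactor 1
Console.WriteLine(acceptor.Dispatch(fd: 12)); // reactor 2
Console.WriteLine(acceptor.Dispatch(fd: 13)); // wraps back to reactor 0

// Hypothetical sketch of the round-robin distribution pattern.
class AcceptorSketch
{
    readonly ConcurrentQueue<int>[] _reactorQueues;
    int _next;

    public AcceptorSketch(int reactorCount)
    {
        _reactorQueues = new ConcurrentQueue<int>[reactorCount];
        for (int i = 0; i < reactorCount; i++)
            _reactorQueues[i] = new ConcurrentQueue<int>();
    }

    // Called once per accepted connection (one CQE from multishot accept).
    public int Dispatch(int fd)
    {
        int reactor = _next++ % _reactorQueues.Length;
        _reactorQueues[reactor].Enqueue(fd);
        return reactor;
    }

    // Each reactor drains its own queue at the top of its event loop.
    public bool TryDrain(int reactor, out int fd) =>
        _reactorQueues[reactor].TryDequeue(out fd);
}
```

Because only the owning reactor ever dequeues a given fd, no lock is needed on the consumer side.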
```csharp
using zerg.Engine;
using zerg.Engine.Configs;

var engine = new Engine(new EngineOptions
{
    Port = 8080,
    ReactorCount = 1
});
engine.Listen();

var cts = new CancellationTokenSource();

// Graceful shutdown on Enter key
_ = Task.Run(() =>
{
    Console.ReadLine();
    engine.Stop();
    cts.Cancel();
});

try
{
    while (engine.ServerRunning)
    {
        var connection = await engine.AcceptAsync(cts.Token);
        if (connection is null) continue;

        // Fire-and-forget connection handler
        _ = HandleConnectionAsync(connection);
    }
}
catch (OperationCanceledException)
{
    Console.WriteLine("Server stopped.");
}

static async Task HandleConnectionAsync(Connection connection)
{
    while (true)
    {
        var result = await connection.ReadAsync();
        if (result.IsClosed) break;

        // Get received buffers
        var rings = connection.GetAllSnapshotRingsAsUnmanagedMemory(result);

        // Process data...

        // Return buffers to the kernel
        rings.ReturnRingBuffers(connection.Reactor);

        // Write a response
        connection.Write("HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nOK"u8);
        await connection.FlushAsync();

        connection.ResetRead();
    }
}
```

`EngineOptions` properties:

| Property | Type | Default | Description |
|---|---|---|---|
| `ReactorCount` | `int` | `1` | Number of reactor threads to spawn |
| `Ip` | `string` | `"0.0.0.0"` | IP address to bind to |
| `Port` | `ushort` | `8080` | TCP port to listen on |
| `Backlog` | `int` | `65535` | Listen backlog for pending connections |
| `AcceptorConfig` | `AcceptorConfig` | `new()` | Acceptor thread configuration |
| `ReactorConfigs` | `ReactorConfig[]` | `null` | Per-reactor configurations (auto-filled if null) |
`ReactorConfig` properties:

| Property | Type | Default | Description |
|---|---|---|---|
| `RingFlags` | `uint` | `SINGLE_ISSUER \| DEFER_TASKRUN` | io_uring setup flags |
| `SqCpuThread` | `int` | `-1` | CPU affinity for SQPOLL thread (-1 = kernel decides) |
| `SqThreadIdleMs` | `uint` | `100` | SQPOLL idle timeout before sleeping |
| `RingEntries` | `uint` | `8192` | SQ/CQ size (max in-flight operations) |
| `RecvBufferSize` | `int` | `32768` | Size of each receive buffer in bytes |
| `BufferRingEntries` | `int` | `16384` | Number of pre-allocated recv buffers (must be a power of 2) |
| `BatchCqes` | `int` | `4096` | Max CQEs processed per loop iteration |
| `MaxConnectionsPerReactor` | `int` | `8192` | Max concurrent connections per reactor |
| `CqTimeout` | `long` | `1000000` | Wait timeout in nanoseconds (1 ms) |
| `IncrementalBufferConsumption` | `bool` | `false` | Enable `IOU_PBUF_RING_INC`: each connection gets its own buffer ring; the kernel packs multiple recvs into a single buffer (kernel 6.12+) |
| `ConnectionBufferRingEntries` | `int` | `128` | Buffers per connection when `IncrementalBufferConsumption` is enabled (must be a power of 2) |
`AcceptorConfig` properties:

| Property | Type | Default | Description |
|---|---|---|---|
| `RingFlags` | `uint` | `0` | io_uring setup flags |
| `SqCpuThread` | `int` | `-1` | CPU affinity for SQPOLL thread |
| `SqThreadIdleMs` | `uint` | `100` | SQPOLL idle timeout |
| `RingEntries` | `uint` | `8192` | SQ/CQ size |
| `BatchSqes` | `uint` | `4096` | Max accepts processed per loop iteration |
| `CqTimeout` | `long` | `100000000` | Wait timeout in nanoseconds (100 ms) |
| `IPVersion` | `IPVersion` | `IPv6DualStack` | IPv4, IPv6, or IPv6DualStack |
```csharp
var engine = new Engine(new EngineOptions
{
    Port = 8080,
    ReactorCount = 12,
    ReactorConfigs = Enumerable.Range(0, 12).Select(_ => new ReactorConfig(
        RecvBufferSize: 64 * 1024,
        BufferRingEntries: 32 * 1024,
        CqTimeout: 500_000
    )).ToArray()
});
```

Full config reference: Configuration docs
```csharp
// Create and start
var engine = new Engine(options);
engine.Listen();

// Accept connections
Connection? conn = await engine.AcceptAsync(cancellationToken);

// Shutdown
engine.Stop();
```

`Connection` properties:

| Property | Type | Description |
|---|---|---|
| `ClientFd` | `int` | The OS file descriptor for this connection |
| `Reactor` | `Engine.Reactor` | The reactor that owns this connection |
zerg provides both high-level and low-level read APIs. The core contract is:

- Only one `ReadAsync()` can be outstanding per connection at a time
- After processing data, return buffers to the kernel via `ReturnRing()`
- Call `ResetRead()` to signal readiness for the next read
```
                        READ LIFECYCLE

  await ReadAsync()
        |
        v
  +-----------------+     +----------------------------------+
  |  RingSnapshot   |     |  Option A: High-Level API        |
  |   .IsClosed     |---->|  GetAllSnapshotRingsAs           |
  |   .TailSnapshot |     |    UnmanagedMemory(result)       |
  +--------+--------+     |  .ToReadOnlySequence()           |
           |              +---------------+------------------+
           v                              |
  +-----------------+                     |
  |  Option B:      |                     |
  |  Low-Level API  |                     |
  |  TryGetRing()   |                     |
  |  ring.AsSpan()  |                     |
  +--------+--------+                     |
           |                              |
           v                              v
  +--------------------------------------------------+
  |            Return buffers to kernel              |
  |   rings.ReturnRingBuffers(connection.Reactor)    |
  |                   -- or --                       |
  |   connection.ReturnRing(ring.BufferId)           |
  +------------------------+-------------------------+
                           |
                           v
                 connection.ResetRead()
                           |
                           v
                 await ReadAsync()  <- loop
```
```csharp
// Wait for data
RingSnapshot result = await connection.ReadAsync();
if (result.IsClosed) return; // Connection was closed

// Get all received buffers as UnmanagedMemoryManager[]
var rings = connection.GetAllSnapshotRingsAsUnmanagedMemory(result);

// Create a ReadOnlySequence for easy slicing/parsing
ReadOnlySequence<byte> sequence = rings.ToReadOnlySequence();

// Return all buffers when done
rings.ReturnRingBuffers(connection.Reactor);

// Reset for next read
connection.ResetRead();
```

For fine-grained control, consume buffers one at a time:

```csharp
RingSnapshot result = await connection.ReadAsync();
if (result.IsClosed) return;

// Iterate through individual ring buffers
while (connection.TryGetRing(result.TailSnapshot, out RingItem ring))
{
    ReadOnlySpan<byte> data = ring.AsSpan();
    // Process data...
    connection.ReturnRing(ring.BufferId);
}

connection.ResetRead();
```

`RingSnapshot` properties:

| Property | Type | Description |
|---|---|---|
| `TailSnapshot` | `long` | Snapshot of the receive ring tail at read time |
| `IsClosed` | `bool` | Whether the connection was closed |

`RingItem` properties:

| Property | Type | Description |
|---|---|---|
| `Ptr` | `byte*` | Pointer to the receive buffer |
| `Length` | `int` | Number of bytes received |
| `BufferId` | `ushort` | Kernel buffer ID (used with `ReturnRing()`) |
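The value of converting rings to a `ReadOnlySequence<byte>` is that one logical message can span several discontiguous recv buffers, and `SequenceReader<byte>` parses across the boundary without copying. The sketch below is self-contained and does not use zerg's types: the `Segment` class is illustrative scaffolding standing in for zerg's `RingSegment`, and the two byte arrays simulate a request split across two recv buffers.

```csharp
using System;
using System.Buffers;
using System.Text;

// Simulate "GET / HTTP/1.1\r\n" arriving split across two recv buffers.
var first = new Segment("GET / HT"u8.ToArray(), 0);
var last = first.Append("TP/1.1\r\n"u8.ToArray());
var seq = new ReadOnlySequence<byte>(first, 0, last, last.Memory.Length);

var reader = new SequenceReader<byte>(seq);

// Read up to the first space: the method name (entirely in buffer 1).
reader.TryReadTo(out ReadOnlySequence<byte> method, (byte)' ');
Console.WriteLine(Encoding.ASCII.GetString(method.ToArray()));

// Read up to '\r': target + version, crossing the buffer boundary.
reader.TryReadTo(out ReadOnlySequence<byte> rest, (byte)'\r');
Console.WriteLine(Encoding.ASCII.GetString(rest.ToArray()));

// Minimal multi-segment sequence builder (illustrative, not part of zerg).
class Segment : ReadOnlySequenceSegment<byte>
{
    public Segment(ReadOnlyMemory<byte> memory, long runningIndex)
    {
        Memory = memory;
        RunningIndex = runningIndex;
    }

    public Segment Append(ReadOnlyMemory<byte> memory)
    {
        var next = new Segment(memory, RunningIndex + Memory.Length);
        Next = next;
        return next;
    }
}
```

The same `SequenceReader` code works unchanged whether the data arrived in one buffer or ten.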
For convenience, zerg provides two adapter classes that wrap the low-level ring API:
```csharp
var reader = new ConnectionPipeReader(connection);

while (true)
{
    var result = await reader.ReadAsync();
    if (result.IsCompleted) break;

    var buffer = result.Buffer;
    // Parse buffer...
    reader.AdvanceTo(consumed, examined);
}

reader.Complete();
```

Kernel buffers stay held until `AdvanceTo` releases them (no copies). Supports partial consumption for protocol parsing.
```csharp
await using var stream = new ConnectionStream(connection);

var buf = new byte[4096];
int n;
while ((n = await stream.ReadAsync(buf)) > 0)
{
    // Process buf[..n]
    await stream.WriteAsync(responseBytes);
    await stream.FlushAsync();
}
```

One copy per read. Use when integrating with APIs that require `Stream`.
Full API reference: Connection Read, ConnectionPipeReader, ConnectionStream
```
                       WRITE LIFECYCLE

  connection.Write(data)        connection.GetSpan(size)
        -- or --                int written = Format(span);
  connection.Write(span)        connection.Advance(written);
        |                              |
        +--------------+---------------+
                       v
              Staged in write slab
              (NativeMemory, no GC)
                       |
                       v
           await connection.FlushAsync()
                       |
                       v
            Reactor submits send SQE
            (handles partial sends)
                       |
                       v
            Kernel delivers to client
```
```csharp
connection.Write("HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nOK"u8);
await connection.FlushAsync();
```

Or write directly into the connection's buffer:

```csharp
Span<byte> span = connection.GetSpan(256);
// Write directly into the span...
int bytesWritten = FormatResponse(span);
connection.Advance(bytesWritten);
await connection.FlushAsync();
```

- Write: data is staged in the connection's write buffer
- FlushAsync: signals the reactor to issue a `send` SQE to the kernel
- The reactor handles partial sends automatically (resubmits remaining data)
- The write buffer is reset after the full send completes
Full API reference: Connection Write
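The `GetSpan`/`Advance` pair follows the standard `IBufferWriter<byte>` contract (the repo includes a `Connection.Write.IBufferWriter.cs`), so the same pattern can be exercised against the BCL's `ArrayBufferWriter<byte>`. Below is a self-contained sketch using that stand-in; `FormatResponse` is a hypothetical formatter, not a zerg API.

```csharp
using System;
using System.Buffers;

var writer = new ArrayBufferWriter<byte>();  // stand-in for a connection's write slab

Span<byte> span = writer.GetSpan(256);  // request at least 256 writable bytes
int written = FormatResponse(span);     // fill some prefix of the span
writer.Advance(written);                // commit exactly what was written
Console.WriteLine(writer.WrittenCount); // 27

// Hypothetical formatter: writes directly into the destination span,
// no intermediate string or array allocation.
static int FormatResponse(Span<byte> span)
{
    ReadOnlySpan<byte> payload = "HTTP/1.1 204 No Content\r\n\r\n"u8;
    payload.CopyTo(span);
    return payload.Length;
}
```

Calling `Advance` with the exact byte count is the contract's key rule: committing more than you wrote would expose uninitialized bytes to the sender.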
The repository includes example connection handlers demonstrating different API levels:
Simplest approach. Gets all snapshot rings and processes them as spans. Good starting point for understanding the API.
Examples/ZeroAlloc/Basic/Rings_as_ReadOnlySpan.cs
Same as above but creates a ReadOnlySequence<byte> from the rings, which is useful for SequenceReader<byte> based parsing.
Examples/ZeroAlloc/Basic/Rings_as_ReadOnlySequence.cs
Same zero-copy handler as above but with IORING_SETUP_SQPOLL | IORING_SETUP_SQ_AFF enabled. The kernel spawns a dedicated polling thread per ring that continuously drains the submission queue, eliminating the submit syscall. Trades a CPU core per reactor for lower latency under sustained load.
Examples/ZeroAlloc/SqPoll/SqPollExample.cs
Zero-copy reads via ConnectionPipeReader. Data stays in io_uring kernel buffers until explicitly consumed via AdvanceTo. Supports partial consumption for protocol parsing.
Examples/PipeReader/PipeReaderExample.cs
BCL Stream compatibility via ConnectionStream. Copies received bytes into a managed buffer on each read. Use when integrating with APIs that require Stream.
Examples/Stream/StreamExample.cs
```bash
# Default (PipeReader)
dotnet run --project Examples

# Specific handler
dotnet run --project Examples -- raw
dotnet run --project Examples -- sqpoll
dotnet run --project Examples -- pipereader
dotnet run --project Examples -- stream
```

io_uring is a Linux kernel interface for asynchronous I/O based on shared-memory ring buffers:
```
                            USERSPACE

   Your Code                       Your Handler
  +----------------+              +----------------+
  | prep_recv()    |              | process CQEs   |
  | prep_send()    |              | dispatch by    |
  | prep_accept()  |              | user_data tag  |
  +-------+--------+              +-------^--------+
          | write SQEs                    | read CQEs
          v                               |
  +-----------------------------------------------------+
  |               SHARED MEMORY (mmap'd)                |
  |                                                     |
  |  +---------------------+  +----------------------+  |
  |  |  Submission Queue   |  |  Completion Queue    |  |
  |  |  [SQE][SQE][SQE]..  |  |  [CQE][CQE]..        |  |
  |  +---------------------+  +----------------------+  |
  +-----------------------------------------------------+
          | kernel reads SQ             ^ kernel writes CQ
 ---------|-----------------------------|-----------------
   KERNEL v                             |
  +--------------------------------+
  |         I/O Processing         |
  |     accept / recv / send       |
  +--------------------------------+

  SQE: [opcode][fd][buf/len][user_data][flags]  -> what to do
  CQE: [user_data][res][flags]                  -> what happened
```
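The "dispatch by user_data tag" step works because io_uring echoes the 64-bit `user_data` from each SQE back verbatim in the matching CQE. A reactor can therefore pack an opcode tag and an fd into that value and route each completion without any lookup table. The packing below is an illustrative sketch, not zerg's actual encoding:

```csharp
using System;

// Hypothetical opcode tags carried in the high bits of user_data.
const ulong OpRecv = 1, OpSend = 2, OpAccept = 3;

// Pack an opcode and fd into one 64-bit value for the SQE...
static ulong Tag(ulong op, int fd) => (op << 32) | (uint)fd;

// ...and unpack it again when the CQE comes back.
static (ulong Op, int Fd) Untag(ulong userData) =>
    (userData >> 32, (int)(uint)userData);

ulong userData = Tag(OpRecv, fd: 42);  // stored in the SQE
var (op, fd) = Untag(userData);        // read back from the CQE
Console.WriteLine($"{op} {fd}");       // 1 42
```

With the tag recovered, the CQE handler can switch on the opcode and index straight into the reactor's connection map by fd.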
| Feature | Kernel | Description |
|---|---|---|
| Multishot Accept | 5.19+ | Single submission produces a CQE for every new connection |
| Multishot Recv | 6.0+ | Single submission per connection; kernel fills a buffer from the buffer ring for each packet |
| Provided Buffer Rings | 5.19+ | Pre-registered buffer pool; kernel picks a buffer and returns its ID in the CQE |
| Incremental Buffer Consumption | 6.12+ | Kernel packs multiple recvs into a single buffer at successive offsets, reducing buffer ring pressure (opt-in) |
| SQPOLL | 5.1+ | Kernel thread polls the SQ, eliminating the submit syscall at the cost of a dedicated CPU core (opt-in) |
| SQ_AFF | 5.1+ | Pin the SQPOLL kernel thread to a specific CPU for cache locality |
| SINGLE_ISSUER | 6.0+ | Optimizes for single-thread submission; matches the reactor model (default) |
| DEFER_TASKRUN | 6.1+ | Defers kernel task execution for better async/await integration (default) |
| Batch CQE Processing | 5.1+ | Drain up to 4096 CQEs per loop iteration via peek_batch_cqe + cq_advance |
| Submit-and-Wait | 5.1+ | Combined submit + wait in a single io_uring_enter syscall |
| Async Cancellation | 5.5+ | Cancel in-flight multishot operations by user_data match when connections close |
| Tunable | Increase for... | Decrease for... |
|---|---|---|
| `RecvBufferSize` | Large payloads (fewer syscalls) | Low memory usage, small messages |
| `BufferRingEntries` | Many concurrent connections | Lower memory footprint |

| Tunable | Higher value | Lower value |
|---|---|---|
| `BatchCqes` | Better throughput under load | Lower per-loop latency |

| Tunable | Lower value (e.g. 1 ms) | Higher value (e.g. 100 ms) |
|---|---|---|
| `CqTimeout` | Lower tail latency, higher CPU | Lower CPU usage, higher tail latency |
| Flag | Effect |
|---|---|
| `IORING_SETUP_SQPOLL` | Kernel thread polls SQ; saves syscalls but dedicates a CPU core |
| `IORING_SETUP_DEFER_TASKRUN` | Better for async/await integration (default) |
| `IORING_SETUP_SQ_AFF` | Pin SQPOLL kernel thread to a specific CPU core |
| `IORING_SETUP_SINGLE_ISSUER` | Optimize for single-thread submission (default) |

| Tunable | Effect |
|---|---|
| `IncrementalBufferConsumption = true` | Each connection gets its own buffer ring; the kernel packs multiple recvs into one buffer, reducing buffer ring pressure for small reads (kernel 6.12+) |
| `ConnectionBufferRingEntries` | Number of buffers per connection ring in incremental mode. Memory per connection = entries × `RecvBufferSize` |
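The memory formulas above are worth making concrete. This back-of-envelope sketch assumes each ring entry is one full `RecvBufferSize` allocation (as the configuration reference implies); treat the results as estimates rather than exact accounting of zerg's allocator.

```csharp
using System;

// Shared mode: one buffer ring per reactor.
static long SharedModeBytes(int recvBufferSize, int bufferRingEntries) =>
    (long)recvBufferSize * bufferRingEntries;

// Incremental mode: one ring per connection, so memory scales with connections.
static long IncrementalModeBytes(int recvBufferSize, int entriesPerConn, int connections) =>
    (long)recvBufferSize * entriesPerConn * connections;

// Defaults: 32 KiB buffers x 16384 entries = 512 MiB per reactor (shared mode).
Console.WriteLine(SharedModeBytes(32 * 1024, 16 * 1024) >> 20);      // 512
// Incremental defaults: 32 KiB x 128 entries x 1000 connections = 4000 MiB.
Console.WriteLine(IncrementalModeBytes(32 * 1024, 128, 1000) >> 20); // 4000
```

In other words, shared-mode memory is fixed per reactor, while incremental-mode memory grows linearly with the number of connections, so size `ConnectionBufferRingEntries` with your expected connection count in mind.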
See Performance Tuning and Buffer Management guides for more.
```
zerg/                                      # Core library (NuGet package)
├── zerg.csproj
├── ABI/                                   # Linux system ABI bindings
│   ├── CPU.cs                             # CPU affinity (sched_setaffinity)
│   ├── Kernel.cs                          # Kernel-level utilities
│   ├── LinuxSocket.cs                     # Socket syscall wrappers
│   └── URing.cs                           # io_uring P/Invoke bindings (liburingshim)
├── Connection/                            # Per-connection state and APIs
│   ├── Connection.cs                      # Core connection class
│   ├── Connection.Read.cs                 # Read state, IValueTaskSource, async signaling
│   ├── Connection.Read.HighLevelApi.cs    # Batch read APIs (GetAllSnapshotRings, etc.)
│   ├── Connection.Read.LowLevelApi.cs     # Low-level streaming APIs (TryGetRing)
│   ├── Connection.Write.cs                # Write buffer state
│   ├── Connection.Write.HighLevelApi.cs   # Write + FlushAsync
│   ├── Connection.Write.IBufferWriter.cs  # IBufferWriter<byte> implementation
│   ├── Connection.Write.LowLevelApi.cs    # Low-level write APIs
│   ├── ConnectionStream.cs                # BCL Stream adapter
│   └── ConnectionPipeReader.cs            # PipeReader adapter
├── Engine/                                # Reactor pattern implementation
│   ├── Engine.cs                          # Main coordinator
│   ├── Engine.Config.cs                   # Configuration and thread setup
│   ├── Engine.Acceptor.cs                 # Accept event loop
│   ├── Engine.Acceptor.Listener.cs        # Listener socket setup
│   ├── Engine.Reactor.cs                  # Reactor state and setup (shared mode)
│   ├── Engine.Reactor.Incremental.cs      # Per-connection buffer ring lifecycle (incremental mode)
│   ├── Engine.Reactor.Handle.cs           # CQE dispatch (recv/send/cancel)
│   ├── Engine.Reactor.HandleIncremental.cs              # CQE dispatch for incremental buffer mode
│   ├── Engine.Reactor.HandleSubmitAndWaitCqe.cs         # Two-call submit pattern
│   ├── Engine.Reactor.HandleSubmitAndWaitSingleCall.cs  # Single-call submit pattern
│   └── Configs/
│       ├── EngineOptions.cs               # Top-level engine configuration
│       ├── ReactorConfig.cs               # Per-reactor configuration
│       ├── AcceptorConfig.cs              # Acceptor configuration
│       └── IPVersion.cs                   # IPv4 / IPv6 / DualStack enum
├── Utils/                                 # Data structures and helpers
│   ├── RingItem.cs                        # Received buffer metadata (ptr, len, buf_id)
│   ├── ReadResult.cs                      # RingSnapshot struct (read snapshot result)
│   ├── RingSegment.cs                     # ReadOnlySequence segment node
│   ├── WriteItem.cs                       # Write buffer descriptor
│   ├── PinnedByteSequence.cs              # Pinned byte[] as ReadOnlySequence
│   ├── Memory/
│   │   └── MemoryExtensions.cs            # Memory helper extensions
│   ├── ReadOnlySpan/
│   │   └── ReadOnlySpanExtensions.cs      # Span parsing helpers
│   ├── UnmanagedMemoryManager/
│   │   ├── UnmanagedMemoryManager.cs      # Wraps unmanaged ptr as MemoryManager<byte>
│   │   └── UnmanagedMemoryManagerExtensions.cs  # Batch ring -> sequence helpers
│   ├── SingleProducerSingleConsumer/
│   │   └── SpscRecvRing.cs                # Lock-free SPSC ring buffer
│   └── MultiProducerSingleConsumer/
│       ├── MpscIntQueue.cs                # Lock-free MPSC int queue
│       ├── MpscUShortQueue.cs             # Lock-free MPSC ushort queue (buffer returns)
│       ├── MpscUlongQueue.cs              # Lock-free MPSC ulong queue (incremental buffer returns)
│       ├── MpscRecvRing.cs                # MPSC recv ring (reactor -> connection)
│       └── MpscWriteItem.cs               # MPSC write item queue
└── native/                                # Bundled native libraries
    ├── uringshim.c                        # C shim source (wraps liburing)
    ├── uringshim.h                        # C shim header
    ├── liburingshim.so                    # Compiled shared library
    ├── linux-x64/liburingshim.so          # NuGet runtime: glibc
    └── linux-musl-x64/liburingshim.so     # NuGet runtime: musl (Alpine)
```
| Dependency | Version | Purpose |
|---|---|---|
| `Microsoft.Extensions.ObjectPool` | 10.0.2 | Connection object pooling |
| `System.IO.Pipelines` | 9.0.4 | PipeReader adapter (ConnectionPipeReader) |
| `liburingshim.so` | bundled (liburing 2.9) | C shim bridging P/Invoke to liburing |

> Note: The bundled `liburingshim.so` statically links liburing 2.9. This version is required for `IOU_PBUF_RING_INC` (incremental buffer consumption); liburing <= 2.5 silently drops the buffer ring flags. If rebuilding the shim from source, use liburing >= 2.8.
```
  ACCEPTOR THREAD
  +----------------------------------------------------+
  |  io_uring: multishot accept                        |
  |  Accepts connections, sets TCP_NODELAY             |
  |  Distributes FDs round-robin to reactors           |
  +------+---------------+---------------+-------------+
         |               |               |
    ConcurrentQ     ConcurrentQ     ConcurrentQ
    (lock-free)     (lock-free)     (lock-free)
         |               |               |
         v               v               v
  +--------------+ +--------------+ +--------------+
  |  REACTOR 0   | |  REACTOR 1   | |  REACTOR N   |
  |              | |              | |              |
  | Event Loop:  | | Event Loop:  | | Event Loop:  |
  | 1. Drain     | | 1. Drain     | | 1. Drain     |
  |    new FDs   | |    new FDs   | |    new FDs   |
  | 2. Drain     | | 2. Drain     | | 2. Drain     |
  |    buf rets  | |    buf rets  | |    buf rets  |
  | 3. Drain     | | 3. Drain     | | 3. Drain     |
  |    flushes   | |    flushes   | |    flushes   |
  | 4. Process   | | 4. Process   | | 4. Process   |
  |    CQEs      | |    CQEs      | |    CQEs      |
  +------+-------+ +------+-------+ +------+-------+
         |                |                |
         v                v                v
  +--------------+ +--------------+ +--------------+
  |   Handler    | |   Handler    | |   Handler    |
  |   Tasks      | |   Tasks      | |   Tasks      |
  |   (async/    | |   (async/    | |   (async/    |
  |    await)    | |    await)    | |    await)    |
  +--------------+ +--------------+ +--------------+
```
Thread safety guarantees:
- Each connection belongs to exactly one reactor (no cross-thread contention)
- MPSC queues handle all cross-thread communication (lock-free)
- `Volatile.Read`/`Volatile.Write` and `Interlocked` operations enforce correct memory ordering
- Connection pooling uses generation counters to prevent stale access after reuse
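The generation-counter idea mentioned in the guarantees above can be sketched as follows. This is illustrative only (the `PooledConnection` class is not zerg's pooling code): each pooled object carries a generation number, tokens handed out capture the generation at lease time, and recycling bumps the counter so any stale token can be detected instead of silently touching a reused object.

```csharp
using System;

var conn = new PooledConnection();
var (c, gen) = conn.Lease();
Console.WriteLine(c.IsCurrent(gen));  // True  - token still valid
conn.Recycle();                       // returned to pool and reused
Console.WriteLine(c.IsCurrent(gen));  // False - stale token detected

// Hypothetical sketch of generation-counter guarding for pooled objects.
class PooledConnection
{
    public int Generation { get; private set; }

    // Capture (object, generation) when handing the object out.
    public (PooledConnection Conn, int Gen) Lease() => (this, Generation);

    // Reset for reuse: bump the generation so older tokens are rejected.
    public void Recycle() => Generation++;

    // Callers check their captured generation before acting on the object.
    public bool IsCurrent(int gen) => gen == Generation;
}
```

In a real pool the bump would use `Interlocked.Increment` so the check is safe across threads; the single-threaded sketch above only shows the invariant.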
MIT License - Copyright (c) 2026 Diogo Martins (MDA2AV)