PHALANX consensus engine
v1.0.0

architecture

how phalanx moves data from a client request to a committed, replicated state change.

the single-threaded event loop

Phalanx operates on the principle of single-threaded control. While modern hardware is multi-core, consensus is inherently sequential: a state machine that processes one input at a time. By channeling all state mutations through a single select loop, we eliminate the need for in-process locking entirely; no mutex guards the Raft state.

The event loop in node.go multiplexes six distinct signal channels:

signal               source              action
ticker.C             internal ticker     triggers election timeouts or leader heartbeats
grpc.RPCs()          gRPC server         ingests AppendEntries or RequestVote messages
grpc.Proposes()      client API          ingests new commands into the Raft log
grpc.Reads()         client API          triggers quorum check for linearizable reads
responseCh           async gRPC clients  handles callbacks from peer RPC responses
discovery.Events()   gossip mesh         ingests NodeJoin events to trigger config changes
// node.go — the complete event loop
for {
    select {
    case <-ctx.Done():
        return n.shutdown()

    case <-ticker.C:
        n.raft.Tick()
        n.applyCommitted()
        n.persistState()
        n.dispatchMessages()

    case rpc := <-n.grpc.RPCs():
        n.handleRPC(rpc)

    case op := <-n.grpc.Proposes():
        n.handlePropose(op)

    case op := <-n.grpc.Reads():
        n.handleRead(op)

    case resp := <-n.responseCh:
        n.raft.Step(resp)
        n.applyCommitted()
        n.dispatchMessages()

    case event := <-n.discoveryEvents():
        n.handleDiscoveryEvent(event)
    }
}

the pure state machine pattern

raft.go does not know the network exists. It has no imports of net, no time.Now(), no goroutines. When a message is processed via Step(msg), the state machine appends outgoing messages to an internal buffer. The caller (Node) is responsible for the actual wire delivery:

// The Raft state machine produces messages.
// The Node dispatches them over gRPC.
msgs := raft.Messages()
for _, m := range msgs {
    go transport.Send(m)
}

This design allows Phalanx to run 1,000+ consensus rounds in a unit test in under 10ms, as no real time passes and no network overhead exists. The state machine is fully deterministic — given the same sequence of Tick() and Step() calls, it produces identical outputs regardless of wall-clock time.

system topology

[figure] phalanx global mesh — 5-node consensus cluster across 5 continents
  gRPC on :9000 · N=5, Q=3 · tolerates 2 region failures
  Client (CLI) → Node 0 (JNB · Johannesburg) · Node 1 (LHR · London) · Node 2 (ORD · Chicago, LEADER) · Node 3 (SIN · Singapore) · Node 4 (FRA · Frankfurt)
  each node stacks Raft → KV FSM → BadgerDB, joined by a SWIM gossip mesh

data flow

write path (propose)

Client
  → gRPC Propose(data)
  → Node event loop
  → raft.Propose(data)
  → append to leader log (index N)
  → broadcastHeartbeat → AppendEntries to all followers
  → majority ack → commitIndex advances to N
  → applyCommitted() → fsm.Apply(SET key=value)
  → signal pending proposal channel
  → respond to client: success
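
The propose path above can be sketched as follows. The `pending` map keyed by log index, and all field names here, are assumptions for illustration, not Phalanx's actual code; the real loop would also broadcast AppendEntries and wait for majority acks between the two calls:

```go
package main

import "fmt"

// proposal pairs a command with the channel its client waits on.
type proposal struct {
	data string
	done chan uint64 // receives the index at which the command committed
}

type node struct {
	log         []string
	commitIndex int
	pending     map[int]chan uint64 // log index -> waiting client
}

// handlePropose appends to the leader log and parks the client
// until that index commits.
func (n *node) handlePropose(p proposal) {
	n.log = append(n.log, p.data)
	idx := len(n.log) - 1
	n.pending[idx] = p.done
	// real code: broadcastHeartbeat -> AppendEntries to all followers
}

// applyCommitted advances commitIndex (here: over everything, as if
// a majority had already acked) and signals parked proposals.
func (n *node) applyCommitted() {
	for n.commitIndex < len(n.log) {
		idx := n.commitIndex
		// real code: fsm.Apply(n.log[idx]) goes here
		if ch, ok := n.pending[idx]; ok {
			ch <- uint64(idx)
			delete(n.pending, idx)
		}
		n.commitIndex++
	}
}

func main() {
	n := &node{pending: make(map[int]chan uint64)}
	done := make(chan uint64, 1)
	n.handlePropose(proposal{data: "SET key=value", done: done})
	n.applyCommitted()
	fmt.Println(<-done) // prints: 0 (the command's log index)
}
```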

read path (linearizable)

Client
  → gRPC Read(key)
  → Node event loop
  → check: am I leader? (if not → return leader_addr for redirect)
  → HasLeaderQuorum() → verify majority acked this heartbeat round
  → fsm.Get(key)
  → respond to client: value
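
A sketch of the read path's two guards, leader check then quorum check, under assumed names (`hasLeaderQuorum`, `leaderAddr`, the ack counters are all invented for illustration):

```go
package main

import (
	"errors"
	"fmt"
)

type node struct {
	isLeader   bool
	leaderAddr string
	acks       int // followers that acked the current heartbeat round
	peers      int // cluster size, including self
	kv         map[string]string
}

var errRedirect = errors.New("not leader")

// hasLeaderQuorum reports whether a majority (leader plus acking
// followers) confirmed the most recent heartbeat round.
func (n *node) hasLeaderQuorum() bool {
	return n.acks+1 > n.peers/2
}

// handleRead mirrors the linearizable read path: redirect if not
// leader, verify quorum, then serve from the local FSM.
func (n *node) handleRead(key string) (string, error) {
	if !n.isLeader {
		return n.leaderAddr, errRedirect
	}
	if !n.hasLeaderQuorum() {
		return "", errors.New("no quorum: cannot guarantee linearizability")
	}
	return n.kv[key], nil
}

func main() {
	n := &node{isLeader: true, acks: 2, peers: 5,
		kv: map[string]string{"region": "ORD"}}
	// 2 follower acks + the leader itself = 3 of 5: quorum holds.
	v, err := n.handleRead("region")
	fmt.Println(v, err) // prints: ORD <nil>
}
```

The quorum check is what upgrades a local read to a linearizable one: it proves the leader was still the leader within the last heartbeat round, so no newer write can exist elsewhere.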

component boundaries

package      responsibility         knows about
raft/        consensus logic        nothing (pure state machine)
network/     gRPC transport         pb/ types only
storage/     BadgerDB persistence   pb/ types only
fsm/         KV state machine       nothing
discovery/   SWIM gossip            nothing
node.go      event loop glue        everything

every package except node.go is independently testable with zero dependencies on other Phalanx packages. this is by design.
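
As one example of that isolation, a zero-dependency KV state machine in the spirit of fsm/ can be this small; the `Command` encoding below is an assumption for illustration, not Phalanx's wire format:

```go
package main

import "fmt"

// Command is a hypothetical committed log entry handed to the FSM.
type Command struct {
	Op, Key, Value string // Op: "SET" or "DEL"
}

// KV knows nothing about Raft, gRPC, BadgerDB, or time.
type KV struct {
	data map[string]string
}

func NewKV() *KV { return &KV{data: make(map[string]string)} }

// Apply executes one committed command deterministically.
func (s *KV) Apply(c Command) {
	switch c.Op {
	case "SET":
		s.data[c.Key] = c.Value
	case "DEL":
		delete(s.data, c.Key)
	}
}

func (s *KV) Get(key string) (string, bool) {
	v, ok := s.data[key]
	return v, ok
}

func main() {
	// Testable with no network, no clock, no other Phalanx package.
	s := NewKV()
	s.Apply(Command{Op: "SET", Key: "leader", Value: "node2"})
	v, ok := s.Get("leader")
	fmt.Println(v, ok) // prints: node2 true
}
```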