TSUNAGI Architecture

Deterministic Cardano node infrastructure in Zig

TSUNAGI is an independent Cardano node written in Zig 0.13.0. It compiles to a single static binary with three dependencies: the Zig toolchain, LMDB, and libsodium. The architecture separates concerns into distinct runtime subsystems connected through structured callbacks and bounded in-memory stores.

This page describes the real implemented system as it exists today, including what is built, what is offline-only, and what is not yet implemented.

  Cardano Network (relay peers)
        |
        v
  ┌─────────────────────────────────────────────────┐
  │  Ouroboros Protocol Engine                       │
  │  ChainSync / BlockFetch / TxSubmission           │
  │  KeepAlive / Mini-Protocol Multiplexer           │
  └────────────────────┬────────────────────────────┘
                       |
                       v
  ┌─────────────────────────────────────────────────┐
  │  Block Processing Pipeline                       │
  │  CBOR decode → tx decode → delta extraction      │
  │  shadow verification → state application         │
  └────────┬───────────┬───────────┬────────────────┘
           |           |           |
           v           v           v
  ┌────────────┐ ┌───────────┐ ┌───────────────────┐
  │ LMDB Store │ │ Journals  │ │ Observability     │
  │ UTxO set   │ │ (ndjson)  │ │ Stores            │
  │ undo log   │ │ 4 files   │ │ (ring buffers)    │
  │ coverage   │ │ append    │ │ bounded, in-mem   │
  └────────────┘ └─────┬─────┘ └─────────┬─────────┘
                       |                 |
                       v                 v
              ┌──────────────────────────────────┐
              │ JSON Generation Layer             │
              │ 11 shell scripts                  │
              │ jq-first / sed-fallback           │
              └──────────────┬───────────────────┘
                             |
                             v
              ┌──────────────────────────────────┐
              │ Web Surfaces                      │
              │ operator / explorer / status      │
              │ labs / decode / network           │
              └──────────────────────────────────┘
Figure 1 — TSUNAGI system overview showing data flow from network peers through the block pipeline to persistence, journals, observability stores, JSON generation, and web surfaces.
Runtime Core
Ouroboros Protocol Engine (Network)
Full Ouroboros NodeToNode protocol implementation. ChainSync for header tracking, BlockFetch for block retrieval, TxSubmission for mempool interaction, and KeepAlive for connection liveness. A mini-protocol multiplexer manages concurrent protocol sessions over a single TCP connection.
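To make the multiplexer concrete, here is a minimal Python sketch of demultiplexing one mux segment. The header layout (4-byte timestamp, 2-byte mini-protocol number with a mode bit, 2-byte length) and the protocol numbers follow the public Ouroboros network specification; treat both as assumptions rather than TSUNAGI internals, whose implementation is Zig.

```python
import struct

def demux(frame: bytes) -> dict:
    """Split one mux SDU into header fields and payload.
    Header (spec assumption): >IHH = timestamp, protocol+mode, length."""
    ts, proto, length = struct.unpack(">IHH", frame[:8])
    mode = proto >> 15            # 0 = initiator, 1 = responder
    protocol_id = proto & 0x7FFF  # e.g. 2 = ChainSync, 3 = BlockFetch
    return {"timestamp": ts, "mode": mode, "protocol": protocol_id,
            "payload": frame[8 : 8 + length]}

# A ChainSync (protocol 2) segment carrying a 5-byte CBOR payload:
frame = struct.pack(">IHH", 1234, 2, 5) + b"\x82\x00\x80xx"
sdu = demux(frame)
print(sdu["protocol"], len(sdu["payload"]))  # 2 5
```

Because every segment names its mini-protocol, ChainSync, BlockFetch, TxSubmission, and KeepAlive can interleave freely on one TCP connection.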
Block Processing Pipeline (Decode + Extract)
Each block passes through a deterministic pipeline: CBOR decode, per-transaction summary extraction (inputs, outputs, fee, metadata), UTxO delta computation (consumed/produced/net), and state application. A shadow ledger path processes every block independently to verify parity with the primary path. Divergence is detected within one block.
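The delta-computation step above can be sketched as follows. This is an illustration only: the `DecodedTx` shape is hypothetical, and TSUNAGI's actual pipeline is Zig, not Python.

```python
from dataclasses import dataclass

@dataclass
class DecodedTx:
    # Hypothetical stand-in for a per-transaction summary.
    inputs: list   # (txid, index) pairs consumed by this tx
    outputs: list  # output values produced by this tx

def extract_delta(txs: list) -> dict:
    """Per-block UTxO delta: entries consumed, entries produced,
    and the net change in UTxO set size (deterministic, pure)."""
    consumed = sum(len(tx.inputs) for tx in txs)
    produced = sum(len(tx.outputs) for tx in txs)
    return {"consumed": consumed, "produced": produced,
            "net": produced - consumed}

block = [DecodedTx(inputs=[("aa", 0)], outputs=[2_000_000, 500_000]),
         DecodedTx(inputs=[("bb", 1), ("bb", 2)], outputs=[1_000_000])]
print(extract_delta(block))  # {'consumed': 3, 'produced': 3, 'net': 0}
```

A shadow path running the same pure function over the same block must produce an identical delta; any mismatch is flagged within that block.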
LMDB Persistent Storage (Persistence)
Transactional persistent storage with atomic writes. LMDB holds the UTxO set, undo history, and coverage state. Three persistence modes: memory-only for speed, LMDB-native for progressive convergence, LMDB-truth for full verified persistence. Snapshot bootstrap via TSF2 format with Blake2b-256 digest and Ed25519 signature.
Cursor Persistence (Chain Tip State)
The chain tip (slot, block number, tip hash) is persisted to cursor.json after every block. On restart the node resumes from the persisted cursor. The cursor file is the authoritative record of sync progress.
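A minimal sketch of the cursor cycle, under stated assumptions: the JSON field names are illustrative, and the write-temp-then-rename idiom shown here is a common durability pattern, not a confirmed TSUNAGI detail.

```python
import json, os, tempfile

CURSOR_PATH = "cursor.json"  # file name taken from the text above

def save_cursor(slot: int, block_no: int, tip_hash: str) -> None:
    """Persist the chain tip after each block. Writing to a temp file
    and renaming (atomic on POSIX) avoids a torn cursor on crash;
    this idiom is an assumption, not TSUNAGI's documented behavior."""
    fd, tmp = tempfile.mkstemp(dir=".", prefix="cursor.")
    with os.fdopen(fd, "w") as f:
        json.dump({"slot": slot, "block": block_no, "hash": tip_hash}, f)
    os.replace(tmp, CURSOR_PATH)

def load_cursor():
    """On restart, resume from the persisted cursor if present."""
    try:
        with open(CURSOR_PATH) as f:
            return json.load(f)
    except FileNotFoundError:
        return None  # no cursor yet: start from origin

save_cursor(151_241_033, 10_874_002, "ab" * 32)
print(load_cursor()["slot"])  # 151241033
```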
Observability Layer

TSUNAGI observability is passive. The runtime records events to bounded in-memory stores and append-only journal files. Shell scripts extract journal data into JSON endpoints. Web pages poll those endpoints. No observability component modifies runtime behavior.

Journals (Append-Only Event Logs)
Four primary journal files in NDJSON format: journal.ndjson (roll_forward, roll_backward, tx_decode), mempool.ndjson (mempool_tx), slot_observatory.ndjson (slot_observation), and peer_observatory.ndjson (peer_event). A fifth file, confirmation.ndjson, records confirmed transactions.
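The append-only discipline is simple to picture: one JSON object per line, appended and never rewritten. The sketch below is illustrative Python; the field names are assumptions, not TSUNAGI's exact journal schema.

```python
import json, time

def append_event(path: str, event_type: str, **fields) -> None:
    """Append one event as a single JSON line (NDJSON).
    Events are only ever appended, never modified in place."""
    record = {"type": event_type, "ts": int(time.time()), **fields}
    with open(path, "a") as f:
        f.write(json.dumps(record, separators=(",", ":")) + "\n")

append_event("journal.ndjson", "roll_forward", slot=151_241_033)
append_event("journal.ndjson", "tx_decode", txid="cd" * 32, fee=172_000)

# Readers (e.g. the JSON generation scripts) process one
# self-contained JSON object per line:
with open("journal.ndjson") as f:
    events = [json.loads(line) for line in f]
print(events[-1]["type"])  # tx_decode
```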
Stores (Bounded Ring Buffers)
Fixed-capacity in-memory stores with no dynamic allocation. DeltaHistory (200 blocks), BlockDecodeStore (100), TxSummaryHistory (50), MempoolSummaryStore (128), SlotObservatory (512), PeerObservatory (512), ConfirmationTracker (256+256).
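The bounded-store idea can be sketched with a simple ring buffer: the backing array is allocated once and old entries are overwritten in place, so memory never grows with runtime duration. This Python sketch mirrors the concept only; TSUNAGI's stores are Zig with compile-time capacities.

```python
class RingBuffer:
    """Fixed-capacity store: one up-front allocation, overwrite-in-place."""
    def __init__(self, capacity: int):
        self.buf = [None] * capacity  # single allocation, never resized
        self.cap = capacity
        self.count = 0                # total records ever written

    def push(self, item) -> None:
        self.buf[self.count % self.cap] = item
        self.count += 1

    def items(self) -> list:
        """Return retained records oldest-first (at most `cap`)."""
        n = min(self.count, self.cap)
        return [self.buf[i % self.cap]
                for i in range(self.count - n, self.count)]

delta_history = RingBuffer(200)  # DeltaHistory capacity from the text
for block_no in range(250):
    delta_history.push(block_no)
print(len(delta_history.items()), delta_history.items()[0])  # 200 50
```

After 250 pushes the oldest 50 records have been overwritten; the footprint is still exactly 200 slots.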
Mempool Observatory (Block Pipeline)
Transactions submitted through the local TxSubmission path are decoded at submission time: inputs, outputs, fee, metadata, canonical txid. Summaries are recorded to the mempool journal and ring buffer without blocking submission.
Confirmation Tracking (Correlation)
Locally submitted transactions are correlated with block inclusion using exact txid matching (blake2b256 of CBOR tx body). The confirmation tracker records submission time, confirmation time, and the confirming block. Bounded and local-only.
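The txid-matching step can be sketched directly: Python's `hashlib.blake2b` with `digest_size=32` produces the Blake2b-256 digest named above. The correlation structures around it are illustrative assumptions, not TSUNAGI's actual records.

```python
import hashlib

def txid(tx_body_cbor: bytes) -> str:
    """Canonical txid: Blake2b-256 digest of the CBOR tx body."""
    return hashlib.blake2b(tx_body_cbor, digest_size=32).hexdigest()

# Exact-match correlation between local submissions and block inclusion.
body = bytes.fromhex("a400818258200000")   # placeholder CBOR bytes
submitted_at = {txid(body): 1_700_000_000}  # txid -> submission time

def on_block(block_txids, block_no, now):
    """Record a confirmation when a submitted txid appears in a block."""
    return [{"txid": t, "submitted": submitted_at[t],
             "confirmed": now, "block": block_no}
            for t in block_txids if t in submitted_at]

confs = on_block([txid(body)], 10_874_002, 1_700_000_042)
print(confs[0]["confirmed"] - confs[0]["submitted"])  # 42
```

Because matching is exact on the digest, a transaction re-serialized differently by a peer would not be miscounted; only the locally computed txid is tracked.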
Slot Observatory (Consensus)
Records block arrival rhythm across the chain timeline. Each observation captures the inter-block gap (empty slots between consecutive blocks). Passive recording — no influence on consensus.
Peer Observatory (Network)
Captures peer connect/disconnect events at the transport boundary and block arrival events. Only events that the runtime can truthfully attribute are recorded. The TxSubmission protocol is pull-based, so inbound peer transactions are not observable.
JSON Generation & Web Surface

The runtime produces journals and in-memory stores. Shell scripts read these sources and generate static JSON files. Web pages fetch the JSON on a 10-second polling interval. This separation means the web layer has zero coupling to the Zig runtime.

JSON Endpoint          | Source                               | Web Consumer
operator.json          | cursor.json, journal.ndjson          | Operator dashboard
status.json            | cursor.json, journal.ndjson          | Status page
blocks.json            | journal.ndjson, tx_decode.ndjson     | Explorer
delta.json             | journal.ndjson                       | Explorer
decode.json            | tx_decode.ndjson                     | Decode page
network.json           | peer_observatory.ndjson              | Network page, Operator
slot-observatory.json  | slot_observatory.ndjson              | Slot page, Operator
mempool.json           | mempool.ndjson                       | Mempool page, Operator
confirmation.json      | confirmation.ndjson, mempool.ndjson  | Confirmation page, Operator
health.json            | operator.json, network.json, etc.    | Operator dashboard
producer-status.json   | producer-bridge CLI                  | Operator dashboard
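The generation step is a one-way transformation: read journals and cursor, write a static JSON file. The real generators are jq-first shell scripts; this Python sketch shows the equivalent shape, and its field names are assumptions rather than the actual endpoint schema.

```python
import json

# Sample runtime outputs (stand-ins for real cursor.json / journal.ndjson).
with open("cursor.json", "w") as f:
    json.dump({"slot": 151_241_033, "block": 10_874_002}, f)
with open("journal.ndjson", "w") as f:
    f.write('{"type":"roll_forward","slot":151241033}\n')
    f.write('{"type":"roll_backward","slot":151241000}\n')

def generate_status(out_path="status.json"):
    """Illustrative equivalent of a generate-*.sh script: summarize the
    journal + cursor into one static file for the web layer to poll."""
    with open("cursor.json") as f:
        cursor = json.load(f)
    with open("journal.ndjson") as f:
        rollbacks = sum(1 for line in f
                        if json.loads(line).get("type") == "roll_backward")
    with open(out_path, "w") as f:
        json.dump({"tip": cursor, "rollbacks": rollbacks}, f)

generate_status()
with open("status.json") as f:
    print(json.load(f)["rollbacks"])  # 1
```

Because the web pages only ever read these static files, nothing they do can reach back into the runtime.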
Health Engine

A composite health score aggregates five independently scored components into a single 0–100 value. All arithmetic is integer-only with no floating point. The score classifies the node as excellent (≥90), healthy (≥75), degraded (≥60), or critical (<60).

Component | Weight | Signal
Peer      | 25%    | Disconnect/connect ratio from peer observatory
Slot      | 20%    | Average inter-block gap from slot observatory
Block     | 25%    | Rollback rate from block pipeline counters
Confirm   | 15%    | Average confirmation time for locally submitted txs
Mempool   | 15%    | Pending transaction count from mempool tracker
Health Score Design

Missing components receive a neutral score of 80, so the health engine degrades gracefully when observability data is unavailable. The Zig runtime computes health inline; a shell mirror script (generate-health-json.sh) produces the same score from the same inputs for the web layer.
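Putting the weights, the neutral fallback, and the classification thresholds together gives a small integer-only computation. This sketch follows the numbers stated above; the exact rounding and per-component scoring inside TSUNAGI are not reproduced here.

```python
# Weights from the table above (they sum to 100); all integer arithmetic.
WEIGHTS = {"peer": 25, "slot": 20, "block": 25, "confirm": 15, "mempool": 15}
NEUTRAL = 80  # missing components score neutral, per the text

def health(scores: dict) -> tuple:
    """scores maps component -> 0..100; absent components default to 80.
    Returns (composite 0-100, classification)."""
    total = sum(w * scores.get(c, NEUTRAL) for c, w in WEIGHTS.items())
    composite = total // 100  # integer division; weights sum to 100
    if composite >= 90:
        label = "excellent"
    elif composite >= 75:
        label = "healthy"
    elif composite >= 60:
        label = "degraded"
    else:
        label = "critical"
    return composite, label

print(health({"peer": 100, "slot": 90, "block": 95,
              "confirm": 70, "mempool": 85}))  # (90, 'excellent')
print(health({"peer": 100}))  # other four default to 80: (85, 'healthy')
```

Integer-only arithmetic keeps the score bit-for-bit reproducible between the Zig runtime and the shell mirror script.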

Block Pipeline Detail
  Block Body (CBOR bytes)
        |
        v
  ┌─────────────────────┐     ┌─────────────────────┐
  │ Transaction Decode   │     │ Shadow Ledger Path   │
  │ inputs, outputs,     │     │ independent decode   │
  │ fee, metadata        │     │ parity verification  │
  └──────────┬──────────┘     └──────────┬──────────┘
             |                           |
             v                           v
  ┌─────────────────────┐     ┌─────────────────────┐
  │ Delta Extraction     │     │ Shadow Delta Check   │
  │ consumed / produced  │     │ must match primary   │
  │ net UTxO change      │     │ divergence = error   │
  └──────────┬──────────┘     └─────────────────────┘
             |
     ┌───────┼───────┬──────────────┐
     |       |       |              |
     v       v       v              v
  LMDB   journal  delta_history  tx_summary
  state   .ndjson  ring buffer   ring buffer
  apply   append   (cap 200)     (cap 50)
Figure 2 — Block pipeline showing primary and shadow paths, delta extraction, and output sinks.
Producer Readiness

TSUNAGI includes an offline producer readiness harness. This evaluates whether the node would be elected leader for a given slot and assembles a local candidate block, but never broadcasts it. The producer path is entirely offline and local.

Producer Readiness Harness (Evaluation)
Standalone module that evaluates Praos leadership eligibility using E34 fixed-point threshold arithmetic. Checks VRF output against stake-weighted threshold, validates KES key period and expiration, performs Sum6Kes sign + verify round trip, and assembles a local candidate block (header + empty body in CBOR). State contract: READY / NOT_LEADER / NOT_READY.
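The shape of the VRF threshold check can be sketched as a big-integer comparison. Heavy caveats apply: the 10^34 reading of "E34", the 64-byte VRF output width, and the first-order approximation of phi(sigma) = 1 - (1 - f)^sigma are all assumptions for illustration; TSUNAGI's actual fixed-point arithmetic is not reproduced here.

```python
SCALE = 10 ** 34   # assumed meaning of "E34" fixed-point scale
VRF_BITS = 512     # assumes a 64-byte VRF output

def phi_approx(sigma_scaled: int, f_scaled: int) -> int:
    """phi(sigma) = 1 - (1-f)^sigma, approximated by the first-order
    term sigma * f (reasonable for small f). Inputs and output are
    integers scaled by SCALE."""
    return sigma_scaled * f_scaled // SCALE

def is_leader(vrf_output: bytes, sigma_scaled: int, f_scaled: int) -> bool:
    """Eligible when the VRF output, read as an integer, falls below
    the stake-weighted fraction of the full output range."""
    value = int.from_bytes(vrf_output, "big")
    threshold = (1 << VRF_BITS) * phi_approx(sigma_scaled, f_scaled) // SCALE
    return value < threshold

sigma = SCALE // 100  # pool controls 1% of stake
f = SCALE // 20       # active-slot coefficient f = 0.05
print(is_leader(b"\x00" * 64, sigma, f))  # smallest possible output: True
```

The point of the fixed-point formulation is determinism: the same integers produce the same eligibility verdict on every run and every machine.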
Producer Bridge (Runtime Integration)
Maps runtime-shaped inputs (cursor slot, genesis config, ENV-loaded VRF/KES/stake material) to the readiness evaluation module. Pure computation with no file IO, no network IO, no broadcast. CLI command reads cursor.json and environment variables, computes epoch/KES period, and produces a JSON status report.
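The epoch and KES-period derivation is pure integer division over genesis parameters. The divisors below are the mainnet Shelley genesis values; TSUNAGI reads them from genesis.json, and this sketch simplifies by assuming Shelley-era slot numbering from zero (real mainnet slot numbers also carry a Byron-era offset).

```python
EPOCH_LENGTH = 432_000          # slots per epoch (5 days at 1 s/slot)
SLOTS_PER_KES_PERIOD = 129_600  # 1.5 days at 1 s/slot

def derive(slot: int) -> dict:
    """Pure computation from the cursor slot, as the bridge performs
    before building its inputs: no file IO, no network IO."""
    return {"epoch": slot // EPOCH_LENGTH,
            "kes_period": slot // SLOTS_PER_KES_PERIOD,
            "slot_in_epoch": slot % EPOCH_LENGTH}

print(derive(151_241_033))
# {'epoch': 350, 'kes_period': 1166, 'slot_in_epoch': 41033}
```

The KES period then feeds the key-validity check: a KES key signed for period p is rejected once the derived period passes its expiration window.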
Artifact Bundle (Offline Output)
When the readiness evaluation returns READY, the harness can optionally write 5 artifact files to disk: candidate header CBOR, candidate body CBOR, full assembled block CBOR, hash manifest, and the readiness report JSON. These artifacts are written to a local directory and are never transmitted.
  cursor.json ──┐
  genesis.json ─┤
  ENV vars ─────┤
  (VRF, KES,    │
   stake, pool) │
                v
  ┌──────────────────────────────┐
  │ Producer Bridge              │
  │ compute epoch, KES period    │
  │ build BridgeInputs           │
  └──────────────┬───────────────┘
                 |
                 v
  ┌──────────────────────────────┐
  │ Readiness Evaluation         │
  │ VRF threshold check          │
  │ KES validity check           │
  │ sign + verify round trip     │
  │ candidate block assembly     │
  └──────────────┬───────────────┘
                 |
        ┌────────┴────────┐
        v                 v
  NOT_LEADER          READY
  (exit, report)      |
                      v
              ┌──────────────┐
              │ Artifact      │
              │ Bundle (opt)  │
              │ 5 files       │
              │ to local disk │
              └──────────────┘
Figure 3 — Producer readiness evaluation flow. All paths are offline. No network broadcast occurs.
Current Boundaries

TSUNAGI is a working follower node with full observability. Producer capabilities exist as offline evaluation only. The following table shows what is implemented and what is not.

Capability                        | Status              | Notes
ChainSync / BlockFetch            | Live                | Full Ouroboros protocol
TxSubmission                      | Live                | Local mempool submission path
Block decode + delta extraction   | Live                | Deterministic pipeline
LMDB UTxO persistence             | Live                | Three modes, snapshot bootstrap
Shadow ledger verification        | Live                | Per-block parity check
Journals + JSON + web dashboards  | Live                | Full observability pipeline
Health engine                     | Live                | 5-component weighted scoring
Mempool + confirmation tracking   | Live                | Local submission path only
Slot + peer observatories         | Live                | Passive recording
Producer readiness evaluation     | Offline             | CLI command, never broadcasts
Producer bridge                   | Offline             | Reads cursor + ENV, no runtime hook
Artifact bundle                   | Offline             | Writes files to disk only
Live block production             | Not yet implemented |
Block broadcast to peers          | Not yet implemented |
Mainnet block production          | Not yet implemented |
Offline Before Online

Producer features are validated offline before any network activation. The readiness harness, bridge, and artifact bundle exist to prove that the node can correctly evaluate leadership and assemble valid candidate blocks without ever transmitting them. Live block production will be enabled on testnet first, only after offline verification is complete.

Design Principles
Determinism (Core)
Identical inputs produce identical outputs at every stage. The ledger pipeline, delta extraction, and block decode are fully deterministic. No stage introduces randomness or non-reproducible behavior.
Bounded Storage (Memory)
All in-memory observability stores use fixed-capacity ring buffers with no dynamic allocation. Capacities are compile-time constants. The node's memory footprint does not grow with chain length or runtime duration.
Passive Recording (Observability)
Observability is read-only. Journals append, stores record, JSON scripts extract. Nothing in the observability layer modifies block processing, ledger state, or protocol behavior.
Minimal Stack (Dependencies)
Zig 0.13.0, LMDB, and libsodium. No framework, no package manager, no build system beyond Zig's built-in. Single static binary output. Shell scripts use only standard POSIX utilities plus optional jq.
Design Lineage
Conceptual Layer Model

The current subsystem architecture grew from an earlier conceptual model that organized the system as named layers, each with a specific operational role: KAGAMI (diagnostics), YAMORI (guardian monitoring), TATE (fragment defense), KURA (stabilization), and TSURUGI (protocol engine). These names originate from Japanese functional metaphors and reflect the project's design lineage.

The conceptual model shaped the architectural separation of concerns that the current implementation inherits. The named layers remain part of the project's design vocabulary and are described in full in the manifesto. The architecture page documents the system as built.

Let It Run. Let It Resolve.

tsunagi.tech · Independent Cardano infrastructure research