Full node implementation for the Amadeus blockchain network.
This crate provides the amadeusd binary - a complete blockchain node that
participates in consensus, validates transactions, maintains chain state, and
serves the web dashboard.
Build from source:

```sh
git clone https://github.com/amadeusprotocol/rs_node
cd rs_node
cargo build --release --bin amadeusd
```

Binary location: `target/release/amadeusd`
Start a full node:

```sh
cargo node
```

Or with custom configuration:

```sh
UDP_ADDR=0.0.0.0:36969 HTTP_PORT=3000 cargo node
```

- `UDP_ADDR` - P2P network address (default: `127.0.0.1:36969`)
- `HTTP_PORT` - Web dashboard and API port (default: `3000`)
- `UDP_DUMP` - File path to record network traffic for debugging
- `UDP_REPLAY` - File path to replay recorded network traffic
- `RUST_LOG` - Logging level (`debug`, `info`, `warn`, `error`)
- Validates and propagates entries and transactions
- Maintains rooted and temporal chains with BFT consensus
- Processes attestations from validators
- Syncs with peer nodes via catchup protocol
- Persistent RocksDB storage at `~/.amadeusd-rs/fabric/db`
- Transaction pool and mempool management
- Entry and transaction indexing
- Contract state storage
- UDP-based peer-to-peer protocol
- Encrypted and compressed messaging (AES-256-GCM + zstd)
- Reed-Solomon erasure coding for large messages
- Automatic peer discovery and handshake
Access at http://localhost:3000:
- Chain explorer and block viewer
- Transaction history and status
- Peer network visualization
- Contract deployment and interaction
- Wallet management
Programmatic access at http://localhost:3000/api:
- `/api/chain/*` - Chain queries (entries, transactions, state)
- `/api/tx/submit` - Submit signed transactions
- `/api/contract/*` - Contract deployment and calls
- `/api/peer/*` - Peer network information
- `/api/wallet/*` - Wallet operations
- `/api/epoch/*` - Epoch and validator data
- `/api/metrics` - Prometheus metrics
- `/api/health` - Health check endpoint
OpenAPI spec: http://localhost:3000/api/openapi.yaml
Node configuration is stored in `~/.amadeusd-rs/`:

- `config.json` - Node settings and validator identity
- `node.sk` - Secret key for node identity (if validator)
- `fabric/db/` - RocksDB chain database
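The exact `config.json` schema is not documented here; the fragment below is purely illustrative, and every field name in it is an assumption (check the file your node generates for the real settings):

```json
{
  "udp_addr": "127.0.0.1:36969",
  "http_port": 3000,
  "validator": false
}
```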
For wallet management and transaction submission, use the separate CLI tool:

```sh
cargo install amadeus-cli
ama --help
```

See amadeus-cli for details.
Check .cargo/config.toml for command aliases. Environment variables reflect
the original Elixir node settings:
- `UDP_ADDR` - address of the peer, default `127.0.0.1:36969`
- `UDP_DUMP` - file to dump the UDP traffic to
- `UDP_REPLAY` - file to replay the UDP traffic from
- `HTTP_PORT` - port to use for the web UI
Run the full test suite:

```sh
cargo test-all
```

Note: some KV tests may be flaky. If they fail, re-run them.
The node can be debugged using tokio-console (`cargo install tokio-console`) and the logs printed to the output. Alternatively, you can use gdb/lldb and leaks/heap. Expect the memory footprint in debugging mode to be higher and to grow over time.
```sh
cargo node

# for tokio-console debugging
RUSTFLAGS="--cfg tokio_unstable" RUST_LOG=debug cargo node --features debugging
tokio-console # in another terminal

# for memory leak analysis
leaks -nocontext $(pgrep -f "target/debug/node")

# for network analysis
sudo tcpdump -i any -nnvv -e 'udp and port 36969'
```

Traffic capture and replay:

```sh
UDP_DUMP=traffic.bin cargo node
UDP_REPLAY=traffic.bin cargo node
RUST_LOG=debug cargo node
```

The amadeusd library implements traffic capture and replay natively in Rust. The capture is slightly smaller than a pcap capture (8.3M vs 8.7M), but the format is custom binary and cannot be reliably dumped/parsed/rewritten elsewhere.
```sh
# Record traffic to log.local when running a node.
# This command is not transparent and will require the UDP socket,
# so beware when running it alongside another running amadeus node.
UDP_DUMP=log.local cargo node
```

The log.local file contains the binary capture of the traffic. If you run the above command a second time, the new capture is appended.
```sh
# Replay the captured traffic
UDP_REPLAY=log.local cargo node
```

Before running the simulation, run `scripts/rewrite-pcaps.sh en0` to rewrite the pcap files to match your LAN; this is needed to fix the replay addressing. Feel free to choose any interface.
```sh
cargo node

# best to run the replay in another terminal
tcpreplay -i en0 --pps 1000 assets/pcaps/test.pcap.local
```

Optionally, you can watch the replay as it happens:

```sh
# to watch the replay in real time
tcpdump -i en0 -n -vv udp dst port 36969

# This command is transparent to the node but could impact performance,
# so feel free to run it alongside the node, but with caution.
tcpdump -i any udp dst port 36969 -w test.pcap -c 10000
```

Replaying assets/pcaps/test.pcap.local sends exactly 10000 packets. If not all packets from the capture reach the light client, the kernel buffers may be too small to handle the replay at the given rate; increase the kernel buffers for UDP traffic or decrease the `--pps` value.
```sh
# Packets often get lost because they overflow the kernel buffers,
# so it is suggested to increase the kernel buffers before replaying.
sudo sysctl -w kern.ipc.maxsockbuf=8388608 # raises per-socket max
sudo sysctl -w net.inet.udp.recvspace=2097152 # default UDP recv buffer (per-socket)
sysctl kern.ipc.maxsockbuf net.inet.udp.recvspace # check the values
```

If no packets reach the light client at all, your IP address may have changed (e.g. after a restart); simply rerun:
```sh
rm assets/pcaps/*.local && ./scripts/rewrite-pcaps.sh en0
```

If RocksDB was installed on macOS using brew, the commands are `rocksdb_ldb` and `rocksdb_sst_dump`; if installed manually, they are `ldb` and `sst_dump` respectively.

```sh
rocksdb_ldb --db=.amadeusd-rs/fabric/db list_column_families
rocksdb_ldb --db=.amadeusd-rs/fabric/db --column_family=sysconf scan
rocksdb_ldb --db=.amadeusd-rs/fabric/db --column_family=entry_by_height scan
rocksdb_ldb --db=.amadeusd-rs/fabric/db --column_family=sysconf get rooted_tip
```

To build and run the original Elixir node:

```sh
cd ex
make depend && make
./build.sh
mix deps.get
WORKFOLDER="$HOME/.cache/testamadeusd" OFFLINE=1 iex -S mix
```

Example queries from the iex shell:

```elixir
NodePeers.all() |> Enum.filter(fn peer -> peer.ip == "167.99.137.218" end) |> Enum.map(& &1.ip)
API.Peer.all_for_web()
```

Profiling of the node shows the biggest bottleneck to be the `get_shared_secret` function, which takes >82% of the CPU time. Of that, about 60% is spent on BLS scalar operations and about 35% on parsing the public key.
Another direction for improvement is to avoid synchronisation primitives such as mutexes and instead use channels for communication between threads.
Apache-2.0