Ojo is a small Rust-based system metrics agent that collects host and process telemetry and exports it using OpenTelemetry OTLP.
It supports Linux and Windows, with platform-specific collectors under the hood, and can send metrics to any OTLP-compatible receiver (for example, OpenTelemetry Collector).
Ojo focuses on:

- `system.*` metrics (CPU, memory, disk, network, load, paging)
- `process.*` metrics (optional, controlled by config)
The collector computes delta/rate metrics between polling intervals where appropriate.
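As an illustrative sketch only (the actual logic lives in `src/delta.rs` and may differ), deriving a per-second rate between two polls can look like this, including a guard against counter resets:

```rust
/// Derive a per-second rate from two cumulative counter samples taken
/// one poll interval apart. Returns None on a counter reset (curr < prev)
/// or a non-positive interval, so no bogus value is recorded.
pub fn rate(prev: u64, curr: u64, elapsed_secs: f64) -> Option<f64> {
    if elapsed_secs <= 0.0 {
        return None;
    }
    // checked_sub yields None when the counter went backwards
    // (e.g. the source process restarted), skipping that interval.
    curr.checked_sub(prev).map(|delta| delta as f64 / elapsed_secs)
}
```

Skipping the interval on a reset avoids emitting a huge negative or wrapped-around rate.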
- `src/main.rs`: agent loop, polling, recording, flush, shutdown
- `src/config.rs`: config loading and environment mapping
- `src/linuxcollect.rs`: Linux host/process collection
- `src/wincollect.rs`: Windows host/process collection
- `src/delta.rs`: rate/delta derivation logic
- `src/metrics.rs`: OpenTelemetry instrument creation and recording
- `linux.yaml`: sample Linux agent config
- `windows.yaml`: sample Windows agent config
- `otel.yaml`: sample OpenTelemetry Collector pipeline
- `grafana/ojo.json`: sample Grafana dashboard
- Rust toolchain (`cargo`, `rustc`) installed.
- Network connectivity from Ojo to your OTLP endpoint.
- If using process metrics:
  - Linux: permissions to read `/proc` data for target processes.
  - Windows: run with sufficient privileges to query process/system APIs.
Use one of the included config files:

- `linux.yaml`
- `windows.yaml`

Set at least:

- `export.otlp.endpoint`
- `export.otlp.protocol`

For HTTP OTLP, the endpoint typically includes a path such as `/v1/metrics`.
Run with an explicit config path:

```sh
# Linux
cargo run -- --config linux.yaml

# Windows
cargo run -- --config windows.yaml
```

This is the recommended way to run Ojo for development.
If you want maximum runtime performance, build the optimized binary first:

```sh
cargo build --release
```

Then run the compiled binary directly:
```sh
# Linux
./target/release/ojo --config linux.yaml

# Windows (PowerShell or CMD)
target\release\ojo.exe --config windows.yaml
```

If you still prefer `cargo run`, use release mode so it builds and runs optimized code:
```sh
# Linux
cargo run --release -- --config linux.yaml

# Windows
cargo run --release -- --config windows.yaml
```

If `--config` is not provided, Ojo looks for the `PROC_OTEL_CONFIG` env var, otherwise `ojo.yaml`.
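The fallback order can be sketched as follows (illustrative only; the real resolution lives in `src/config.rs` and `resolve_config` is a hypothetical name):

```rust
use std::env;

/// Resolve the config path: an explicit --config argument wins,
/// then the PROC_OTEL_CONFIG environment variable, then ojo.yaml.
pub fn resolve_config(cli_arg: Option<String>) -> String {
    cli_arg
        .or_else(|| env::var("PROC_OTEL_CONFIG").ok())
        .unwrap_or_else(|| "ojo.yaml".to_string())
}
```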
Example:

```sh
PROC_OTEL_CONFIG=linux.yaml cargo run
```

Top-level config sections:

- `service`
- `collection`
- `export`
- `metrics`
```yaml
service:
  name: linux
  instance_id: linux-0001
```

- `name`: exported as the service name.
- `instance_id`: unique ID for the host/agent instance.
```yaml
collection:
  poll_interval_secs: 5
  include_process_metrics: true
```

- `poll_interval_secs`: polling cadence.
- `include_process_metrics`: enable/disable process metrics.
```yaml
export:
  otlp:
    endpoint: "http://127.0.0.1:4317"
    protocol: grpc
    headers:
      x-otlp-token: "token"
    compression: gzip
    timeout_secs: 10
  batch:
    interval_secs: 5
    timeout_secs: 10
```

`otlp` fields:

- `endpoint`: OTLP endpoint URL.
- `protocol`: `grpc` or `http/protobuf`.
- `token` and `token_header`: convenience auth header config.
- `headers`: additional static OTLP headers.
- `compression`: exporter compression.
- `timeout_secs`: OTLP export timeout.
`batch` fields:

- `interval_secs`: maps to `OTEL_METRIC_EXPORT_INTERVAL` (milliseconds internally).
- `timeout_secs`: maps to `OTEL_METRIC_EXPORT_TIMEOUT` (milliseconds internally).
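A minimal sketch of that seconds-to-milliseconds mapping (the env var names are the standard OpenTelemetry SDK ones; the helper itself is an assumption, not Ojo's actual code):

```rust
/// Map the batch settings (in seconds) onto the millisecond-valued
/// environment variables that the OpenTelemetry metrics SDK reads.
pub fn batch_env(interval_secs: u64, timeout_secs: u64) -> Vec<(&'static str, String)> {
    vec![
        ("OTEL_METRIC_EXPORT_INTERVAL", (interval_secs * 1000).to_string()),
        ("OTEL_METRIC_EXPORT_TIMEOUT", (timeout_secs * 1000).to_string()),
    ]
}
```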
```yaml
metrics:
  include:
    - system.
    - process.
  exclude:
    - system.linux.
```

- Prefix-based filtering.
- An empty `include` means include all.
- `exclude` always wins over `include`.
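The filtering rules above can be sketched as a simple prefix check (illustrative; not necessarily how the agent implements it):

```rust
/// Decide whether a metric name passes the include/exclude filters.
/// An exclude prefix match always wins; an empty include list means
/// "include everything".
pub fn is_included(name: &str, include: &[&str], exclude: &[&str]) -> bool {
    if exclude.iter().any(|p| name.starts_with(p)) {
        return false;
    }
    include.is_empty() || include.iter().any(|p| name.starts_with(p))
}
```

With the sample config, `system.cpu.utilization` passes but `system.linux.load` is dropped, even though it also matches the `system.` include prefix.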
`otel.yaml` in this repo is an example pipeline that:

- Receives OTLP metrics over HTTP (`:4355`) and gRPC (`:4356`).
- Applies memory limiter and batch processors.
- Exports using Prometheus remote write.

Start the collector with your preferred distribution, for example:

```sh
otelcol --config otel.yaml
```

Then point the Ojo config endpoint at the collector's HTTP receiver:
```yaml
export:
  otlp:
    endpoint: "http://<collector-host>:4355/v1/metrics"
    protocol: http/protobuf
```

- The default log level is `info`.
- Override with `RUST_LOG`, for example:

```sh
RUST_LOG=debug cargo run -- --config linux.yaml
```

- On successful export connectivity, Ojo logs `Connected Successfully`.
- On transient export failure, Ojo logs reconnect warnings and retries on the next poll.
- `Ctrl+C` triggers graceful shutdown.
Ojo intentionally avoids forcing fake values for unsupported metrics.

- Linux-only metrics are emitted only on Linux.
- On Windows, unsupported Linux-specific fields are omitted (no data) rather than emitted as `0`.
- If Windows disk performance counters are unavailable for a disk, disk rate/pending/time metrics for that disk are omitted.

This helps dashboards distinguish a "real zero" from "metric not available on this platform".
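One way to express this design in Rust (a sketch under assumed names, not Ojo's actual recording code) is to carry every platform-specific value as an `Option` and record only `Some`:

```rust
/// Record a sample only when the platform actually produced a value.
/// None means "not available here", which is distinct from a measured 0.
pub fn record_if_available(name: &str, value: Option<f64>, out: &mut Vec<(String, f64)>) {
    if let Some(v) = value {
        out.push((name.to_string(), v));
    }
    // On None: emit nothing, so dashboards show "no data" rather than 0.
}
```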
- Verify Ojo is running with the expected config file.
- Check that endpoint and protocol match (`grpc` vs `http/protobuf`).
- Confirm the collector is listening on the configured host/port.
- Set `RUST_LOG=debug` and inspect export/flush logs.
This is expected behavior: unsupported Linux-specific metrics are omitted by design.
- Ensure `include_process_metrics: true`.
- Check runtime permissions.
- Verify include/exclude filters are not removing `process.*`.
Build:

```sh
cargo build
```

Run tests (if present):

```sh
cargo test
```

Format/lint (if configured in your environment):

```sh
cargo fmt
cargo clippy --all-targets --all-features
```

```sh
# Linux run
cargo run -- --config linux.yaml

# Windows run
cargo run -- --config windows.yaml

# Use env var-based config selection
PROC_OTEL_CONFIG=windows.yaml cargo run

# Debug logging
RUST_LOG=debug cargo run -- --config linux.yaml
```