Aim-compatible Python SDK for experiment tracking. The matyan-client sends run data to the Matyan frontier (WebSocket) and backend (REST): metrics, hyperparameters, custom objects (images, audio, figures), and log records. Use the same Run + Repo API you know from Aim, with optional framework adapters for Keras, PyTorch Lightning, Hugging Face, and others.
- Run — Create and manage a single experiment run; track scalars and custom objects, set hparams, add tags, log messages, upload artifacts.
- Repo — Query runs with MatyanQL, list/delete runs and experiments, aggregate sequence and params info.
- Custom objects — `Image`, `Audio`, `Figure`, `Text`, `Distribution` for use with `run.track()` (optional extras for PIL/numpy/plotly).
- Adapters — Callbacks and loggers for Keras, TensorFlow, PyTorch Lightning, Hugging Face, XGBoost, LightGBM, CatBoost, Optuna, and more (install only the extras you need).
- Python 3.10+ — Fully typed; compatible with existing Aim-style scripts (change imports and use `Repo(url=...)` instead of a local path).
Base install (no optional dependencies):

```shell
python3 -m pip install matyan-client
# or with uv
uv add matyan-client
```

Core features work out of the box: `Run`, `Repo`, scalar metrics via `track()`, hyperparameters, tags, structured log methods, artifact uploads (presigned blob URLs), and the backup/restore/convert CLI.
Install extras with brackets, e.g. `python3 -m pip install "matyan-client[image,figure]"` or `uv add "matyan-client[image,figure]"` (quote the brackets so shells like zsh do not expand them).
| Extra | Adds | Use case |
|---|---|---|
| `image` | Pillow, numpy | `Image` from PIL Image, numpy array, path, or bytes |
| `audio` | numpy | `Audio` from path, bytes, or numpy array (WAV) |
| `figure` | plotly | `Figure` from plotly Figure |
| `matplotlib` | matplotlib, plotly | `Figure` from matplotlib (via plotly.tools) |
| `numpy` | numpy | Faster `Distribution` histograms; used by Image/Audio and some adapters |
| Extra | Adds | Use case |
|---|---|---|
| `gpu` | nvidia-ml-py | GPU stats in background tracker when `system_tracking_interval` is set |
Install only the frameworks you use:
| Extra | Adapter use |
|---|---|
| `keras` | Keras `MatyanCallback` |
| `tensorflow` | TensorFlow/Keras callback |
| `pytorch` | PyTorch helpers (e.g. `track_params_dists`, `track_gradients_dists`) |
| `pytorch-lightning` | Lightning `MatyanLogger` |
| `pytorch-ignite` | Ignite logger and handlers |
| `hugging-face` | Transformers `MatyanCallback` |
| `distributed-hugging-face` | Distributed Transformers training |
| `xgboost`, `lightgbm`, `catboost` | Boosting callbacks/loggers |
| `optuna` | Optuna callback |
| `keras-tuner` | Keras Tuner callback |
| `prophet` | Prophet logger |
| `sb3` | Stable-Baselines3 callback |
| `acme` | Acme logger/writer |
| `fastai` | FastAI callback |
| `paddle` | PaddlePaddle callback (no wheel on linux aarch64) |
| `mxnet` | MXNet handler |
| Extra | Contents |
|---|---|
| `extended` | `image`, `audio`, `figure`, `matplotlib`, `numpy`, `gpu` |
| `adapters-all` | All adapter extras listed above |
| Extra | Use case |
|---|---|
| `convert` | TensorBoard event logs → Matyan backup (tensorflow, tbparse, pillow, etc.) |
Install examples:
```shell
# Images and figures (Pillow, numpy, plotly)
python3 -m pip install matyan-client[image,figure]

# GPU system metrics
python3 -m pip install matyan-client[gpu]

# Keras and PyTorch Lightning adapters
python3 -m pip install matyan-client[keras,pytorch-lightning]

# Full media + GPU, no adapters
python3 -m pip install matyan-client[extended]

# All framework adapters
python3 -m pip install matyan-client[adapters-all]

# TensorBoard conversion
python3 -m pip install matyan-client[convert]
```

Settings use environment variables with the `MATYAN_` prefix (see `matyan_client.config.Settings`):
| Variable | Default | Description |
|---|---|---|
| `MATYAN_FRONTIER_URL` | `http://localhost:53801` | Frontier WebSocket (ingestion) |
| `MATYAN_BACKEND_URL` | `http://localhost:53800` | Backend REST API |
| `MATYAN_WS_VERBOSE` | `false` | Verbose WebSocket logging |
| `MATYAN_WS_QUEUE_MAX_MEMORY_MB` | `512` | Max in-memory queue size (MB) |
| `MATYAN_WS_HEARTBEAT_INTERVAL` | `10` | Heartbeat interval (seconds) |
| `MATYAN_WS_BATCH_INTERVAL_MS` | `50` | Batch send interval (ms) |
| `MATYAN_WS_BATCH_SIZE` | `100` | Max messages per batch |
| `MATYAN_WS_RETRY_COUNT` | `2` | Send retries on failure |
You can override URLs per instance: `Run(repo=..., frontier_url=...)`, `Repo(url=...)`.
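For example, the two service URLs can be set from Python before the client connects. This is a minimal sketch using only variables from the table above (the values shown are the documented defaults; the settings are presumably read when `Settings` is constructed, so set them before creating a `Run` or `Repo`):

```python
import os

# Point the client at the Matyan services (defaults shown; adjust for your deployment).
os.environ["MATYAN_BACKEND_URL"] = "http://localhost:53800"   # REST API
os.environ["MATYAN_FRONTIER_URL"] = "http://localhost:53801"  # WebSocket ingestion

# Optional tuning: larger batches on slow links.
os.environ["MATYAN_WS_BATCH_SIZE"] = "200"
```

Per-instance arguments (`frontier_url=...`, `url=...`) take precedence over the environment.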
```python
from matyan_client import Run

run = Run(experiment="my-experiment")
run["hparams"] = {"lr": 0.01, "batch_size": 32}

for step in range(100):
    loss = 0.1 / (step + 1)  # placeholder
    run.track(loss, name="loss", step=step, context={"subset": "train"})

run.close()
```

With the environment set (e.g. `MATYAN_BACKEND_URL`, `MATYAN_FRONTIER_URL`), the run is created on the server and metrics are sent to the frontier. View runs in the Matyan UI.
Create: Run(), Run(experiment="name"), or Run(run_hash=..., repo=...) to resume an existing run.
Attributes and params:
- `run["hparams"] = {...}` or dotted keys (e.g. `run["hparams.lr"] = 0.01`)
- `run.name`, `run.experiment`, `run.description`, `run.archived` (get/set)
- `run.add_tag("tag-name")`, `run.remove_tag("tag-name")`
Tracking:
- `run.track(value, name="loss", step=0, context={"subset": "train"}, epoch=0)` — log a scalar or a custom object (Image, Audio, etc.). If `step` is omitted, an auto-incrementing step per `(name, context)` is used.
- Custom objects (Image, Audio, Figure, Text, Distribution) are serialized, and large blobs (image/audio) are uploaded to S3/GCS/Azure via presigned URLs.
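The per-`(name, context)` auto-step behavior can be pictured as a counter keyed by metric name plus context. This is an illustrative sketch of the semantics, not the SDK's actual implementation:

```python
from collections import defaultdict

class StepCounter:
    """Illustrative only: one auto-incrementing step per (name, context) series."""

    def __init__(self):
        self._steps = defaultdict(int)

    def next_step(self, name, context=None):
        # Freeze the context dict so it can key the counter.
        key = (name, tuple(sorted((context or {}).items())))
        step = self._steps[key]
        self._steps[key] += 1
        return step

counter = StepCounter()
a = counter.next_step("loss", {"subset": "train"})  # 0
b = counter.next_step("loss", {"subset": "train"})  # 1
c = counter.next_step("loss", {"subset": "val"})    # 0 — a separate series
```

So tracking the same name under different contexts produces independent step sequences.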
Logging:
- `run.log_info(msg, **kwargs)`, `run.log_warning`, `run.log_error`, `run.log_debug` — structured log records sent to the UI.
Artifacts:
- `run.log_artifact(path, name=...)` — upload a single file
- `run.log_artifacts(dir_path, name=...)` — upload all files under a directory recursively
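As an illustration of what the recursive upload covers, the file set corresponds to a walk like the following. This sketches only the selection of files (the hypothetical `collect_artifact_files` helper is not part of the SDK; the actual upload goes through presigned URLs):

```python
from pathlib import Path
import tempfile

def collect_artifact_files(dir_path):
    """Illustrative: the files a recursive upload would cover, relative to the root."""
    root = Path(dir_path)
    return sorted(p.relative_to(root).as_posix() for p in root.rglob("*") if p.is_file())

# Build a small tree: config.yaml plus a nested checkpoint file.
with tempfile.TemporaryDirectory() as d:
    (Path(d) / "checkpoints").mkdir()
    (Path(d) / "checkpoints" / "model.pt").write_bytes(b"\x00")
    (Path(d) / "config.yaml").write_text("lr: 0.01\n")
    files = collect_artifact_files(d)

print(files)  # ['checkpoints/model.pt', 'config.yaml']
```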
Lifecycle:
`run.close()` finalizes the run and flushes pending data. You can rely on `__del__` to close the run if it is garbage-collected, but calling `close()` explicitly is safer. For read-only access (e.g. from `Repo.get_run()`), use `Run(run_hash=..., read_only=True)`.
System tracking (optional):
- Constructor args: `system_tracking_interval` (float seconds; `None` to disable), `log_system_params`, `capture_terminal_logs`. GPU metrics require the `gpu` extra (nvidia-ml-py).
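Conceptually, an interval-based system tracker is a background thread that samples stats every `system_tracking_interval` seconds until the run closes. A minimal stdlib sketch of that pattern (illustrative; not the SDK's tracker, which records CPU/RAM/GPU stats):

```python
import threading
import time

class SystemStatsTracker:
    """Illustrative sketch: poll stats every `interval` seconds on a daemon thread."""

    def __init__(self, interval):
        self.interval = interval
        self.samples = []
        self._stop = threading.Event()
        self._thread = threading.Thread(target=self._loop, daemon=True)

    def _loop(self):
        # Event.wait doubles as a sleep that close() can interrupt immediately.
        while not self._stop.wait(self.interval):
            self.samples.append({"t": time.monotonic()})  # a real tracker records system stats here

    def start(self):
        self._thread.start()

    def close(self):
        self._stop.set()
        self._thread.join()

tracker = SystemStatsTracker(interval=0.05)
tracker.start()
time.sleep(0.3)
tracker.close()
```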
Create: Repo(), Repo(url=...), or Repo.from_path(url). The repo talks to the Matyan backend REST API.
Runs:
- `repo.get_run(run_hash)` — returns a read-only `Run` or `None`
- `repo.iter_runs()`, `repo.list_all_runs()`, `repo.list_active_runs()`, `repo.total_runs_count()`
Query (MatyanQL):
- `repo.query_runs(query="", paginated=False, offset=..., limit=...)`
- `repo.query_metrics(query)`, `repo.query_images(query)`, `repo.query_audios(query)`, `repo.query_figure_objects(query)`, `repo.query_distributions(query)`, `repo.query_texts(query)`
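Query strings are plain text, so they can be composed in Python. A sketch, assuming MatyanQL follows Aim's Python-expression style (`run.hparams.<key>`, `run.experiment`); `hparam_filter` is a hypothetical helper, not part of the SDK:

```python
def hparam_filter(experiment, **bounds):
    """Hypothetical helper: build a MatyanQL-style filter string (syntax assumed Aim-like)."""
    clauses = [f'run.experiment == "{experiment}"']
    for key, (lo, hi) in bounds.items():
        clauses.append(f"run.hparams.{key} >= {lo} and run.hparams.{key} < {hi}")
    return " and ".join(clauses)

q = hparam_filter("my-experiment", lr=(0.001, 0.1))
print(q)
# Against a live backend, you would pass it on:
# runs = repo.query_runs(query=q, paginated=True, limit=50)
```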
Management:
- `repo.delete_run(run_hash)`, `repo.delete_runs([...])`, `repo.delete_experiment(exp_id)`
Aggregation:
- `repo.collect_sequence_info(sequence_types=...)`, `repo.collect_params_info()`
Call `repo.close()` when finished to release the HTTP client.

Import from `matyan_client`: `Image`, `Audio`, `Figure`, `Text`, `Distribution`. Use with `run.track(obj, name="...", step=...)`.
| Type | Inputs / factories | Extras |
|---|---|---|
| `Image` | PIL Image, numpy array, path, or bytes; optional `caption`, `format_`, `quality`, `optimize` | `image` (Pillow + numpy) for non-file sources |
| `Audio` | Path, bytes, file-like, or numpy (WAV); optional `format_`, `caption`, `rate` | `audio` (numpy) for numpy arrays |
| `Figure` | plotly Figure, matplotlib Figure (via plotly.tools), or dict | `figure` (plotly); `matplotlib` for matplotlib conversion |
| `Text` | Plain string | None |
| `Distribution` | Samples or pre-computed histogram; `Distribution.from_histogram(hist, bin_range)`, `Distribution.from_samples(samples, bin_count)` | Optional `numpy` for faster histogramming |
Example:
```python
from matyan_client import Run, Image, Distribution

run = Run(experiment="demo")
run.track(Image("sample.png", caption="epoch 0"), name="samples", step=0)
run.track(Distribution.from_samples([0.1, 0.2, 0.3], bin_count=10), name="weights", step=0)
run.close()
```

Adapters plug into training loops and log metrics/hparams to Matyan. Import from `matyan_client.adapters.<module>`. If the framework is not installed, you get a clear error with an install hint (install the matching extra).
Examples:
Keras:

```python
from matyan_client.adapters.keras import MatyanCallback

model.fit(x, y, callbacks=[MatyanCallback(experiment="my-exp", repo="http://localhost:53800")])
```

PyTorch Lightning:

```python
from matyan_client.adapters.pytorch_lightning import MatyanLogger

trainer = Trainer(logger=MatyanLogger(experiment="my-exp"))
```

Hugging Face Transformers:

```python
from matyan_client.adapters.hugging_face import MatyanCallback

trainer = Trainer(..., callbacks=[MatyanCallback(experiment="my-exp")])
```

Pass `repo=` (backend URL) when constructing the adapter if you are not using `MATYAN_BACKEND_URL`. See the adapter docstrings and the integrations guide for more frameworks (XGBoost, LightGBM, CatBoost, Optuna, FastAI, SB3, etc.).
The matyan-client command is available after installation.
Status — Show backend, optional frontier, and client versions:

```shell
matyan-client status [--backend-url URL] [--frontier-url URL]
```

Backup — Create a backup via the backend REST API (no direct FDB access):

```shell
matyan-client backup OUTPUT_PATH [--backend-url URL] [--runs RUN1,RUN2] [--experiment NAME] [--since ISO_DATETIME] [--no-blobs] [--compress]
```

Restore — Restore a backup by replaying it through the ingestion pipeline:

```shell
matyan-client restore-reingest BACKUP_PATH [--backend-url URL] [--frontier-url URL] [--skip-entities] [--skip-blobs] [--dry-run]
```

Convert TensorBoard — Convert TensorBoard event logs to a Matyan backup archive (requires the `convert` extra):

```shell
matyan-client convert tensorboard INPUT_DIR OUTPUT_PATH [--experiment NAME] [--compress] [--workers N]
```

- Documentation
- Repository
- SDK API reference (Run, Repo, objects, adapters)
- License: Apache-2.0