wikip-co/iconium
# iconium

iconium is a headless LLM inference server built on a Minisforum AI MAX+ 395 (Strix Halo APU) running LM Studio with ROCm GPU acceleration.

This repo documents the current state of the machine: hardware, OS, ROCm configuration, LM Studio setup, loaded models, and running services.

## Contents

| File | Description |
|---|---|
| `hardware.md` | CPU, GPU, RAM, NPU, and storage specs |
| `os-and-rocm.md` | Ubuntu 25.10, ROCm 7.2.1, kernel details |
| `lmstudio.md` | LM Studio headless setup, backend transplant, ROCm fix |
| `models.md` | All installed models, quantizations, and GPU offload status |
| `services.md` | systemd services: `lmstudio` and `openclaw-gateway` |
| `api.md` | OpenAI-compatible API endpoint reference |
| `history.md` | Journey from vLLM to LM Studio — what was tried and why |

## Quick Status

| Item | Value |
|---|---|
| Hostname | `iconium` |
| LM Studio server | `0.0.0.0:1234` |
| Active model | `google_gemma-4-31b-it` (IQ4_XS, 18.35 GB) |
| GPU offload | Full — ROCm0 (Radeon 8060S) |
| Autostart | `lmstudio.service` (systemd, enabled) |
| OpenClaw gateway | `openclaw-gateway.service`, port 18789 |
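Because the server speaks the standard OpenAI chat-completions protocol on port 1234, it can be queried with nothing but the Python standard library. The sketch below is illustrative, not part of this repo: the base URL assumes you can reach the host as `iconium`, and the payload shape follows the generic OpenAI schema (see `api.md` for the documented endpoints).

```python
import json
import urllib.request

# Assumed reachable hostname; substitute the machine's IP if DNS differs.
BASE_URL = "http://iconium:1234/v1"

def build_chat_request(prompt: str,
                       model: str = "google_gemma-4-31b-it") -> urllib.request.Request:
    """Build an OpenAI-style chat-completion request for the LM Studio server."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,
    }
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

if __name__ == "__main__":
    req = build_chat_request("Say hello in one sentence.")
    # Sending requires the server to be up; uncomment to run for real:
    # with urllib.request.urlopen(req) as resp:
    #     print(json.load(resp)["choices"][0]["message"]["content"])
    print(req.full_url)
```

LM Studio applies no authentication by default, so no API key header is needed on the local network.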
