
feat: add AMD ROCm support for RX 5700 XT (gfx1010 / RDNA 1) #423

Open
PelaGB17 wants to merge 1 commit into rishikanthc:main from PelaGB17:feature/amd-rocm-rx5700xt

Conversation

@PelaGB17

  • Dockerfile.rocm: new multi-stage image based on rocm/dev-ubuntu-24.04:6.3.1-complete
    • Sets PYTORCH_ROCM_VERSION=6.3 so uv installs ROCm-backed PyTorch wheels
    • Sets HSA_OVERRIDE_GFX_VERSION=10.3.0 to map RDNA1 (gfx1010) to RDNA2 kernels
    • Adds container user to 'video' and 'render' groups for /dev/kfd + /dev/dri access
  • docker-compose.rocm.yml: pre-built image compose file
    • Mounts /dev/kfd and /dev/dri (no NVIDIA plugin required)
    • group_add: video, render for GPU device node access
    • Documents per-generation HSA_OVERRIDE_GFX_VERSION values
  • docker-compose.build.rocm.yml: local build variant using Dockerfile.rocm
  • base_adapter.go: add GetPyTorchROCmVersion() and update GetPyTorchWheelURL()
    • PYTORCH_ROCM_VERSION env var selects ROCm wheel index (rocm6.3, etc.)
    • ROCm takes priority over CUDA; falls back to cu126 if neither is set
    • All existing adapters (WhisperX, PyAnnote, Voxtral, Parakeet, Canary) inherit ROCm wheel selection automatically via GetPyTorchWheelURL()

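For reference, the compose wiring described above would look roughly like this. This is a sketch, not the PR's actual docker-compose.rocm.yml; the service name and image are placeholders.

```yaml
services:
  scriberr:              # placeholder service name
    image: your/image    # placeholder; the PR ships a pre-built image variant
    devices:
      - /dev/kfd         # ROCm compute device node
      - /dev/dri         # GPU render nodes
    group_add:
      - video            # needed for /dev/kfd + /dev/dri access
      - render
    environment:
      # Maps RDNA 1 (gfx1010) to RDNA 2 kernels; other GPU generations
      # need a different value, or none at all (e.g. gfx1100 / RX 7900 XTX).
      - HSA_OVERRIDE_GFX_VERSION=10.3.0
      - PYTORCH_ROCM_VERSION=6.3
```

No NVIDIA container runtime plugin is involved: ROCm only needs the two device nodes mounted and group membership to open them.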
@icedream

icedream commented Mar 22, 2026

Was trying to run this (my system runs a gfx1100 card, the 7900 XTX, so I removed the HSA_OVERRIDE_GFX_VERSION setting) and got the following errors:

[+] Preparing Python environment
time=22:08:36 level="INFO " msg="Initializing unified transcription service"
time=22:08:36 level="INFO " msg="Initializing registered models in parallel..."
time=22:08:36 level="INFO " msg="Preparing NVIDIA Parakeet environment" env_path=/app/whisperx-env/parakeet
time=22:08:36 level="INFO " msg="Preparing NVIDIA Sortformer environment" env_path=/app/whisperx-env/parakeet
time=22:08:36 level="INFO " msg="Preparing WhisperX environment" env_path=/app/whisperx-env
time=22:08:36 level="INFO " msg="Preparing NVIDIA Canary environment" env_path=/app/whisperx-env/parakeet
time=22:08:36 level="INFO " msg="transcription model initialized" model_id=openai_whisper
time=22:08:36 level="INFO " msg="Preparing PyAnnote environment" env_path=/app/whisperx-env/pyannote
time=22:08:36 level="INFO " msg="Preparing Voxtral environment" env_path=/app/whisperx-env/voxtral
time=22:08:36 level="INFO " msg="Created buffered transcription script" path=/app/whisperx-env/parakeet/parakeet_transcribe_buffered.py
time=22:08:36 level="INFO " msg="Installing Sortformer dependencies"
time=22:08:36 level="INFO " msg="Installing PyAnnote dependencies"
time=22:08:36 level="INFO " msg="Installing Voxtral dependencies"
time=22:08:36 level="INFO " msg="Parakeet environment not ready, setting up"
time=22:08:36 level="INFO " msg="Environment already configured by Parakeet"
time=22:08:36 level="INFO " msg="Downloading Canary model" path=/app/whisperx-env/parakeet/canary-1b-v2.nemo
time=22:08:36 level="INFO " msg="Installing Parakeet dependencies"
time=22:08:51 level=ERROR msg="Failed to initialize diarization model" model_id=pyannote error="failed to setup PyAnnote environment: uv sync failed: exit status 1: Using CPython 3.12.3 interpreter at: /usr/bin/python3\nCreating virtual environment at: .venv\n  × No solution found when resolving dependencies for split (markers:\n  │ python_full_version >= '3.12' and platform_machine == 'x86_64' and\n  │ sys_platform == 'linux'):\n  ╰─▶ Because pyannote-audio==4.0.2 depends on torch==2.8.0 and only the\n      following versions of torch are available:\n          torch<2.8.0\n          torch>=2.8.0+rocm6.3\n      we can conclude that pyannote-audio==4.0.2 depends on\n      torch==2.8.0+rocm6.3. (1)\n\n      Because there is no version of pytorch-triton-rocm{platform_machine ==\n      'x86_64' and sys_platform == 'linux'}==3.4.0 and torch==2.8.0+rocm6.3\n      depends on pytorch-triton-rocm{platform_machine == 'x86_64'\n      and sys_platform == 'linux'}==3.4.0, we can conclude that\n      torch==2.8.0+rocm6.3 cannot be used.\n      And because we know from (1) that pyannote-audio==4.0.2 depends on\n      torch==2.8.0+rocm6.3, we can conclude that pyannote-audio==4.0.2 cannot\n      be used.\n      And because your project depends on pyannote-audio==4.0.2 and your\n      project requires pyannote-diarization[dev], we can conclude that your\n      project's requirements are unsatisfiable."
time=22:08:52 level=ERROR msg="Failed to initialize transcription model" model_id=voxtral error="failed to setup Voxtral environment: uv sync failed: exit status 2: Using CPython 3.12.3 interpreter at: /usr/bin/python3\nCreating virtual environment at: .venv\nResolved 84 packages in 16.09s\nerror: Distribution `torch==2.0.1 @ registry+https://download.pytorch.org/whl/rocm6.3` can't be installed because it doesn't have a source distribution or wheel for the current platform"
time=22:08:55 level=ERROR msg="Failed to initialize transcription model" model_id=whisperx error="failed to sync WhisperX: uv sync failed: exit status 1: Downloading cpython-3.10.20-linux-x86_64-gnu (download) (28.5MiB)\n Downloaded cpython-3.10.20-linux-x86_64-gnu (download)\nUsing CPython 3.10.20\nCreating virtual environment at: .venv\n  × No solution found when resolving dependencies for split (markers:\n  │ python_full_version == '3.13.*' and platform_machine == 'x86_64' and\n  │ sys_platform != 'darwin'):\n  ╰─▶ Because there is no version of pytorch-triton-rocm{platform_machine ==\n      'x86_64' and sys_platform == 'linux'}==3.4.0 and torch==2.8.0+rocm6.3\n      depends on pytorch-triton-rocm{platform_machine == 'x86_64'\n      and sys_platform == 'linux'}==3.4.0, we can conclude that\n      torch==2.8.0+rocm6.3 cannot be used.\n      And because only the following versions of torch{platform_machine ==\n      'x86_64' and sys_platform != 'darwin'} are available:\n          torch{platform_machine == 'x86_64' and sys_platform !=\n      'darwin'}<2.8.0\n          torch{platform_machine == 'x86_64' and sys_platform !=\n      'darwin'}==2.8.0+rocm6.3\n          torch{platform_machine == 'x86_64' and sys_platform !=\n      'darwin'}>2.9.dev0\n      and your project depends on torch{platform_machine == 'x86_64' and\n      sys_platform != 'darwin'}>=2.8.0,<2.9.dev0, we can conclude that your\n      project's requirements are unsatisfiable.\n\n      hint: While the active Python version is 3.10, the resolution failed for\n      other Python versions supported by your project. Consider limiting your\n      project's supported Python versions using `requires-python`.\n\n      hint: `torch` was requested with a pre-release marker (e.g., all of:\n          torch>=2.8.0,<2.8.0+rocm6.3\n          torch>2.8.0+rocm6.3,<2.9.dev0\n      ), but pre-releases weren't enabled (try: `--prerelease=allow`)"

Seems to be Python dependency version conflicts: the resolver can't find the pytorch-triton-rocm==3.4.0 build that torch==2.8.0+rocm6.3 depends on.
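Following the `--prerelease=allow` hint in the last resolver error, one possible workaround would be allowing pre-releases project-wide in pyproject.toml. This is an untested assumption; whether it actually unblocks the `+rocm6.3` wheels here is not confirmed.

```toml
# Untested; based on uv's own hint in the resolver output above.
[tool.uv]
prerelease = "allow"
```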
