- cross-posted to:
- amd@lemmy.ml
One way of greatly improving the ROCm installation process would be to use the Open Build Service, which lets a single spec file produce packages for many supported GNU/Linux distributions and their versions. I opened a feature request about this.
Look forward to this completing just in time for ROCm to drop support for your card.
How about improving ROCm itself? Is it still as much of a problem as before?
I use ROCm for inference, both text generation via llama.cpp/LMStudio and image generation via ComfyUI.
Works pretty much perfectly on a 6900 XT. Very fast and easy to set up.
I had issues with some libraries only supporting CUDA when trying to train, but that was almost 6 months ago, so things have probably improved in that area as well.
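For anyone wanting a quick sanity check that the ROCm build of PyTorch (the stack ComfyUI sits on) actually sees the card, something like this works. Just a sketch, assuming you installed the ROCm wheels of torch:

```python
# Minimal sanity check, assuming a ROCm build of PyTorch is installed.
# On ROCm wheels the torch.cuda namespace is backed by HIP, so the usual calls apply.
import torch

print(torch.version.hip)              # HIP version string on ROCm builds, None on CUDA builds
print(torch.cuda.is_available())      # True if the HIP runtime can see a GPU
print(torch.cuda.get_device_name(0))  # e.g. "AMD Radeon RX 6900 XT"
```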
I’ve commented this before and I’ll say it again: DO NOT try to install ROCm/HIP yourself, it’s a nightmare. AMD provides preconfigured Docker containers with it already set up. Download one of those and do whatever you need to do inside it.
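If you want to script that instead of typing the docker run flags every time, a rough sketch with the Docker Python SDK looks like this. The rocm/pytorch image and the device/group options follow AMD's documented container setup, but treat the exact tag and flags as assumptions and check the docs for your card:

```python
# Rough sketch: launch AMD's prebuilt ROCm PyTorch container via the Docker SDK.
# Roughly equivalent to: docker run --device=/dev/kfd --device=/dev/dri \
#   --group-add video --security-opt seccomp=unconfined rocm/pytorch ...
import docker

client = docker.from_env()
logs = client.containers.run(
    "rocm/pytorch:latest",                 # preconfigured image with ROCm + PyTorch
    command="python3 -c 'import torch; print(torch.cuda.is_available())'",
    devices=["/dev/kfd:/dev/kfd:rwm",      # ROCm kernel driver interface
             "/dev/dri:/dev/dri:rwm"],     # GPU render nodes
    group_add=["video"],                   # group that owns the device nodes on most distros
    security_opt=["seccomp=unconfined"],
    remove=True,                           # clean up the container when it exits
)
print(logs.decode())
```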
Haven’t had any issues since I installed it… Yet.