domi

joined 2 years ago
[–] domi@lemmy.secnd.me 2 points 6 months ago

One day we will get a spiritual successor.

[–] domi@lemmy.secnd.me 3 points 6 months ago (2 children)

I really enjoyed it. First game in a while that could scratch that Prey-itch.

[–] domi@lemmy.secnd.me 8 points 6 months ago

I'm looking forward to somebody building a cluster with these and running a full-size DeepSeek R1.

For about 26.000€ you can get a cluster with enough RAM to run it. For comparison, a single Nvidia GPU with 80 GB of VRAM costs about 21.000€, bringing the total to roughly 400.000€ for just the GPUs needed to run it.
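To make the comparison concrete, here is the rough arithmetic behind those figures (a sketch using the numbers above; the GPU count of 19 is my assumption, backed out by dividing the ~400.000€ total by the per-GPU price):

```python
# Rough cost comparison, using the figures from the comment above.
# Assumption: ~19 x 80 GB GPUs to hold the full-size model.
cluster_cost_eur = 26_000   # RAM-based cluster
gpu_price_eur = 21_000      # single 80 GB Nvidia GPU
gpu_count = 19              # assumed: 400.000 / 21.000 ≈ 19

gpu_total_eur = gpu_count * gpu_price_eur
print(gpu_total_eur)                     # 399000, i.e. roughly 400.000€
print(gpu_total_eur / cluster_cost_eur)  # the GPU route costs ~15x more
```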

[–] domi@lemmy.secnd.me 25 points 6 months ago (1 children)

What? X11 has zero HDR support.

[–] domi@lemmy.secnd.me 3 points 6 months ago* (last edited 6 months ago)

AMD: Yes (Played the last beta for a few hours, make sure to use Proton Experimental)

Nvidia: I believe the graphical issues are not fixed yet

Steam Deck: Game does not run well

[–] domi@lemmy.secnd.me 4 points 7 months ago (1 children)

> I’m curious. Say you are getting a new computer, put Debian on, want to run e.g. DeepSeek via ollama via a container (e.g. Docker or podman) and also play, how easy or difficult is it?

On the host system, you don't need to do anything. AMDGPU and Mesa are included on most distros.

For LLMs you can go the easy route and just install the Alpaca flatpak and the AMD addon. It will work out of the box and uses ollama in the background.

If you need a Docker container for it: AMD provides the handy rocm/dev-ubuntu-${UBUNTU_VERSION}:${ROCM_VERSION}-complete images. They contain all the required ROCm dependencies and runtimes, and you can just install your stuff on top of them.

As for GPU passthrough, all you need to do is add a device link for /dev/kfd and /dev/dri and you are set. For example, in a docker-compose.yml you just add this:

    devices:
      - /dev/kfd:/dev/kfd
      - /dev/dri:/dev/dri
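Put together, a minimal docker-compose.yml might look like this (a sketch: the service name and image tag are placeholders of my choosing, only the `devices:` section is from above):

```yaml
services:
  comfyui:  # hypothetical service name
    image: rocm/dev-ubuntu-24.04:6.3-complete
    devices:
      - /dev/kfd:/dev/kfd  # ROCm compute interface
      - /dev/dri:/dev/dri  # GPU render nodes
```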

As a complete example, this is the entire Dockerfile needed to build ComfyUI from scratch with ROCm. The user/group commands are only needed to get the container's groups to align with my Fedora host system.


    ARG UBUNTU_VERSION=24.04
    ARG ROCM_VERSION=6.3
    ARG BASE_ROCM_DEV_CONTAINER=rocm/dev-ubuntu-${UBUNTU_VERSION}:${ROCM_VERSION}-complete

    # For 6000 series
    #ARG ROCM_DOCKER_ARCH=gfx1030
    # For 7000 series
    ARG ROCM_DOCKER_ARCH=gfx1100

    FROM ${BASE_ROCM_DEV_CONTAINER}

    RUN apt-get update && apt-get install -y git python-is-python3 && rm -rf /var/lib/apt/lists/*
    RUN pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/rocm6.3 --break-system-packages

    # Change group IDs to match Fedora
    RUN groupmod -g 1337 irc && groupmod -g 105 render && groupmod -g 39 video

    # Rename user on newer 24.04 release and add to video/render group
    RUN usermod -l ai ubuntu && \
        usermod -d /home/ai -m ai && \
        usermod -a -G video ai && \
        usermod -a -G render ai

    USER ai
    WORKDIR /app

    ENV PATH="/home/ai/.local/bin:${PATH}"

    RUN git clone https://github.com/comfyanonymous/ComfyUI .
    RUN pip install -r requirements.txt --break-system-packages

    COPY start.sh /start.sh
    CMD /start.sh

[–] domi@lemmy.secnd.me 3 points 7 months ago

> However, for some reason on PC it's often quirky (Windows or Linux). My PC bluetooth works through a dongle so I wonder if an integrated card would do better.

Is it a USB dongle?

If so, make sure to put a short USB-A to USB-A extension cable between your PC and the dongle. Interference is a serious issue with 2.4 GHz wireless USB dongles when they are plugged directly into the mainboard.

[–] domi@lemmy.secnd.me 8 points 7 months ago

Remedy already confirmed that they are going to self-publish Control 2.

[–] domi@lemmy.secnd.me 4 points 7 months ago

Milking after one game and one DLC?

[–] domi@lemmy.secnd.me 3 points 7 months ago (1 children)

That used to be the case, yes.

Alpaca pretty much allows running LLMs out of the box on AMD after installing the ROCm addon in Discover/Software. LM Studio also works perfectly.

Image generation is a little bit more complicated. ComfyUI supports AMD when all ROCm dependencies are installed and the PyTorch version is swapped for the AMD version.

However, ComfyUI provides no prebuilt packages for Linux or AMD right now, so you have to build it yourself. I currently use a simple Docker container for ComfyUI which just takes the AMD ROCm image and installs ComfyUI on top.

[–] domi@lemmy.secnd.me 10 points 7 months ago (6 children)

If it's just about self-hosting and not training, ROCm works perfectly fine for that. I self-host DeepSeek R1 32b and FLUX.1-dev on my 7900 XTX.

You even get more VRAM for less money.

[–] domi@lemmy.secnd.me 15 points 7 months ago (3 children)

Weren't they down for ~7 hours just last year?

Not saying it happens often, but downtime that long is unprofessional for a company that size.
