Not sure if this is the best place to ask for this kind of help, but here it is. I am setting up IOMMU using this Arch Wiki article, and since I have two identical RX 580s (so I can't simply bind by vendor:device ID), I have set up passthrough for a selected GPU using a script placed at /usr/local/bin/vfio-pci-override.sh.

The issue is that vfio-pci doesn't load on boot, and I get a

sh: /usr/local/bin/vfio-pci-override.sh: Permission denied

The file is owned by root and has 755 permissions. In addition, I moved modconf after block and filesystems in /etc/mkinitcpio.conf, as suggested by this thread. I checked with cat whether vfio-pci was successfully echoed to /sys/bus/pci/devices/0000:05:00.0/driver_override, but the only thing there is

(null)
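
For context, the override script follows the wiki's script method, and looks roughly like the snippet below - a sketch of the wiki's template rather than my verbatim file, with my GPU's 0000:05:00.x addresses filled in:

#!/bin/sh
# Tell the kernel to hand these devices to vfio-pci instead of amdgpu
DEVS="0000:05:00.0 0000:05:00.1"

for DEV in $DEVS; do
    echo "vfio-pci" > /sys/bus/pci/devices/$DEV/driver_override
done

modprobe -i vfio-pci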

I'll provide other files and info if needed. I really appreciate the help!

System specs/logs

OS: Artix Linux
Kernel: 6.6.9-artix1-1
Init System: OpenRC
GPU Driver: amdgpu
Encryption: Full-Disk Encryption

Full dmesg

[–] Sonotsugipaa@lemmy.dbzer0.com 3 points 10 months ago* (last edited 10 months ago) (2 children)

Hey, I have two RX580s too!

I'm trying to wrap my head around this problem, and I have no idea why a Permission denied would pop up if the script is run at boot; I'm not familiar with the process, but I would assume that sh runs as root there.

Have you tried following the rest of the guide and just skipping the actual VFIO passthrough step? How I found this out is a long story, but apparently on my system libvirt is able to "yank" the GPU from the host and give it to the vfio-pci driver while the system is running, as long as the libvirt domain has the proper <hostdev> in it (or, if you're using virt-manager, you have the PCI 0000:05:00.0 and PCI 0000:05:00.1 entries set up).
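
For reference, the <hostdev> bit in the domain XML looks something like the snippet below; the address digits are your 0000:05:00.0 as an example, and managed='yes' is - as far as I understand it - what lets libvirt detach and reattach the device on its own:

<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
  </source>
</hostdev>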

I'm not sure that's supposed to work in general, but if it doesn't work for you I don't think your system will explode; worst case, you have both GPUs working for the host on boot.

The guide says this:

[...] due to their size and complexity, GPU drivers do not tend to support dynamic rebinding very well, so you cannot just have some GPU you use on the host be transparently passed to a virtual machine without having both drivers conflict with each other. Because of this, it is generally advised to bind those placeholder drivers manually before starting the virtual machine, in order to stop other drivers from attempting to claim it.

The con is that after shutting down the VM, you'd most likely want to reattach the GPU to the host, like this:

#!/bin/bash
# Run as root; you need to do this for every device in the GPU's IOMMU group.
pcidev0=  # Your passed-through GPU, something like  0000:05:00.0
pcidev1=  #                                          0000:05:00.1
pcidev2=  # ...
pcidevN=  #                                          0000:05:00.N

# Unbind a PCI device and remove it from the bus
function rm_pci {
   echo 'Removing PCI device '"$1"
   echo -n 1 >/sys/bus/pci/devices/"$1"/remove
}

rm_pci "$pcidev0"
rm_pci "$pcidev1"
# ...
rm_pci "$pcidevN"

# Rescanning makes the kernel re-discover the devices, so the regular driver can claim them again
echo 'Rescanning PCI devices'
echo -n 1 >/sys/bus/pci/rescan

This is because I've found out the hard way that a GPU managed by the vfio-pci module may or may not spin its fans when it heats up, and if the VFIO GPU is sitting in front of the other one's fans... y'know, heat.
(consider the first paragraph of this comment)

If you manage to give the GPU back to the host via the pseudo-scriptlet above, the actual GPU driver will be able to do its job with the fans; the alternatives are rebooting the system, or just assuming that the main GPU doesn't blow 300 °C air onto the VFIO one while the latter refuses to acknowledge it.

[–] jvrava9@lemmy.dbzer0.com 2 points 10 months ago (1 child)

Hello fellow RX 580 owner! I really have no idea why the whole permission thing is happening; the file has 777 perms and is owned by root. The Arch Wiki thread was the only mention of this problem that I found at all. I am indeed using virt-manager - would I just need to add an RX 580 as a PCI host device and run the script after shutting down the VM?

PS: If I do as mentioned above, I get

Error starting domain: internal error: QEMU unexpectedly closed the monitor (vm='Atlas'): 2024-01-14T19:42:41.288341Z qemu-system-x86_64: -device {"driver":"vfio-pci","host":"0000:03:00.0","id":"hostdev0","bus":"pci.5","addr":"0x0"}: vfio 0000:03:00.0: group 31 is not viable
Please ensure all devices within the iommu_group are bound to their vfio bus driver.

If I do it with 0000:05:00.0, I get a black screen as output. I am pretty sure that 0000:05:00.0 is the unused GPU. How did you physically identify which GPU is which, so you knew which IOMMU group to pass?

[–] Sonotsugipaa@lemmy.dbzer0.com 1 points 10 months ago* (last edited 10 months ago)

would I just need to add a RX580 as a PCI host device and run the script after shutting down the vm?

Indeed, although don't just copy-paste the snippet I wrote: I wrote it on the spot without testing it, so you have to tweak it to run the function for the PCI device(s) in the IOMMU group of the GPU you want to pass through. In my case it's just 0000:03:00.0 and 0000:03:00.1; since the GPUs are the same, you will probably also need only two.

You can procrastinate on doing all that; I'm fairly certain nothing will blow up.
Unfortunately my setup is very complex - I hacked together a framework of Zsh scripts that use libvirt hooks - otherwise I would just copy them here.
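
If you're curious, though, the bare-bones shape of a hook is just an executable script at /etc/libvirt/hooks/qemu that libvirt runs with the guest name and an operation as arguments. A rough, untested sketch, using your VM's name "Atlas" and my 0000:03:00.x addresses as placeholders:

#!/bin/sh
# libvirt calls this as: qemu <guest> <operation> <sub-operation> <extra>
guest="$1"
operation="$2"

# Once the guest has released its resources, give the GPU back to the host
if [ "$guest" = "Atlas" ] && [ "$operation" = "release" ]; then
    echo -n 1 > /sys/bus/pci/devices/0000:03:00.0/remove
    echo -n 1 > /sys/bus/pci/devices/0000:03:00.1/remove
    echo -n 1 > /sys/bus/pci/rescan
fi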


I didn't mean to say that you must use 0000:05:00.0 specifically, only to follow the rest of the guide without the script - I'm not sure about identifying the correct device, since I did that a long time ago, but I am pretty sure the Arch Wiki guide has a way to list GPUs.

The error you get is self-explanatory: along with 0000:05:00.0 (or whatever device), you must also list the other devices in the same IOMMU group, which you can identify at the same time as the PCI device(s) you want to pass through.
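
You can double-check which devices share a group straight from sysfs; swap in the address of the GPU you intend to pass through:

ls /sys/bus/pci/devices/0000:05:00.0/iommu_group/devices/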


EDIT: I skimmed through the guide; apparently it's extremely convoluted, so I'll try to make a director's cut.

The following script should allow you to see how your various PCI devices are mapped to IOMMU groups. If it does not return anything, you either have not enabled IOMMU support properly or your hardware does not support it.

#!/bin/bash
shopt -s nullglob
for g in $(find /sys/kernel/iommu_groups/* -maxdepth 0 -type d | sort -V); do
    echo "IOMMU Group ${g##*/}:"
    for d in $g/devices/*; do
        echo -e "\t$(lspci -nns ${d##*/})"
    done;
done;

Example output:

IOMMU Group 1:
	00:01.0 PCI bridge: Intel Corporation Xeon E3-1200 v2/3rd Gen Core processor PCI Express Root Port [8086:0151] (rev 09)
IOMMU Group 13:
	06:00.0 VGA compatible controller: NVIDIA Corporation GM204 [GeForce GTX 970] [10de:13c2] (rev a1)
	06:00.1 Audio device: NVIDIA Corporation GM204 High Definition Audio Controller [10de:0fbb] (rev a1)

An IOMMU group is the smallest set of physical devices that can be passed to a virtual machine. For instance, in the example above, both the GPU in 06:00.0 and its audio controller in 06:00.1 belong to IOMMU group 13 and can only be passed together.

As for identifying which of the two is which GPU, your only safe bet is to determine which monitor is connected to which PCI device - which I'm not sure how to do off the top of my head; I went with trial and error, and hard resets.
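
If you want something less brutal than hard resets, this might narrow it down: the DRM subsystem reports connector status per card, and each card symlinks back to its PCI device. Untested by me, so treat it as a hint:

# Show which connectors report an attached monitor
for c in /sys/class/drm/card*-*/status; do
    echo "$c: $(cat "$c")"
done

# Map each cardN back to its PCI address
ls -l /sys/class/drm/card*/device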