My Single GPU Passthrough Setup Using an MSI GTX 1070 Ti
In this post I want to quickly share my setup for a Windows 11 VM dedicated to gaming and media editing. “GPU passthrough” is a special VM setup (using libvirt/QEMU) that gives the guest direct access to the physical GPU device, allowing for near-native performance. In the case of a single GPU passthrough, the host completely transfers ownership of its only GPU to the VM, so only one of the two can use it at a time.
My other options would have been:
- Use a dual GPU setup with my GTX 970. This didn’t work for some reason: passthrough refuses to work when there are two GPUs in the system. I suspect this could be a motherboard issue.
- Use the integrated GPU for the host and a KVM switch to switch between host and VM. This would be the ideal solution, since I do not game on my Ubuntu machine and therefore don’t need a good GPU for the host. However, my integrated Intel HD Graphics 4600 is not capable of driving two 4K monitors at 60 Hz simultaneously.
Therefore I decided that a single GPU passthrough would be the best solution. But why not just dual-boot?
- I don’t use Windows that often anymore, so dedicating an entire disk/partition to it seems stupid.
- Windows likes to mess with my system clock and bootloader, which I want to avoid.
- I want an easy way of backing up Windows. If Windows runs in a VM I can just make a copy of the disk image as often as I want.
- I want to upgrade to Windows 11 but my system does not support TPM! In a VM this is a non-issue, as shown below.
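A nice side effect of the VM route is that libvirt can emulate a TPM 2.0 device for the guest via swtpm. A minimal sketch of the relevant XML, assuming the swtpm package is installed on the host:

```xml
<tpm model="tpm-crb">
  <backend type="emulator" version="2.0"/>
</tpm>
```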
The Setup
First, my specs:
| Component | Name |
|---|---|
| CPU | Intel Core i7-4790K @ 8x 4.4 GHz |
| GPU | MSI GeForce GTX 1070 Ti |
| RAM | A mere 16 GB! |
| Mainboard | ASRock Fatal1ty Z97 Killer (wow, what an edgy name!) |
Thanks to this GitHub repo and the Arch Linux wiki for providing most of the information I needed. I advise you to follow those guides if you want to set up your own VM. Below are some individual quirks I’ve encountered, plus my hook scripts.
1. Dump the VBIOS
In the case of my MSI GTX 1070 Ti I had to dump the card’s VBIOS, as the card would refuse to start in the VM without it. This can be done quite easily (the first echo enables reading the ROM, the last one disables it again):

# echo 1 > /sys/bus/pci/devices/0000:01:00.0/rom
# cat /sys/bus/pci/devices/0000:01:00.0/rom > /usr/share/vgabios/gtx1070ti.rom
# echo 0 > /sys/bus/pci/devices/0000:01:00.0/rom
Remember to replace the PCI address with your own, which you can look up with `lspci -nn`.
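For reference, the card shows up roughly like this in the output (the `[10de:1b82]` part is the vendor:device ID; the exact wording will differ from system to system):

```
01:00.0 VGA compatible controller [0300]: NVIDIA Corporation GP104 [GeForce GTX 1070 Ti] [10de:1b82] (rev a1)
01:00.1 Audio device [0403]: NVIDIA Corporation GP104 High Definition Audio Controller [10de:10f0] (rev a1)
```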
Now just add the ROM file to your XML configuration, either via the GUI or with `virsh edit`:
<hostdev mode="subsystem" type="pci" managed="yes">
  <source>
    <address domain="0x0000" bus="0x01" slot="0x00" function="0x0"/>
  </source>
  <address type="pci" domain="0x0000" bus="0x05" slot="0x00" function="0x0"/>
  <rom file="/usr/share/vgabios/gtx1070ti.rom"/>
</hostdev>
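Note that the `<address>` inside `<source>` is the card’s address on the host, while the outer `<address type="pci">` is where the device appears inside the guest; libvirt normally fills in the latter automatically.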
2. Pass through the USB controller
Passing through individual USB devices caused performance issues for me. This was especially noticeable with my USB headset, which suffered from audio lag. In that case, just add the USB controller directly.
Use `lspci -t` to find out which controllers you need to add.
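The controller then goes into the XML as another `<hostdev>` entry, just without a ROM file. A minimal sketch, assuming the controller sits at `00:14.0` like mine (the same address the hook scripts below detach):

```xml
<hostdev mode="subsystem" type="pci" managed="yes">
  <source>
    <address domain="0x0000" bus="0x00" slot="0x14" function="0x0"/>
  </source>
</hostdev>
```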
My hook scripts
These are mostly just copied from this guide. However, I also detach/reattach the USB controller that I’m passing through.
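If you use the libvirt hook helper from that guide, the scripts are picked up from a directory tree keyed to the VM’s name. Mine looks roughly like this (`win11` being my libvirt domain name; adjust to yours):

```
/etc/libvirt/hooks/
└── qemu.d/
    └── win11/
        ├── prepare/
        │   └── begin/
        │       └── start.sh
        └── release/
            └── end/
                └── finish.sh
```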
start.sh
#!/bin/bash
# Helpful to read output when debugging
set -x
# Stop display manager
systemctl isolate multi-user.target
systemctl stop display-manager.service
## Uncomment the following line if you use GDM
# killall gdm-x-session
# Unbind VTconsoles
echo 0 > /sys/class/vtconsole/vtcon0/bind
echo 0 > /sys/class/vtconsole/vtcon1/bind
# Unbind EFI-Framebuffer
echo efi-framebuffer.0 > /sys/bus/platform/drivers/efi-framebuffer/unbind
# Avoid a race condition by waiting 2 seconds. This can be calibrated to be shorter or longer if required for your system
sleep 2
# Unload the NVIDIA kernel modules
modprobe -r nvidia_drm nvidia_modeset nvidia_uvm nvidia
# Detach the USB controller (00:14.0) and the GPU plus its audio function (01:00.0/01:00.1) from the host
virsh nodedev-detach pci_0000_00_14_0
virsh nodedev-detach pci_0000_01_00_0
virsh nodedev-detach pci_0000_01_00_1
# Load VFIO Kernel Module
modprobe vfio-pci
finish.sh
#!/bin/bash
set -x
# Reattach the USB controller and the GPU to the host
virsh nodedev-reattach pci_0000_00_14_0
virsh nodedev-reattach pci_0000_01_00_0
virsh nodedev-reattach pci_0000_01_00_1
# Reload nvidia modules
modprobe nvidia
modprobe nvidia_modeset
modprobe nvidia_uvm
modprobe nvidia_drm
# Rebind VT consoles
echo 1 > /sys/class/vtconsole/vtcon0/bind
# Some machines might have more than 1 virtual console. Add a line for each corresponding VTConsole
#echo 1 > /sys/class/vtconsole/vtcon1/bind
# Query GPU info; this also serves to wake the card back up
nvidia-xconfig --query-gpu-info > /dev/null 2>&1
echo "efi-framebuffer.0" > /sys/bus/platform/drivers/efi-framebuffer/bind
# Restart Display Manager
systemctl start display-manager.service
How do I debug this?
Naturally, a single GPU passthrough is painful to debug, since you cannot access your host machine if something goes wrong (plus you’re staring at a black screen!). What I did was SSH into the host from my laptop and watch the output of `dmesg -W`.
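Roughly, from a second machine (hostnames are placeholders):

```
$ ssh me@my-desktop
$ sudo dmesg -W              # follow new kernel messages (vfio binds, GPU resets, ...)
$ journalctl -f -u libvirtd  # on most systemd distros, the hook scripts' set -x output lands here
```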
If your VM goes bonkers and/or you’re stuck on a black screen, you can also force off the VM with `virsh destroy <vm-name>`. This will also call the ‘finish’ hook script and should restore your system just fine.
Happy emulating!