Windows 10 virtual machine on PopOS 20.04 with GPU passthrough
In this guide I will go through how I got Windows 10 running in a virtual machine under PopOS 20.04 with GPU passthrough. I will pass through an Nvidia RTX 2080 GPU in the first PCI-e slot of the motherboard to Windows 10. Here are the full specs of my system:
AMD Ryzen 3950X (later updated to 5950X)
Asrock X570 Creator motherboard (BIOS version: 3.30, AGESA version: AMD AGESA Combo-AM4 V2 18.104.22.168 patch D)
MSI RTX 2080 Sea Hawk EK X GPU (on the first PCI-e x16 slot, for Windows 10)
AMD Radeon VII GPU (on the second PCI-e x16 slot, for PopOS)
Inateck 2 port USB 3.0 PCI-E card (on the third PCI-e x4 slot, for Windows 10)
PopOS 20.04 as host OS
Windows 10 as guest OS
Installing SSH server
My second recommendation is to install an SSH server on your PopOS machine, especially if you are going to pass through the GPU in the first PCI-e slot of your motherboard. You might run into a problem where, after you have blacklisted the GPU and restarted your computer, PopOS can't start the X Server (basically meaning a blank screen with no graphical interface). We will talk more about how to solve this problem later, if you happen to encounter it. With an SSH server running, you can connect to your computer remotely from another machine. To install the SSH server, run the following command:
sudo apt install openssh-server
You should first enable IOMMU in the BIOS (IOMMU allows us to pass through PCI-e devices). For example, on my Asrock motherboard the setting can be found under: Advanced -> AMD CBS -> NBIO Common Settings -> IOMMU
You might also need to enable SR-IOV support under Advanced -> PCI Configuration -> SR-IOV Support, and you might need to enable CSM under Boot -> CSM (Compatibility Support Module) -> CSM.
Installing Qemu and enabling IOMMU
First we will start by installing Qemu (which handles the virtualization), Virt-manager (a graphical interface for Qemu that makes setting up the VM a bit easier) and the other needed software:
sudo apt install qemu-kvm qemu-utils virt-manager libvirt-daemon-system libvirt-clients bridge-utils ovmf
Enabling IOMMU for AMD CPUs:
sudo kernelstub -o "amd_iommu=on iommu=pt"
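Not from the original guide, but a common snippet for verifying (after the reboot later in this guide) that IOMMU is actually active and for seeing which devices share an IOMMU group — devices in the same group generally have to be passed through together:

```shell
# List every IOMMU group and the devices in it (run after rebooting).
# An empty /sys/kernel/iommu_groups means IOMMU is not enabled.
for g in /sys/kernel/iommu_groups/*; do
  echo "IOMMU group ${g##*/}:"
  for d in "$g"/devices/*; do
    echo "  $(lspci -nns "${d##*/}")"
  done
done
```

If your GPU sits in a group together with unrelated devices, you may need to move it to another slot or look into ACS override patches.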
Identifying PCI bus / device ids
Next we want to find out the PCI bus ids and device ids of the GPU and USB card that we want to pass through:
sudo lspci -nnv
This should print out a really long list of all the devices in your computer. Look through this list and find your GPU and USB card. For example, here's my Nvidia card:
35:00.0 VGA compatible controller [0300]: NVIDIA Corporation TU104 [GeForce RTX 2080 Rev. A] [10de:1e87] (rev a1) (prog-if 00 [VGA controller])
	Subsystem: Micro-Star International Co., Ltd. [MSI] TU104 [GeForce RTX 2080 Rev. A] [1462:3728]
	Flags: fast devsel, IRQ 255
	Memory at dc000000 (32-bit, non-prefetchable) [size=16M]
	Memory at 90000000 (64-bit, prefetchable) [size=256M]
	Memory at a0000000 (64-bit, prefetchable) [size=32M]
	I/O ports at e000 [size=128]
	Expansion ROM at dd000000 [disabled] [size=512K]
	Capabilities:
	Kernel driver in use: vfio-pci
	Kernel modules: nvidiafb, nouveau, nvidia_drm, nvidia

35:00.1 Audio device [0403]: NVIDIA Corporation Device [10de:10f8] (rev a1)
	Subsystem: Micro-Star International Co., Ltd. [MSI] Device [1462:3728]
	Flags: fast devsel, IRQ 255
	Memory at dd080000 (32-bit, non-prefetchable) [disabled] [size=16K]
	Capabilities:
	Kernel driver in use: vfio-pci
	Kernel modules: snd_hda_intel

35:00.2 USB controller [0c03]: NVIDIA Corporation Device [10de:1ad8] (rev a1) (prog-if 30 [XHCI])
	Subsystem: Micro-Star International Co., Ltd. [MSI] Device [1462:3728]
	Flags: fast devsel, IRQ 80
	Memory at a2000000 (64-bit, prefetchable) [size=256K]
	Memory at a2040000 (64-bit, prefetchable) [size=64K]
	Capabilities:
	Kernel driver in use: xhci_hcd

35:00.3 Serial bus controller [0c80]: NVIDIA Corporation Device [10de:1ad9] (rev a1)
	Subsystem: Micro-Star International Co., Ltd. [MSI] Device [1462:3728]
	Flags: fast devsel, IRQ 255
	Memory at dd084000 (32-bit, non-prefetchable) [disabled] [size=4K]
	Capabilities:
	Kernel driver in use: vfio-pci
	Kernel modules: i2c_nvidia_gpu
And here’s my USB card:
2b:00.0 USB controller [0c03]: Fresco Logic FL1100 USB 3.0 Host Controller [1b73:1100] (rev 10) (prog-if 30 [XHCI])
	Subsystem: Fresco Logic FL1100 USB 3.0 Host Controller [1b73:1100]
	Flags: fast devsel, IRQ 27
	Memory at eaa00000 (64-bit, non-prefetchable) [size=64K]
	Memory at eaa11000 (64-bit, non-prefetchable) [size=4K]
	Memory at eaa10000 (64-bit, non-prefetchable) [size=4K]
	Capabilities:
	Kernel driver in use: vfio-pci
	Kernel modules: xhci_pci
For my GPU, I would write down PCI bus id 35:00.0 and device ids 10de:1e87, 10de:10f8, 10de:1ad8 and 10de:1ad9. For my USB card, I would write down PCI bus id 2b:00.0 and device id 1b73:1100.
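As a sanity check (my own addition, not from the original guide), you can pull the vendor:device id out of a single lspci -nn line with grep, since it is the only hhhh:hhhh hex pair in the line:

```shell
# Extract the vendor:device id from one "lspci -nn" line.
line='2b:00.0 USB controller [0c03]: Fresco Logic FL1100 USB 3.0 Host Controller [1b73:1100] (rev 10)'
echo "$line" | grep -oE '[0-9a-f]{4}:[0-9a-f]{4}' | tail -n 1
# prints: 1b73:1100
```

The bus id (2b:00.0) and the class code ([0c03]) don't match the four-hex-digits pattern, so only the device id comes out.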
Blacklisting your GPU and USB card
Next we will blacklist the GPU that we want to pass through to Windows. Before we do, there are a few things you should know. First, the BIOS menu will still show up on the first GPU, but when PopOS starts to load, you will see video output only on the second GPU. Second, you might not get any video output at all in PopOS if the X Server fails to start, but don't panic, we will fix that problem later. To blacklist the GPU and USB card, run the following command, but remember to replace the ids with your own device ids:
sudo kernelstub --add-options "vfio-pci.ids=10de:1e87,10de:10f8,10de:1ad8,10de:1ad9,1b73:1100"
If you later want to undo this (for example when removing the passthrough setup), the matching command is:
sudo kernelstub --delete-options "vfio-pci.ids=10de:1e87,10de:10f8,10de:1ad8,10de:1ad9,1b73:1100"
Next we will run the following command (certain Windows versions require it):
sudo kernelstub --add-options "kvm.ignore_msrs=1"
Next we will disable the EFI/VESA framebuffer during PopOS boot (otherwise PopOS will reserve the first GPU, and the GPU passthrough might not work):
sudo kernelstub --add-options "video=efifb:off"
The last step before rebooting is to blacklist the Nvidia/Nouveau drivers in PopOS, so that the Nvidia drivers don't try to initialize the GPU (which would make the GPU passthrough fail). Edit the following file (in my example I'm using Emacs to edit it):
sudo emacs /etc/modprobe.d/blacklist.conf
Add the following lines to the end of the file:
blacklist nvidia
blacklist nouveau
Finally, reboot your computer. After rebooting you can run the "sudo lspci -nnv" command again, and now your GPU and USB card devices should show "Kernel driver in use: vfio-pci" (except for the USB-C controller on the Nvidia GPU, if your card happens to have a USB-C port). For example, my VGA controller shows the following:
35:00.0 VGA compatible controller [0300]: NVIDIA Corporation TU104 [GeForce RTX 2080 Rev. A] [10de:1e87] (rev a1) (prog-if 00 [VGA controller])
	Subsystem: Micro-Star International Co., Ltd. [MSI] TU104 [GeForce RTX 2080 Rev. A] [1462:3728]
	Flags: fast devsel, IRQ 255
	Memory at dc000000 (32-bit, non-prefetchable) [size=16M]
	Memory at 90000000 (64-bit, prefetchable) [size=256M]
	Memory at a0000000 (64-bit, prefetchable) [size=32M]
	I/O ports at e000 [size=128]
	Expansion ROM at dd000000 [disabled] [size=512K]
	Capabilities:
	Kernel driver in use: vfio-pci
	Kernel modules: nvidiafb, nouveau, nvidia_drm, nvidia
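To check everything at once without scrolling through the full lspci listing, here's a small sketch (my own addition, not from the original guide; replace the bus ids with your own) that prints the relevant kernel command line options and the driver bound to each passthrough device:

```shell
# Confirm the kernelstub options reached the kernel command line.
tr ' ' '\n' < /proc/cmdline | grep -E 'iommu|vfio|efifb|ignore_msrs'

# Check which driver each passthrough device is bound to.
# Replace 35:00.0 and 2b:00.0 with your own PCI bus ids.
for dev in 35:00.0 2b:00.0; do
  echo "$dev: $(lspci -ks "$dev" | grep 'Kernel driver in use')"
done
```

If a device still shows its normal driver instead of vfio-pci, double check the vfio-pci.ids value and reboot again.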
Problems after rebooting
If you didn't have any problems after rebooting, you can skip this section. But if you did have problems, like no video signal on your second GPU after PopOS has started to boot (no PopOS boot messages and no login screen), then you might need to edit your xorg.conf. If you encrypted your PopOS hard drive, then after booting, wait a bit, type your encryption password blindly (even though you can't see the encryption prompt), and then wait a bit again. Then try to SSH into your computer from another computer. Create an xorg.conf.d folder under /etc/X11/:
sudo mkdir /etc/X11/xorg.conf.d
Then create the 10-gpus.conf file in that folder:
sudo emacs /etc/X11/xorg.conf.d/10-gpus.conf
And then add the following to the file (this applies only when your host GPU is an AMD card). Note that the BusID below is an example value, not from my system: replace it with your own host GPU's PCI bus id from lspci, converted from hexadecimal to decimal (for example hex 2b becomes decimal 43):
# For AMD
# NOTE: example BusID, replace with your own host GPU's bus id
Section "Device"
    Identifier "AMD"
    Driver "amdgpu"
    BusID "PCI:43:0:0"
EndSection
Then disconnect SSH and reboot your computer. Hopefully now you can see PopOS booting and the encryption/login screen.
Creating a virtual disk for the Windows VM
Now we will create a virtual disk that will hold our Windows install. There are different approaches for this (for example, you could pass through a whole NVMe SSD), but I have usually just created an img-file. One advantage of this is that if you want to make a backup of the Windows install, you just copy the img-file. One disadvantage is that the file will be as big as the virtual disk, even if you haven't filled the disk up. We can create the virtual disk with the following command. The 512G option means the virtual disk will be 512 GB; you can make it bigger or smaller as you want. The virtual disk can also be enlarged later if you find that you need more space (I will cover this in a later post).
fallocate -l 512G win10.img
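To see why the img-file is full-size from the start, here is a small test you can run first (my own sketch with a tiny file; fallocate reserves all the blocks immediately):

```shell
# Create a 1 MiB test file and confirm its size in bytes.
fallocate -l 1M demo.img
stat -c %s demo.img
rm demo.img
```

If you would rather have a disk image that grows on demand, a qcow2 image created with qemu-img is the usual alternative, at the cost of slightly more overhead.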
Download necessary files
Download the Windows 10 installation ISO from Microsoft (https://www.microsoft.com/software-download/windows10ISO). Then download the Virtio drivers for the virtual disk, as otherwise the Windows installer won't recognize the virtual disk. The Virtio drivers can be found at https://fedorapeople.org/groups/virt/virtio-win/direct-downloads/ (download the stable-virtio version that has a version number in it and ends with .iso). You can optionally download your GPU drivers and put them on a USB stick, if you prefer to install the GPU drivers without Windows' automatic driver installation.
(Optional) Download vBIOS
This step isn't required if you are already using new enough Nvidia drivers (the passthrough will work with those without it). You can of course try without this step first and come back to it if you encounter Error 43.
To fix Error 43, we will first download a copy of the GPU's vBIOS from https://www.techpowerup.com/vgabios/. Find the exact GPU model that you are using and download its vBIOS. You can't use another manufacturer's vBIOS or another version: it needs to be the same manufacturer and exactly the same model, otherwise it can be harmful to your GPU.
Then we create a vgabios folder under /usr/share (the /usr/share folder is accessible to Qemu; if we put the vBIOS under our own home directory, PopOS' AppArmor security might block access to it):
sudo mkdir /usr/share/vgabios
Then we go to the folder with the downloaded vBIOS and copy it to the vgabios folder (for example, my ROM-file is named MSI.rom):
sudo cp MSI.rom /usr/share/vgabios/MSI.rom
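As a quick sanity check (my own addition, not from the original guide): a plain PCI option ROM starts with the two-byte signature 55 aa, so you can peek at the first bytes of the downloaded file. If they differ, the dump may carry an extra header or be for the wrong device:

```shell
# Print the first two bytes of the ROM; a plain option ROM starts with 55 aa.
od -An -tx1 -N2 /usr/share/vgabios/MSI.rom
```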
We will add this vBIOS file later to the VM settings, when we get to that point.
Creating the VM in Virt-manager
After all the pre-configuration we are ready to create the VM in Virt-manager. Launch Virt-manager and select File -> New Virtual Machine.
From the new window select "Local install media (ISO image or CDROM)".
In the next window click “Browse”, then “Browse Local”, and then select the Windows installer ISO that you downloaded previously.
Next you can choose how much memory and how many CPUs (threads) you want to assign to your VM. Leave some threads and memory for PopOS (don't assign everything to Windows). For example, I used 24 threads (12 cores) and 49152 MiB.
Next we will assign the virtual disk that we created beforehand, where Windows will be installed. Select the "Select or create custom storage" option, click "Manage", then "Browse Local", and find your virtual disk.
In the next window enable the “Customize configuration before install” option and hit Finish.
Now we will add the Virtio ISO that we downloaded previously. Click “Add Hardware”.
Choose "Storage", then set "Device Type" to "CDROM device" and "Bus Type" to "SATA". Click "Manage", then "Browse Local", and find the location of the Virtio ISO.
Next we will add the GPU devices and the USB card that we want to pass through. At the beginning of this guide we wrote down their PCI bus ids. Click "Add Hardware" again, select "PCI Host Device", and pick one of the PCI bus ids that you wrote down. Repeat this until you have added all the devices that you want to pass through.
(Optional step) Depending on the Nvidia drivers, you might encounter Error 43 inside Windows. To avoid this we will give the VM the fresh copy of the GPU's vBIOS that we downloaded earlier. Select your VGA controller (in my case it's 0000:32:00:0), go to the XML tab, and add the line <rom bar="on" file="/usr/share/vgabios/MSI.rom"/>. Then click Apply. You don't need to add the vBIOS to the other devices related to your GPU.
During the Windows installation you might notice that Windows can't find a disk to install the OS on. This is because we need to load the Virtio drivers that we downloaded earlier. Click "Load driver" and point it to the Virtio disc. The drivers are inside the "amd64/w10" folder. Now you should be able to continue with the installation.
After Windows has finished installing, we can install the Nvidia drivers. If you disabled your network connection, insert the USB stick with the GPU drivers and install from there. If you didn't, Windows should automatically recognize your GPU and install the correct drivers. After the drivers have finished installing, you should be able to extend the desktop to the monitor connected to your GPU. You can disable the "virtual monitor" from Windows' display settings, or even delete the Spice server/screen in your VM settings to get rid of the "virtual monitor" completely.
Now you should have a Windows VM up and running! I hope this guide has been helpful. If you notice any errors in the guide, or if you have encountered any problems, please leave a comment. I have posted some problems that I have encountered myself below.
Slow upload speed with Dropbox
I noticed that I sometimes had really slow upload speeds in Dropbox for some reason. I haven't yet found out why this happens when I use the shared ethernet connection. Currently I have passed through the motherboard's second Ethernet port to the VM (my motherboard has two Ethernet ports), and I haven't noticed any slowdowns since.
VM not starting up
If for some reason your VM doesn't start, for example you get an error but the message doesn't make clear what the problem is, you can check Qemu's log file (replace "win10" with the name that you gave your VM in Virt-manager):
sudo emacs /var/log/libvirt/qemu/win10.log
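You can also follow the log live from a second terminal while you try to start the VM (my own habit, not from the original guide), so the error shows up the moment it happens:

```shell
# Stream new log lines as the VM starts; press Ctrl-C to stop.
sudo tail -f /var/log/libvirt/qemu/win10.log
```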
How to check what you have added to your kernelstub
If you want to check what you have added to your kernelstub, for example because you have some problems with the VM and want to double check that everything is correct, or because you bought a new GPU and want to remove the old one but don't remember exactly what you added, you can quickly check with kernelstub's print option:
sudo kernelstub -p
For example, I get the following output (remember not to remove the initrd or root options, as they tell which hard drive to use to boot your PopOS):
initrd=\EFI\Pop_OS-********-****-****-****-************\initrd.img root=UUID=********-****-****-****-************ ro kvm.ignore_msrs=1 amd_iommu=on iommu=pt video=efifb:off vfio-pci.ids=10de:1e87,10de:10f8,10de:1ad8,10de:1ad9,1b73:1100
Ignored rdmsr problem when booting PopOS
If you are booting PopOS but the bootup fails with a black screen and the following error message (the numeric values might differ for you):
[27.508658] kvm: ignored rdmsr: 0x48b data 0x0
The reason might be that the IOMMU and SR-IOV settings in your motherboard's BIOS have been disabled by a BIOS reset. This can happen when you update your motherboard's BIOS, or for example if there is a power blackout while you are using the computer and the resulting improper shutdown causes a BIOS reset. Check in your BIOS settings that IOMMU and SR-IOV are enabled (and that the other settings are also correct).
21 November 2021: added the ignored rdmsr problem
Useful links and sources:
Few useful guides from Mathias Hueber that have helped me a lot:
Level1Techs have really good forums about VMs:
Discussion about vBIOS and Pop OS’ AppArmor: