Dorky dork dork stuff (Virtualization stuff)

Update!: If you got here looking for a Code 43 fix, scroll to the bottom

So a few months ago, I embarked on a hack to pass my Nvidia GTX 1060 through to Windows 10 running in a virtual machine on my PC. The entire aim was to stop running Windows as the primary operating system, but still keep Windows on the machine so I could play some games and keep running Quicken and Microsoft Office 2013.

I was successful. It worked great.

Then I upgraded from Ubuntu 19.04 to 19.10, and that upgrade completely broke the KVM/libvirt/QEMU virtualization stack.

So, I had to tear it all down and rebuild it. Which was fine. I lost no data. But the Windows 10 VM came back without the pass-through, running the Spice QXL virtual display driver. No worries, I could re-do the pass-through.

But then I got to thinking… how many times had I used the VM for a game that required the big honkin’ GTX? Maybe a dozen. In months. How many times had I lamented the slow response of the main desktop while it rendered Ubuntu’s myriad visual effects? A lot. As it turns out, I might be able to get those games to run under Linux with Wine anyway, so maybe I didn’t need Windows for that part.

But I still need it for video overlay processing on my car videos. I can do all the editing and stuff on Linux, but there’s no Dashware or RaceRender available.

So I flipped the script. I’m using an AMD A10 APU, which has four compute cores and SIX Radeon R7 graphics cores on the die. Why not swap roles? Use the Nvidia card for the desktop that I use most of the time, and the Radeon cores in the VM for the stuff I use some of the time. I’d get accelerated graphics and access to video encode/decode functions inside the VM, but keep the super high performance of the Nvidia GTX for my main desktop.

Turns out it was possible. The AMD Radeon cores on the chip are in their own IOMMU group, so I could wall them off from the host and pass them through, and I did.
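For anyone attempting the same thing, this is roughly how you can check the grouping before committing. A sketch only; the sysfs path is standard, but the bus addresses and devices will differ on your machine:

# Walk every IOMMU group and list the devices in it. The Radeon should
# sit in a group containing nothing the host needs to keep.
for g in /sys/kernel/iommu_groups/*; do
    echo "IOMMU group ${g##*/}:"
    for d in "$g"/devices/*; do
        echo "    $(lspci -nns "${d##*/}")"
    done
done

If the Radeon shares a group with something the host can’t give up, a clean pass-through gets much harder.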

And RaceRender works beautifully:

What you’re seeing is my main desktop. On the right is a Windows Remote Desktop connection to the Win10 virtual machine running on my system. The rest of the stuff is running in the host OS.

I’m running a video overlay encode in RaceRender, and as you can (almost) see from the Task Manager output, it’s using the Video Encode function of the Radeon R7 cores I’ve passed through. I’m still using the Spice QXL as the main display, but Rdesktop doesn’t care about that, and Windows can use the R7 to render stuff and then fire it out the RDP connection to a client that supports it.

So, my cake is being had and eaten. Until the next Ubuntu upgrade, which I’m sure will trash even this.

Update!

Shortly after getting this working, it stopped. The AMD driver wouldn’t load, and Windows just reported the frustratingly ubiquitous “Code 43” as the reason.

Much searching turned up nothing. Then on a whim I did an lsmod on my host. Turns out that the amdgpu kernel module had loaded on the host, which grabbed the GPU. By the time the Windows VM had started, it could no longer access it, despite the IOMMU/VFIO config that was supposed to wall it off from the host OS.
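For the record, the two commands that told the tale; 1002 is AMD’s PCI vendor ID, and the -k flag shows which kernel driver has claimed each device:

lsmod | grep amdgpu
lspci -nnk -d 1002:

When the host has grabbed the card, the second command reports “Kernel driver in use: amdgpu” for the Radeon instead of vfio-pci.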

Blacklisting the amdgpu module in /etc/modprobe.d/blacklist.conf did the trick:

blacklist amdgpu
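If blacklisting alone ever stops being enough (say, a future host that needs amdgpu for a second card), a common alternative is to hand the device to vfio-pci explicitly at boot. A sketch with a hypothetical device ID; substitute the vendor:device pair that lspci -nn prints for your Radeon:

# /etc/modprobe.d/vfio.conf
# Claim the Radeon for vfio-pci before amdgpu can bind it.
options vfio-pci ids=1002:9874
softdep amdgpu pre: vfio-pci

On Ubuntu, follow that with sudo update-initramfs -u so the change is baked in before the next reboot.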

A reboot of the host cleared everything out, and I was back up and running.
