What happens when I don’t have a car to tinker with…

An interesting confluence of events occurred. With the departure of the Buick for Riyadh, I’m left with no project. Even when that car isn’t broken, there’s always something to do on it. Something to improve, inspect, or clean.

So what do I do with myself now?

Well, I cleaned the garage out so the girlfriend could park her car inside, at least until the Buick comes back in December. That killed a day. Just one.

Then I started watching YouTube videos, like one of them silly millennials. There’s this nut called Linus (not Torvalds) who runs a channel called Linus Tech Tips. He’s got a pile of sponsors who send him all kinds of free stuff, and he does crazy things with it.

But one of the little projects he demonstrated was GPU passthrough on a Linux host. He ran a Windows 10 virtual machine on a Linux system and passed a GPU through to the VM so it could run games at full speed. I thought that was damn cool.

And as it turns out, I had two computers running full time in the house. A server in the basement ran Linux and two VMs – one running Plex Media Server, the other a bastion host for getting into the house from outside. It lived in a giant case with six hard drives in an array to store all my music and movies. It ran a cheap mainboard I bought specifically because it had eight SATA ports and an el-cheapo AMD FM2+ socket, into which I stuck an A10 CPU. 16GB of DDR3 memory capped the combo.

Upstairs was my workstation. I’d recently bitten the bullet on an NVIDIA GTX 1060 for the desktop, mostly so I could run driving sims like iRacing and Project Cars without stuttering. The workstation was an HP Pavilion I got at Costco six years ago. Its motherboard crapped out about a year ago, and I budget-rescued it with another cheap motherboard and an AMD A8 CPU. Another 16GB of memory.

But the workstation had a problem – the onboard Ethernet controller likes to quit working when it gets hot. So I had a PCI-E Ethernet controller in it for a while, but it got displaced by the new video card. That left me with unreliable networking on the desktop. No bueno.

So, I said, “Screw it! Let’s do this!”

Before taking the desktop apart, I used the Microsoft Sysinternals Disk2vhd utility to back up the hard drive to a VHD file, the format used by Microsoft’s Hyper-V virtualization system. This turned my 500GB C: drive into a 177GB disk file that could be plugged into a hypervisor. I also went ahead and installed the Red Hat VirtIO drivers so that when it booted up under virtualization, it could see and use the disks. More on that later.
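Disk2vhd is a point-and-click tool, but it also takes command-line arguments, so the whole backup can be a one-liner from an elevated prompt (the output path here is just an example):

    disk2vhd c: d:\backup\desktop.vhd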

Apart came the desktop. The memory, SSD, and video card came out. Into the server case they went, and upstairs the server came. I booted Ubuntu 18.04 off a Live CD, and installed it on a 250GB SSD that I salvaged from a dead laptop.

So, the quick specs:

  • Motherboard: BIOSTAR something-or-other from Newegg
  • CPU: AMD A10, 3.4GHz, 4 compute cores, 6 Radeon GPU cores
  • 32GB DDR3 RAM
  • 1x 250GB Crucial SSD (Linux main boot drive)
  • 1x 500GB Samsung EVO SSD (Windows 10 boot drive)
  • 6x 1TB 3.5″ spinners (5 Hitachi, 1 Western Digital), configured in a RAID-6
  • NVIDIA GTX 1060 with 3GB of memory

The installation detected the RAID and the volume group on it. Paydirt.

So, another reboot to connect the desktop’s old SSD, mount it, and copy the VHD file into my images directory.
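Nothing fancy there – a mount and a copy (device name and paths are examples; I keep images in libvirt’s default spot):

    sudo mount /dev/sdc2 /mnt
    sudo cp /mnt/desktop.vhd /var/lib/libvirt/images/
    sudo umount /mnt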

After verifying the VMs I already had still worked, I defined and booted up my Windows 10 VM.
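One way to define it is virt-install in import mode – roughly this shape (VM name, memory, and CPU counts are illustrative; QEMU knows the VHD format as “vpc”):

    virt-install \
      --name win10 \
      --memory 8192 --vcpus 4 \
      --os-variant win10 \
      --disk path=/var/lib/libvirt/images/desktop.vhd,format=vpc,bus=virtio \
      --import \
      --graphics spice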

Whoa, did it suck. I/O performance was absolute TRASH. Windows hits the disks too much.

Nevertheless, I pushed on with the graphics configuration. There are several tutorials out there; I mostly followed this one. You can read, so I won’t repeat the details here. Needless to say, I followed it and quickly had it working. GPU passthrough is a thing!
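The broad strokes, for the curious (the PCI IDs below are examples – pull your own out of lspci, and grab the GPU’s audio function along with it):

    # Find the GPU and its companion audio device
    lspci -nn | grep -i nvidia

    # Bind both functions to vfio-pci at boot (in /etc/default/grub,
    # then run update-grub and reboot)
    GRUB_CMDLINE_LINUX_DEFAULT="amd_iommu=on iommu=pt vfio-pci.ids=10de:1c02,10de:10f1"

    # Then hand the card to the VM with a <hostdev> entry (virsh edit win10):
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <source>
        <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
      </source>
    </hostdev>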

But performance was still atrocious. It took ten minutes to boot and was unusable. So, onward!

I checked the settings on the hard disks and discovered that one of them had its write cache disabled! Enabling it (hdparm -W1 <device>) increased write performance by a factor of ten. The VM was now usable, but still slow. I also needed a custom init script to re-enable that cache on every single boot, because the drive kept defaulting to off for some reason. Annoying.
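On a systemd distro like Ubuntu, a one-shot unit does the job – something along these lines (the unit name and device path are made up; use a stable /dev/disk/by-id path so it survives drive reordering):

    # /etc/systemd/system/enable-write-cache.service
    [Unit]
    Description=Re-enable write cache on the drive that forgets it
    After=local-fs.target

    [Service]
    Type=oneshot
    ExecStart=/sbin/hdparm -W1 /dev/disk/by-id/ata-EXAMPLE-DRIVE-ID

    [Install]
    WantedBy=multi-user.target

Then systemctl daemon-reload and systemctl enable --now enable-write-cache, and the cache comes back on every boot.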

I used qemu-img to convert the VHD file to a raw file. Performance improved again, but it still wasn’t good.
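The conversion is a one-liner; qemu-img can usually detect the source format on its own (filenames are examples):

    qemu-img convert -p -O raw desktop.vhd desktop.img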

So I finally took that raw file and used dd to write it back to the original 500GB SSD. But this image had all the drivers and config installed. I reconfigured the VM to use the disk directly, and bam. I had it cracked. Native performance, with accelerated video. It was pretty cool.
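The write-back and the resulting disk stanza look roughly like this (triple-check the target device before running dd – it is not forgiving):

    # Write the raw image onto the SSD (device name is an example!)
    sudo dd if=desktop.img of=/dev/sdb bs=4M status=progress conv=fsync

    # Then point the VM at the whole device instead of a file (virsh edit win10):
    <disk type='block' device='disk'>
      <driver name='qemu' type='raw' cache='none' io='native'/>
      <source dev='/dev/disk/by-id/ata-Samsung_SSD_EXAMPLE'/>
      <target dev='vda' bus='virtio'/>
    </disk>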

But sound. I couldn’t get sound to work. None of the tweaking. None of the qemu:env options. Nothing worked.

Then I checked dmesg, and here’s where this entry is going to help a bunch of you:

In Ubuntu 18.04, AppArmor blocks QEMU’s attempt to connect to PulseAudio.
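It shows up as a denial in the kernel log, easy to spot once you know to look for it:

    dmesg | grep -i 'apparmor="DENIED"'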

I found NOTHING about this anywhere. But it’s easy enough to fix!

AppArmor typically stores its profiles under /etc/apparmor or /etc/apparmor.d, and Ubuntu is no different. BUT, there’s a little trick they’ve added where an AppArmor profile is created dynamically for each VM when it starts. There’s a template file in the directory, but adding the rules to that didn’t work.

The file that provides most of the content for the dynamic profiles lives under /etc/apparmor.d/abstractions – libvirt-qemu, specifically. That’s where the new rules go.

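Rules along these lines open up the PulseAudio socket and cookie (exact paths can vary by user and release – match them against the denials in dmesg):

    # /etc/apparmor.d/abstractions/libvirt-qemu – PulseAudio access
    owner /run/user/*/pulse/ rw,
    owner /run/user/*/pulse/** rw,
    owner @{HOME}/.config/pulse/ r,
    owner @{HOME}/.config/pulse/cookie rk,
    /{dev,run}/shm/pulse-shm* rwk,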

After this, restart everything. Boom! Sound! But it sounded like ass. It was scratchy and popped a lot, nearly unintelligible. Ze Googles was of no help. None of the settings people claimed would get rid of the distortion worked.

Then I checked the QEMU version. Ubuntu 18.04 shipped with QEMU 2.11. That is OLD, and a lot of work has gone into the audio code since. So I upgraded to Ubuntu 19.10. That got me to QEMU 3, and sound got better, but it still sucked.

So I went and grabbed the source for QEMU 4.1 from the project site and built it. I installed it under /opt and modified the XML for my VM to use it:
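The build is the usual configure/make dance, and then it’s one line in the domain XML – roughly this, with the PulseAudio driver compiled in (the /opt path and flags are mine; adjust to taste):

    # Build QEMU 4.1 with PulseAudio support, install under /opt
    ./configure --prefix=/opt/qemu-4.1 --target-list=x86_64-softmmu --audio-drv-list=pa
    make -j$(nproc)
    sudo make install

    # In the VM's XML (virsh edit win10), point the emulator at the new binary:
    <emulator>/opt/qemu-4.1/bin/qemu-system-x86_64</emulator>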

Restart the VM, and boom. Useful sound. It still pops a little, but it’s tolerable. Any music listening I do I’ll do from the main Linux portion of the system anyway.

So, there you have it. The snags that hung me up for three days, now explained in a few paragraphs. This should save somebody a pile of time.

Dual screen with accelerated Windows VM
