TL;DR: tell me if this is a waste of time before I spend forever tinkering on something that will always be janky.
I want to run multiple OSes on one machine, including Linux, Windows, and maybe macOS, from a host with multiple GPUs plus an iGPU. I know there are multiple solutions, but I'm looking for advice, opinions, and experience. I know I can google the how-to, but is this worth pursuing?
I currently dual boot Bazzite and Ubuntu, for gaming and development respectively. I love Bazzite's ease of updates, and Ubuntu is where it's at for testing and building frontier AI/ML tools.
What if I kept my computer running a thin hypervisor 24/7 and switched VMs based on my working context? I could pass through hardware as needed.
Proxmox? XCP-ng? Debian + QEMU? Anyone living with one of these as their daily machine (not a homelab/server host)?
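For the Debian + QEMU route, here is roughly what the context switching could look like with libvirt's Python bindings. This is a minimal sketch, assuming the VMs are already defined under qemu:///system; the VM names are placeholders, and it glosses over the hard part (rebinding the GPU between vfio-pci and the host driver):

```python
# Minimal sketch of "context switching" between VMs via libvirt's
# Python bindings (python3-libvirt). Assumes the domains are already
# defined under qemu:///system; the VM names below are placeholders.
import sys
import libvirt

CONTEXTS = {
    "gaming": "bazzite-gaming",  # Bazzite VM with one GPU passed through
    "work": "ubuntu-dev",        # Ubuntu VM with one or more GPUs
}

def switch(target: str) -> None:
    conn = libvirt.open("qemu:///system")
    try:
        for name in CONTEXTS.values():
            dom = conn.lookupByName(name)
            # Ask the other context VMs to shut down so their GPUs free
            # up. shutdown() is an async ACPI request; a real version
            # would poll isActive() until the domain has actually stopped.
            if name != CONTEXTS[target] and dom.isActive():
                dom.shutdown()
        dom = conn.lookupByName(CONTEXTS[target])
        if not dom.isActive():
            dom.create()  # boot the requested context VM
    finally:
        conn.close()

if __name__ == "__main__":
    switch(sys.argv[1])  # e.g. `python3 switch.py gaming`
```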
This is inspired by Chris Titus's (YouTube) setup on Arch, but:
1) I don't know Arch.
2) I have a fairly beefy Core Ultra 7 265K / 192 GB build, but he's on an enterprise Xeon DDR5 build, so a different power class.
3) I have a heterogeneous mix of graphics cards I'm hoping to pass through depending on workload.
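One thing worth checking before going further: passthrough only works cleanly if each GPU sits in its own IOMMU group (or shares it only with its own audio/USB functions). A quick sketch that lists the groups from the standard sysfs layout, assuming the kernel was booted with the IOMMU enabled:

```python
# List IOMMU groups from the standard sysfs layout. Each GPU you want
# to pass through should be alone in its group, or grouped only with
# its own audio/USB sub-functions.
from pathlib import Path

groups = Path("/sys/kernel/iommu_groups")
for group in sorted(groups.iterdir(), key=lambda p: int(p.name)):
    print(f"IOMMU group {group.name}:")
    for dev in sorted((group / "devices").iterdir()):
        print(f"  {dev.name}")  # a PCI address like 0000:01:00.0
```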
Use cases:
- Bazzite + 1 gpu for gaming
- Ubuntu + 1 or more GPUs for work
- Windows + 0 or more GPUs for music production (paid VSTs) and kernel-level anti-cheat games (GTA V, etc.)
- macOS? Lightroom? GPU?
Edit: Thank you all for your thoughts and contributions
Edit: what I’ve learned
- this is viable but might be a pain
- a Windows VM defeats the purpose for kernel-level anti-cheat games, since most of them detect or block VMs. I'd need a dual boot for that use case
- Hyper-V is a no; Qubes/QEMU/libvirt, yes
- may want to just put everything on separate disks and either boot them bare-metal or VM into them as needed (rough sketch below)
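For the separate-disks idea, here is a rough sketch of handing a whole physical disk to an already-defined libvirt domain, so the same install can boot bare-metal or inside a VM. The domain name and the /dev/disk/by-id path are hypothetical placeholders:

```python
# Attach a whole physical disk to an already-defined libvirt domain.
# The domain name and the /dev/disk/by-id path are placeholders.
import libvirt

DISK_XML = """
<disk type='block' device='disk'>
  <driver name='qemu' type='raw'/>
  <source dev='/dev/disk/by-id/nvme-EXAMPLE'/>
  <target dev='vdb' bus='virtio'/>
</disk>
"""

conn = libvirt.open("qemu:///system")
dom = conn.lookupByName("windows-music")  # hypothetical domain name
# Persist the disk in the domain's config; it shows up on next boot.
dom.attachDeviceFlags(DISK_XML, libvirt.VIR_DOMAIN_AFFECT_CONFIG)
conn.close()
```

Caveat: Windows in particular can get cranky when the "hardware" changes between bare-metal and VM boots (drivers, activation).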
Edit: Distrobox/Docker works great but doesn't fit all my needs: containers share the host kernel, so I can't install kernel-level modules inside them
You will need to continue dual booting or you will need to use two computers. I promise there is no hope in virtualization for your use case. I have personally considered myself the exception to this rule and discovered everyone else was right all along. Save yourself the time.
Pro tip: you will not find a consumer CPU + chipset combo that supports two GPUs at full speed. If you think you have, read the specs closer: mainstream Intel and AMD CPUs expose only 16 lanes for graphics, so a second GPU means x8/x8 bifurcation or running through the chipset, and you may throw away your M.2 bandwidth as well. If you can afford an enterprise CPU with enough PCIe lanes, you should just build two computers.
I am very aware of the PCIe lane circus you have to go through on consumer hardware. I considered building a used/refurb EPYC system and even borrowed one from work for a bit. It was nice, but my build would have put me somewhere near $6,000 at best. Hahahahh 😭