You're better off attaching the disk using IDE or SATA, getting it to boot, then installing the virtio drivers, and finally switching the disk over.
Going straight to a disk controller Windows has no driver for (like VirtIO) will almost always BSOD on Windows 7/2008 variants.
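A rough sketch of the switch-over using PVE's `qm` tool. The VM ID (100), the boot disk sitting on `sata0`, and `local-lvm` storage are all assumptions here, so substitute your own:

```shell
# 1. Attach a small temporary disk on the VirtIO bus so Windows detects the
#    controller and loads the driver (from the virtio-win ISO):
qm set 100 --virtio1 local-lvm:1
# 2. Boot Windows, install the VirtIO storage driver, check the temp disk
#    shows up in Disk Management, then shut the VM down.
# 3. Detach the boot disk and re-attach it on the VirtIO bus:
qm set 100 --delete sata0
qm set 100 --virtio0 local-lvm:vm-100-disk-0
qm set 100 --boot c --bootdisk virtio0
# 4. Remove the temporary disk once the VM boots cleanly from virtio0.
```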
I've got a 1050 in my rig and it's working great with a simple config.
Host hardware is Threadripper 1900x in ASRock X399 Taichi. ID 41:00 is the GPU and ID 0d:00 is a USB controller.
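For reference, the relevant passthrough lines in `/etc/pve/qemu-server/<vmid>.conf` look roughly like this for those IDs (OVMF/q35 and the exact option flags are assumptions, so match them to your own setup):

```
bios: ovmf
machine: q35
hostpci0: 41:00,pcie=1,x-vga=1
hostpci1: 0d:00,pcie=1
```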
Just updated one of our nodes last night to the latest kernel, 4.15.18-16-pve (for the SACK patches), and I started getting a massive amount of log spam in /var/log/messages and dmesg.
The message is:
Jun 24 22:56:00 orthrus4 kernel: [ 233.086556] dpc 0000:30:03.1:pcie010: DPC containment...
I'm still running into issues with OVMF resolution setting.
This issue is referenced in an earlier thread:
It still seems to persist in 5.4 and with the same caveats.
I have an EFI disk attached and the...
Have you tried testing with a monitor plugged in?
Many functions in nvidia-smi will not work correctly unless the system is fooled into thinking that a monitor is present (a common issue crypto-miners run into). You can set up Xorg to get around this, but the easiest way is to use a dummy plug.
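If you do go the Xorg route instead of a dummy plug, the snippet looks roughly like this. The BusID matches a card at 1e:00.0 (0x1e = 30 decimal); the EDID file path is just an example:

```
Section "Device"
    Identifier  "nvidia"
    Driver      "nvidia"
    BusID       "PCI:30:0:0"
    Option      "ConnectedMonitor" "DFP-0"
    Option      "CustomEDID" "DFP-0:/etc/X11/edid.bin"
EndSection
```

ConnectedMonitor tells the NVIDIA driver to treat that output as attached; supplying an EDID stops it falling back to a tiny default mode list.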
It's in your list as:
1e:00.0 VGA compatible controller: NVIDIA Corporation Device 1b84 (rev a1) (prog-if 00 [VGA controller])
The HDMI audio sub-device at 1e:00.1 gives it away: "NVIDIA Corporation GP104 High Definition Audio Controller (rev a1)".
The 1b84 device code is for the GTX1060 3GB...
I'm doing pretty much the same as this, plus an added Windows10 VM with GPU-passthrough as my HTPC and Pfsense VM as a core router.
In some cases, I have 2 disks attached to the VMs if there's bulk data stored as well. All VMs have a "boot" drive on the main SSD and bulk data (e.g. Nextcloud...
I'm seeing this as well on one of our Supermicro systems, an X10DRi board running 2x Xeon E5-2640v4. I wrote it off as an anomaly, but it now happens about once every 1-3 months. I've got recorded outages on 7/10/2017, 9/1/2018, 1/2/2018 and 28/3/2018.
PVE and VMs lock up...
Just a thought, but could you use nested virtualization?
i.e. Create a standard AMD64 VM, then install QEMU with the relevant extensions inside the VM?
It's not ideal as it's an added layer of abstraction, but performance theoretically shouldn't suffer that much, and it saves the hassle of...
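Roughly, enabling nested virtualization on an AMD host looks like this (the VM ID is a placeholder; on Intel hosts it's `kvm-intel`/`kvm_intel` instead):

```shell
# Tell the kvm-amd module to allow nesting, then reload it
# (all VMs must be stopped first):
echo "options kvm-amd nested=1" > /etc/modprobe.d/kvm-amd.conf
modprobe -r kvm-amd && modprobe kvm-amd
# Verify -- should print 1 (or Y):
cat /sys/module/kvm_amd/parameters/nested
# Expose the host CPU's virtualization flags to the guest:
qm set 100 --cpu host
```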
If you really want to, you could create an LV manually, format it with something (e.g. ext4), mount it as a directory, then add the directory to the PVE storage page.
If it were me, I'd just plug in an external USB drive and use that.
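A sketch of the manual LV route, assuming a volume group called `pve` with 100G free (all names and sizes are placeholders):

```shell
# Create and format the logical volume:
lvcreate -n extra -L 100G pve
mkfs.ext4 /dev/pve/extra
# Mount it persistently:
mkdir -p /mnt/extra
mount /dev/pve/extra /mnt/extra
echo '/dev/pve/extra /mnt/extra ext4 defaults 0 2' >> /etc/fstab
# Register the mount point as directory storage in PVE:
pvesm add dir extra --path /mnt/extra --content images,backup
```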
If you're on an Intel host, I've found the most reliable way is just to pass the entire card in via PCIE passthrough.
This is how my HTPC operates and I don't have any audio glitches or DPC problems. Also have a GTX1060 passed through and various USB devices.
That's exactly what it's doing.
The best way is usually to install everything (OS, etc.) without the GPU attached first. While you're doing that, install or enable remote access in some way (TeamViewer/Remote Desktop/LogMeIn/whatever), then attach the GPU afterwards and use your remote access...
On a related note, if you're using GPU/PCIE passthrough, you can use Steam In-Home Streaming to stream games or the desktop to a client. There are a few others as well, like NVIDIA GameStream (Shield or Moonlight clients) or Splashtop.
If you use Nvidia Gamestream + Moonlight, it has an Android client...
The Opteron CPUs are serious power hogs (comparatively).
To be honest, if power consumption is a problem for you, a single new quad-core Xeon (e.g. an E3-1230 v5) could probably outperform that setup on less than half the power.
Have you got the actual error output?
It'll usually show in the web GUI if you open up the start task at the bottom.
Alternatively, start the VM via console/SSH with "qm start VMID" and see what pops up.