This happened to me as well (granted, on NVIDIA, but it could still be a contributing factor for AMD); there were two causes:
1. Resizable BAR was enabled in the BIOS
2. the PCIe slot was set to "RAID" mode (4x4x4x4 bifurcation) instead of x16.
Where did you add softdep nvme pre: vfio-pci?
Upon checking lspci, I see that the NVMe I've passed through shows as:
Kernel driver in use: vfio-pci
Kernel modules: nvme
Also, I have 2 NVMe's (one is for LVM storage, the other one is passed through); won't softdep nvme pre: vfio-pci break that?
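For context, softdep only affects module load order; it's usually paired with an ids= option so that only the device you actually want ends up bound to vfio-pci. A minimal sketch of what I understand /etc/modprobe.d/vfio.conf to look like, reusing the 1bb1:5013 ID from my other posts (the file name and ID are whatever applies to your setup):

```
# /etc/modprobe.d/vfio.conf (sketch)
# Bind only the device with vendor:device ID 1bb1:5013 to vfio-pci.
options vfio-pci ids=1bb1:5013
# Make sure vfio-pci is considered before the nvme driver loads, so the
# matching NVMe is claimed by vfio-pci first; the other NVMe (no matching
# ID) should still get the regular nvme driver.
softdep nvme pre: vfio-pci
```

After editing, the initramfs needs to be refreshed (update-initramfs -u -k all) and the host rebooted for the change to take effect.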
Hey there,
I've got a few questions/issues regarding an NVMe SSD passed through to a VM.
The server has 2 NVMe drives: one as LVM storage for VM disks, the other (Seagate BarraCuda Q5 ZP10000CV30001) passed through to a VM.
The VM is Windows 11, installed directly onto that NVMe as OVMF Q35 ***without*** an...
Sorry for waking an old post up, but... are you sure you're not passing a PCIe root port/switch or something?
I'm also passing a USB controller directly to the VM with no issues whatsoever, but I did a loooot of checking before passing it.
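For reference, the kind of checking I mean is mostly confirming the controller sits alone in its IOMMU group; a quick sketch (assuming the usual sysfs layout):

```
#!/bin/bash
# List every IOMMU group and the devices in it. The device you pass through
# should not share its group with a root port/switch the host still needs.
for group in /sys/kernel/iommu_groups/*; do
    echo "IOMMU group ${group##*/}:"
    for dev in "$group"/devices/*; do
        lspci -nns "${dev##*/}"
    done
done
```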
Hello,
It would be nice if PVE could detect a VM stuck in a reboot loop (the case where something on the virtual disk has failed, resulting in constant reboots).
I've had a FreeBSD VM with a damaged disk... constantly rebooting, with no indication there was a problem (it was showing as running...
I know 6.4 is EOL; I haven't had time to upgrade, mostly because I'm also planning on removing the current PVE install (an HDD) and replacing it with a pair of SSDs in RAID 0. The main concern here is that my VM storage is LVM on an NVMe SSD and I have no idea if that LVM would be immediately...
I'm getting "Marking TSC unstable due to clocksource watchdog" after some hours of uptime.
System is a Threadripper 3960X on an Asus TRX40-Pro with the latest available BIOS, 8x8 GB 3200 sticks.
```
[1198064.004082] clocksource: timekeeping watchdog on CPU46: hpet retried 2 times before success...
```
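For anyone comparing notes, the clocksource the kernel fell back to can be checked via the standard sysfs paths:

```
# Show the active clocksource and the alternatives the kernel knows about
cat /sys/devices/system/clocksource/clocksource0/current_clocksource
cat /sys/devices/system/clocksource/clocksource0/available_clocksource
```

(There's also the tsc=reliable kernel command-line parameter, which skips the watchdog checks, but as far as I can tell that only hides the symptom.)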
Can someone tell me if it's even possible to restore an LVM storage located on a separate disk (NVMe) when doing a completely new install of PVE (on new SSDs)? The LVM storage (NVME_SMS) is listed in storage.cfg; however, will it be detected on the new install at all if I just add these entries...
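The entries I mean look roughly like this (a sketch of the LVM section format; NVME_SMS is my storage ID, and I'm assuming the volume group carries the same name):

```
# /etc/pve/storage.cfg (relevant entry only)
lvm: NVME_SMS
        vgname NVME_SMS
        content images,rootdir
        shared 0
```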
I still haven't done the reinstall onto the new SSDs; however, I'm planning to do so in a day or two.
Quick question: when I do the clean install, will it see the old LVM disk (nvme0p1) containing the VM disks?
On the current install:
root@proxmox:# pvscan...
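What I'm hoping for is that the fresh install simply picks the volume group up after a scan, roughly along these lines (a sketch; NVME_SMS is assumed to be both the VG name and the storage ID):

```
# On the new install: look for existing LVM metadata and activate the VG
pvscan
vgscan
vgchange -ay NVME_SMS
# Then re-add the storage definition (or copy the entry into /etc/pve/storage.cfg)
pvesm add lvm NVME_SMS --vgname NVME_SMS --content images,rootdir
```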
So, is there a way to upgrade from 2.0-4 to 2.1?
"""
Q: Can I dist-upgrade Proxmox Backup Server 2.0 to 2.1 with apt?
A: Yes, just update via GUI or run "apt update && apt dist-upgrade"
"""
Doesn't do the job.
*EDIT* - Found the problem... the "no-subscription" repository wasn't added...
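For anyone hitting the same wall: on PBS 2.x (Debian Bullseye based) the no-subscription repository line I was missing is, as far as I can tell, the one below; after adding it, the apt dist-upgrade from the FAQ works as described.

```
# /etc/apt/sources.list (or a file under /etc/apt/sources.list.d/)
deb http://download.proxmox.com/debian/pbs bullseye pbs-no-subscription
```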
I figured things out (see the config sketch after the list):
1. add 1bb1:5013 to vfio-pci.ids in /etc/default/grub
2. add 1bb1:5013 to options vfio-pci ids= in /etc/modprobe.d/vfio.conf
3. When creating the VM with UEFI, de-select the creation of an EFI disk.
4. Add the NVMe device passthrough first (hostpci0).
5. Add other passthrough...
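To make steps 1 and 2 concrete, this is roughly what the two files end up containing (a sketch; "quiet iommu=pt" stands in for whatever IOMMU options your GRUB_CMDLINE_LINUX_DEFAULT already has):

```
# /etc/default/grub (step 1) - append vfio-pci.ids to the existing cmdline
GRUB_CMDLINE_LINUX_DEFAULT="quiet iommu=pt vfio-pci.ids=1bb1:5013"

# /etc/modprobe.d/vfio.conf (step 2)
options vfio-pci ids=1bb1:5013
```

Followed by update-grub, update-initramfs -u -k all, and a reboot before creating the VM.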