Search results

  1. Passed-through nvme: poor performance on 4k random writes with higher queues

    I thought that should only affect the performance on the virtual drive though, not the passed-through drive?
  2. Passed-through nvme: poor performance on 4k random writes with higher queues

    qm config 104
    balloon: 0
    bios: ovmf
    boot: order=scsi0;net0
    cores: 12
    cpu: host
    efidisk0: local:104/vm-104-disk-1.raw,efitype=4m,pre-enrolled-keys=1,size=528K
    hostpci0: 0000:0b:00,pcie=1,x-vga=1
    hostpci1: 0000:0d:00.3,rombar=0
    hostpci2: 0000:01:00.0,rombar=0
    machine: pc-q35-6.2
    memory: 16000...
  3. Passed-through nvme: poor performance on 4k random writes with higher queues

    Ah, sorry, I'm confusing things - actually I tried three ways: 1) Windows host 2) Proxmox host, with the tested drive being a VirtIO SCSI disk backed by the NVMe I'm interested in (no caching) 3) Proxmox host, with the NVMe drive passed through. In cases 1 and 2, 4k random writes at 32 queue depth are good. In the...
  4. Passed-through nvme: poor performance on 4k random writes with higher queues

    Yes, identical VM - the OS itself is running off a virtual disk.
  5. Passed-through nvme: poor performance on 4k random writes with higher queues

    Just one socket - it's a Ryzen 3900X, 1 socket/12 cores/24 threads. I set it as 1 socket, 12 threads, 'host' CPU type for the VM.
  6. Passed-through nvme: poor performance on 4k random writes with higher queues

    I have a FireCuda 530 1TB, which can get ~660 MB/s on 4k random writes with a 32 queue depth. However, I'm only seeing 260 MB/s when the whole disk is passed through to a Windows VM. All other figures (including 4k random reads, and sequential) are fine. The VM itself has 12 threads and plenty of...
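
    For reference, a run like this can be reproduced with fio; a minimal sketch, assuming the passed-through disk shows up as /dev/nvme0n1 in a Linux guest (the device name is an assumption, and this test is destructive):

        # 4k random writes at queue depth 32, direct I/O so the page cache is bypassed
        # WARNING: /dev/nvme0n1 is an assumed device name; writing to a raw device destroys its contents
        fio --name=randwrite-qd32 --filename=/dev/nvme0n1 \
            --rw=randwrite --bs=4k --iodepth=32 --numjobs=1 \
            --ioengine=libaio --direct=1 --runtime=60 --time_based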
  7. Trying to build PVE kernel 5.13.19-6 - but git branch has set 5.13.19-14

    Actually, I'm getting confused about what the 19-6-pve suffix means vs. the 19-15 suffix.
  8. Trying to build PVE kernel 5.13.19-6 - but git branch has set 5.13.19-14

    Per title, I'm trying to build the kernel with a quirk patch enabled.
    root@pve:~/sources/pve-kernel# uname -a
    Linux pve 5.13.19-6-pve #1 SMP PVE 5.13.19-15 (Tue, 29 Mar 2022 15:59:50 +0200) x86_64 GNU/Linux
    I've thus checked out...
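
    For reference, the build flow being attempted looks roughly like this; a sketch only, assuming quirk patches go into patches/kernel/ as numbered files (the ref and patch filename below are placeholders, not real names):

        # Fetch the Proxmox kernel sources and look for a ref matching the running ABI
        git clone https://git.proxmox.com/git/pve-kernel.git
        cd pve-kernel
        git tag | grep 5.13.19
        # Check out the chosen ref, add the quirk patch to the series, then build the .deb packages
        git checkout <matching-tag>
        cp ~/0099-nvme-quirk.patch patches/kernel/
        make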
  9. What processes/resources are used while doing a VM backup in "stop" mode

    Thanks again. This is probably venturing into QEMU territory for which I should find another forum. However, if the VM is "started...in a paused state", and "just reading the full data stored for the guest image", what is the QEMU process doing differently that couldn't be achieved just...
  10. What processes/resources are used while doing a VM backup in "stop" mode

    Thanks @fabian Do you have any inkling then, please, what might contribute to an NVMe or PCI problem either while running the VM, or just backing it up? Does the backup process use the VirtIO SCSI driver? Could there be low-level QEMU/KVM issues there, or any verbose logging or settings that...
  11. What processes/resources are used while doing a VM backup in "stop" mode

    I'm having an issue with my NVMe controller going offline: https://forum.proxmox.com/threads/proxmox-just-died-with-nvme-nvme0-controller-is-down-will-reset-csts-0xffffffff-pci_status-0x10.88604/page-2#post-471159 As you'll see in that thread, there are lots of possible causes, but I've now...
  12. [SOLVED] GPU Passthrough Issues After Upgrade to 7.2

    I checked https://git.proxmox.com/?p=pve-kernel.git;a=summary for changes, and there have only been two since this thread: one for a network controller, the other for NFS.
  13. Proxmox 7.2 / Kernel 5.15 - safe yet for Nvidia VFIO passthrough?

    I've seen a number of threads now about issues with Nvidia GPU passthrough after upgrading to Proxmox 7.2 and/or kernel 5.15. There are a few workarounds, with mixed reports about what works, but I'm wondering whether it's likely to be fixed properly soon, or whether I should just go ahead and update now.
  14. [SOLVED] GPU Passthrough Issues After Upgrade to 7.2

    I'm wondering if any of these fixes are likely to be incorporated into Proxmox, as I'm currently holding off on upgrading from 7.1 to 7.2.
  15. Best way to setup a virtualized router

    Side note - what mini PC? What NICs does it have? Just to warn you: if they are not Intel ones, you're going to have problems with pfSense throughput.
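
    For reference, the NIC vendor and bound driver can be checked from a shell before committing to pfSense; a minimal sketch (Intel NICs typically bind to drivers such as e1000e, igb, or ixgbe):

        # List Ethernet controllers with vendor/device IDs and the kernel driver in use
        lspci -nnk | grep -iA3 ethernet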
  16. Proxmox just died with: nvme nvme0: controller is down; will reset: CSTS=0xffffffff, PCI_STATUS=0x10

    I think I may be near the end of the journey: https://git.launchpad.net/~ubuntu-kernel/ubuntu/+source/linux/+git/focal/commit/?id=47add9f75714fabd3702dca0e5899a56d2f3ee2f Essentially, it seems some deep power states are not working on some SSDs on Linux, and there's a quirk patch. That said...
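
    For reference, pending a kernel that ships the quirk, the workaround those reports describe is capping the NVMe power-state transition latency on the kernel command line; a sketch, assuming a standard GRUB-booted PVE install (setting the value to 0 disables APST entirely):

        # Append to GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub:
        #   nvme_core.default_ps_max_latency_us=0
        update-grub && reboot
        # Verify the parameter took effect after the reboot
        cat /sys/module/nvme_core/parameters/default_ps_max_latency_us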
  17. Proxmox just died with: nvme nvme0: controller is down; will reset: CSTS=0xffffffff, PCI_STATUS=0x10

    Looking into this further, I think it might just be an issue with power states: https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1705748 https://bugzilla.kernel.org/show_bug.cgi?id=195039...
  18. Proxmox just died with: nvme nvme0: controller is down; will reset: CSTS=0xffffffff, PCI_STATUS=0x10

    Ok, I got something after building the ancient it87 out-of-tree driver:
    root@pve:~/drivers/it87-it8688E# sensors
    it8688-isa-0a40
    Adapter: ISA adapter
    in0: 276.00 mV (min = +0.00 V, max = +3.06 V)
    in1: 1.99 V (min = +0.00 V, max = +3.06 V)
    in2: 1.99 V (min =...
  19. Proxmox just died with: nvme nvme0: controller is down; will reset: CSTS=0xffffffff, PCI_STATUS=0x10

    @leesteken - would you believe it, I've asked the same question about sensors before on the X570 (I've since moved to B550 to avoid chipset fans): https://github.com/lm-sensors/lm-sensors/issues/154#issuecomment-650662163 Anyway, I think that'll work. I can then monitor voltages and see if they...
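
    For reference, a crude way to log those voltages over time and line them up against the NVMe resets; a sketch assuming lm-sensors is installed and the it87 module is loaded (the log path is arbitrary):

        # Append a timestamped sensor snapshot to a log once a minute
        while true; do { date; sensors; } >> /var/log/voltages.log; sleep 60; done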