Search results

  1. [SOLVED] GPU Passthrough Issues After Upgrade to 7.2

    Oh my god. I got it working. The Proxmox Wiki already contained everything I needed to know. https://pve.proxmox.com/wiki/Pci_passthrough - it specifically mentions Code 43 when Resizable BAR / Smart Access Memory is enabled on the host; VMs do not support it yet. Disabled it, and now it is...
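    One can check from the host whether Resizable BAR is actually active on the card before disabling it in the firmware setup; a minimal check, assuming 0a:00.0 is the GPU's PCI address:
      lspci -vvs 0a:00.0 | grep -i -A 3 "resizable bar"
      # a "current size" well above 256MB under "Physical Resizable BAR" usually means ReBAR is in effect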
  2. [SOLVED] GPU Passthrough Issues After Upgrade to 7.2

    Windows 11 also throws a Code 43 error. On Ubuntu 22.04 I am able to use two monitors at 4K, watch videos, and play games via Steam & Proton.
  3. [SOLVED] GPU Passthrough Issues After Upgrade to 7.2

    *Maybe* this is Windows related. As I said, I am (sometimes) able to get a screen and enter Windows using the GPU; the driver just doesn't load, showing "Code 43". For fun I added another VM and put in the Ubuntu 22.04 desktop installer. Out of the box, even in the installer, the GPU is picked up and 4K...
  4. [SOLVED] GPU Passthrough Issues After Upgrade to 7.2

    I let it boot without any cron scripts running. lspci -v shows that it bound correctly to amdgpu. Following dmesg, the card got initialized by amdgpu without errors. iomem shows nothing about BOOTFB, so this looks good, I assume. When I start the VM, both the GPU and its audio device show kernel...
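    For reference, those host-side checks roughly map to the following commands (the PCI address is a placeholder for the GPU):
      lspci -v -s 0a:00.0        # "Kernel driver in use:" shows which driver currently holds the card
      dmesg | grep -i amdgpu     # watch for initialization errors
      grep BOOTFB /proc/iomem    # no output means no leftover boot framebuffer is sitting on the BAR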
  5. [SOLVED] GPU Passthrough Issues After Upgrade to 7.2

    Ahh, this is great to know. Thank you! I removed the blacklisting and the vfio early binds. I added that vtcon command as a @reboot cron, as I intend to (later) enable autoboot for my VM. It hangs at the VM's boot screen now. Fun fact: on kernel 5.13 it is not working either. o_O
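    The @reboot entry being referred to is presumably something along these lines (a sketch; whether vtcon0 or vtcon1 is the framebuffer console varies, check /sys/class/vtconsole/vtcon*/name first):
      # root crontab: release the virtual console so the framebuffer lets go of the GPU before the VM starts
      @reboot echo 0 > /sys/class/vtconsole/vtcon0/bind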
  6. [SOLVED] GPU Passthrough Issues After Upgrade to 7.2

    This is strange af; I upgraded my system today. The GTX 1060 was running absolutely fine on PVE 7.2 @ kernel 5.15, no issues at all. Special GRUB flags (regarding efibuffer etc.) have not been necessary since PVE 7.1; it basically works out of the box, I only had to enable vfio and IOMMU. Now I switched...
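    "Enable vfio and IOMMU" boils down to roughly the following, per the Proxmox PCI passthrough wiki; a sketch assuming an Intel board (on AMD the IOMMU is usually enabled by default, and on newer kernels vfio_virqfd is merged into vfio):
      # /etc/default/grub
      GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"
      # /etc/modules
      vfio
      vfio_iommu_type1
      vfio_pci
      vfio_virqfd
      # apply with: update-grub && update-initramfs -u -k all && reboot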
  7. Datastore mit Festplatten

    Here is the promised breakdown from my datastore: 1k: 6112, 2k: 5090, 4k: 7282, 8k: 10199, 16k: 22877, 32k: 52426, 64k: 57533, 128k: 59923, 256k: 68144, 512k: 125568, 1M: 185843, 2M: 150328, 4M: 67005, 8M: 2688, 16M: 21. 1M vs 4M recordsize offset...
  8. Datastore mit Festplatten

    I use recordsize=4M and currently get a compressratio of 1.21x with zstd at about 1.2 TB in the datastore, roughly 2200 backups, and a dedup rate of 111.32x. The pool is a mirror of 2x 4TB enterprise HDDs, with no special device or anything else attached. I am absolutely satisfied with the overall performance...
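    A sketch of the corresponding ZFS settings (the dataset name is a placeholder; a 4M recordsize may require a recent OpenZFS release or raising the zfs_max_recordsize module parameter):
      zfs set compression=zstd rpool/datastore
      zfs set recordsize=4M rpool/datastore
      zfs get compressratio,recordsize rpool/datastore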
  9. VM HDD read write speed about 25% less than direct on node speed

    If you change the SCSI controller to "VirtIO SCSI single" and enable IO Thread on your VM disks, it can improve IO performance when you have multiple virtual disks per VM. On top of Ceph, consider using cache=writeback to help with performance. According to Proxmox's benchmarks it can drastically...
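    On the CLI these settings map to roughly the following qm options (VMID, storage, and disk name are placeholders; the same options are available in the GUI under the VM's Hardware tab):
      qm set 100 --scsihw virtio-scsi-single
      qm set 100 --scsi0 ceph-pool:vm-100-disk-0,iothread=1,cache=writeback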
  10. [SOLVED] Stats from last Garbage Collection

    I run a similar setup at home and at work. To demonstrate with my private PBS: there is the ZFS dataset rpool/datastore. I set a quota on it to make sure the pool never spills over one day, leaving a 20% buffer for ZFS & the OS. In addition, various...
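    The quota part could look like this (the numbers are only illustrative of the 20% buffer mentioned above):
      zfs create rpool/datastore
      zfs set quota=3T rpool/datastore      # e.g. ~80% of the pool, leaving headroom for ZFS & the OS
      zfs get quota,used,available rpool/datastore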
  11. VM HDD read write speed about 25% less than direct on node speed

    Cached reads are in no way a good performance indicator; you can basically ignore those numbers. Also consider using fio for IO benchmarking. Proxmox provides some commands and numbers for this: https://www.proxmox.com/de/downloads/item/proxmox-ve-zfs-benchmark-2020 Also, what is your VM...
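    As an illustration only (the linked benchmark paper uses its own exact parameters), a basic fio run inside the VM might look like this, with the test file path as a placeholder:
      # sequential 1M writes, bypassing the page cache so cached IO doesn't skew the numbers
      fio --name=seqwrite --filename=/root/fio.test --size=4G --bs=1M --rw=write --direct=1 --ioengine=libaio --iodepth=16
      # 4k random read/write for 60 seconds
      fio --name=randrw --filename=/root/fio.test --size=4G --bs=4k --rw=randrw --direct=1 --ioengine=libaio --iodepth=32 --runtime=60 --time_based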
  12. Opt-in Linux Kernel 5.15 for Proxmox VE 7.x available

    Running smoothly so far on a Ryzen 1700X. GPU passthrough of a GTX 1060 works. No issues at all, so far. :)
  13. Considering Proxmox for hosting - What are your thoughts

    Yes, but be aware that you can only use the bandwidth of a single 10G port *per connection*, even if you are bonding multiple 10G ports using LACP. Considering that you are running many VMs (hosting), I think your parallel performance is more important, which can be improved by using the right...
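    Presumably this refers to the bond's transmit hash policy (an assumption, the post is cut off); a sketch for /etc/network/interfaces with made-up NIC names:
      auto bond0
      iface bond0 inet manual
          bond-slaves eno1 eno2
          bond-mode 802.3ad
          bond-miimon 100
          bond-xmit-hash-policy layer3+4   # spreads different connections across ports; a single connection still tops out at one 10G link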
  14. Proxmox CEPH performance

    I know that feeling. :rolleyes: :D Check out this PDF: https://forum.proxmox.com/threads/proxmox-ve-ceph-benchmark-2020-09-hyper-converged-with-nvme.76516/ There is a comparison of a handful of SSDs, including a Samsung EVO. It's ridiculous; the Intel 3510 are not the newest drives by any...
  15. Proxmox CEPH performance

    Other users and Proxmox staff could provide a more educated answer on this, I think. But you can read this in many threads here, and it has also been my personal experience that there is a very noticeable difference between QLC and TLC/MLC drives. For use cases like ZFS and Ceph there is also that...
  16. PBS scaling out storage

    Ah, it looks like I misremembered. I've found the commit I was thinking about: https://git.proxmox.com/?p=proxmox-backup-qemu.git;a=commit;h=6eade0ebb76acfa262069575d7d1eb122f8fc2e2 But that is about backup restores, not verifies. Overall, I don't see any magical performance "fix" coming...
  17. Proxmox CEPH performance

    The v300 is MLC NAND while the QVO are QLC. MLC is way more durable and provides better write performance than QLC NAND in general.
  18. PBS scaling out storage

    Ah, I understand. Well, building multiple smaller servers that provide NFS shares may be a more appropriate solution then. Or, as long as it's feasible, upgrading the existing HDDs to bigger ones. Why? ZFS can handle dozens of disks properly, and with new features like dRAID even rebuilds can be...
  19. PBS scaling out storage

    You can buy and deploy those JBODs on demand, no need to keep empty cabinets around. If you are fine with multiple data sources (one NFS share per datastore), you have lots of additional possibilities anyway. It sounded like you really want to have one single machine serve all your datastores, at...
  20. PBS scaling out storage

    Growing a ZFS pool sounds like a good solution to me, for quite some time. One can add a hell of a lot of disks using some JBODs. And nowadays there are pretty large HDDs, too. If that doesn't fit, building a Ceph cluster could work for further scaling, but if you expect to reach 20TB in like 2-3...
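    Growing the pool later is then just a matter of attaching a shelf and adding another vdev, for example (pool name, layout, and device names are placeholders):
      zpool add tank raidz2 sdq sdr sds sdt sdu sdv   # adds a second raidz2 vdev from the new JBOD
      zpool list -v tank                              # shows capacity per vdev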
