Oh my god. I got it working.
The Proxmox Wiki already contained everything I needed to know.
https://pve.proxmox.com/wiki/Pci_passthrough - it specifically mentions code 43 when Resizable BAR / Smart Access Memory is enabled on the Host.
VMs do not support it yet.
Disabled it, and now it is...
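For anyone wanting to double-check on the host whether Resizable BAR is actually in effect for the card, something along these lines should show it (the PCI address is just a placeholder, and the exact capability wording depends on the lspci version):

# pick your GPU's address from "lspci | grep -i vga" first
lspci -vvs 01:00.0 | grep -i -A 3 resizable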
*maybe* this is Windows-related.
As I said, I am (sometimes) able to get a screen and enter Windows using the GPU. It just doesn't load and shows "Code 43".
For fun I added another VM and put in the Ubuntu 22.04 desktop installer.
Out of the box, even in the installer, the GPU is picked up and 4K...
I let it boot without any cron scripts running.
lspci -v shows that it bound correctly to amdgpu.
According to dmesg, the card was initialized by amdgpu without errors.
/proc/iomem shows nothing about BOOTFB, so this looks good, I assume.
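For reference, these are roughly the checks described above (the PCI address is a placeholder for the card):

# is the card bound to amdgpu?
lspci -nnk -s 01:00.0
# did amdgpu initialize it without errors?
dmesg | grep -i amdgpu
# is there still a BOOTFB reservation on its memory region?
grep -i bootfb /proc/iomem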
When I start the VM, both the GPU and its audio device show kernel...
Ahh, this is great to know. Thank you!
I removed the blacklisting and the vfio early binds.
I added that vtcon command as an @reboot cron job, as I intend to (later) enable autostart for my VM.
It hangs on the VM's boot screen now.
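Assuming the vtcon command in question is the usual framebuffer console unbind from the passthrough guides, the @reboot crontab entries look roughly like this (vtcon1 may or may not exist on a given box):

@reboot echo 0 > /sys/class/vtconsole/vtcon0/bind
@reboot echo 0 > /sys/class/vtconsole/vtcon1/bind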
Fun fact: on kernel 5.13 it is not working either. o_O
This is strange af; I upgraded my system today.
The GTX 1060 was running absolutely fine on PVE 7.2 with kernel 5.15, no issues at all.
Special GRUB flags (regarding the EFI framebuffer etc.) have not been necessary since PVE 7.1; it basically works out of the box, I only had to enable VFIO and IOMMU.
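To spell out the "enable VFIO and IOMMU" part for anyone following along, on an Intel box it boils down to roughly this (AMD boxes have the IOMMU enabled by default); run update-grub and update-initramfs -u afterwards:

# /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on"

# /etc/modules
vfio
vfio_iommu_type1
vfio_pci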
Now I switched...
I use recordsize=4M and currently get a compressratio of 1.21x with zstd, at about 1.2 TB in the datastore, roughly 2200 backups, and a dedup rate of 111.32x.
The pool is a mirror of 2x 4TB enterprise HDDs, with no special device or anything else attached.
I am absolutely satisfied with the overall performance...
If you change the SCSI controller to "VirtIO SCSI single" and enable IO Thread on your VM disks, it can improve IO performance when you have multiple virtual disks per VM.
On top of Ceph, consider using cache=writeback to help with performance. According to Proxmox's benchmarks it can drastically...
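For reference, both settings can also be applied from the CLI; the VM ID, storage, and disk name below are just placeholders:

qm set 100 --scsihw virtio-scsi-single
qm set 100 --scsi0 ceph-pool:vm-100-disk-0,iothread=1,cache=writeback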
I have a similar setup running at home and at work.
To demonstrate with my private PBS:
There is the ZFS dataset rpool/datastore.
I have set a quota on it to make sure the pool does not overflow some day, leaving a 20% buffer for ZFS & the OS.
In addition, various...
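For the quota part, the gist is just a dataset property; the value here is only an example leaving roughly 20% of a 4 TB mirror free:

zfs set quota=3200G rpool/datastore
zfs get quota,used,available rpool/datastore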
Cached reads are in no way a good performance indicator; you can basically ignore those numbers.
Also consider using fio for IO benchmarking. Proxmox provides some commands and numbers for this: https://www.proxmox.com/de/downloads/item/proxmox-ve-zfs-benchmark-2020
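As a starting point, a simple 4k sync write test in the spirit of that paper looks roughly like this (parameters and target are placeholders, check the PDF for the exact runs Proxmox used, and note that writing to a raw device is destructive):

fio --name=write_4k --ioengine=psync --direct=1 --sync=1 --rw=write --bs=4k --numjobs=1 --iodepth=1 --runtime=60 --time_based --filename=/dev/sdX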
Also, what is your VM...
Yes, but be aware that you can only use the bandwidth of a single 10G port *per connection*, even if you are bonding multiple 10G ports using LACP.
Considering that you are running many VMs (hosting), I think parallel performance is more important for you, and that can be improved by using the right...
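As an illustration, with ifupdown2 an LACP bond that hashes on layer3+4 looks something like this in /etc/network/interfaces (interface names are placeholders); a single TCP connection is still capped at one link, but multiple connections get spread across the ports:

auto bond0
iface bond0 inet manual
    bond-slaves enp1s0f0 enp1s0f1
    bond-miimon 100
    bond-mode 802.3ad
    bond-xmit-hash-policy layer3+4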
I know that feeling. :rolleyes: :D
Check out this PDF: https://forum.proxmox.com/threads/proxmox-ve-ceph-benchmark-2020-09-hyper-converged-with-nvme.76516/
There is a comparison of a handful of SSDs, including a Samsung EVO.
It's ridiculous, the Intel 3510 are not the newest drives by any...
Other users and Proxmox staff could provide a more educated answer on this, I think.
But you can read it in many threads here, and it has also been my personal experience that there is a very noticeable difference between QLC and TLC/MLC drives.
For use cases like ZFS and Ceph there is also that...
Ah, it looks like I misremembered.
I've found the commit I was thinking about: https://git.proxmox.com/?p=proxmox-backup-qemu.git;a=commit;h=6eade0ebb76acfa262069575d7d1eb122f8fc2e2
But that is about backup restores, not verifies.
Overall, I don't see any magical performance "fix" coming...
Ah, I understand. Well, building multiple smaller servers that provide NFS shares may be a more appropriate solution then.
Or, as long as it's feasible, upgrading the existing HDDs to bigger ones.
Why? ZFS can handle dozens of disks properly, and with new features like dRAID even rebuilds can be...
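Regarding the upgrade-to-bigger-HDDs route: with autoexpand set, swapping the disks of a mirror or raidz one by one grows the pool once the last resilver finishes; pool and disk names below are placeholders:

zpool set autoexpand=on tank
zpool replace tank /dev/sda /dev/sdX   # repeat per disk, wait for each resilver to finish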
You can buy and deploy those JBODs on demand, no need to keep empty cabs around.
If you are fine with multiple data sources (one NFS share per datastore) anyway, you have lots of additional possibilities.
It sounded like you really want to have one single machine serve all your datastores, at...
Growing a ZFS pool sounds like a good solution to me, for quite some time. One can add a hell of a lot of disks using some JBODs.
And nowadays there are pretty large HDDs, too.
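Growing the pool then just means attaching more vdevs as the JBODs arrive, e.g. another mirror pair (device names are placeholders again):

zpool add tank mirror /dev/disk/by-id/ata-DISK_A /dev/disk/by-id/ata-DISK_B
zpool list tank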
If that doesn't fit, building a Ceph cluster could work for further scaling, but if you expect to reach 20TB in like 2-3...