This is - at least - surprising to me.
You have a 500 GB disk on a zpool with 900 GB capacity. You're using no snapshots, which could take up additional space.
So what's eating the 399 GB here?
Write amplification refers to each little change in...
Just for fun, we can do the math for your setup and what you should get with 128k.
Let's look at a 132k write.
First block is an incompressible 128k write. Second is also a 128k block but with only 4k data, the rest zeros. LZ4 can compress that...
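Back-of-the-envelope, and assuming ashift=12 (4k sectors) with lz4 squashing the run of zeros down to roughly one sector (my assumptions, not numbers from your pool):

# rough arithmetic for the 132k example above
full_block=$(( 128 * 1024 ))   # incompressible 128k block -> written more or less as-is
tail_block=$(( 4 * 1024 ))     # 4k of data + 124k of zeros -> lz4 shrinks it to ~one 4k sector
data=$(( 132 * 1024 ))
echo "~$(( full_block + tail_block )) bytes written for $data bytes of guest data"

So with 128k you end up very close to a 1:1 ratio for that write.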
I'd enjoy a more detailed RAM usage statistic there too. It could look similar to the one for VMs, or better, a horizontally stacked bar
The issue with considering ARC as being unused is that it's not usually freed fast enough to be useful...
Looks totally fine to me. You're using roughly 60 GB without L2ARC; the rest is L2ARC usage (~170 GB) plus some remaining free GB (~20). I see absolutely nothing wrong here.
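If you want to verify where the memory actually goes, a quick sketch (arc_summary ships with the ZFS tools):

arc_summary | head -n 25   # current ARC size, target and min/max
free -h                    # keep in mind that Linux counts the ARC as "used", not as cache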
The graphs have different colors. If you hover over these graphs it will tell...
This sounds like an instance of "hole" mishandling, which is often caused by devices lying about their support for discarding. Changing the block size might just cause the data to be aligned differently by the guest OS and thus avoid the issue...
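A quick way to check what the devices actually advertise (just a sketch; the pool name is a placeholder):

lsblk -D                  # DISC-GRAN/DISC-MAX of 0 usually means no usable discard support
zpool get autotrim rpool  # whether ZFS issues TRIM automatically on that pool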
That question is an oversimplification ;) But most likely, it would rise, of course.
Since a block compressed from, let's say, 128k down to 100k can make use of wider stripes and fit the pool geometry more easily, padding has less impact, and so on. But...
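To make the padding point concrete, here is a hedged sketch of the usual RAIDZ allocation rule (allocated size = data + parity sectors, rounded up to a multiple of parity+1), using a made-up RAIDZ2 geometry of 6 disks at ashift=12, not your actual pool:

sector=4096; disks=6; parity=2
alloc() {   # allocated bytes for one block of $1 logical bytes
  local data=$(( ($1 + sector - 1) / sector ))
  local rows=$(( (data + disks - parity - 1) / (disks - parity) ))
  local total=$(( data + rows * parity ))
  echo $(( ( (total + parity) / (parity + 1) ) * (parity + 1) * sector ))
}
echo "8k block            -> $(alloc $(( 8 * 1024 ))) bytes allocated"
echo "100k (compressed)   -> $(alloc $(( 100 * 1024 ))) bytes allocated"
echo "128k (uncompressed) -> $(alloc $(( 128 * 1024 ))) bytes allocated"

The small block pays proportionally the most for parity and padding, which is exactly the amplification discussed above.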
No, not an error in the system - more a thinking error on the user's side ;-)
Windows and SAMBA always follow the scheme \\server\share
There are restrictions on the naming, such as length or the allowed characters.
Windows doesn't care whether you use upper- or lower-case...
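For example (server, share and user names are just placeholders), both of these end up on the same share, because SMB matches share names case-insensitively:

smbclient //server/Backup -U user -c 'ls'
smbclient //server/backup -U user -c 'ls'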
I have had exactly the same problems for a few days...
Not all VMs are affected, only some (randomly), and resetting is the only way to get them back.
Proxmox 9.1.2
Linux proxmox 6.17.2-2-pve #1 SMP PREEMPT_DYNAMIC PMX 6.17.2-2...
Most likely yes.
It took me a second reading of the above discussion before I realized they're no longer speaking in terms of your question. The answer to your question depends on other factors you didn't mention, namely:
1. are you trying to...
Testing whether the https://kernel.ubuntu.com/mainline/v6.16/ and https://kernel.ubuntu.com/mainline/v6.16.12/ kernels are showing the issue on your system(s) would be highly appreciated to narrow down the cause. We still cannot reproduce any...
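A hedged sketch of one way to test such a build, assuming the amd64 .deb packages from the linked page have been downloaded into the current directory (exact file names differ per build):

dpkg -i linux-image-*.deb linux-modules-*.deb
reboot   # then pick the new kernel in the boot menu and check whether the issue reappears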
Unfortunately, that apparently did not work.
Here is what booting looks like
So I rebooted into the installer, and chrooted into the system again
root@proxmox:/# mkdir /mnt/root
root@proxmox:/# mkdir /mnt/boot
root@proxmox:/# mkdir...
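For reference, the rest of such a chroot usually looks something like this (a generic sketch with placeholder device names, not the exact steps from the post above):

mount /dev/sdX3 /mnt/root          # root filesystem
mount /dev/sdX2 /mnt/root/boot     # separate boot partition, if present
for fs in dev proc sys; do mount --bind /$fs /mnt/root/$fs; done
chroot /mnt/root /bin/bash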
This aspect SHOULD NOT rely on a third party; it should come from the Proxmox group itself. Those scripts are well intentioned, but are NOT sufficiently vetted for security functions. Proxmox VE clusters are used in many sensitive environments and this can lead...
Is there a tutorial for this anywhere? I see several posts - mostly this one https://forum.proxmox.com/threads/how-to-migrate-from-legacy-grub-to-uefi-boot-systemd-boot.120531/ - about how to do this with a ZFS root file system but nothing for...