What exactly fails, and where? You have the pve-headers package, which lets you compile modules yourself and blacklist the existing ones. The reason this step is taken very rarely (in most cases) is that you have to recompile everything once...
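For anyone who wants to try that route, a rough sketch of the usual Debian-style steps (the module name is just a placeholder, and the exact headers package name depends on your PVE release, e.g. pve-headers-... on older versions vs. proxmox-headers-... on newer ones):

apt install pve-headers-$(uname -r)            # headers matching the running kernel
echo "blacklist some_module" > /etc/modprobe.d/some_module.conf
update-initramfs -u -k all                     # so the blacklist also applies at early boot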
Besides what @bbgeek17 said, I'd suggest you either set up the Proxmox management side to be accessible only via VPN, or, if you know Linux, simply close port 8006 and tunnel it through SSH, so only SSH needs to be open in the firewall.
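For the tunnel variant, a minimal sketch (the hostname is a placeholder):

ssh -N -L 8006:localhost:8006 root@your-pve-host
# then open https://localhost:8006 in the local browser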
You can...
I'd highly discourage anyone from doing that on a production cluster. @SteveITS was correct: it's wise to update, upgrade and then dist-upgrade to bring the whole platform to the same major and minor version.
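For reference, on each node that boils down to the plain apt sequence below (and, if I remember the Proxmox docs correctly, dist-upgrade or full-upgrade is the step that really matters, since a plain upgrade can hold packages back):

apt update
apt upgrade
apt dist-upgrade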
This is - at least - surprising to me.
You have a 500 GB disk on a zpool with 900 GB capacity. You're using no snapshots, which could otherwise take up additional space.
So what's eating the 399 GB here?
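One quick way to see where it is going (pool name taken from this thread) is the space breakdown view:

zfs list -o space -r vms
# USED is split into USEDSNAP / USEDDS / USEDREFRESERV / USEDCHILD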
Write amplification refers to each little change in...
Thanks to everyone contributing so far!
Summary to this point:
- The original volume for VM 11000 uses 300 GB+ beyond its allocated volsize on a non-raidz, single-disk pool (nearly a factor of 2x)
- Creating new test volumes and filling them with...
I'm 100% there with you. I created the disk in the most boring way possible:
Standard settings, single disk, didn't change a single setting. I could (theoretically) delete the volume and recreate it later down the road when I finished...
Just to complete my mental distress, I've tried to verify the behaviour:
[root@ ~]# zfs get -p used,logicalused,compressratio,copies,volblocksize vms/vm-11000-disk-0
NAME PROPERTY VALUE SOURCE
vms/vm-11000-disk-0 used...
Your time is highly appreciated, aaron (as always!). You may be served:
[root@ ~]# zfs get all vms
NAME PROPERTY VALUE SOURCE
vms type filesystem -
vms creation Tue May 20...
The VM gets trimmed once every day. All filesystems are trimmed.
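For anyone trying to reproduce this, the checks I'd use (a sketch, assuming a systemd-based Linux guest and a disk with the Discard option enabled in the VM's hardware settings):

# inside the guest
systemctl status fstrim.timer
fstrim -av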
Your wish may be fulfilled:
[root@ ~]# zpool status -v
pool: rpool
state: ONLINE
scan: scrub repaired 0B in 00:00:29 with 0 errors on Sun Dec 14 00:24:30 2025
config...
I was thinking the exact same thing, and I'm still not certain I've figured out the correct reason just yet.
It was 545G the whole time. Then I filled the partition up and it just kept growing nonstop.
raidz needs to store parity information somewhere (in blocks of at least the pool's ashift, which nowadays is usually 12, i.e. 4k). zvols by default use a very small volblocksize (8k). If you write a single 8k block to the zvol and use raidz2...
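To put rough numbers on that line of thought (my own back-of-the-envelope, assuming ashift=12, volblocksize=8k and raidz2, not measured on the pool in this thread): an 8k block is 2 data sectors of 4k, raidz2 adds 2 parity sectors, and as far as I know the allocator then pads the allocation up to a multiple of parity+1 = 3 sectors. That makes 6 sectors (24k) on disk for 8k of logical data, roughly 3x, which is exactly the kind of used >> logicalused gap discussed here. A larger volblocksize brings the overhead back toward the nominal raidz2 ratio. (It doesn't explain a single-disk pool like the one in this thread, of course.)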
Looks totally fine to me. You're using roughly 60 GB without L2ARC; the rest is L2ARC usage (~170 GB) plus some remaining free space (~20 GB). I see absolutely nothing wrong here.
The graphs have different colors. If you hover over these graphs it will tell...
Heya!
I've just had one of my VMs stall because of a Proxmox "io-error". Well, it seems the pool usage is at 100%.
[root@~]# zfs get -p volsize,used,logicalused,compressratio vms/vm-11000-disk-0
NAME PROPERTY VALUE SOURCE...
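For completeness, the pool-level view that confirms the 100% (a sketch, not pasted output):

zpool list
# the SIZE / ALLOC / FREE / CAP columns show the pool itself, independent of per-dataset accounting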
If you have a pull request for Proxmox, please be so kind as to link it here so I can review/improve it before Proxmox has a chance to merge it.
I'd add the option to the Ceph pool configuration UI, because it's linked on a per-pool basis and...
rbd config pool set POOLNAME rbd_read_from_replica_policy localize
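If useful, the value can be read back per pool the same way; as far as I know the get subcommand mirrors set:

rbd config pool get POOLNAME rbd_read_from_replica_policy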
Regarding the post mortem: I had to delay my work on that because I have to deal with a lot of other stuff with higher in-house priority at the moment. Hopefully I'll be able to...