Maybe because if the space is allocated again to a new VM, and that VM does a discard, it should remove the zeroed blocks
yes, it's zeroing for security, to avoid leaving old data behind when you create a new VM on the previously allocated space of...
for secure delete, the new blkdiscard here is not using discard but the zeroing feature (blkdiscard -z). It's a little bit different, because it really writes zeroes (telling the storage to write zeroes by range, from this begin sector -...
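As a rough illustration of the difference (the device path and range are made up, this is not the exact call used by the patch):

# discard (TRIM/UNMAP): tell the device the range is unused, old data may or may not still be readable
blkdiscard --offset 0 --length 1G /dev/vg0/vm-100-disk-0
# zero-out (blkdiscard -z / --zeroout): explicitly ask the device to write zeroes over the range
blkdiscard --zeroout --offset 0 --length 1G /dev/vg0/vm-100-disk-0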
I wonder
yes, it should work. The only "problem" is that LVM reserves the block address space. The creation of the snapshot itself does not write zeroes.
I wonder if it could be possible to declare a LUN with a virtual size bigger than your...
We're proud to present the next iteration of our Proxmox Virtual Environment platform. This new version 9.1 is the first point release since our major update and is dedicated to refinement.
This release is based on Debian 13.2 "Trixie" but we're...
Currently there is no thin provisioning, so it won't help lvm+qcow2 to reclaim space (as the LVM under the qcow2 is thick).
But it could reduce the size of your backups, as it cleans the disk blocks on file deletion.
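If this is about guest-side discard/trim, a rough example of the workflow (VM ID, bus and storage names are just examples):

# on the host: expose discard support on the virtual disk
qm set 100 --scsi0 local-lvm:vm-100-disk-0,discard=on,ssd=1
# inside the guest: release the blocks of deleted files so they stay out of the backup
fstrim -av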
@fstrankowski I'm looking into adding the option to the Proxmox GUI. Just to be sure, how do you set the value?
"ceph config set client.admin rbd_read_from_replica_policy localize"
?
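For reference, this is roughly how I'd check what is currently stored in the cluster config db:

ceph config dump | grep rbd_read_from_replica_policy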
It's supported on PVE 9 (with an LVM block volume formatted with qcow2, and a qcow2 snapshot chain of LVM volumes). Qcow2 is not only for files. (Same for VHD on XCP-ng: their LVM volumes are formatted with VHD.)
about snapshots && space usage: only qcow2 with external snapshots (so for shared block devices) is currently still experimental, and it does indeed need the same space for each snapshot.
but all other plugins can do snapshots && don't need the same space for...
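As a rough sketch of what "LVM block volume formatted with qcow2" means (volume group, VM ID and size are made up, and on PVE the storage plugin does all of this for you):

lvcreate -L 32G -n vm-100-disk-0 vg0                    # thick LVM volume on the (shared) block device
qemu-img create -f qcow2 /dev/vg0/vm-100-disk-0 32G     # write a qcow2 container directly onto the block device
qemu-img info /dev/vg0/vm-100-disk-0                    # reports format: qcow2, not raw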
Block jobs
Non-active block-commit was optimized to keep sparseness
blockdev-mirror was optimized to do less work with zero blocks
blockdev-mirror and blockdev-backup gained new options, see QMP section
interesting :)...
mmm, this is strange. For the Intel ones, I have some of them in production and I don't see these results.
does it solve the problem if you force the values manually? (you can use "ceph config set osd.x osd_mclock_max_capacity_iops_ssd 30000")
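To see what the OSDs measured and are currently using, something like this should work (osd.0 is just an example):

ceph config dump | grep osd_mclock_max_capacity_iops
ceph config show osd.0 osd_mclock_max_capacity_iops_ssd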
https://lore.proxmox.com/pve-devel/mailman.50.1727091601.332.pve-devel@lists.proxmox.com/t/
"
* During backup, there is often a longer running connection open to our QMP socket of running VMs
(/var/run/qemu-server/XXXX.qmp, where XXXX is...
OK, I'm quite confident that I've found and isolated the problem (and will drop it on github soon).
So let's start with the analysis. First, be careful: the virtio release dates do not line up with the commit dates.
I mean that if you see...
I doubt that they are using the Proxmox API, because their backup talks to the QEMU socket directly without going through the Proxmox API, and I think this is why you see QMP socket errors. (Only one client can be connected to the socket at a time)...
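Just to illustrate what "using the qemu socket directly" looks like (VM ID 100 is an example, and this is only a sketch): while another client holds the socket, this will fail, which is exactly the kind of error you're seeing.

echo '{"execute":"qmp_capabilities"} {"execute":"query-status"}' | socat - UNIX-CONNECT:/var/run/qemu-server/100.qmp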
>>E: Failed to fetch https://enterprise.proxmox.com/debian/ceph-squid/dists/trixie/InRelease 401 Unauthorized [IP: 66.70.154.82 443]
if you don't have an enterprise subscription, you need to configure the no-subscription repositories...
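Roughly (file names are up to you, and newer PVE 9 installs use the deb822 .sources format rather than the one-line format shown here), after disabling the enterprise entries:

# /etc/apt/sources.list.d/pve-no-subscription.list
deb http://download.proxmox.com/debian/pve trixie pve-no-subscription
# /etc/apt/sources.list.d/ceph-no-subscription.list   (replaces the ceph-squid enterprise repo that gives the 401)
deb http://download.proxmox.com/debian/ceph-squid trixie no-subscription

then run apt update again.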