Zero out a ceph disk

adamb

Famous Member
Mar 1, 2012
Is it possible to zero out a ceph disk?

For example, we write zeros to our iSCSI storage to get our space back, or to help make the Proxmox VM backups nice and tight. Would the same process of using dd on a Ceph RBD device get the job done as well?

I started to test this, but the object count in my pool was still increasing. Maybe there is another way to get this done?
 
Ceph disks are thin-provisioned by default. To get space back, run a trim (discard) inside the VM/CT. For VMs, the discard option needs to be set on the disks.
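A rough sketch of what that looks like (VM/CT ID 100, a storage named "ceph-vm", and the disk volume name are just examples; adjust to your setup):

    # on the PVE host: re-attach the disk with discard enabled
    # (the VM typically needs a full stop/start for the changed option to apply)
    qm set 100 --scsi0 ceph-vm:vm-100-disk-0,discard=on

    # inside the VM guest: release unused blocks on all mounted filesystems
    fstrim -av

    # for containers, trim can be triggered from the host
    pct fstrim 100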
 

Ok I think I got this all working with a discard setup. Appreciate the input.
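For anyone landing here later: whether a trim actually released space can be checked by comparing pool/image usage before and after, for example (pool and image names are just examples):

    ceph df detail                   # pool-level usage and object counts
    rados df                         # per-pool objects and bytes
    rbd du ceph-vm/vm-100-disk-0     # actual vs. provisioned usage of one image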
 
This is an old thread, but I'm interested in an answer that goes beyond TRIM (discard) if anyone has one.

For context:
  • Although I've documented my Proxmox VE config in detail, and I'm backing up all VMs/CTs to Proxmox Backup Server, I still like to make a full disk image of the disks in my Proxmox server/cluster once it is set up, so I can get back to a known-good configuration without lots of manual steps.

  • My PVE cluster consists of 3 identical nodes with:
    • 1 SSD (Proxmox boot, local-lvm with an EXT4 logical volume I use for a few local backups)
    • 1 NVMe (each assigned to my Ceph cluster)
    • No disk encryption
  • The way I achieve full system backup is by using a bootable Linux distro (SuSE's Rescue System) to run "dd" and "pigz" to create a compressed byte-for-byte image of my disks (/dev/sda and /dev/nvme0n1); a rough sketch of the commands is shown after this list.

  • If I understand TRIM correctly, all it does is mark chunks of data on an SSD as unused; it doesn't actually zero out that data.
  • This matters because "dd" will read every block of that non-zero data and feed it to "pigz", but the compression will be limited because free space that isn't all the same value (zero) doesn't compress well.

  • An illustrative example:
    • When I first built my cluster and hadn't created any VMs/CTs, I imaged all 6 of the disks from the 3 nodes, and it was 23 GB (compressed).
    • Then I created a bunch of VMs, including several I eventually deleted, cloned, etc. as I tested things. I have "discard" enabled for all of them. They require 550 GB of storage space (uncompressed), which occupies a total of 1650 GB on the disks dedicated to Ceph (to maintain 3 replicas).
    • Imaging these NVMe disks (WITH compression) results in backup files totaling 2390 GB, which is about 45% larger than the 1650 GB of uncompressed data itself. The only way that happens is if there's non-zero data in the "free" space that can't be compressed much.
    • Theoretically, this compressed backup should always be less than 1650 GB, and perhaps even less than 550 GB. To prove the point: the same VM/CT data, all backed up on my Proxmox Backup Server (which uses de-duplication, essentially a type of compression), occupies only 190 GB - less than 1/10th the size of the compressed disk image!
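For reference, the imaging step is roughly this (run from the rescue system; the backup mount point is just an example):

    dd if=/dev/sda bs=1M status=progress | pigz > /mnt/backup/sda.img.gz
    dd if=/dev/nvme0n1 bs=1M status=progress | pigz > /mnt/backup/nvme0n1.img.gz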
So, has anyone found a way to zero out the free space in a Ceph cluster (and in the local-lvm while we're at it)? Doing so in a VM is easy, but doing so on the Proxmox+Ceph host definitely isn't straightforward.
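The closest workarounds I know of look something like this (sketch only; "pve" is the default PVE volume group name, adjust as needed):

    # inside a VM guest: fill free space with zeros, then delete the filler
    # (dd stops when the filesystem is full; on an RBD-backed disk this
    #  temporarily allocates the zeros on all replicas, and the underlying
    #  OSD blocks still aren't zeroed afterwards)
    dd if=/dev/zero of=/zero.fill bs=1M status=progress; sync; rm -f /zero.fill

    # on the PVE host or from the rescue system: zero only the *unallocated*
    # space in the volume group by carving it into a temporary LV
    lvcreate -l 100%FREE -n zerofill pve
    dd if=/dev/zero of=/dev/pve/zerofill bs=1M status=progress
    lvremove -y pve/zerofill

Neither of these reaches free space inside the LVM-thin pool or the Ceph OSD's BlueStore device, which is exactly the part that isn't straightforward.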
 
