Copy progress stalls when moving drives from Ceph to local LVM

devaux

Active Member
Feb 3, 2024
Hi there,
I'm taking my first steps with Ceph storage and I'm not sure whether something is wrong or this behaviour is expected.
When moving a disk from Ceph to local LVM-thin (2x business SSDs in RAID1 on a RAID controller), the progress stalls for about a minute, multiple times.


(screenshot of the move disk task progress)


Strangely enough, I can't reproduce this behaviour when I move it back to Ceph. It only happens when I move from Ceph to local LVM.

The VM is a freshly installed Ubuntu 24.04 Server with a 120 GB disk using an LVM partition layout.

Update: I just noticed that the disk occupies 120 GB on the LVM-thin storage, although only about 5 GB of data are used inside the Ubuntu VM.
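For reference, this is roughly how I check the actual allocation on the LVM-thin side from the PVE host; this assumes the default pve VG with the data thin pool (names may differ on other setups):
Code:
# Data% shows how much of each thin volume / the pool is actually allocated
lvs -o lv_name,lv_size,data_percent,pool_lv pve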

Update 2: I did the following test: installed a new Ubuntu VM with a 120 GB disk, LVM-partitioned, on the LVM-thin storage. As expected, it only took a few GB on the LVM-thin. But after moving it to Ceph, it looks like it's thick-provisioned:
Code:
~# rbd du vms0_ceph/vm-901-disk-0
NAME           PROVISIONED  USED
vm-901-disk-0      120 GiB  120 GiB

Update 3: It looks like I can get some space back by running "fstrim -va" inside the VM, at least for the space that is allocated inside the VM:
Code:
~# rbd du vms0_ceph/vm-901-disk-0
NAME           PROVISIONED  USED
vm-901-disk-0      120 GiB  69 GiB
Code:
# pvs
  PV         VG        Fmt  Attr PSize    PFree
  /dev/sda3  ubuntu-vg lvm2 a--  <118.00g 59.00g
The workaround I found for the unallocated space is to create a temporary LV covering the free space in the VG and run blkdiscard on it:
Code:
# create a temporary LV covering all remaining free space in the VG
lvcreate -l100%FREE -n blkdiscard ubuntu-vg
# discard the whole LV so the backing storage can release the blocks
blkdiscard -v /dev/ubuntu-vg/blkdiscard
# remove the temporary LV again
lvremove ubuntu-vg/blkdiscard
Now it looks much better:
Code:
# rbd du vms0_ceph/vm-901-disk-0
NAME           PROVISIONED  USED  
vm-901-disk-0      120 GiB  9.8 GiB
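As a side note: for fstrim inside the guest to reach the backing storage at all, the virtual disk has to be attached with the discard option. A rough sketch of how to check/enable that from the PVE host; VMID 901 and the scsi0 attachment are just how it looks in my test setup, adjust as needed:
Code:
# check whether the disk already has discard enabled
qm config 901 | grep scsi0
# enable discard on the disk (takes effect once the disk is re-attached, e.g. stop/start)
qm set 901 --scsi0 vms0_ceph:vm-901-disk-0,discard=on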

Update 4: OK, it's definitely trim-related. If I trim the VM's disks before moving from LVM-thin to Ceph, the move is a) faster and b) doesn't show these stalls.
But after every move between these two storage types, I have to trim again.
So I guess it is what it is(?)
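If I end up doing this regularly, I'll probably just trigger the trim from the host via the QEMU guest agent right after each move; a quick sketch, assuming VMID 901 and that the guest agent is installed and enabled in the VM:
Code:
# run fstrim inside the guest through the QEMU guest agent
qm agent 901 fstrim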
 

Attachments

  • task-vm-qmmove-2024-10-07T05_57_49Z.log
  • task-vm-with-trimmed-disk-qmmove-2024-10-07T07_36_45Z.log
