Hi there,
I'm taking my first steps with Ceph storage and I'm not sure whether something is wrong or this behaviour is expected.
When moving a disk from Ceph to local LVM-thin (2x enterprise SSDs in RAID1 behind a RAID controller), the progress stalls several times for about a minute each.
Strangely enough, I can't reproduce this behaviour when I move the disk back to Ceph; it only happens when moving from Ceph to local LVM-thin.
The VM is a freshly installed Ubuntu 24.04 Server with a 120 GB disk and an LVM partition layout.
Update: I just noticed that the disk occupies 120 GB on the LVM-thin storage, although only about 5 GB of data is actually used inside the Ubuntu VM.
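For anyone checking the same thing: the actual allocation of the thin volume can be verified on the host with lvs. A minimal sketch; the VG name ("pve") is an assumption based on a default Proxmox setup, adjust to your naming:
Code:
# show size vs. actually allocated data of the thin LVs (VG name assumed)
lvs -o lv_name,pool_lv,lv_size,data_percent pve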
Update 2: I did the following test: I installed a new Ubuntu VM with a 120 GB disk, LVM-partitioned, on the LVM-thin storage. As expected it only took a few GB of space on the LVM-thin. But after moving it to Ceph, it looks like it's thick-provisioned:
Code:
~# rbd du vms0_ceph/vm-901-disk-0
NAME           PROVISIONED  USED
vm-901-disk-0      120 GiB  120 GiB
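As a side note, on a reasonably recent Ceph release (Nautilus or later) the zeroed extents can apparently also be deallocated from the Ceph side without touching the guest; a minimal sketch using the pool/image name from above (I haven't compared this against the in-guest trim):
Code:
# deallocate extents of the image that are fully zeroed
rbd sparsify vms0_ceph/vm-901-disk-0
# check the allocation again afterwards
rbd du vms0_ceph/vm-901-disk-0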
Update 3: It looks like I can get some of the space back by running "fstrim -va" inside the VM, at least the space that is allocated within the VM's filesystems:
Code:
~# rbd du vms0_ceph/vm-901-disk-0
NAME           PROVISIONED  USED
vm-901-disk-0      120 GiB  69 GiB
The workaround I found for the remaining, unallocated space is to create a temporary LV from the free extents and trim it with blkdiscard (fstrim only reaches mounted filesystems, so it cannot touch the free space in the VG):
Code:
# pvs
PV         VG        Fmt  Attr PSize    PFree
/dev/sda3  ubuntu-vg lvm2 a--  <118.00g 59.00g
Code:
# create a temporary LV spanning all free extents of the VG
lvcreate -l100%FREE -n blkdiscard ubuntu-vg
# discard every block of that temporary LV
blkdiscard -v /dev/ubuntu-vg/blkdiscard
# remove the temporary LV again
lvremove ubuntu-vg/blkdiscard
Now it looks much better:
Code:
# rbd du vms0_ceph/vm-901-disk-0
NAME           PROVISIONED  USED
vm-901-disk-0      120 GiB  9.8 GiB
Update 4: OK, it's definitely trim-related. If I trim the VM's disks before moving from LVM-thin to Ceph, the move is a) faster and b) free of the stalls described above.
But after every move between these two storage types I have to trim again.
So I guess it is what it is(?)
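One way to avoid the manual trimming after every move would be to let the guest discard unused blocks on its own. A minimal sketch, assuming VM ID 901 with its disk as scsi0 on a VirtIO SCSI controller and a storage named local-lvm (names are assumptions, adjust to your setup):
Code:
# on the PVE host: enable discard passthrough (and SSD emulation) for the disk
qm set 901 --scsi0 local-lvm:vm-901-disk-0,discard=on,ssd=1
# inside the guest: trim periodically (usually enabled by default on Ubuntu 24.04)
systemctl enable --now fstrim.timer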