Ceph cache tier and disk resizing

satiel

Member
Jan 13, 2016
Hello,
I'm currently running a Ceph cluster (Hammer); last weekend I implemented a cache tier (writeback mode) of SSDs for better performance.

Everything seems fine except for disk resizing.
I have a Windows VM with a raw RBD disk. I powered off the VM, resized the disk, and verified that both Ceph RBD and Proxmox see the new expanded capacity.

When I powered the VM back on I noticed something strange: the guest doesn't see the new free space.
I tried to refresh and rescan with every tool I know, but with no results.

I think the problem is that the disk rescan is performed against the cache pool instead of the base pool.

After three days the problem remains.

What should I do?
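For reference, a resize on the Ceph/Proxmox side usually comes down to a couple of commands. This is only a sketch with hypothetical pool and image names; the actual `rbd` calls are commented out because they need a live cluster (on Hammer, `rbd resize --size` takes the size in MiB):

```shell
# Hypothetical names for illustration only.
POOL=rbd
IMAGE=vm-100-disk-1
NEW_SIZE_MB=$((20 * 1024))   # 20 GiB expressed in MiB

# Grow the image on the Ceph side:
# rbd resize --pool "$POOL" --size "$NEW_SIZE_MB" "$IMAGE"

# Verify the new size as Ceph sees it:
# rbd info --pool "$POOL" "$IMAGE"

echo "target size: ${NEW_SIZE_MB} MiB"
```

Inside a Windows guest you would then rescan in Disk Management (or `diskpart` → `rescan`) and extend the partition into the new space.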
 
Did you resize the Windows partition inside the VM?
 
Hi,
it works for me on an EC pool too:
Code:
# fdisk -l
Disk /dev/vdb: 10 GiB, 10737418240 bytes, 20971520 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

### expand from 10 to 20GB ###

# dmesg | tail -2
[  134.086819] virtio_blk virtio2: new size: 41943040 512-byte logical blocks (21.4 GB/20.0 GiB)
[  134.086825] vdb: detected capacity change from 10737418240 to 21474836480

# fdisk -l
Disk /dev/vdb: 20 GiB, 21474836480 bytes, 41943040 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Udo
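As a sanity check on the numbers in the output above: 20 GiB is 21474836480 bytes, which at 512 bytes per logical sector is exactly the 41943040 sectors fdisk reports. A quick sketch of the arithmetic:

```python
# Verify the size arithmetic from the fdisk/dmesg output above.
SECTOR = 512                    # logical sector size in bytes

old_bytes = 10 * 1024**3        # 10 GiB before the resize
new_bytes = 20 * 1024**3        # 20 GiB after the resize

print(old_bytes, old_bytes // SECTOR)   # 10737418240 bytes, 20971520 sectors
print(new_bytes, new_bytes // SECTOR)   # 21474836480 bytes, 41943040 sectors
```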
 
Hi,
the base pool is a replicated pool as well.
I created another pool with the same CRUSH rule but without cache tiering, and resizing works as expected there.
Maybe this is happening because I added the cache tier to an existing pool (I did it with all VMs powered on)?
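If stale image metadata in the cache tier is the suspect, one thing worth trying is flushing and evicting the cache so the base pool holds the current state before rescanning. A sketch with hypothetical pool names; the `rados`/`rbd` calls are commented out since they need a live cluster, and a full flush on a busy cache pool can take a while:

```shell
# Hypothetical pool names for illustration only.
BASE_POOL=rbd
CACHE_POOL=rbd-cache

# Flush dirty objects and evict everything from the cache tier
# so the base pool is fully up to date:
# rados -p "$CACHE_POOL" cache-flush-evict-all

# Then re-check the image header:
# rbd info -p "$BASE_POOL" vm-100-disk-1

echo "flush target: ${CACHE_POOL}"
```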
 
