Proxmox 6 LVM SSD Cache

Adam86

Jul 11, 2018
Hello,

I've just done a new install of Proxmox onto a new server.

In the past on Proxmox 5, I have usually used an SSD drive as an LVM cache for the default pve volume, which is installed onto a RAID-1 mirror. I then install my VMs onto this volume, as I usually only have a few VMs running.

The commands used in the past are usually:

pvcreate /dev/sdc
vgextend pve /dev/sdc

lvcreate -L 100G -n CacheDataLV pve /dev/sdc
lvcreate -L 5G -n CacheMetaLV pve /dev/sdc

lvconvert --type cache-pool --poolmetadata pve/CacheMetaLV pve/CacheDataLV
lvconvert --type cache --cachepool pve/CacheDataLV --cachemode writeback pve/data
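After the two lvconvert steps, it's worth confirming that the cache actually attached before rebooting. A quick read-only check (assuming the same pve volume group and LV names as above) might look like this:

```shell
# Show all LVs, including hidden cache internals; after the conversion,
# the data volume should report a cache segment type and writeback mode.
lvs -a -o lv_name,vg_name,segtype,cache_mode pve

# The device-mapper table should also list a cache target for the volume.
dmsetup table | grep cache
```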

On Proxmox 5 this has usually worked fine, but on Proxmox 6 I am having an issue after rebooting the server. If I try to create a VM I get the following error:

Activation of logical volume pve/data is prohibited while logical volume pve/data_tmeta is active.
TASK ERROR: unable to create VM 100 - lvcreate 'pve/vm-100-disk-0' error: Aborting. Failed to locally activate thin pool pve/data.

If I run lvs -a I get the following output:

root@pve-tc:~# lvs -a
  LV                  VG  Attr       LSize   Pool          Origin             Data%  Meta%  Move Log Cpy%Sync Convert
  [CacheDataLV]       pve Cwi---C--- 100.00g
  [CacheDataLV_cdata] pve Cwi------- 100.00g
  [CacheDataLV_cmeta] pve ewi-------   5.00g
  data                pve twi---tz-- 429.11g
  [data_tdata]        pve Cwi---C--- 429.11g [CacheDataLV] [data_tdata_corig]
  [data_tdata_corig]  pve owi---C--- 429.11g
  [data_tmeta]        pve ewi-a-----  <4.38g
  [lvol0_pmspare]     pve ewi-------   5.00g
  root                pve -wi-ao----  96.00g
  swap                pve -wi-ao----   8.00g

I am just wondering why setting up an LVM SSD cache does not appear to work for me on Proxmox 6, and whether this is easy to fix.

Regards
 
As a test I installed Proxmox 5 from fresh and went through the same commands, and that worked as expected, so this is definitely related to Proxmox 6.
 
The same happened to me too; the command below fixed it until the next reboot.
Does anybody have an idea how to make this permanent?

After the reboot some LVs seem to have been activated before others, and I have to fix it manually. I can type:



and everything's okay again, but I've not looked for a proper solution.
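The actual command has dropped out of the post above, but judging from the "data_tmeta is active" error, a manual recovery on a setup like this would presumably run along these lines (an untested sketch; the LV names are taken from the lvs output earlier in the thread):

```shell
# Deactivate the thin-pool metadata sub-LV that got activated on its own
# during boot, ahead of the cached pool it belongs to...
lvchange -an pve/data_tmeta

# ...then activate the whole thin pool so Proxmox can create VM disks again.
lvchange -ay pve/data
```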
 
Just solved the issue by:

echo "dm_cache" >> /etc/initramfs-tools/modules
echo "dm_cache_mq" >> /etc/initramfs-tools/modules
echo "dm_persistent_data" >> /etc/initramfs-tools/modules
echo "dm_bufio" >> /etc/initramfs-tools/modules

update-initramfs -u
reboot
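To check that the fix took effect, one can verify that the modules actually ended up in the rebuilt initramfs (the image path below assumes the currently running kernel):

```shell
# List the initramfs contents and look for the dm cache modules
# (module file names use dashes, e.g. dm-cache.ko).
lsinitramfs /boot/initrd.img-$(uname -r) | grep -E 'dm-cache|dm-persistent-data|dm-bufio'

# After rebooting, confirm the kernel loaded them (lsmod shows underscores).
lsmod | grep -E 'dm_cache|dm_persistent_data|dm_bufio'
```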
 

WOW this just saved my life.

Live production servers couldn't come online after an emergency reboot caused by load.
I stressed a bit, made coffee, found this post, ran the fix, and everything is good again.

Thank you
 
I have been testing this for the past two days and I see no benefit if you have a good RAID controller with cache.
Putting the OS on a separate NVMe drive, away from the data disks/controller, makes more sense to me at this point.
 
