Hello,
For about a week now, the data LVM on one of my Proxmox nodes has been doing strange things.
Physically, it lives on a consumer-class Crucial MX500 SATA SSD connected directly to the motherboard controller (no PCIe HBA for the system disk), and the drive is brand new. Proxmox is set up as part of a cluster with LVM storage, and backups go to an external NFS location.
Last week I tried to migrate a stopped VM of ~64 GiB from one server to another and found that the SSD started to underperform (~5 MB/s) after roughly 55 GiB had been copied. It was so bad that even after cancelling the migration the SSD stayed busy writing at that speed, and I had to reboot the node, as it was completely unusable (this is my homelab, not running mission-critical workloads, so that was acceptable).
After that (and several retries, including making a backup and trying to restore it, obviously without luck), I ended up creating the instance from scratch and migrating the data from one VM to the other.
The problem is that the pve/data thin pool now shows 96% of its 377.55 GiB used (~363 GiB), while the total size of the stored VM disks, even if they were 100% provisioned, is only 168 GiB.
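For reference, this is the quick check I am using for the pool-side number (just lvs with explicit fields; the 377.55 GiB and 96.13% match the full output further down):
Code:
root@venom:~# lvs --noheadings --units g -o lv_size,data_percent pve/data
  377.55g 96.13
# 377.55 GiB x 96.13% = ~363 GiB allocated in the pool, versus
# 128 GiB + 40 GiB + 2 x 4 MiB = ~168 GiB of VM disks at full provisioning.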
I don't know whether the forced reboot made the LV misbehave, and honestly I have no idea how to fix it.
Any ideas, aside from backing everything up and reinstalling from scratch?
Thanks!
Some information about the storage:
Code:
root@venom:~# lvs -a
  LV              VG  Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  data            pve twi-aotz-- 377.55g             96.13  1.54
  [data_tdata]    pve Twi-ao---- 377.55g
  [data_tmeta]    pve ewi-ao----  <3.86g
  [lvol0_pmspare] pve ewi-------  <3.86g
  root            pve -wi-ao----  60.00g
  swap            pve -wi-ao----   4.00g
  vm-150-disk-0   pve Vwi-a-tz--   4.00m data        14.06
  vm-150-disk-1   pve Vwi-a-tz-- 128.00g data        100.00
  vm-201-disk-0   pve Vwi-aotz--   4.00m data        14.06
  vm-201-disk-1   pve Vwi-aotz--  40.00g data        71.51
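Summing what each thin volume actually maps in the pool (LV size x Data%) gives an even lower figure. A rough one-liner over the same lvs fields (just a sketch; it assumes GiB units and the field order shown):
Code:
root@venom:~# lvs --noheadings --units g -o lv_name,lv_size,data_percent pve \
    | awk 'NF == 3 && $1 != "data" { used += $2 * $3 / 100 } END { printf "thin LVs map ~%.1f GiB\n", used }'
# From the numbers above: 128 x 1.00 + 40 x 0.7151 + ~0 = ~156.6 GiB mapped,
# yet the pool itself reports 377.55 x 0.9613 = ~363 GiB in use.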
Code:
root@venom:~# lvdisplay
  --- Logical volume ---
  LV Path                /dev/pve/swap
  LV Name                swap
  VG Name                pve
  LV UUID                oXwwwG-ol5r-RTHr-LQ8k-AJXe-IqdF-QAeZ0M
  LV Write Access        read/write
  LV Creation host, time proxmox, 2022-11-04 22:43:53 +0100
  LV Status              available
  # open                 2
  LV Size                4.00 GiB
  Current LE             1024
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:0

  --- Logical volume ---
  LV Path                /dev/pve/root
  LV Name                root
  VG Name                pve
  LV UUID                jeFJRs-KWAr-kk5D-WrCf-1Rzg-qwwr-XHfPzW
  LV Write Access        read/write
  LV Creation host, time proxmox, 2022-11-04 22:43:53 +0100
  LV Status              available
  # open                 1
  LV Size                60.00 GiB
  Current LE             15360
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:1

  --- Logical volume ---
  LV Name                data
  VG Name                pve
  LV UUID                PAsdyN-8Klo-xKUy-nmsN-YYYv-CLZ0-tmF87a
  LV Write Access        read/write (activated read only)
  LV Creation host, time proxmox, 2022-11-04 22:44:08 +0100
  LV Pool metadata       data_tmeta
  LV Pool data           data_tdata
  LV Status              available
  # open                 0
  LV Size                377.55 GiB
  Allocated pool data    96.13%
  Allocated metadata     1.54%
  Current LE             96654
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:5

  --- Logical volume ---
  LV Path                /dev/pve/vm-150-disk-0
  LV Name                vm-150-disk-0
  VG Name                pve
  LV UUID                oLKxPj-OaKZ-GRBi-ZLh6-j1pT-ceqG-vfX1cy
  LV Write Access        read/write
  LV Creation host, time venom, 2022-11-09 12:34:53 +0100
  LV Pool name           data
  LV Status              available
  # open                 0
  LV Size                4.00 MiB
  Mapped size            14.06%
  Current LE             1
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:6

  --- Logical volume ---
  LV Path                /dev/pve/vm-150-disk-1
  LV Name                vm-150-disk-1
  VG Name                pve
  LV UUID                u3b54U-uZ4T-d9O8-MQ3m-eiEj-ktPI-0oTlQm
  LV Write Access        read/write
  LV Creation host, time venom, 2022-11-09 12:34:54 +0100
  LV Pool name           data
  LV Status              available
  # open                 0
  LV Size                128.00 GiB
  Mapped size            100.00%
  Current LE             32768
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:7

  --- Logical volume ---
  LV Path                /dev/pve/vm-201-disk-0
  LV Name                vm-201-disk-0
  VG Name                pve
  LV UUID                S5JIDU-wzlB-5M24-jm3v-x2sc-oiS9-duiwAC
  LV Write Access        read/write
  LV Creation host, time venom, 2022-12-21 20:27:54 +0100
  LV Pool name           data
  LV Status              available
  # open                 1
  LV Size                4.00 MiB
  Mapped size            14.06%
  Current LE             1
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:8

  --- Logical volume ---
  LV Path                /dev/pve/vm-201-disk-1
  LV Name                vm-201-disk-1
  VG Name                pve
  LV UUID                P6Jm81-SGnc-cyiv-xOZu-8hjY-hAc7-bcFvgO
  LV Write Access        read/write
  LV Creation host, time venom, 2022-12-21 20:27:55 +0100
  LV Pool name           data
  LV Status              available
  # open                 1
  LV Size                40.00 GiB
  Mapped size            71.51%
  Current LE             10240
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:9
Code:
root@venom:~# df -h
Filesystem                                      Size  Used Avail Use% Mounted on
udev                                            126G     0  126G   0% /dev
tmpfs                                            26G  2.5M   26G   1% /run
/dev/mapper/pve-root                             59G   19G   38G  33% /
tmpfs                                           126G   66M  126G   1% /dev/shm
tmpfs                                           5.0M     0  5.0M   0% /run/lock
/dev/sdj2                                       511M  340K  511M   1% /boot/efi
/dev/fuse                                       128M   64K  128M   1% /etc/pve
route-to-external-location:/mnt/proxmox-backups  11T  2.1T  8.3T  20% /mnt/pve/backups
tmpfs                                            26G     0   26G   0% /run/user/0