Bug - share iSCSI storage with volume chain (snapshots)

TimmiORG

Oct 23, 2023
Hi all,

I'm currently trying to hunt down an issue with VMs and snapshots on our shared iSCSI storage.
Yes, I know it is a technology preview, but I still think it makes sense to report the issue.

To me it looks like the disks are not being detached after a snapshot has been created, and this causes issues if the VM is migrated to a different host within the cluster while powering up after taking the snapshot or a rollback.

No VM is running on the system
Code:
lrwxrwxrwx 1 root root       7 Oct 31 13:25 MSA-Storage03 -> ../dm-7

VM is running
Code:
lrwxrwxrwx 1 root root       7 Oct 31 13:25 MSA-Storage03 -> ../dm-7
lrwxrwxrwx 1 root root       7 Oct 31 13:26 MSA--Storage03-snap_vm--299--disk--0_initial--OS.qcow2 -> ../dm-8
lrwxrwxrwx 1 root root       7 Oct 31 13:26 MSA--Storage03-vm--299--disk--0.qcow2 -> ../dm-9

The mappers for the VM disks are gone again after powering down the VM.

But if you create a snapshot, the mappers are not removed after the task completes.
Code:
lrwxrwxrwx 1 root root       7 Oct 31 13:28 MSA--Storage03-snap_vm--299--disk--0_initial--OS.qcow2 -> ../dm-8
lrwxrwxrwx 1 root root       7 Oct 31 13:28 MSA--Storage03-snap_vm--299--disk--0_Test.qcow2 -> ../dm-9
lrwxrwxrwx 1 root root       8 Oct 31 13:28 MSA--Storage03-vm--299--disk--0.qcow2 -> ../dm-10

This causes issues if the VM is balanced to a different host during power-up.
The mappers are gone again if I start/stop the VM on the same host.

So I assume that the LVM mappers should be removed after the snapshot task.
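To make the "stale mapper" check concrete, here is a minimal sketch. The names and VMID 299 are taken from the listings above; note that device-mapper doubles hyphens, so an LV called vm-299-disk-0.qcow2 appears as ...-vm--299--disk--0.qcow2 under /dev/mapper. On a real node you would pipe `ls -l /dev/mapper` in instead of the sample; the suggested cleanup command is an assumption and should only be run while the VM is powered off.

```shell
#!/bin/sh
# Sketch: spot leftover device-mapper nodes for a VM after a snapshot task.
vmid=299   # the VM from this thread

# Sample mirroring the /dev/mapper listing in the post; on a real node use:
#   ls -l /dev/mapper
sample='MSA--Storage03-snap_vm--299--disk--0_initial--OS.qcow2 -> ../dm-8
MSA--Storage03-snap_vm--299--disk--0_Test.qcow2 -> ../dm-9
MSA--Storage03-vm--299--disk--0.qcow2 -> ../dm-10'

# Count mappings for this VMID; a non-zero count while the VM is powered
# off would indicate stale mappings.
printf '%s\n' "$sample" | grep -c -- "vm--${vmid}--disk"

# Manual cleanup would presumably look like (VG name is hypothetical,
# run only with the VM off):
#   lvchange -an <vgname>/vm-299-disk-0.qcow2
```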

Hope this helps and regards
 
the snapshot volumes need to be active. could you describe which symptoms you are seeing exactly?

the only times volumes are usually deactivated are
- as part of error handling for freshly allocated volumes
- as part of migration to another node

if you are missing some deactivation when migrating, please clearly describe the state before and after migration, and include "pveversion -v" and the VM and storage configuration. thanks!
 
Hi Fabian,

I'm running a cluster with 4 nodes and shared iSCSI storage.
The VM disks (qcow2) are not registered with the OS while the VM is off.

They are only visible (e.g. in dmsetup) while the VM is running on the host.
Everything works normally during migration or if I power off the VM.

But when I take a snapshot, the volumes stay registered with the OS.
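To describe the state before and after, a minimal diff of the /dev/mapper entries may help. The two samples mirror the listings earlier in this thread; on a real node you would capture `ls /dev/mapper` at each step instead of hard-coding them.

```shell
#!/bin/sh
# Sketch: diff the mapper entries before and after the snapshot task.
before='MSA-Storage03'
after='MSA-Storage03
MSA--Storage03-snap_vm--299--disk--0_initial--OS.qcow2
MSA--Storage03-snap_vm--299--disk--0_Test.qcow2
MSA--Storage03-vm--299--disk--0.qcow2'

# comm needs sorted input; write both lists to files first.
printf '%s\n' "$before" | sort > /tmp/mapper_before.txt
printf '%s\n' "$after"  | sort > /tmp/mapper_after.txt

# Entries present only after the snapshot, i.e. the leftovers:
comm -13 /tmp/mapper_before.txt /tmp/mapper_after.txt
```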

This is the output you requested:
Code:
proxmox-ve: 9.0.0 (running kernel: 6.14.11-4-pve)
pve-manager: 9.0.11 (running version: 9.0.11/3bf5476b8a4699e2)
proxmox-kernel-helper: 9.0.4
proxmox-kernel-6.14.11-4-pve-signed: 6.14.11-4
proxmox-kernel-6.14: 6.14.11-4
proxmox-kernel-6.14.11-3-pve-signed: 6.14.11-3
proxmox-kernel-6.14.11-2-pve-signed: 6.14.11-2
proxmox-kernel-6.14.11-1-pve-signed: 6.14.11-1
proxmox-kernel-6.14.8-2-pve-signed: 6.14.8-2
proxmox-kernel-6.8.12-13-pve-signed: 6.8.12-13
proxmox-kernel-6.8: 6.8.12-13
proxmox-kernel-6.8.12-9-pve-signed: 6.8.12-9
ceph-fuse: 19.2.3-pve2
corosync: 3.1.9-pve2
criu: 4.1.1-1
frr-pythontools: 10.3.1-1+pve4
ifupdown2: 3.3.0-1+pmx10
intel-microcode: 3.20250512.1
ksm-control-daemon: 1.5-1
libjs-extjs: 7.0.0-5
libproxmox-acme-perl: 1.7.0
libproxmox-backup-qemu0: 2.0.1
libproxmox-rs-perl: 0.4.1
libpve-access-control: 9.0.3
libpve-apiclient-perl: 3.4.0
libpve-cluster-api-perl: 9.0.6
libpve-cluster-perl: 9.0.6
libpve-common-perl: 9.0.11
libpve-guest-common-perl: 6.0.2
libpve-http-server-perl: 6.0.5
libpve-network-perl: 1.1.8
libpve-rs-perl: 0.10.10
libpve-storage-perl: 9.0.13
libspice-server1: 0.15.2-1+b1
lvm2: 2.03.31-2+pmx1
lxc-pve: 6.0.5-1
lxcfs: 6.0.4-pve1
novnc-pve: 1.6.0-3
proxmox-backup-client: 4.0.16-1
proxmox-backup-file-restore: 4.0.16-1
proxmox-backup-restore-image: 1.0.0
proxmox-firewall: 1.2.0
proxmox-kernel-helper: 9.0.4
proxmox-mail-forward: 1.0.2
proxmox-mini-journalreader: 1.6
proxmox-offline-mirror-helper: 0.7.2
proxmox-widget-toolkit: 5.0.6
pve-cluster: 9.0.6
pve-container: 6.0.13
pve-docs: 9.0.8
pve-edk2-firmware: 4.2025.02-4
pve-esxi-import-tools: 1.0.1
pve-firewall: 6.0.3
pve-firmware: 3.17-2
pve-ha-manager: 5.0.5
pve-i18n: 3.6.1
pve-qemu-kvm: 10.0.2-4
pve-xtermjs: 5.5.0-2
qemu-server: 9.0.23
smartmontools: 7.4-pve1
spiceterm: 3.4.1
swtpm: 0.8.0+pve2
vncterm: 1.9.1
zfsutils-linux: 2.3.4-pve1