[SOLVED] Can't seem to move disks between local LVM Storage

Tau

Member
Sep 24, 2020
Hello fellow Proxmox users,

At the moment I run 4 Proxmox hosts on version 7.4-3.
All these hosts have local LVM storage, which I use to run the VM disks on.
Now, if I try to move a disk from one host's local LVM storage to another host's local LVM storage, I am unable to, and I get the error: TASK ERROR: storage migration failed: no such volume group 'SSD-Local-Proxmox6'
I did share all the added local LVM storages in the cluster under Datacenter - Storage - Edit, with the 'Shared' box enabled.

What am I missing here?


I was wondering if someone could clarify this.
 
the "Shared" check box is not for actively sharing a storage, but for telling PVE that is is already shared by other means (such as, being backed by some network storage solution), so that PVE knows it can expect the contents to be identical without the need to migrate volumes.

if you want to migrate a guest with local disks to another host, there are two options:

1. the local storage is configured for both nodes (using the same name!)

-> you don't need to do anything, the migration should just work

2. the local storage is not available on the target node

-> you need to select a targetstorage that is available on the target node; the local volumes will be switched on the fly as part of the migration if possible (some combinations of format, storage, and guest config are not supported) - see the example below
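
as a concrete sketch of option 2 on the CLI (VM ID 102, target node proxmox6 and storage SSD-Local-Proxmox6 are placeholders taken from this thread - check qm help migrate for the full option list):

# offline migration, remapping local volumes to a storage on the target node
qm migrate 102 proxmox6 --targetstorage SSD-Local-Proxmox6

# online (live) migration of a running guest with local disks
qm migrate 102 proxmox6 --online --with-local-disks --targetstorage SSD-Local-Proxmox6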

if you need more assistance, please provide
- pveversion -v
- storage.cfg
- guest config
- a clear description of what you want to achieve/do ;)
 
Thank you for your answer :)

So if I rename my local storage and use the same name on all 4 hosts, it should work?

I am trying to move the disk of a VM from one host's local storage to another host's local storage. A VM that resides on the proxmox5 host (see below) has its disk on the local LVM called SSD-Local-Proxmox5, and I want to move that disk to the local storage on proxmox6; that local storage is named SSD-Local-Proxmox6.

I'm not sure how to link the guest config; do you mean this?

root@proxmox5:/etc/pve/nodes/proxmox5/qemu-server# cat 102.conf
bootdisk: virtio0
cores: 2
ide2: none,media=cdrom
memory: 2048
name: pf01
net0: virtio=3E:A6:53:EC:61:87,bridge=vmbr0
net1: virtio=0A:D5:F6:2D:2F:BE,bridge=vmbr5,tag=50
net2: virtio=2A:68:40:F3:B0:41,bridge=vmbr5,tag=99
net3: virtio=6A:73:91:E3:CF:D8,bridge=vmbr5,tag=13
numa: 0
onboot: 1
ostype: other
scsihw: virtio-scsi-single
smbios1: uuid=158a406a-a958-4d5e-b632-d8f9c19fad16
sockets: 1
virtio0: SSD-Local-Proxmox5:vm-102-disk-0,aio=native,iothread=1,size=32G
vmgenid: 98efdaf7-5296-4d7a-9358-71a9e6ce9b31


root@proxmox5:~# pveversion -v
proxmox-ve: 7.4-1 (running kernel: 5.15.107-2-pve)
pve-manager: 7.4-3 (running version: 7.4-3/9002ab8a)
pve-kernel-5.15: 7.4-3
pve-kernel-5.4: 6.4-20
pve-kernel-5.15.107-2-pve: 5.15.107-2
pve-kernel-5.4.203-1-pve: 5.4.203-1
pve-kernel-5.4.119-1-pve: 5.4.119-1
pve-kernel-5.4.34-1-pve: 5.4.34-2
ceph-fuse: 14.2.21-1
corosync: 3.1.7-pve1
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown: residual config
ifupdown2: 3.1.0-1+pmx3
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.24-pve2
libproxmox-acme-perl: 1.4.4
libproxmox-backup-qemu0: 1.3.1-1
libproxmox-rs-perl: 0.2.1
libpve-access-control: 7.4-2
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.4-1
libpve-guest-common-perl: 4.2-4
libpve-http-server-perl: 4.2-3
libpve-rs-perl: 0.7.6
libpve-storage-perl: 7.4-2
libqb0: 1.0.5-1
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 5.0.2-2
lxcfs: 5.0.3-pve1
novnc-pve: 1.4.0-1
proxmox-backup-client: 2.4.1-1
proxmox-backup-file-restore: 2.4.1-1
proxmox-kernel-helper: 7.4-1
proxmox-mail-forward: 0.1.1-1
proxmox-mini-journalreader: 1.3-1
proxmox-offline-mirror-helper: 0.5.1-1
proxmox-widget-toolkit: 3.6.5
pve-cluster: 7.3-3
pve-container: 4.4-3
pve-docs: 7.4-2
pve-edk2-firmware: 3.20230228-2
pve-firewall: 4.3-1
pve-firmware: 3.6-5
pve-ha-manager: 3.6.1
pve-i18n: 2.12-1
pve-qemu-kvm: 7.2.0-8
pve-xtermjs: 4.16.0-1
qemu-server: 7.4-3
smartmontools: 7.2-pve3
spiceterm: 3.2-2
swtpm: 0.8.0~bpo11+3
vncterm: 1.7-1
zfsutils-linux: 2.1.11-pve1


root@proxmox5:~# cat /etc/pve/storage.cfg
lvm: SSD-Local-Proxmox5
        vgname SSD-Local-Proxmox5
        content images,rootdir
        shared 1

lvm: SSD-Local-Proxmox6
        vgname SSD-Local-Proxmox6
        content rootdir,images
        shared 1

lvm: SSD-Local-Proxmox8
        vgname SSD-Local-Proxmox8
        content rootdir,images
        shared 1

lvm: SSD-Local-Proxmox9
        vgname SSD-Local-Proxmox9
        content rootdir,images
        shared 1
 
yes, in this case, you would usually just name the LVM volume group identically on each node (e.g., SSD-Local), and have a single storage entry pointing at that VG (possibly also called SSD-Local ;)). the shared option is wrong, so it needs to be removed.

then you should be able to migrate a guest that has its disks stored on this LVM storage from one node to another.

of course, in your case, since you already have volumes on that storage, you need to also stop all the guests using them and update the references in their configs if you rename the storage and the VG.
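
a rough sketch of such a rename on one node (the new name SSD-Local is just an example, and the sed call assumes VM 102 is the only guest still referencing the old name - check each config by hand first):

# stop the guests using the storage, then rename the volume group (plain LVM command)
qm stop 102
vgrename SSD-Local-Proxmox5 SSD-Local

then replace the per-node entries in /etc/pve/storage.cfg with a single entry without the shared flag:

lvm: SSD-Local
        vgname SSD-Local
        content images,rootdir

and update the reference in the guest config:

sed -i 's/SSD-Local-Proxmox5:/SSD-Local:/' /etc/pve/qemu-server/102.conf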
 
Thank you for your time and clear answer :)

How do you see which local storage is on which host when moving a disk this way? Or can I still name the ID however I want?
Or does this mean that a disk replicates to all 4 hosts' local storage?

I probably need to follow a proxmox course sometime soon :)
 
no, nothing is replicated (replication is a separate feature that requires ZFS).

for a local storage like LVM backed by a local disk, you have
- one cluster-wide definition in /etc/pve/storage.cfg (possibly restricted to a subset of nodes - see the example below)
- different contents on each node where that storage exists
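
for example, a hypothetical entry for such a local LVM storage, restricted to the nodes that actually have the volume group (all names are placeholders):

lvm: SSD-Local
        vgname SSD-Local
        content images,rootdir
        nodes proxmox5,proxmox6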

now when you have a guest on node A that has its volumes on that storage, you can migrate it to node B, and the disk will be transferred as part of the migration. if all storages exist on both nodes, this should always work. if a storage only exists on the source node, but not on the target node, you need to provide a mapping (targetstorage), and there needs to be support for that particular storage combination in order for the migration to work.

there is no way to move a volume from one node to another without also moving the corresponding guest. it is possible to move a volume from one storage to another storage on the same node (this is called "move disk" or "move volume").
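
the "move disk" operation is also available on the CLI; a minimal sketch, where OtherStorage is a hypothetical second storage defined on the same node:

# move virtio0 of VM 102 to another storage on the same node, deleting the old volume
qm move-disk 102 virtio0 OtherStorage --delete

without --delete, the old volume is kept and only marked as unused in the guest config.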

shared storages are something different:
- one cluster-wide definition (again)
- same content on all nodes that have access - provided by the storage, not by PVE

some (common) examples for this are NFS or CIFS exports, iSCSI, Ceph.

for volumes on a shared storage, migration is even easier - since the volume is already accessible on both nodes, it doesn't need to be copied at all, so migration is a lot faster.
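
for comparison, a sketch of a storage.cfg entry that really is shared (server address and export path are made up; PVE treats NFS as shared automatically, since the content is provided by the NFS server):

nfs: Shared-NFS
        path /mnt/pve/Shared-NFS
        server 192.168.1.10
        export /export/pve
        content images,rootdir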
 
