Moving VM disks between hosts with no shared storage, without using replication.

mcparlandj

I have two servers in a "cluster". I don’t have any shared storage between them, and I don’t really want to set up replication between them as I’m not using ZFS.

So my question is, how do I move machine qcow images between the hosts? I don’t need to migrate the qcow files while the VMs are running. I’m happy to power off the VM, move the qcow, then start it up on the other machine.

But the only way I’ve figured out how to do that is to SCP the qcow from the command line on one host to the other, and then update the VM’s config file to reflect the new storage location.

Is there a way in the web GUI to move VM disk images between two hosts that don’t have shared storage?
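For reference, the manual workflow described above looks roughly like this (VM ID 100, the disk name, and the storage paths are placeholders for illustration; the built-in migration discussed below avoids all of this):

Code:
# create the target directory and copy the disk image from kvm1 to kvm2
ssh root@kvm2 mkdir -p /tankKVM2/images/100
scp /tank2/images/100/vm-100-disk-0.qcow2 root@kvm2:/tankKVM2/images/100/
# move the VM config to the other node's directory in the clustered /etc/pve
mv /etc/pve/nodes/kvm1/qemu-server/100.conf /etc/pve/nodes/kvm2/qemu-server/100.conf
# then edit the config so the disk line references the new storage, e.g.
#   scsi0: tankKVM2:100/vm-100-disk-0.qcow2,size=32G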
 
Simply migrate the VM/CT to the other node in the cluster. The complete disks will then be transferred between the hosts, even if the VM is running.
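The same migration can also be started from the CLI with qm migrate (VM ID 100 is a placeholder):

Code:
# migrate VM 100 to node kvm2; for a stopped VM its local disks are
# copied to a storage with the same name on the target node
qm migrate 100 kvm2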
 
Hi,

To add to what Mira is saying:

You don't need shared storage to do either a live migration or a migration of a shut-down VM. The only difference is that with a live migration you can select the destination filesystem/dataset the VM and its disks should be migrated to, whereas with a migration of a stopped VM you have to move it to exactly the same destination storage on the host you are migrating to.

We do live migrations off on-compute (i.e. non-shared) storage all the time and it works well; occasionally we have glitches, but 99% of the time all is good :)
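On the CLI, the storage selection described above corresponds to the --targetstorage option of a live migration with local disks (the VM ID and storage name are placeholders):

Code:
# live-migrate VM 100 to kvm2 and copy its local disks to the storage "tankKVM2"
qm migrate 100 kvm2 --online --with-local-disks --targetstorage tankKVM2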

Kind regards,

Angelo.
 
Could it be that your PVE installation is rather old?
Please provide the output of pveversion -v
 
KVM1 is an old install. KVM2 is brand new.

Code:
root@kvm1:~# pveversion -v
proxmox-ve: 7.0-2 (running kernel: 5.11.22-5-pve)
pve-manager: 7.0-11 (running version: 7.0-11/63d82f4e)
pve-kernel-helper: 7.1-2
pve-kernel-5.11: 7.0-8
pve-kernel-5.4: 6.4-5
pve-kernel-5.11.22-5-pve: 5.11.22-10
pve-kernel-5.11.22-4-pve: 5.11.22-9
pve-kernel-5.4.128-1-pve: 5.4.128-1
pve-kernel-4.4.128-1-pve: 4.4.128-111
pve-kernel-4.4.117-2-pve: 4.4.117-110
pve-kernel-4.4.98-6-pve: 4.4.98-107
pve-kernel-4.4.98-3-pve: 4.4.98-103
pve-kernel-4.4.83-1-pve: 4.4.83-96
pve-kernel-4.4.67-1-pve: 4.4.67-92
pve-kernel-4.4.49-1-pve: 4.4.49-86
pve-kernel-4.4.40-1-pve: 4.4.40-82
pve-kernel-4.4.35-2-pve: 4.4.35-79
pve-kernel-4.4.35-1-pve: 4.4.35-77
pve-kernel-4.4.24-1-pve: 4.4.24-72
pve-kernel-4.4.21-1-pve: 4.4.21-71
pve-kernel-4.4.6-1-pve: 4.4.6-48
ceph-fuse: 14.2.21-1
corosync: 3.1.5-pve1
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown: 0.8.36+pve1
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.22-pve1
libproxmox-acme-perl: 1.4.0
libproxmox-backup-qemu0: 1.2.0-1
libpve-access-control: 7.0-5
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.0-10
libpve-guest-common-perl: 4.0-2
libpve-http-server-perl: 4.0-3
libpve-storage-perl: 7.0-12
libqb0: 1.0.5-1
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 4.0.9-4
lxcfs: 4.0.8-pve2
novnc-pve: 1.2.0-3
proxmox-backup-client: 2.0.11-1
proxmox-backup-file-restore: 2.0.11-1
proxmox-mini-journalreader: 1.2-1
proxmox-widget-toolkit: 3.3-6
pve-cluster: 7.0-3
pve-container: 4.1-1
pve-docs: 7.0-5
pve-edk2-firmware: 3.20200531-1
pve-firewall: 4.2-4
pve-firmware: 3.3-2
pve-ha-manager: 3.3-1
pve-i18n: 2.5-1
pve-qemu-kvm: 6.0.0-4
pve-xtermjs: 4.12.0-1
qemu-server: 7.0-14
smartmontools: 7.2-pve2
spiceterm: 3.2-2
vncterm: 1.7-1
zfsutils-linux: 2.0.5-pve1

Code:
root@kvm2:~# pveversion -v
proxmox-ve: 7.0-2 (running kernel: 5.11.22-4-pve)
pve-manager: 7.0-11 (running version: 7.0-11/63d82f4e)
pve-kernel-5.11: 7.0-7
pve-kernel-helper: 7.0-7
pve-kernel-5.11.22-4-pve: 5.11.22-8
ceph-fuse: 15.2.14-pve1
corosync: 3.1.2-pve2
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown2: 3.1.0-1+pmx3
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.21-pve1
libproxmox-acme-perl: 1.3.0
libproxmox-backup-qemu0: 1.2.0-1
libpve-access-control: 7.0-4
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.0-6
libpve-guest-common-perl: 4.0-2
libpve-http-server-perl: 4.0-2
libpve-storage-perl: 7.0-10
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 4.0.9-4
lxcfs: 4.0.8-pve2
novnc-pve: 1.2.0-3
proxmox-backup-client: 2.0.9-2
proxmox-backup-file-restore: 2.0.9-2
proxmox-mini-journalreader: 1.2-1
proxmox-widget-toolkit: 3.3-6
pve-cluster: 7.0-3
pve-container: 4.0-9
pve-docs: 7.0-5
pve-edk2-firmware: 3.20200531-1
pve-firewall: 4.2-2
pve-firmware: 3.3-1
pve-ha-manager: 3.3-1
pve-i18n: 2.5-1
pve-qemu-kvm: 6.0.0-3
pve-xtermjs: 4.12.0-1
qemu-server: 7.0-13
smartmontools: 7.2-1
spiceterm: 3.2-2
vncterm: 1.7-1
zfsutils-linux: 2.0.5-pve1
 
Please provide your storage config: cat /etc/pve/storage.cfg

Could it be that you have ZFS storages with different names on each node, but haven't limited each storage to its node?
If so, that explains the issue. You can limit a storage to specific nodes when editing it under Datacenter -> Storage.
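The node restriction can also be set from the shell with pvesm (storage and node names here are just examples, matching the config posted below):

Code:
# make the storage "tank2" available only on node kvm1
pvesm set tank2 --nodes kvm1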
 
Yep! That was the problem! Thanks so much!

KVM1
Code:
root@kvm1:~# cat /etc/pve/storage.cfg
dir: local
    disable
    path /var/lib/vz
    content iso,backup,vztmpl
    prune-backups keep-all=1

lvmthin: local-lvm
    disable
    thinpool data
    vgname pve
    content images,rootdir

dir: tank2
    path /tank2
    content images,iso
    nodes kvm1
    shared 0

dir: Ztanksketchy
    path /tanksketchy
    content images
    shared 0

dir: tankKVM2
    path /tankKVM2
    content images
    nodes kvm2
    prune-backups keep-all=1
    shared 1

Code:
root@kvm1:~# zpool status
  pool: tank2
 state: ONLINE
  scan: scrub repaired 0B in 00:38:14 with 0 errors on Sun Nov 14 01:02:15 2021
config:

    NAME        STATE     READ WRITE CKSUM
    tank2       ONLINE       0     0     0
      mirror-0  ONLINE       0     0     0
        sda     ONLINE       0     0     0
        sdb     ONLINE       0     0     0

errors: No known data errors

KVM2
Code:
root@kvm2:~# cat /etc/pve/storage.cfg
dir: local
    disable
    path /var/lib/vz
    content iso,backup,vztmpl
    prune-backups keep-all=1

lvmthin: local-lvm
    disable
    thinpool data
    vgname pve
    content images,rootdir

dir: tank2
    path /tank2
    content images,iso
    nodes kvm1
    shared 0

dir: Ztanksketchy
    path /tanksketchy
    content images
    shared 0

dir: tankKVM2
    path /tankKVM2
    content images
    nodes kvm2
    prune-backups keep-all=1
    shared 1
    
Code:
root@kvm2:~# zpool status
  pool: tankKVM2
 state: ONLINE
  scan: scrub repaired 0B in 00:23:33 with 0 errors on Sun Nov 14 00:47:34 2021
config:

    NAME        STATE     READ WRITE CKSUM
    tankKVM2    ONLINE       0     0     0
      mirror-0  ONLINE       0     0     0
        sdb     ONLINE       0     0     0
        sdc     ONLINE       0     0     0
      mirror-1  ONLINE       0     0     0
        sdd     ONLINE       0     0     0
        sde     ONLINE       0     0     0

So for anyone reading this in the future: I had checked the Shared box on one of the servers' local disks. It seems counterintuitive, but you need the Shared box UNCHECKED for the local storage on each of the two servers.
[Screenshot: the storage edit dialog with the Shared checkbox]

Then, when you migrate the VM, the live-migration dialog will ask which storage you want to migrate to.
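The same fix can also be applied on the command line instead of the GUI (storage name as in the config above):

Code:
# clear the shared flag on the directory storage, i.e. drop "shared 1" in storage.cfg
pvesm set tankKVM2 --shared 0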
 
The `shared` option doesn't mean that the storage will be shared, but rather that the storage IS shared.
For example, an LVM over iSCSI, an NFS/CIFS share, or some other shared storage that is available on multiple nodes.
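To illustrate, a genuinely shared storage entry in /etc/pve/storage.cfg might look like this (the storage ID, server address, and export path are made up):

Code:
nfs: nfs-vmstore
    server 192.168.1.50
    export /export/vmdata
    path /mnt/pve/nfs-vmstore
    content images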
 
