Migrating VMs between Proxmox cluster nodes

Blisk

I am trying to migrate VMs from one Proxmox server to another Proxmox server within a cluster.
But it doesn't work, and there is nothing I can configure when migrating.
I have 3 virtual servers, but none of them migrates successfully; I always get some disk problem.
Is there a better and easier way to migrate my virtual servers?
(screenshots attached)
 
Please review:
https://forum.proxmox.com/threads/uninitialized-value-in-perl-programs-during-migration.130016/
https://forum.proxmox.com/threads/migration-failures-i-dont-understand.120682/

If the explanations in those threads are not sufficient, please provide information similar to what is requested in the first thread (versions, configurations, etc.).
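For reference, that information can be gathered on each node with commands along these lines (<vmid> is a placeholder for one of the guests that fails to migrate):

pveversion -v
cat /etc/pve/storage.cfg
qm config <vmid>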



Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
Thank you, I already saw that.
I think the problem is the disks. If I detach the disks and then reattach them on the new server, I will lose what is on those disks. How can I configure new disks for this VM and copy the content to the new server?
(screenshot attached)
 
You are taking that specific answer somewhat out of context.

As explained in those threads, when migrating a VM with local storage (within a cluster), the target node must have an identical storage configuration. This means that a storage pool with the same name as the source (disk2tb) must exist there, and that it must be of the same type (zfs, lvm), although the latter restriction may be somewhat relaxed and it may be possible to migrate across _some_ different storage types.
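For illustration only (a sketch, not a command from this thread; the VM ID, node name, and target storage below are placeholders): when a matching pool cannot be created on the target, a CLI migration can explicitly map local disks to a storage that does exist there, e.g.:

qm migrate <vmid> <target-node> --online --with-local-disks --targetstorage local-lvm

Recent PVE versions expose a similar target storage selection in the GUI migrate dialog.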


 
Thank you, I figured that out. But in my case that is not so, and this is why I struggle with how to do it. I can detach those storages, but how can I copy them to a new server?
 
I can detach those storages, but how can I copy that to a new server?
The detach instructions were in response to a unique situation the original poster got himself into; that advice is not generally applicable. You are not in the same state, as far as I know. But you still have not provided any factual information, so I could be wrong.
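To be clear about why I am not simply posting copy instructions: in principle, a local disk of a stopped guest can be moved by hand between two directory storages. A rough sketch only (VM ID, node names, the disk file name, and the qcow2 format are placeholders, and it assumes /disk2tb/images/<vmid>/ already exists on the target):

# copy the disk image into the same directory storage on the target node
scp /disk2tb/images/<vmid>/vm-<vmid>-disk-0.qcow2 root@<target-node>:/disk2tb/images/<vmid>/
# move the guest's config file within the shared cluster filesystem
mv /etc/pve/nodes/<source-node>/qemu-server/<vmid>.conf /etc/pve/nodes/<target-node>/qemu-server/

But this bypasses the safety checks a proper migration performs, which is why the configuration details matter first.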


 
Here are the storage config and version output from both servers.

Old server:
Linux 5.15.143-1-pve #1 SMP PVE 5.15.143-1 (2024-02-08T18:12Z) x86_64

dir: local
path /var/lib/vz
content backup,iso,vztmpl

lvmthin: local-lvm
thinpool data
vgname pve
content images,rootdir

dir: disk2tb
path /disk2tb
content images
prune-backups keep-all=1
shared 0

dir: disk4tb
path /disk4tb
content images
prune-backups keep-all=1
shared 0

lvm: disk4tba
vgname disk4tba
content rootdir,images
nodes yourtop
shared 0

lvm: disk2tba
vgname disk2tba
content rootdir,images
nodes yourtop
shared 0

iscsi: QNAP8TB
portal 192.168.0.211
target iqn.2004-04.com.qnap:ts-253be:iscsi.backupqnp.165cee
content images

proxmox-ve: 7.4-1 (running kernel: 5.15.136-1-pve)
pve-manager: 7.4-17 (running version: 7.4-17/513c62be)
pve-kernel-5.15: 7.4-10
pve-kernel-5.13: 7.1-9
pve-kernel-5.15.136-1-pve: 5.15.136-1
pve-kernel-5.15.107-2-pve: 5.15.107-2
pve-kernel-5.13.19-6-pve: 5.13.19-15
pve-kernel-5.13.19-2-pve: 5.13.19-4
ceph-fuse: 15.2.15-pve1
corosync: 3.1.7-pve1
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown2: 3.1.0-1+pmx4
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.24-pve2
libproxmox-acme-perl: 1.4.4
libproxmox-backup-qemu0: 1.3.1-1
libproxmox-rs-perl: 0.2.1
libpve-access-control: 7.4.3
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.4-2
libpve-guest-common-perl: 4.2-4
libpve-http-server-perl: 4.2-3
libpve-rs-perl: 0.7.7
libpve-storage-perl: 7.4-3
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 5.0.2-2
lxcfs: 5.0.3-pve1
novnc-pve: 1.4.0-1
proxmox-backup-client: 2.4.6-1
proxmox-backup-file-restore: 2.4.6-1
proxmox-kernel-helper: 7.4-1
proxmox-mail-forward: 0.1.1-1
proxmox-mini-journalreader: 1.3-1
proxmox-offline-mirror-helper: 0.5.2
proxmox-widget-toolkit: 3.7.3
pve-cluster: 7.3-3
pve-container: 4.4-6
pve-docs: 7.4-2
pve-edk2-firmware: 3.20230228-4~bpo11+2
pve-firewall: 4.3-5
pve-firmware: 3.6-6
pve-ha-manager: 3.6.1
pve-i18n: 2.12-1
pve-qemu-kvm: 7.2.0-8
pve-xtermjs: 4.16.0-2
qemu-server: 7.4-4
smartmontools: 7.2-pve3
spiceterm: 3.2-2
swtpm: 0.8.0~bpo11+3
vncterm: 1.7-1
zfsutils-linux: 2.1.14-pve1

New server:
Linux pve 6.5.13-1-pve #1 SMP PREEMPT_DYNAMIC PMX 6.5.13-1 (2024-02-05T13:50Z) x86_64

dir: local
path /var/lib/vz
content backup,iso,vztmpl

lvmthin: local-lvm
thinpool data
vgname pve
content images,rootdir

dir: disk2tb
path /disk2tb
content images
prune-backups keep-all=1
shared 0

dir: disk4tb
path /disk4tb
content images
prune-backups keep-all=1
shared 0

lvm: disk4tba
vgname disk4tba
content rootdir,images
nodes yourtop
shared 0

lvm: disk2tba
vgname disk2tba
content rootdir,images
nodes yourtop
shared 0

iscsi: QNAP8TB
portal 192.168.0.211
target iqn.2004-04.com.qnap:ts-253be:iscsi.backupqnp.165cee
content images


proxmox-ve: 8.1.0 (running kernel: 6.5.13-1-pve)
pve-manager: 8.1.4 (running version: 8.1.4/ec5affc9e41f1d79)
proxmox-kernel-helper: 8.1.0
pve-kernel-5.13: 7.1-5
proxmox-kernel-6.5.13-1-pve-signed: 6.5.13-1
proxmox-kernel-6.5: 6.5.13-1
pve-kernel-5.13.19-2-pve: 5.13.19-4
ceph-fuse: 16.2.11+ds-2
corosync: 3.1.7-pve3
criu: 3.17.1-2
glusterfs-client: 10.3-5
ifupdown2: 3.2.0-1+pmx8
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-4
libknet1: 1.28-pve1
libproxmox-acme-perl: 1.5.0
libproxmox-backup-qemu0: 1.4.1
libproxmox-rs-perl: 0.3.3
libpve-access-control: 8.1.2
libpve-apiclient-perl: 3.3.1
libpve-common-perl: 8.1.1
libpve-guest-common-perl: 5.0.6
libpve-http-server-perl: 5.0.5
libpve-network-perl: 0.9.5
libpve-rs-perl: 0.8.8
libpve-storage-perl: 8.1.0
libspice-server1: 0.15.1-1
lvm2: 2.03.16-2
lxc-pve: 5.0.2-4
lxcfs: 5.0.3-pve4
novnc-pve: 1.4.0-3
proxmox-backup-client: 3.1.4-1
proxmox-backup-file-restore: 3.1.4-1
proxmox-kernel-helper: 8.1.0
proxmox-mail-forward: 0.2.3
proxmox-mini-journalreader: 1.4.0
proxmox-offline-mirror-helper: 0.6.5
proxmox-widget-toolkit: 4.1.4
pve-cluster: 8.0.5
pve-container: 5.0.8
pve-docs: 8.1.4
pve-edk2-firmware: 4.2023.08-4
pve-firewall: 5.0.3
pve-firmware: 3.9-2
pve-ha-manager: 4.0.3
pve-i18n: 3.2.1
pve-qemu-kvm: 8.1.5-3
pve-xtermjs: 5.3.0-3
qemu-server: 8.0.10
smartmontools: 7.3-pve1
spiceterm: 3.3.0
swtpm: 0.8.0+pve1
vncterm: 1.8.0
zfsutils-linux: 2.2.2-pve2
 
You have some storage config that is restricted to one node (as it should be, if it's local storage that is only available on that node) and some local storage that is not restricted, which leads PVE to believe it should exist on all nodes (at least in name).

Please see the discussion in this thread: https://forum.proxmox.com/threads/c...cant-restrict-to-one-node.112435/#post-485746

Are "disk2tb" and "disk4tb" really present on both the first and the second node? If not, restrict them as described.


 
I don't know what happened, but it looks like all the disks are mixed up now. I see the same storages on both nodes, even after I disconnect them.
 

Attachments: node1 disks.png, node1.png, node2 disks.png, node2 disks2.png, node2.png
