[SOLVED] Can't Migrate Between Nodes

300cpilot

Well-Known Member
There is something I am not understanding about Proxmox and storage; I am sure I have this set up wrong. I am just not sure how to redo it.


I have 2 nodes, each with a separate ZFS storage configured for VMs and CTs. Local storage is two 300 GB SAS drives in RAID 1 on a dedicated controller, and it houses only the Proxmox OS and ISOs. "Main" storage is twelve 3 TB disks plus a 1.2 TB cache, in RAIDZ3.

How should they be configured so that migration will work?


A replication attempt tells me that the identical storage is not present on the receiving server, because it has a different name. But it doesn't allow you to give the storages on each node the same name.

These are two new installs on identical servers running Proxmox version 6. I currently have 107 VMs and containers running on them, restored from a version 5 Proxmox cluster.

Error when migrating a VM off:

2019-09-05 20:29:01 ERROR: migration aborted (duration 00:00:00): storage 'Main' is not available on node 'One'
TASK ERROR: migration aborted

Also got this today:
Unable to create new inotify object: Too many open files at /usr/share/perl5/PVE/INotify.pm line 397, <DATA> line 755.
TASK ERROR: command '/usr/bin/ssh -e none -o 'BatchMode=yes' -o 'HostKeyAlias=One' root@10.80.1.220 pvecm mtunnel -migration_network 10.100.1.1/24 -get_migration_ip' failed: exit code 24

Thanks in advance to those who will tell me to read the manual... I have.

Specs:
(2) HP DL380 Gen8 servers, each with:
(2) Xeon E5-2650
340 GB of RAM
(24) 3 TB SAS drives
(2) 300 GB SAS drives
Dual 10 Gb cards, direct-connected between nodes, not bonded
(4) 1 Gb NICs, 2x2 LACP 802.3ad
 
Local storages with the same name and type available on 2 or more nodes are possible, and required for migration with local disks to work.
Please post the output of 'pveversion -v' and 'cat /etc/pve/storage.cfg'.
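For reference, a single storage entry made available on both nodes would look roughly like this in /etc/pve/storage.cfg (a sketch only; the storage ID, pool name, and node names are taken from this thread and may differ on your cluster):

zfspool: Main
pool Main
content images,rootdir
nodes Marvel,hulk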
 
Sorry for the delay in responding.

root@Marvel:~# pveversion -v
proxmox-ve: 6.0-2 (running kernel: 5.0.18-1-pve)
pve-manager: 6.0-5 (running version: 6.0-5/f8a710d7)
pve-kernel-5.0: 6.0-6
pve-kernel-helper: 6.0-6
pve-kernel-5.0.18-1-pve: 5.0.18-1
pve-kernel-5.0.15-1-pve: 5.0.15-1
ceph-fuse: 12.2.11+dfsg1-2.1
corosync: 3.0.2-pve2
criu: 3.11-3
glusterfs-client: 5.5-3
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.10-pve1
libpve-access-control: 6.0-2
libpve-apiclient-perl: 3.0-2
libpve-common-perl: 6.0-3
libpve-guest-common-perl: 3.0-1
libpve-http-server-perl: 3.0-2
libpve-storage-perl: 6.0-6
libqb0: 1.0.5-1
lvm2: 2.03.02-pve3
lxc-pve: 3.1.0-61
lxcfs: 3.0.3-pve60
novnc-pve: 1.0.0-60
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.0-5
pve-cluster: 6.0-4
pve-container: 3.0-5
pve-docs: 6.0-4
pve-edk2-firmware: 2.20190614-1
pve-firewall: 4.0-6
pve-firmware: 3.0-2
pve-ha-manager: 3.0-2
pve-i18n: 2.0-2
pve-qemu-kvm: 4.0.0-3
pve-xtermjs: 3.13.2-1
qemu-server: 6.0-7
smartmontools: 7.0-pve2
spiceterm: 3.1-1
vncterm: 1.6-1
zfsutils-linux: 0.8.1-pve1
root@Marvel:~#


root@Marvel:~# cat /etc/pve/storage.cfg
dir: local
path /var/lib/vz
content backup,vztmpl,images,iso,rootdir
maxfiles 1
shared 0

lvmthin: local-lvm
thinpool data
vgname pve
content images,rootdir
nodes Marvel

zfspool: Main
pool Main
content images,rootdir
nodes Marvel

nfs: NFS
export /backup
path /mnt/pve/NFS
server 10.100.1.3
content backup
maxfiles 1

zfspool: Main1
pool Main
content rootdir,images
nodes hulk
sparse 0
 
You should be able to migrate using the storage 'Main' after the following 3 steps:
1) add 'hulk' to the nodes field in the storage 'Main'
2) move disks from Main1 to Main
3) remove Main1


As both entries use the same zpool name, it should work; a rough CLI sketch of the steps follows below.
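On the CLI, those steps could look roughly like this (a sketch, assuming PVE 6 command names; VMID 100 and disk scsi0 are placeholders for each guest that still has a disk on 'Main1'):

# 1) make 'Main' available on both nodes
pvesm set Main --nodes Marvel,hulk
# 2) for each affected guest, move its disks from 'Main1' to 'Main'
#    (also possible in the GUI via Hardware -> Move disk)
qm move_disk 100 scsi0 Main --delete 1
# 3) once no guest references 'Main1' any more, remove it
pvesm remove Main1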
 
Glad to hear that!
Please mark the thread as solved (Top of the first post -> Edit Thread -> Prefix 'Solved')
 
