[SOLVED] LXC Migration - ERROR: unknown command 'mtunnel'

Gardouille

Hi,

I have a 3-node cluster running Proxmox 4 and mainly use LXC containers (~30), with only 4-5 KVM VMs.

Today:
  • I upgraded libpve-common-perl, pve-cluster, pve-container, pve-docs, pve-manager, pve-qemu-kvm, python-pil and qemu-server on the first node (r630a).
  • I started migrating some non-critical containers from the second node to the first (r630b -> r630a) so I could upgrade the emptied node next.
  • From there, the first node started showing an excessive CPU load…

OK, maybe some services just needed a restart or whatever. To be safe, I disabled HA on the first node's containers and rebooted it (Monday morning… I don't want to start my week with a headache over non-critical CTs ^^).

The load looks good now. Just to be sure, I tried to migrate a container (via ha-manager or the web GUI) and got this error for the CT VMID - Migrate task:

Code:
task started by HA resource agent
ERROR: unknown command 'mtunnel'
USAGE: pvecm <COMMAND> [ARGS] [OPTIONS]
pvecm add <hostname> [OPTIONS]
pvecm addnode <node> [OPTIONS]
pvecm create <clustername> [OPTIONS]
pvecm delnode <node>
pvecm expected <expected>
pvecm keygen <filename>
pvecm nodes
pvecm status
pvecm updatecerts [OPTIONS]

pvecm help [<cmd>] [OPTIONS]
Nov 14 16:31:35 ERROR: migration aborted (duration 00:00:01): command '/usr/bin/ssh -o 'BatchMode=yes' root@10.10.10.9 pvecm mtunnel --get_migration_ip' failed: exit code 255
TASK ERROR: migration aborted
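
For reference, the failing step can be reproduced by hand. This is the exact command from the log above, run from the source node; if the target node's pvecm is too old to know 'mtunnel', it prints the same usage text:

Code:
# Same command the migration task runs (target IP taken from the log above).
/usr/bin/ssh -o 'BatchMode=yes' root@10.10.10.9 pvecm mtunnel --get_migration_ip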

There was no error for the HA VMID - Migrate task itself, but to be sure, I removed HA from a container and got the same error…

I checked pvecm's man page, which says:
Code:
pvecm mtunnel [OPTIONS]

Used by VM/CT migration - do not use manually.

-get_migration_ip boolean (default=0)
   return the migration IP, if configured

-migration_network string
  the migration network used to detect the local migration IP

Maybe I should upgrade all the nodes to make sure they have the same configuration, but… I'd prefer to keep LXC migration working between two nodes in the meantime ^^
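
To compare the package versions across the nodes, a quick loop like this should do it (a sketch; only r630a and r630b are named above, r630c is my guess for the third node):

Code:
# Check which pve-cluster version each node is running.
for node in r630a r630b r630c; do
    echo "== $node =="
    ssh root@"$node" 'pveversion -v | grep pve-cluster'
done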

If anyone has an idea… :)

Code:
proxmox-ve: 4.3-71 (running kernel: 4.4.21-1-pve)
pve-manager: 4.3-10 (running version: 4.3-10/7230e60f)
pve-kernel-4.4.21-1-pve: 4.4.21-71
pve-kernel-4.4.19-1-pve: 4.4.19-66
lvm2: 2.02.116-pve3
corosync-pve: 2.4.0-1
libqb0: 1.0-1
pve-cluster: 4.0-47
qemu-server: 4.0-94
pve-firmware: 1.1-10
libpve-common-perl: 4.0-80
libpve-access-control: 4.0-19
libpve-storage-perl: 4.0-68
pve-libspice-server1: 0.12.8-1
vncterm: 1.2-1
pve-docs: 4.3-14
pve-qemu-kvm: 2.7.0-6
pve-container: 1.0-81
pve-firewall: 2.0-31
pve-ha-manager: 1.0-35
ksm-control-daemon: 1.2-1
glusterfs-client: 3.5.2-2+deb8u2
lxc-pve: 2.0.5-1
lxcfs: 2.0.4-pve2
criu: 1.6.0-1
novnc-pve: 0.5-8
smartmontools: 6.5+svn4324-1~pve80
openvswitch-switch: 2.5.0-1
ceph: 0.80.7-2+deb8u1
 
You cannot migrate from an updated node to an old node at the moment, because we implemented a new, required command in the pve-cluster package.
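
In practice that means upgrading the remaining nodes before migrating back. A minimal sketch of the upgrade on each outdated node (standard apt workflow on Proxmox VE 4, assuming the usual repositories are configured):

Code:
# Run on each node still on the old pve-cluster (e.g. r630b):
apt-get update
apt-get dist-upgrade   # pulls in the newer pve-cluster that provides 'pvecm mtunnel'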
 
