LXC live migration

Got it, and thank you for the reply.

Then there may be an issue with 'restart mode' migration. The last 5 restart-mode migrations have failed:
Code:
2017-10-07 02:05:25 shutdown CT 7101
2017-10-07 02:05:25 # lxc-stop -n 7101 --timeout 180
2017-10-07 02:05:26 # lxc-wait -n 7101 -t 5 -s STOPPED
2017-10-07 02:05:26 starting migration of CT 7101 to node 'sys10' (10.1.10.10)
2017-10-07 02:05:26 volume 'lxc-ceph:vm-7101-disk-1' is on shared storage 'lxc-ceph'
rbd: sysfs write failed
can't unmap rbd volume vm-7101-disk-1: rbd: sysfs write failed
2017-10-07 02:05:27 ERROR: volume deactivation failed: lxc-ceph:vm-7101-disk-1 at /usr/share/perl5/PVE/Storage.pm line 999.
2017-10-07 02:05:27 aborting phase 1 - cleanup resources
2017-10-07 02:05:27 start final cleanup
2017-10-07 02:05:27 start container on target node
2017-10-07 02:05:27 # /usr/bin/ssh -o 'BatchMode=yes' -o 'HostKeyAlias=sys10' root@10.1.10.10 pct start 7101
2017-10-07 02:05:27 Configuration file 'nodes/sys10/lxc/7101.conf' does not exist
2017-10-07 02:05:28 ERROR: command '/usr/bin/ssh -o 'BatchMode=yes' -o 'HostKeyAlias=sys10' root@10.1.10.10 pct start 7101' failed: exit code 255
2017-10-07 02:05:28 ERROR: migration aborted (duration 00:00:03): volume deactivation failed: lxc-ceph:vm-7101-disk-1 at /usr/share/perl5/PVE/Storage.pm line 999.
TASK ERROR: migration aborted
If we shut down the LXC container in advance, the migration works.
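Until this is fixed, here is a rough manual workaround sketch (assuming the Ceph pool behind the 'lxc-ceph' storage is also named 'lxc-ceph'; the container ID and target node are taken from the log above):
Code:
# on the source node, check whether the container's RBD volume is still mapped
rbd showmapped

# if vm-7101-disk-1 is still listed after the container has stopped, unmap it by hand
rbd unmap lxc-ceph/vm-7101-disk-1

# then run the shutdown-first migration that works for us
pct shutdown 7101
pct migrate 7101 sys10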

Version:
Code:
proxmox-ve: 5.0-23 (running kernel: 4.10.17-3-pve)
pve-manager: 5.0-32 (running version: 5.0-32/2560e073)
pve-kernel-4.10.17-3-pve: 4.10.17-23
libpve-http-server-perl: 2.0-6
lvm2: 2.02.168-pve3
corosync: 2.4.2-pve3
libqb0: 1.0.1-1
pve-cluster: 5.0-12
qemu-server: 5.0-15
pve-firmware: 2.0-2
libpve-common-perl: 5.0-18
libpve-guest-common-perl: 2.0-12
libpve-access-control: 5.0-6
libpve-storage-perl: 5.0-15
pve-libspice-server1: 0.12.8-3
vncterm: 1.5-2
pve-docs: 5.0-9
pve-qemu-kvm: 2.9.1-1
pve-container: 2.0-16
pve-firewall: 3.0-3
pve-ha-manager: 2.0-2
ksm-control-daemon: 1.2-2
glusterfs-client: 3.8.8-1
lxc-pve: 2.1.0-2
lxcfs: 2.0.7-pve4
criu: 2.11.1-1~bpo90
novnc-pve: 0.6-4
smartmontools: 6.5+svn4324-1
zfsutils-linux: 0.6.5.11-pve18~bpo90
ceph: 12.2.0-pve1

In case the next update does not fix this, I will file a bug.
 
Hi RobFantini,
technically, LXC supports live migration via CRIU,
but since it's not stable enough, the Proxmox team didn't implement it as a feature.
In my opinion, this feature should be implemented, and each admin can decide whether to use it or not.
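For reference, a rough sketch of what manual CRIU-based checkpoint/restore looks like with plain LXC (not Proxmox tooling; the container name and checkpoint directory are placeholders, and as said above it is not considered stable):
Code:
# dump the state of the running container to a directory and stop it
lxc-checkpoint -n mycontainer -s -D /var/lib/lxc/checkpoints/mycontainer

# after copying the checkpoint directory and the rootfs to the target host, restore there
lxc-checkpoint -r -n mycontainer -D /var/lib/lxc/checkpoints/mycontainer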
 
I'm also seeing the same issue:
Code:
2017-10-12 14:24:53 volume 'ceph-lxc:vm-507-disk-1' is on shared storage 'ceph-lxc'
rbd: sysfs write failed
can't unmap rbd volume vm-507-disk-1: rbd: sysfs write failed
2017-10-12 14:24:53 ERROR: volume deactivation failed: ceph-lxc:vm-507-disk-1 at /usr/share/perl5/PVE/Storage.pm line 999.
2017-10-12 14:24:53 aborting phase 1 - cleanup resources

It did work with the previous packages, pve-manager 5.0-31 and lxc-pve 2.0.8-3.
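If you need restart-mode migration working right away, pinning those older versions may be a stopgap (untested sketch using the version strings quoted above; check for dependency conflicts before confirming):
Code:
apt-get install pve-manager=5.0-31 lxc-pve=2.0.8-3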
 
Thanks for reporting this, a fix is in the works.
 
The pve-container package on pvetest fixes this issue, but it also needs the libpve-common-perl package from pvetest. I'd recommend waiting for them to hit pve-no-subscription/pve-enterprise, unless you have a test environment and want to provide feedback :)
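For anyone who does want to test, a minimal sketch for pulling the fixed packages from pvetest on PVE 5.x / Debian Stretch (assumes the standard Proxmox repository layout):
Code:
echo "deb http://download.proxmox.com/debian/pve stretch pvetest" > /etc/apt/sources.list.d/pvetest.list
apt-get update
apt-get install pve-container libpve-common-perl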
 
