Hello,
I'm using Proxmox VE on a 3-node cluster at home. So far only with VMs, but I wanted to give containers a try.
I've created a simple Debian 9 container on shared (NFS) storage, but when I try to migrate it, it tries to copy the disk to local storage first ...
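For the record, I start the migration from the GUI. If I'm not mistaken, the CLI equivalent would be something along these lines (pve2 being the target node):
pct migrate 104 pve2 --restart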
Container definition:
root@pve1:/etc/pve/lxc# cat 104.conf
arch: amd64
cores: 1
hostname: cttest
memory: 512
net0: name=eth0,bridge=vmbr0,hwaddr=EA:24:C0:8C:BE:9D,ip=dhcp,type=veth
ostype: debian
rootfs: omv2:104/vm-104-disk-0.raw,size=2G
swap: 512
unprivileged: 1
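Nothing in there references the local storage; the only volume is the rootfs on omv2. In case it's relevant, I believe the volumes each storage holds for this CT can be listed with pvesm, e.g.:
pvesm list local --vmid 104
pvesm list omv2 --vmid 104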
Storage definition:
root@pve1:/etc/pve/lxc# cat /etc/pve/storage.cfg
dir: local
path /var/lib/vz
content iso,images,backup,vztmpl,rootdir
maxfiles 0
shared 0
nfs: omv1
export /export/nfsproxmox
path /mnt/pve/omv1
server omv1
content iso,images,backup,vztmpl,rootdir
maxfiles 5
nodes pve2,pve1,pve3
options vers=3
rbd: ceph-pve
content images
krbd 0
monhost 10.0.0.151;10.0.0.152;10.0.0.153
nodes pve3,pve1,pve2
pool ceph-pool
username admin
nfs: omv2
export /export/nfsproxmox
path /mnt/pve/omv2
server omv2
content images,iso,rootdir,vztmpl,backup
maxfiles 5
options vers=3
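(Side note: if I understand the content types correctly, an alternative to disabling local entirely would be to remove rootdir and images from its content list, assuming nothing else on the nodes needs them there, e.g.:
pvesm set local --content iso,vztmpl,backup
But I'd rather understand why migration picks up local in the first place.)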
Migration log:
2018-11-06 10:10:07 shutdown CT 104
2018-11-06 10:10:11 starting migration of CT 104 to node 'pve2' (192.168.1.151)
2018-11-06 10:10:11 volume 'omv2:104/vm-104-disk-0.raw' is on shared storage 'omv2'
2018-11-06 10:10:11 found local volume 'local:104/vm-104-disk-0.raw' (via storage)
Formatting '/var/lib/vz/images/104/vm-104-disk-0.raw', fmt=raw size=3653976064
send/receive failed, cleaning up snapshot(s)...
[break]
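The "found local volume 'local:104/vm-104-disk-0.raw' (via storage)" line makes me suspect there is a stale disk image for CT 104 left over on the local storage (from an earlier restore or failed migration?). If that's the case and the image is really unused, I assume it could be checked and removed with something like:
ls -l /var/lib/vz/images/104/
pvesm free local:104/vm-104-disk-0.raw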
If I disable the local storage, it works flawlessly:
2018-11-06 10:23:18 shutdown CT 104
2018-11-06 10:23:21 starting migration of CT 104 to node 'pve2' (192.168.1.151)
2018-11-06 10:23:21 volume 'omv2:104/vm-104-disk-0.raw' is on shared storage 'omv2'
2018-11-06 10:23:21 start final cleanup
2018-11-06 10:23:21 start container on target node
2018-11-06 10:23:21 # /usr/bin/ssh -e none -o 'BatchMode=yes' -o 'HostKeyAlias=pve2' root@192.168.1.151 pct start 104
2018-11-06 10:24:08 migration finished successfully (duration 00:00:50)
TASK OK
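(I disabled the storage through the GUI; I assume the CLI equivalent would be:
pvesm set local --disable 1
and re-enabling it afterwards with --disable 0.)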
I'm not on the very latest version, but not too far behind (output of pveversion -v):
proxmox-ve: 5.2-2 (running kernel: 4.15.18-4-pve)
pve-manager: 5.2-9 (running version: 5.2-9/4b30e8f9)
pve-kernel-4.15: 5.2-7
pve-kernel-4.15.18-4-pve: 4.15.18-23
pve-kernel-4.15.17-3-pve: 4.15.17-14
pve-kernel-4.15.17-1-pve: 4.15.17-9
ceph: 12.2.8-pve1
corosync: 2.4.2-pve5
criu: 2.11.1-1~bpo90
glusterfs-client: 3.8.8-1
ksm-control-daemon: 1.2-2
libjs-extjs: 6.0.1-2
libpve-access-control: 5.0-8
libpve-apiclient-perl: 2.0-5
libpve-common-perl: 5.0-38
libpve-guest-common-perl: 2.0-17
libpve-http-server-perl: 2.0-10
libpve-storage-perl: 5.0-28
libqb0: 1.0.1-1
lvm2: 2.02.168-pve6
lxc-pve: 3.0.2+pve1-2
lxcfs: 3.0.0-1
novnc-pve: 1.0.0-2
proxmox-widget-toolkit: 1.0-20
pve-cluster: 5.0-30
pve-container: 2.0-27
pve-docs: 5.2-8
pve-firewall: 3.0-14
pve-firmware: 2.0-5
pve-ha-manager: 2.0-5
pve-i18n: 1.0-6
pve-libspice-server1: 0.12.8-3
pve-qemu-kvm: 2.11.2-1
pve-xtermjs: 1.0-5
qemu-server: 5.0-34
smartmontools: 6.5+svn4324-1
spiceterm: 3.0-5
vncterm: 1.5-3
zfsutils-linux: 0.7.9-pve1~bpo9
Thanks in advance for your help.