[SOLVED] Unable to migrate containers between nodes - storage content type mismatch

infocus13

Active Member
Jan 16, 2019
Hi there

Hoping someone could help me with an issue I've been struggling with for a few days.

I have a simple 3-node cluster (2 x HP servers and a Raspberry Pi QDevice). For some reason, I am unable to migrate containers from node to node when the storage holding my containers is configured to allow only "Container" content. As soon as I also allow "Disk image" in the storage config for my container location, migration works.

Really hoping someone can help here as the issue is frustrating me. Same error migrating in both directions.

Error (note the "ERROR: content type 'images' is not available on storage 'zfs-containers'"):

[screenshot: 1601892060009.png]

Here is my storage configuration:

[screenshot: 1601892129636.png]

And here is my storage.cfg:

[screenshot: 1601892176032.png]
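For reference (in case the screenshot does not render), a storage.cfg entry restricted to container content would look something like the sketch below. The storage name comes from the error message; the pool path is a placeholder:

```
zfspool: zfs-containers
        pool tank/containers
        content rootdir
```

Note that in storage.cfg, `rootdir` is the on-disk name for the GUI's "Container" content type and `images` is the name for "Disk image" — which is why the error complains about content type 'images'.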


Thank you in advance.
 
Please post the output of pveversion -v from both the source and the target node.
 
Hi

Source node (pve):

Code:
root@pve:~# pveversion -v
proxmox-ve: 6.2-1 (running kernel: 5.4.44-2-pve)
pve-manager: 6.2-6 (running version: 6.2-6/ee1d7754)
pve-kernel-5.4: 6.2-4
pve-kernel-helper: 6.2-4
pve-kernel-5.4.44-2-pve: 5.4.44-2
pve-kernel-5.4.34-1-pve: 5.4.34-2
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.0.4-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: residual config
ifupdown2: 3.0.0-1+pve2
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.16-pve1
libproxmox-acme-perl: 1.0.4
libpve-access-control: 6.1-1
libpve-apiclient-perl: 3.0-3
libpve-common-perl: 6.1-3
libpve-guest-common-perl: 3.0-10
libpve-http-server-perl: 3.0-5
libpve-storage-perl: 6.1-8
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.2-1
lxcfs: 4.0.3-pve3
novnc-pve: 1.1.0-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.2-8
pve-cluster: 6.1-8
pve-container: 3.1-8
pve-docs: 6.2-4
pve-edk2-firmware: 2.20200531-1
pve-firewall: 4.1-2
pve-firmware: 3.1-1
pve-ha-manager: 3.0-9
pve-i18n: 2.1-3
pve-qemu-kvm: 5.0.0-4
pve-xtermjs: 4.3.0-1
qemu-server: 6.2-3
smartmontools: 7.1-pve2
spiceterm: 3.1-1
vncterm: 1.6-1
zfsutils-linux: 0.8.4-pve1

Destination node (pve2):

Code:
root@pve2:~# pveversion -v
proxmox-ve: 6.2-1 (running kernel: 5.4.44-2-pve)
pve-manager: 6.2-6 (running version: 6.2-6/ee1d7754)
pve-kernel-5.4: 6.2-4
pve-kernel-helper: 6.2-4
pve-kernel-5.4.44-2-pve: 5.4.44-2
pve-kernel-5.4.34-1-pve: 5.4.34-2
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.0.4-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: residual config
ifupdown2: 3.0.0-1+pve2
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.16-pve1
libproxmox-acme-perl: 1.0.4
libpve-access-control: 6.1-1
libpve-apiclient-perl: 3.0-3
libpve-common-perl: 6.1-3
libpve-guest-common-perl: 3.0-10
libpve-http-server-perl: 3.0-5
libpve-storage-perl: 6.1-8
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.2-1
lxcfs: 4.0.3-pve3
novnc-pve: 1.1.0-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.2-8
pve-cluster: 6.1-8
pve-container: 3.1-8
pve-docs: 6.2-4
pve-edk2-firmware: 2.20200531-1
pve-firewall: 4.1-2
pve-firmware: 3.1-1
pve-ha-manager: 3.0-9
pve-i18n: 2.1-3
pve-qemu-kvm: 5.0.0-4
pve-xtermjs: 4.3.0-1
qemu-server: 6.2-3
smartmontools: 7.1-pve2
spiceterm: 3.1-1
vncterm: 1.6-1
zfsutils-linux: 0.8.4-pve1
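With both outputs posted, a quick way to spot a package mismatch between the nodes is to save each pveversion -v output to a file and diff them. A minimal sketch with hypothetical two-line samples standing in for the real output (here the versions match, as they do above, so diff prints nothing):

```shell
# Save `pveversion -v` output from each node, then diff to spot
# version mismatches. Hypothetical samples stand in for real output.
cat > /tmp/pve1.txt <<'EOF'
pve-container: 3.1-8
libpve-storage-perl: 6.1-8
EOF
cat > /tmp/pve2.txt <<'EOF'
pve-container: 3.1-8
libpve-storage-perl: 6.1-8
EOF
# Identical files: diff exits 0 and prints nothing.
diff /tmp/pve1.txt /tmp/pve2.txt && echo "versions match"
```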

It is almost as if Proxmox expects the container storage location to also accept VM images, even though I'm migrating containers and VM images shouldn't be relevant at all.

Worth adding that I am running high availability without replication on my cluster.

Changing my Container storage to accept VM images fixes the issue and migration proceeds without a hitch. Whilst I could use this as a workaround, I still want to understand why there is an issue here and whether it is a symptom of an underlying error in my storage setup.
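For anyone hitting the same error, the workaround corresponds to widening the `content` line of the container storage's entry in /etc/pve/storage.cfg (the comments below are just annotations, not storage.cfg syntax):

```
content rootdir          # before: containers only
content rootdir,images   # workaround: also allow disk images
```

The same change can be made from the CLI with pvesm, e.g. `pvesm set zfs-containers --content rootdir,images` (storage name taken from the error message above).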

Thank you.
 
So I'm going to answer my own question/problem here.

@mira gave me the idea to simply upgrade both nodes to the latest package version in the Proxmox UI. Simple upgrade, nothing fancy.

I used the workaround mentioned above (selected Disk image as an allowed content type for my container storage), migrated all containers/VMs from one node to the other to allow a full upgrade of both nodes, rebooted, and voila! Issue resolved!
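One quick sanity check after upgrading both nodes is that the package versions actually moved forward. Debian-style versions like the ones in pveversion -v sort correctly with version-aware sorting; a small sketch with hypothetical before/after versions:

```shell
# `sort -V` orders Debian-style version strings correctly, so the
# newer of two versions comes last. Versions here are hypothetical.
printf '6.2-6\n6.2-11\n' | sort -V | tail -n 1
# → 6.2-11
```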

I can now select the correct storage content type (Containers only for container storage and Disk Image only for VMs) without any overlaps and can flawlessly migrate containers between nodes with no errors. The migration behaviour looks the same as before. When I was having the Content Type error, the Task Viewer output looked strange.

Thread marked as solved. In the future: upgrade & reboot, kids :) Thank you.