Container Creation Failure on Nodes 1 & 2 with Proxmox VE 8.2.0

shm0rt

Hello everyone,

I'm encountering an issue when trying to create a container on two of our nodes (1 & 2), while it works without any problems on node 3. Below is the error message I receive:

syslog:
Code:
Jun 04 14:00:08 pve2 pvedaemon[110971]: <root@pam> starting task UPID:pve2:00063965:05BDE8D8:665F01C8:vzcreate:132:root@pam:
Jun 04 14:00:09 pve2 kernel: libceph: mon2 (1)10.10.23.247:6789 session established
Jun 04 14:00:09 pve2 kernel: libceph: client16512834 fsid 5d35af62-62e2-4037-a56a-6317b60c5956
Jun 04 14:00:09 pve2 kernel: rbd: rbd0: capacity 8589934592 features 0x3d
Jun 04 14:00:11 pve2 pvedaemon[407909]: unable to create CT 132 - command 'mkfs.ext4 -O mmp -E 'root_owner=100000:100000' /dev/rbd-pve/5d35af62-62e2-4037-a56a-6317b60c5956/cluster/vm-132-disk-0' failed: exit code 1
Jun 04 14:00:11 pve2 pvedaemon[110971]: <root@pam> end task UPID:pve2:00063965:05BDE8D8:665F01C8:vzcreate:132:root@pam: unable to create CT 132 - command 'mkfs.ext4 -O mmp -E 'root_owner=100000:100000' /dev/rbd-pve/5d35af62-62e2-4037-a56a-6317b60c5956/cluster/vm-132-disk-0' failed: exit code 1

Although it is possible to create the container on node 3, it fails on nodes 1 and 2 with no additional errors reported.
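In case it helps narrow this down: since the task log only reports "exit code 1", one way to get the full mkfs error text might be to run the same command by hand on a throwaway RBD image. This is just a sketch; it assumes the pool behind the "cluster" storage is also named "cluster", that the node can reach Ceph with its default credentials, and the "debug-disk" image name is made up for the test:

Bash:
# Create and map a small throwaway image in the (assumed) "cluster" pool
rbd create cluster/debug-disk --size 1G
DEV=$(rbd map cluster/debug-disk)

# Run the exact mkfs command from the failed task to see its full error output
mkfs.ext4 -O mmp -E 'root_owner=100000:100000' "$DEV"

# Clean up the test image afterwards
rbd unmap "$DEV"
rbd rm cluster/debug-disk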

pveversion (the output is identical on all nodes, verified with WinMerge):
Code:
proxmox-ve: 8.2.0 (running kernel: 6.8.4-3-pve)
pve-manager: 8.2.2 (running version: 8.2.2/9355359cd7afbae4)
proxmox-kernel-helper: 8.1.0
proxmox-kernel-6.8: 6.8.4-3
proxmox-kernel-6.8.4-3-pve-signed: 6.8.4-3
proxmox-kernel-6.8.4-2-pve-signed: 6.8.4-2
proxmox-kernel-6.5.13-5-pve-signed: 6.5.13-5
proxmox-kernel-6.5: 6.5.13-5
proxmox-kernel-6.5.13-3-pve-signed: 6.5.13-3
proxmox-kernel-6.5.11-4-pve-signed: 6.5.11-4
ceph: 18.2.2-pve1
ceph-fuse: 18.2.2-pve1
corosync: 3.1.7-pve3
criu: 3.17.1-2
glusterfs-client: 10.3-5
ifupdown2: 3.2.0-1+pmx8
ksm-control-daemon: 1.5-1
libjs-extjs: 7.0.0-4
libknet1: 1.28-pve1
libproxmox-acme-perl: 1.5.1
libproxmox-backup-qemu0: 1.4.1
libproxmox-rs-perl: 0.3.3
libpve-access-control: 8.1.4
libpve-apiclient-perl: 3.3.2
libpve-cluster-api-perl: 8.0.6
libpve-cluster-perl: 8.0.6
libpve-common-perl: 8.2.1
libpve-guest-common-perl: 5.1.2
libpve-http-server-perl: 5.1.0
libpve-network-perl: 0.9.8
libpve-rs-perl: 0.8.8
libpve-storage-perl: 8.2.1
libspice-server1: 0.15.1-1
lvm2: 2.03.16-2
lxc-pve: 6.0.0-1
lxcfs: 6.0.0-pve2
novnc-pve: 1.4.0-3
proxmox-backup-client: 3.2.3-1
proxmox-backup-file-restore: 3.2.3-1
proxmox-kernel-helper: 8.1.0
proxmox-mail-forward: 0.2.3
proxmox-mini-journalreader: 1.4.0
proxmox-offline-mirror-helper: 0.6.6
proxmox-widget-toolkit: 4.2.3
pve-cluster: 8.0.6
pve-container: 5.1.10
pve-docs: 8.2.2
pve-edk2-firmware: 4.2023.08-4
pve-esxi-import-tools: 0.7.0
pve-firewall: 5.0.7
pve-firmware: 3.11-1
pve-ha-manager: 4.0.4
pve-i18n: 3.2.2
pve-qemu-kvm: 8.1.5-6
pve-xtermjs: 5.3.0-3
qemu-server: 8.2.1
smartmontools: 7.3-pve1
spiceterm: 3.3.0
swtpm: 0.8.0+pve1
vncterm: 1.8.0
zfsutils-linux: 2.2.3-pve2
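
Note that pveversion does not list e2fsprogs, which provides mkfs.ext4, so it may also be worth confirming that it and the actually running kernel match on all three nodes, for example:

Bash:
# e2fsprogs ships mkfs.ext4 and does not show up in pveversion output
dpkg -l e2fsprogs | grep '^ii'
mkfs.ext4 -V
uname -r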
 
Hi,

Are you sure the storage you are creating the CT on from nodes 1 and 2 is active and online? You can check the storage status with pvesm using the following command:

Bash:
pvesm status
 
Hi Moayad,

Yes, I am sure the storage I am creating from on nodes 1 and 2 is active and online. When I execute pvesm status I get the following:
Code:
Name                            Type     Status           Total            Used       Available        %
AA-backup2-XX                    pbs     active     74755176320      6859835776     67895340544    9.18%
backup2-A                        pbs     active     74755176320      6859835776     67895340544    9.18%
backup2-B                        pbs     active     74755176320      6859835776     67895340544    9.18%
backup2-C                        pbs     active     74755176320      6859835776     67895340544    9.18%
backup2-D                        pbs     active     74755176320      6859835776     67895340544    9.18%
cephfs                        cephfs     active      1035563008        43630592       991932416    4.21%
cluster                          rbd     active      1818480433       826546737       991933696   45.45%
esxi-test                       esxi   disabled               0               0               0      N/A
local                            dir     active       470831648         6908512       443742684    1.47%
 
Thank you for the output!

Can you create the container on a different storage?
When I try to create the container on the local storage and on Ceph on every node (except node 3), it gives me the same error:
Code:
/dev/rbd2
The file /dev/rbd-pve/5d35af62-62e2-4037-a56a-6317b60c5956/cluster/vm-134-disk-0 does not exist and no size was specified.
Removing image: 1% complete...
[... progress lines from 2% through 99% omitted ...]
Removing image: 100% complete...done.
TASK ERROR: unable to create CT 134 - command 'mkfs.ext4 -O mmp -E 'root_owner=100000:100000' /dev/rbd-pve/5d35af62-62e2-4037-a56a-6317b60c5956/cluster/vm-134-disk-0' failed: exit code 1
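
The "does not exist and no size was specified" line from mkfs.ext4 suggests the /dev/rbd-pve/... symlink was not present at the moment mkfs ran. A rough way to compare the failing nodes with node 3 might be to map a test image by hand and look at what udev creates; this again assumes the pool is named "cluster" and uses a made-up "debug-disk" image:

Bash:
# "debug-disk" is a throwaway test image; the pool name "cluster" is assumed
rbd create cluster/debug-disk --size 1G
rbd map cluster/debug-disk
udevadm settle                     # wait for pending udev events to finish
ls -l /dev/rbd*                    # raw /dev/rbdN devices created by the kernel
ls -lR /dev/rbd-pve/ 2>/dev/null   # symlinks the vzcreate task expects to find
rbd unmap cluster/debug-disk
rbd rm cluster/debug-disk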
 
Could you please run the `journalctl -f > /tmp/syslog.log` command on your node and try to create the container again? After you get the above error, hit Ctrl+C to stop the journalctl command and then attach the syslog.log from the /tmp/ directory to this thread. The log should give us more information.
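
If keeping a foreground journalctl running is awkward, capturing a time window after the failure should give the same information (the five-minute window is just an example):

Bash:
journalctl --since "5 minutes ago" > /tmp/syslog.log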
 
Here is the syslog from node 2, which failed the container creation:
Code:
Jun 24 17:22:13 pve2 systemd[3989752]: Listening on gpg-agent.socket - GnuPG cryptographic agent and passphrase cache.
Jun 24 17:22:13 pve2 systemd[3989752]: Reached target sockets.target - Sockets.
Jun 24 17:22:13 pve2 systemd[3989752]: Reached target basic.target - Basic System.
Jun 24 17:22:13 pve2 systemd[3989752]: Reached target default.target - Main User Target.
Jun 24 17:22:13 pve2 systemd[3989752]: Startup finished in 180ms.
Jun 24 17:22:13 pve2 systemd[1]: Started user@0.service - User Manager for UID 0.
Jun 24 17:22:13 pve2 systemd[1]: Started session-18925.scope - Session 18925 of User root.
Jun 24 17:22:13 pve2 sshd[3989747]: pam_env(sshd:session): deprecated reading of user environment enabled
Jun 24 17:22:13 pve2 login[3989771]: pam_unix(login:session): session opened for user root(uid=0) by root(uid=0)
Jun 24 17:22:13 pve2 login[3989785]: ROOT LOGIN  on '/dev/pts/0' from '10.10.23.249'
Jun 24 17:22:26 pve2 CRON[3989147]: pam_unix(cron:session): session closed for user root
Jun 24 17:22:38 pve2 pvedaemon[3964088]: <root@pam> starting task UPID:pve2:003CE633:101D2ABF:66798F3E:vzcreate:136:root@pam:
Jun 24 17:22:38 pve2 kernel: rbd: rbd2: capacity 8589934592 features 0x3d
Jun 24 17:22:38 pve2 pvedaemon[3991091]: unable to create CT 136 - command 'mkfs.ext4 -O mmp -E 'root_owner=100000:100000' /dev/rbd-pve/5d35af62-62e2-4037-a56a-6317b60c5956/cluster/vm-136-disk-0' failed: exit code 1
Jun 24 17:22:38 pve2 pvedaemon[3964088]: <root@pam> end task UPID:pve2:003CE633:101D2ABF:66798F3E:vzcreate:136:root@pam: unable to create CT 136 - command 'mkfs.ext4 -O mmp -E 'root_owner=100000:100000' /dev/rbd-pve/5d35af62-62e2-4037-a56a-6317b60c5956/cluster/vm-136-disk-0' failed: exit code 1
Jun 24 17:22:46 pve2 CRON[3989146]: pam_unix(cron:session): session closed for user root
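
As before, pvedaemon only logs "exit code 1" here; the kernel ring buffer on the failing node might hold a bit more detail from around that timestamp, for example:

Bash:
dmesg -T | grep -iE 'rbd|libceph|ext4|mmp' | tail -n 50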
 
Thank you for the output!

Could you also provide us with the output of the following command:
Bash:
pvesm status
 
Here is the status of node 2:
Code:
AA-backup2-XX                    pbs     active     74755144320      6957657728     67797486592    9.31%
backup2-A                        pbs     active     74755144320      6957657728     67797486592    9.31%
backup2-B                        pbs     active     74755144320      6957657728     67797486592    9.31%
backup2-C                        pbs     active     74755144320      6957657728     67797486592    9.31%
backup2-D                        pbs     active     74755144320      6957657728     67797486592    9.31%
cephfs                        cephfs     active      3564716032        43630592      3521085440    1.22%
cluster                          rbd     active      4400160918       879074198      3521086720   19.98%
esxi-test                       esxi   disabled               0               0               0      N/A
local                            dir     active       470831648         7055288       443595908    1.50%
 
Hi,

To narrow down the issue, could you please check if you can create the container on the local storage?
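
Creating the test container from the CLI instead of the GUI would also print the full mkfs output straight to the terminal. A rough example, where the VMID 999 and the Debian template filename are placeholders that need to match what is actually available under local:vztmpl:

Bash:
# VMID 999 and the template filename are placeholders - adjust to your setup
pct create 999 local:vztmpl/debian-12-standard_12.2-1_amd64.tar.zst \
    --hostname mmp-test --rootfs local:8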
 
I tried it on every node. On every single one of them I get the same result:
Code:
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
The following package was automatically installed and is no longer required:
  proxmox-kernel-6.5.13-3-pve-signed
Use 'sudo apt autoremove' to remove it.
0 upgraded, 0 newly installed, 2 reinstalled, 0 to remove and 24 not upgraded.
Need to get 0 B/1,405 kB of archives.
After this operation, 0 B of additional disk space will be used.
(Reading database ... 78498 files and directories currently installed.)
Preparing to unpack .../lxc-pve_6.0.0-1_amd64.deb ...
Unpacking lxc-pve (6.0.0-1) over (6.0.0-1) ...
Preparing to unpack .../pve-container_5.1.12_all.deb ...
Unpacking pve-container (5.1.12) over (5.1.12) ...
Setting up lxc-pve (6.0.0-1) ...
Setting up pve-container (5.1.12) ...
Processing triggers for pve-manager (8.2.2) ...
Processing triggers for man-db (2.11.2-2) ...
Processing triggers for pve-ha-manager (4.0.4) ...
Processing triggers for libc-bin (2.36-9+deb12u7) ...
 
