container storage content is empty

silvered.dragon

Renowned Member
Nov 4, 2015
Hi to all,
after updating from Proxmox 5 to 6 and Ceph Luminous to Nautilus in a 4-node HA cluster environment, the container storage (ceph_ct) is empty and all the container disks are instead shown under the VM storage (ceph_vm). I'm attaching some pics to understand better; any solution to this?
Attachments: Cattura1.JPG, Cattura2.JPG, Cattura3.JPG
 
Can you please post the '/etc/pve/storage.cfg'?
 
Dear, here is my config:

Code:
dir: local
        path /var/lib/vz
        content backup,iso,vztmpl
        maxfiles 5
        shared 0

lvmthin: local-lvm
        thinpool data
        vgname pve
        content images,rootdir
        nodes nodo2,nodo1,nodo3

rbd: ceph_vm
        content images
        krbd 0
        nodes nodo3,nodo2,nodo1
        pool ceph

rbd: ceph_ct
        content rootdir
        krbd 1
        nodes nodo2,nodo1,nodo3
        pool ceph

zfspool: local-zfs
        pool rpool/data
        content rootdir,images
        nodes utility
        sparse 1

zfspool: backup-pool
        pool backup_pool
        content rootdir,images
        nodes utility
        sparse 1

cifs: ts_syncro
        path /mnt/pve/ts_syncro
        server 192.168.25.100
        share TS_SYNCRO
        content iso
        nodes nodo1,nodo2,nodo3
        username administrator

nfs: Anekup
        export /mnt/ANEKUP_POOL/Proxmox_Backup
        path /mnt/pve/Anekup
        server 192.168.25.202
        content backup
        maxfiles 10
        options vers=3
 
As both storages (ceph_vm / ceph_ct) point to the same pool, you should see the same contents on both. And what do you see if you run 'pvesm list ceph_ct'?
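As an extra data point, listing the pool directly with the rbd tool (bypassing the PVE storage layer) should show every image in it; a minimal check, assuming the pool name 'ceph' from your storage.cfg:

Code:
# list all RBD images in the pool, independent of the ceph_vm/ceph_ct definitions
rbd ls -l -p ceph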
 
Dear, here is the output;
as you can see it is blank. Before the update it showed the contents correctly.

Code:
root@nodo1:~# pvesm list ceph_ct
root@nodo1:~# pvesm list ceph_vm
ceph_vm:vm-100-disk-1   raw 161061273600 100
ceph_vm:vm-101-disk-1   raw 161061273600 101
ceph_vm:vm-102-disk-1   raw 161061273600 102
ceph_vm:vm-103-disk-1   raw 8589934592 103
ceph_vm:vm-104-disk-1   raw 32212254720 104
ceph_vm:vm-104-disk-2   raw 53687091200 104
ceph_vm:vm-105-disk-1   raw 32212254720 105
ceph_vm:vm-105-disk-2   raw 53687091200 105
ceph_vm:vm-106-disk-1   raw 64424509440 106
ceph_vm:vm-106-disk-2   raw 107374182400 106
ceph_vm:vm-106-disk-3   raw 32212254720 106
ceph_vm:vm-107-disk-1   raw 64424509440 107
ceph_vm:vm-107-disk-2   raw 107374182400 107
ceph_vm:vm-107-disk-3   raw 16106127360 107
ceph_vm:vm-108-disk-0   raw 68719476736 108
ceph_vm:vm-110-disk-1   raw 16106127360 110
ceph_vm:vm-111-disk-1   raw 6442450944 111
ceph_vm:vm-112-disk-1   raw 17179869184 112
ceph_vm:vm-113-disk-1   raw 21474836480 113
ceph_vm:vm-114-disk-1   raw 34359738368 114
ceph_vm:vm-118-disk-1   raw 85899345920 118
root@nodo1:~#
 
Hm... can you create a new CT on the ceph_ct storage?

Aside from that, the two separate storages are not needed anymore, as the KRBD setting only applies to KVM. CTs are always started through the KRBD client.
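For reference, a merged entry could look roughly like this; a sketch based on the pool and nodes in your storage.cfg, not a tested drop-in, so adjust before use:

Code:
rbd: ceph_vm
        content images,rootdir
        krbd 0
        nodes nodo3,nodo2,nodo1
        pool ceph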
 
Dear, as you can see in the attached screenshots, I just created a test machine on the ceph_ct storage and its disk is still not shown under the ceph_ct content.

I know about the KRBD client, but this cluster has been running for years; it started as Proxmox 3 and I have always upgraded it following the suggested howtos.

Can you please suggest how I can remove the ceph_ct storage / unify things without losing any data?
Attachments: test1.PNG, test2.PNG
 
Dear, as you can see in the attached screenshots, I just created a test machine on the ceph_ct storage and its disk is still not shown under the ceph_ct content.
Odd. Could you please post the output of 'pveversion -v'?

Can you please suggest how I can remove the ceph_ct storage / unify things without losing any data?
Either do a 'move disk' or manually change the vmid.conf. In both cases the CT needs to be powered off.
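For the manual route, since ceph_ct and ceph_vm point to the same Ceph pool, only the storage prefix in the CT config has to change; a minimal sketch, using a hypothetical CT 200 (adjust the VMID, and repeat for any mpX mount point lines that reference ceph_ct):

Code:
# CT 200 is a hypothetical example -- stop it first, then swap the storage name
pct shutdown 200
sed -i 's/^rootfs: ceph_ct:/rootfs: ceph_vm:/' /etc/pve/lxc/200.conf
pct start 200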
 
yes of course

Code:
root@nodo1:~# pveversion -v
proxmox-ve: 6.0-2 (running kernel: 5.0.18-1-pve)
pve-manager: 6.0-5 (running version: 6.0-5/f8a710d7)
pve-kernel-5.0: 6.0-6
pve-kernel-helper: 6.0-6
pve-kernel-4.15: 5.4-7
pve-kernel-4.13: 5.2-2
pve-kernel-5.0.18-1-pve: 5.0.18-1
pve-kernel-4.15.18-19-pve: 4.15.18-45
pve-kernel-4.15.18-18-pve: 4.15.18-44
pve-kernel-4.15.18-15-pve: 4.15.18-40
pve-kernel-4.15.18-14-pve: 4.15.18-39
pve-kernel-4.15.18-12-pve: 4.15.18-36
pve-kernel-4.15.18-11-pve: 4.15.18-34
pve-kernel-4.15.18-10-pve: 4.15.18-32
pve-kernel-4.15.18-9-pve: 4.15.18-30
pve-kernel-4.15.18-7-pve: 4.15.18-27
pve-kernel-4.15.18-5-pve: 4.15.18-24
pve-kernel-4.15.18-4-pve: 4.15.18-23
pve-kernel-4.15.18-3-pve: 4.15.18-22
pve-kernel-4.15.18-2-pve: 4.15.18-21
pve-kernel-4.15.18-1-pve: 4.15.18-19
pve-kernel-4.15.17-3-pve: 4.15.17-14
pve-kernel-4.15.17-2-pve: 4.15.17-10
pve-kernel-4.15.17-1-pve: 4.15.17-9
pve-kernel-4.15.15-1-pve: 4.15.15-6
pve-kernel-4.13.16-4-pve: 4.13.16-51
pve-kernel-4.13.16-3-pve: 4.13.16-50
pve-kernel-4.13.16-2-pve: 4.13.16-48
pve-kernel-4.13.16-1-pve: 4.13.16-46
pve-kernel-4.13.13-6-pve: 4.13.13-42
pve-kernel-4.13.13-5-pve: 4.13.13-38
pve-kernel-4.13.13-4-pve: 4.13.13-35
pve-kernel-4.13.13-2-pve: 4.13.13-33
pve-kernel-4.13.13-1-pve: 4.13.13-31
pve-kernel-4.13.8-3-pve: 4.13.8-30
pve-kernel-4.13.8-2-pve: 4.13.8-28
pve-kernel-4.13.8-1-pve: 4.13.8-27
pve-kernel-4.13.4-1-pve: 4.13.4-26
pve-kernel-4.10.17-4-pve: 4.10.17-24
pve-kernel-4.10.17-2-pve: 4.10.17-20
ceph: 14.2.1-pve2
ceph-fuse: 14.2.1-pve2
corosync: 3.0.2-pve2
criu: 3.11-3
glusterfs-client: 5.5-3
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.10-pve1
libpve-access-control: 6.0-2
libpve-apiclient-perl: 3.0-2
libpve-common-perl: 6.0-3
libpve-guest-common-perl: 3.0-1
libpve-http-server-perl: 3.0-2
libpve-storage-perl: 6.0-6
libqb0: 1.0.5-1
lvm2: 2.03.02-pve3
lxc-pve: 3.1.0-61
lxcfs: 3.0.3-pve60
novnc-pve: 1.0.0-60
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.0-5
pve-cluster: 6.0-4
pve-container: 3.0-5
pve-docs: 6.0-4
pve-edk2-firmware: 2.20190614-1
pve-firewall: 4.0-6
pve-firmware: 3.0-2
pve-ha-manager: 3.0-2
pve-i18n: 2.0-2
pve-qemu-kvm: 4.0.0-3
pve-xtermjs: 3.13.2-1
qemu-server: 6.0-7
smartmontools: 7.0-pve2
spiceterm: 3.1-1
vncterm: 1.6-1
zfsutils-linux: 0.8.1-pve1
 
So far I don't see anything that would explain the issue. Do you have anything related in the syslog/journal, maybe?
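If it helps, one hedged way to filter for relevant entries (the service names and search terms below are only examples, adjust as needed):

Code:
# search today's journal of the PVE daemons for storage/Ceph related messages
journalctl --since today -u pvedaemon -u pveproxy -u pvestatd | grep -iE 'rbd|ceph|storage|error'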
 
Dear, I have inspected the logs but there is no error or warning. This is really strange. As I told you, the issue appeared after the upgrade from PVE 5 to PVE 6, so something went wrong there. I'm a little scared to reboot each node of the cluster for this reason, because maybe at reboot the cluster will not retrieve some disks. That said, I rebooted one of the four nodes yesterday and everything looks OK. Anyway, thank you.
 
Could you please post a 'ceph versions'?
 
Please run the exact command 'ceph versions'. The 's' is important. :D
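For context, the two commands answer different questions; a quick illustration:

Code:
# 'ceph version' (singular) only prints the locally installed binary version
ceph version
# 'ceph versions' (plural) queries the cluster and lists what every mon/mgr/osd daemon runs
ceph versions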
 
Ooh, really sorry, I didn't notice the 's':
Code:
root@nodo1:~# ceph versions
{
    "mon": {
        "ceph version 14.2.1 (9257126ffb439de1652793b3e29f4c0b97a47b47) nautilus (stable)": 3
    },
    "mgr": {
        "ceph version 14.2.1 (9257126ffb439de1652793b3e29f4c0b97a47b47) nautilus (stable)": 3
    },
    "osd": {
        "ceph version 14.2.1 (9257126ffb439de1652793b3e29f4c0b97a47b47) nautilus (stable)": 12
    },
    "mds": {},
    "overall": {
        "ceph version 14.2.1 (9257126ffb439de1652793b3e29f4c0b97a47b47) nautilus (stable)": 18
    }
}
 
That said, I rebooted one of the four nodes yesterday and everything looks OK.
I assume you verified that the CTs can start on that node.

Ok, two things come to mind:
  • Connect to another node and check if the storage view has the same issue.
  • Reboot all nodes, one at a time, and verify that the CTs start.
    • If not, you always have the other nodes as a backup (that's why one at a time) ;)
 
I assume you verified that the CTs can start on that node.

Ok, two things come to mind:
  • Connect to another node and check if the storage view has the same issue.
  • Reboot all nodes, one at a time, and verify that the CTs start.
    • If not, you always have the other nodes as a backup (that's why one at a time) ;)
Thank you for your patience:
  • I tried connecting to all the other nodes, even the fourth one, which is only for utility tasks and doesn't have Ceph installed, and the issue is present in every node's view.
  • Of course, that is the reason why I'm using this kind of environment, but unfortunately even after rebooting the other two nodes the issue is still present.
Can I ask if this is something you can reproduce on your side too with the latest Proxmox?
 
Can I ask if this is something you can reproduce on your side too with the latest Proxmox?
Sadly not, but I need to say that my cluster runs with packages from the pvetest repository. They might be newer than what your nodes are running.
 
Sadly not, but I need to say that my cluster runs with packages from the pvetest repository. They might be newer than what your nodes are running.
Thank you anyway; I will look deeper into this with the next updates.
Regards from Italy
 
Same problem here on a ZFS pool storage. It only happens on PVE 6.0; on 5.x everything was OK.

Code:
# cat /etc/pve/storage.cfg
dir: local
        disable
        path /var/lib/vz
        content vztmpl
        maxfiles 0
        shared 0

dir: storage
        path /storage
        content images,iso,vztmpl
        shared 0

zfspool: storage-zfs
        pool storage
        content rootdir
        sparse 0


# zfs list
NAME                        USED  AVAIL     REFER  MOUNTPOINT
archive-mirror             3.73T  3.30T     3.73T  /archive-mirror
storage                    34.1G   338G     32.8G  /storage
storage/subvol-203-disk-0  1.28G  30.7G     1.28G  /storage/subvol-203-disk-0


# pvesm list storage-zfs
#
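One way to check whether the storage layer can still resolve the volume even though the listing is empty, using the subvol name from the zfs list above:

Code:
# resolve the volume directly through the storage plugin
pvesm path storage-zfs:subvol-203-disk-0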
 

Attachments

  • Снимок экрана (25).png (Screenshot (25).png)
