Ceph Nautilus and Octopus Security Update for "insecure global_id reclaim" CVE-2021-20288

Could it be that apt-get update / apt-get dist-upgrade doesn't update the base Ceph packages beyond 12.2.11?
By default, yes, as that is the Ceph version one can update from when coming from the older Proxmox VE 5.4, and we need to stay compatible with that.
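
To double-check which repository would provide a newer Ceph version on a node, something like this can help (ceph-common is just one example package name):

Code:
# show installed version, candidate version, and the repositories they come from
apt-cache policy ceph-common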

If you use Ceph on a Proxmox VE setup only as a client, I'd still recommend setting up our Ceph repository (no need for a full Ceph server installation), see https://pve.proxmox.com/pve-docs/chapter-sysadmin.html#sysadmin_package_repositories_ceph, and running a standard system upgrade (apt update && apt full-upgrade, or do so from the web interface) to pull in newer client and librbd versions.
A migration of the VMs is then still required afterwards so they load the new librbd.
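
For example, on a Proxmox VE 6.x (Debian Buster) client node with Ceph Nautilus as the target release, this could look roughly like the following sketch (the file name and steps are just one possible way to do it, see the linked docs chapter for details):

Code:
# add the Proxmox Ceph Nautilus repository (client packages are enough, no full Ceph server setup needed)
echo "deb http://download.proxmox.com/debian/ceph-nautilus buster main" > /etc/apt/sources.list.d/ceph.list
# pull in the newer Ceph client and librbd packages
apt update && apt full-upgrade
# then (live-)migrate or restart the VMs so they load the new librbd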
 
Thanks - that was what I suspected. After adding the Ceph PVE repo, another full upgrade did the trick. The warning regarding the clients has gone away.
 
Kernel Version: Linux 5.4.162-1-pve #1 SMP PVE 5.4.162-2 (Thu, 20 Jan 2022 16:38:53 +0100)
PVE Manager Version: pve-manager/6.4-13/9f411e79

We are currently using this version and recently upgraded to it.

We are still seeing this warning, and all our Proxmox servers, including the Ceph nodes, have been rebooted.

We would like to upgrade to the 7.x version, but we want to solve this issue first before updating to 7.x.

Can anyone suggest a possible solution for this? Or can we upgrade to 7.x without solving this issue first?
 
We are currently using this version and recently upgraded to it.
Can you please post the full output of
Code:
pveversion -v
ceph -s
ceph mon metadata
ceph config get mon auth_allow_insecure_global_id_reclaim

here in [CODE]output...[/CODE] tags?
 
@t.lamprecht


Code:
root@pxceph3:~# pveversion -v
proxmox-ve: 6.4-1 (running kernel: 5.4.162-1-pve)
pve-manager: 6.4-13 (running version: 6.4-13/9f411e79)
pve-kernel-5.4: 6.4-12
pve-kernel-helper: 6.4-12
pve-kernel-5.3: 6.1-6
pve-kernel-5.4.162-1-pve: 5.4.162-2
pve-kernel-5.4.128-1-pve: 5.4.128-2
pve-kernel-5.4.124-1-pve: 5.4.124-2
pve-kernel-5.4.114-1-pve: 5.4.114-1
pve-kernel-5.4.106-1-pve: 5.4.106-1
pve-kernel-5.4.78-2-pve: 5.4.78-2
pve-kernel-5.4.65-1-pve: 5.4.65-1
pve-kernel-5.4.44-2-pve: 5.4.44-2
pve-kernel-5.4.41-1-pve: 5.4.41-1
pve-kernel-4.15: 5.4-12
pve-kernel-5.3.18-3-pve: 5.3.18-3
pve-kernel-5.3.13-1-pve: 5.3.13-1
pve-kernel-4.15.18-24-pve: 4.15.18-52
pve-kernel-4.15.18-20-pve: 4.15.18-46
pve-kernel-4.15.18-18-pve: 4.15.18-44
pve-kernel-4.15.18-17-pve: 4.15.18-43
pve-kernel-4.15.18-16-pve: 4.15.18-41
pve-kernel-4.15.18-15-pve: 4.15.18-40
pve-kernel-4.15.17-1-pve: 4.15.17-9
ceph: 14.2.22-pve1
ceph-fuse: 14.2.22-pve1
corosync: 3.1.5-pve2~bpo10+1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: 0.8.35+pve1
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.22-pve2~bpo10+1
libproxmox-acme-perl: 1.1.0
libproxmox-backup-qemu0: 1.1.0-1
libpve-access-control: 6.4-3
libpve-apiclient-perl: 3.1-3
libpve-common-perl: 6.4-4
libpve-guest-common-perl: 3.1-5
libpve-http-server-perl: 3.2-3
libpve-storage-perl: 6.4-1
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.6-2
lxcfs: 4.0.6-pve1
novnc-pve: 1.1.0-1
proxmox-backup-client: 1.1.13-2
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.6-1
pve-cluster: 6.4-1
pve-container: 3.3-6
pve-docs: 6.4-2
pve-edk2-firmware: 2.20200531-1
pve-firewall: 4.1-4
pve-firmware: 3.3-2
pve-ha-manager: 3.1-1
pve-i18n: 2.3-1
pve-qemu-kvm: 5.2.0-6
pve-xtermjs: 4.7.0-3
qemu-server: 6.4-2
smartmontools: 7.2-pve2
spiceterm: 3.1-1
vncterm: 1.6-2
zfsutils-linux: 2.0.7-pve1



Code:
root@pxceph3:~# ceph -s
  cluster:
    id:     672d9ca3-b4b4-4313-9ecb-1dd02e8da71d
    health: HEALTH_WARN
            clients are using insecure global_id reclaim
            mons are allowing insecure global_id reclaim
 
  services:
    mon: 3 daemons, quorum pxceph,pxceph2,pxceph3 (age 2d)
    mgr: pxceph3(active, since 2d), standbys: pxceph2, pxceph
    osd: 15 osds: 15 up (since 2d), 15 in
 
  data:
    pools:   1 pools, 512 pgs
    objects: 2.25M objects, 7.8 TiB
    usage:   23 TiB used, 59 TiB / 82 TiB avail
    pgs:     512 active+clean
 
  io:
    client:   221 MiB/s rd, 2.8 MiB/s wr, 1.82k op/s rd, 316 op/s wr


Code:
root@pxceph3:~# ceph mon metadata
[
    {
        "name": "pxceph",
        "addrs": "[v2:10.10.20.20:3300/0,v1:10.10.20.20:6789/0]",
        "arch": "x86_64",
        "ceph_release": "nautilus",
        "ceph_version": "ceph version 14.2.22 (877fa256043e4743620f4677e72dee5e738d1226) nautilus (stable)",
        "ceph_version_short": "14.2.22",
        "compression_algorithms": "none, snappy, zlib, zstd, lz4",
        "cpu": "Intel(R) Xeon(R) CPU E5-1650 v4 @ 3.60GHz",
        "device_ids": "",
        "device_paths": "",
        "devices": "",
        "distro": "debian",
        "distro_description": "Debian GNU/Linux 10 (buster)",
        "distro_version": "10",
        "hostname": "pxceph",
        "kernel_description": "#1 SMP PVE 5.4.162-2 (Thu, 20 Jan 2022 16:38:53 +0100)",
        "kernel_version": "5.4.162-1-pve",
        "mem_swap_kb": "8388604",
        "mem_total_kb": "65747752",
        "os": "Linux"
    },
    {
        "name": "pxceph2",
        "addrs": "[v2:10.10.20.21:3300/0,v1:10.10.20.21:6789/0]",
        "arch": "x86_64",
        "ceph_release": "nautilus",
        "ceph_version": "ceph version 14.2.22 (877fa256043e4743620f4677e72dee5e738d1226) nautilus (stable)",
        "ceph_version_short": "14.2.22",
        "compression_algorithms": "none, snappy, zlib, zstd, lz4",
        "cpu": "Intel(R) Xeon(R) CPU E5-1650 v4 @ 3.60GHz",
        "device_ids": "",
        "device_paths": "",
        "devices": "",
        "distro": "debian",
        "distro_description": "Debian GNU/Linux 10 (buster)",
        "distro_version": "10",
        "hostname": "pxceph2",
        "kernel_description": "#1 SMP PVE 5.4.162-2 (Thu, 20 Jan 2022 16:38:53 +0100)",
        "kernel_version": "5.4.162-1-pve",
        "mem_swap_kb": "8388604",
        "mem_total_kb": "65747752",
        "os": "Linux"
    },
    {
        "name": "pxceph3",
        "addrs": "[v2:10.10.20.22:3300/0,v1:10.10.20.22:6789/0]",
        "arch": "x86_64",
        "ceph_release": "nautilus",
        "ceph_version": "ceph version 14.2.22 (877fa256043e4743620f4677e72dee5e738d1226) nautilus (stable)",
        "ceph_version_short": "14.2.22",
        "compression_algorithms": "none, snappy, zlib, zstd, lz4",
        "cpu": "Intel(R) Xeon(R) CPU E5-1650 v4 @ 3.60GHz",
        "device_ids": "",
        "device_paths": "",
        "devices": "",
        "distro": "debian",
        "distro_description": "Debian GNU/Linux 10 (buster)",
        "distro_version": "10",
        "hostname": "pxceph3",
        "kernel_description": "#1 SMP PVE 5.4.162-2 (Thu, 20 Jan 2022 16:38:53 +0100)",
        "kernel_version": "5.4.162-1-pve",
        "mem_swap_kb": "8388604",
        "mem_total_kb": "65747752",
        "os": "Linux"
    }
]

Code:
root@pxceph:~# ceph config get mon auth_allow_insecure_global_id_reclaim
Error EINVAL: unrecognized entity 'mon'


It might be good to mention that we are using the following repository:

deb https://enterprise.proxmox.com/debian/pve buster pve-enterprise
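
A quick way to double-check which repositories a node actually has configured is something like:

Code:
# list all active deb lines from the APT sources
grep -r "^deb" /etc/apt/sources.list /etc/apt/sources.list.d/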
 
Hmm, it seems that Nautilus doesn't support the get yet, just the set - did you execute the set command already?

Also, is this Ceph instance used only by the three nodes, or are there other cluster nodes or even external clients accessing the cluster? If that's the case, make sure you add the Ceph Nautilus repository there too and run a plain apt update && apt full-upgrade (no need to fully install the Ceph server packages), so that they also get a recent enough client version.
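
For reference, since ceph config get with just mon as the entity isn't recognized on Nautilus (as seen in the error above), checking and then setting the option could look roughly like this (mon.pxceph3 is just an example daemon name taken from the output above; ceph config dump only lists values that were explicitly set):

Code:
# list explicitly set options in the monitors' configuration database
ceph config dump | grep auth_allow_insecure_global_id_reclaim
# or query one specific mon daemon via its admin socket, run locally on that mon's host
ceph daemon mon.pxceph3 config get auth_allow_insecure_global_id_reclaim
# once no clients are reported as using insecure reclaim anymore, disallow it
ceph config set mon auth_allow_insecure_global_id_reclaim false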
 
Hmm, it seems that Nautilus doesn't support the get yet, just the set - did you execute the set command already?

Also, is this Ceph instance used only by the three nodes, or are there other cluster nodes or even external clients accessing the cluster? If that's the case, make sure you add the Ceph Nautilus repository there too and run a plain apt update && apt full-upgrade (no need to fully install the Ceph server packages), so that they also get a recent enough client version.
not yet, I thought we can only use it when we don't see the health warning.
 
Hello,
I am running two Proxmox clusters: one PVE cluster only for the benefit of having a Ceph cluster serving the RBD pool, so no VMs on that one, plus my actual VM cluster connected to this RBD pool.

I updated my PVE VM cluster to 6.4-13, then I updated my PVE Ceph cluster (3 nodes) to PVE 6.4-13 and Ceph Nautilus 14.2.22.

I again have one warning: mons are allowing insecure global_id reclaim (no clients warning), see the attached file.

At this step, to be sure: can I run ceph config set mon auth_allow_insecure_global_id_reclaim false now, or do I need to have HEALTH_OK before typing this command?

Thank you
 

Attachments

  • 1645026213263.png (47.9 KB) - screenshot of the health warning
If you don't get a warning that clients are still using it, then it should be fine. Should you see any issues, you can enable it again.
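
Roughly, the sequence could look like the following sketch; re-enabling is only needed if something unexpectedly breaks:

Code:
# disallow insecure global_id reclaim on the monitors
ceph config set mon auth_allow_insecure_global_id_reclaim false
# verify that the HEALTH_WARN clears
ceph -s
ceph health detail
# if clients unexpectedly fail to authenticate, it can be turned back on
ceph config set mon auth_allow_insecure_global_id_reclaim true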
 
