Hello,
Following up on https://forum.proxmox.com/threads/c...l_id-reclaim-cve-2021-20288.88038/post-389914, I'm opening a new thread.
I was asked to check this:
Code:
# qm list prints the PID
qm list
# print all open files of that process, which includes shared libraries
lsof -n -p PID
# only Linux map files that have been deleted
lsof -n -p PID| grep DEL
Note: I changed VMID to PID, because using the VMID with lsof doesn't seem to make sense.
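In case it helps, the PID can also be looked up from the VMID directly; this is only a minimal sketch, assuming the usual PVE pidfile location under /run/qemu-server/ (please correct me if that path differs on other setups):
Code:
VMID=100                                  # hypothetical VMID, replace with a real one
# PVE normally writes the QEMU PID of a VM to /run/qemu-server/<VMID>.pid
PID=$(cat /run/qemu-server/${VMID}.pid)
lsof -n -p "$PID" | grep DEL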
I don't think posting the full list of open files is relevant, but here are the deleted files of a NOT migrated VM:
Code:
lsof -n -p 167580| grep DEL
kvm 167580 root DEL REG 0,25 42445468 /usr/bin/qemu-system-x86_64
kvm 167580 root DEL REG 0,25 28867024 /usr/lib/x86_64-linux-gnu/libsqlite3.so.0.8.6
kvm 167580 root DEL REG 0,25 28849634 /usr/lib/x86_64-linux-gnu/libp11-kit.so.0.3.0
kvm 167580 root DEL REG 0,25 28862381 /usr/lib/x86_64-linux-gnu/libgstapp-1.0.so.0.1404.0
kvm 167580 root DEL REG 0,25 43194708 /usr/lib/ceph/libceph-common.so.0
kvm 167580 root DEL REG 0,25 42418405 /usr/lib/x86_64-linux-gnu/liblber-2.4.so.2.10.10
kvm 167580 root DEL REG 0,25 42418406 /usr/lib/x86_64-linux-gnu/libldap_r-2.4.so.2.10.10
kvm 167580 root DEL REG 0,25 28859152 /usr/lib/x86_64-linux-gnu/libcrypto.so.1.1
kvm 167580 root DEL REG 0,25 28859153 /usr/lib/x86_64-linux-gnu/libssl.so.1.1
kvm 167580 root DEL REG 0,25 37213427 /usr/lib/x86_64-linux-gnu/libapt-pkg.so.5.0.2
kvm 167580 root DEL REG 0,25 28849807 /usr/lib/x86_64-linux-gnu/libzstd.so.1.3.8
kvm 167580 root DEL REG 0,25 28851255 /lib/x86_64-linux-gnu/libudev.so.1.6.13
kvm 167580 root DEL REG 0,25 37213904 /usr/lib/x86_64-linux-gnu/libgnutls.so.30.23.2
kvm 167580 root DEL REG 0,25 28875485 /usr/lib/x86_64-linux-gnu/libjpeg.so.62.2.0
kvm 167580 root DEL REG 0,25 43194709 /usr/lib/librados.so.2.0.0
kvm 167580 root DEL REG 0,25 43194688 /usr/lib/librbd.so.1.12.0
kvm 167580 root DEL REG 0,25 28877389 /usr/lib/x86_64-linux-gnu/libcurl-gnutls.so.4.5.0
kvm 167580 root DEL REG 0,25 42419281 /usr/lib/libproxmox_backup_qemu.so.0
kvm 167580 root DEL REG 0,25 28849782 /lib/x86_64-linux-gnu/libsystemd.so.0.25.0
kvm 167580 root DEL REG 0,1 3758783448 /dev/zero
And, for comparison, here is the same check on a recently live-migrated VM:
Code:
lsof -n -p 1729001| grep DEL
kvm 1729001 root DEL REG 0,1 1866635234 /dev/zero
Both are running on the same node. The live-migrated VM made a round trip.
I can confirm, from its uptime and the PVE task history, that the first VM has been neither migrated nor rebooted.
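For completeness, the start time of the kvm process itself can be checked as well; a small sketch using standard procps options (the PID is the one shown by qm list):
Code:
# start date and elapsed running time of the VM's kvm process
ps -o lstart=,etime= -p 167580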
The Ceph cluster still allows insecure global_id reclaim:
Code:
ceph config get mon auth_allow_insecure_global_id_reclaim
true
I don't really know how to check whether this setting has ever been changed (is there some kind of Ceph config history?), but a change is highly improbable, at least by a human.
And to be complete, the librbd package was upgraded on Sunday evening:
Code:
# cat /var/log/dpkg.log |grep librbd
2021-05-09 21:12:39 upgrade librbd1:amd64 15.2.10-pve1 15.2.11-pve1
2021-05-09 21:12:39 status half-configured librbd1:amd64 15.2.10-pve1
2021-05-09 21:12:39 status unpacked librbd1:amd64 15.2.10-pve1
2021-05-09 21:12:39 status half-installed librbd1:amd64 15.2.10-pve1
2021-05-09 21:12:40 status unpacked librbd1:amd64 15.2.11-pve1
2021-05-09 21:13:30 configure librbd1:amd64 15.2.11-pve1 <none>
2021-05-09 21:13:30 status unpacked librbd1:amd64 15.2.11-pve1
2021-05-09 21:13:30 status half-configured librbd1:amd64 15.2.11-pve1
2021-05-09 21:13:30 status installed librbd1:amd64 15.2.11-pve1
I didn't check all of them, but I can see the same results on the other nodes.
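For anyone who wants to check a whole node in one go, something like this should do it; a rough sketch, assuming the QEMU processes are named kvm as in the lsof output above:
Code:
# flag every running kvm process that still maps a deleted (old) librbd
for pid in $(pgrep -x kvm); do
    lsof -n -p "$pid" 2>/dev/null | grep -q 'DEL.*librbd' \
        && echo "PID $pid still maps the old librbd"
done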
Is there a way to associate this:
Code:
client.admin at 10.152.12.62:0/2202023404 is using insecure global_id reclaim
with a VMID? (A rough attempt is sketched below.)
I'll probably live-migrate all VMs in the next few days, so this is not really a problem for me, but it could potentially change the upgrade process for other users.
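Here is the rough attempt I had in mind for the association question: look at which local processes hold established connections to the monitors, then map an interesting PID back to its VMID through the PVE pidfiles. This is only a sketch; it assumes the default monitor ports 6789/3300 and pidfiles under /run/qemu-server/, and the address in the health warning shows a nonce rather than a TCP port, so the match is indirect:
Code:
# which local processes have established connections to the Ceph monitors?
ss -tnp state established '( dport = :6789 or dport = :3300 )'

# map a PID found above back to its VMID (the matching filename is <VMID>.pid)
grep -l '^167580$' /run/qemu-server/*.pid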
For reference, here is the Ceph advisory about this global_id reclaim issue:
https://docs.ceph.com/en/latest/security/CVE-2021-20288/
Best regards