For the record: we encountered another limitation today.
If you're using 'storage replication' between two nodes, a sync from a PVE7 node to a PVE6 node will fail with an 'Unknown option: snapshot'.
The '-snapshot' parameter has been added to pvesm in PVE7 and is used by PVE7 when syncing.
Not really a big deal...
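A quick way to see the mismatch is to compare the export options on both sides (a sketch; the node names are placeholders, and the exact help output differs between versions):
# on the PVE7 node, 'pvesm help export' lists the -snapshot option
ssh pve7-node pvesm help export
# on the PVE6 node it doesn't, hence the 'Unknown option: snapshot' error
ssh pve6-node pvesm help export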
We observed the same behavior here: VMs can be live-migrated from PVE6 to PVE7 and back AS LONG AS THEY'VE NOT BEEN STARTED ON A PVE7 NODE!
You can't, for example, start a VM on a PVE7 node and live-migrate it to PVE6, AFAIK that's the only limitation.
Note: the VM won't crash, it will...
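For reference, this is the kind of sequence that triggers it (a sketch; VMID 100 and node name pve6-01 are placeholders):
# start the VM on a PVE7 node
qm start 100
# then try to live-migrate it to a PVE6 node: this is the case that fails
qm migrate 100 pve6-01 --online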
That's great news!
Does someone have an approximate idea of the delay between a patch being submitted to the pve-devel list and general availability? (There's perhaps a large variation depending on the complexity of, and interest in, the patch.)
Thanks for submitting this patch @mira !
And as I'm writing this, no more AUTH_INSECURE_GLOBAL_ID_RECLAIM warning...
# ceph health detail
HEALTH_WARN mons are allowing insecure global_id reclaim
[WRN] AUTH_INSECURE_GLOBAL_ID_RECLAIM_ALLOWED: mons are allowing insecure global_id reclaim
mon.vm10 has...
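For what it's worth, once no client shows up as insecure anymore, the standard Ceph remediation for that remaining mon warning (per the Ceph docs, not something specific to this setup) is to disable the fallback on the MONs:
# only do this once ALL clients are patched, otherwise unpatched
# clients will be unable to reconnect
ceph config set mon auth_allow_insecure_global_id_reclaim false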
I played a bit with the Ceph tools and found the command ceph tell mon.\* sessions
I tried to get some info from the MONs and got 2 clients with "global_id_status": "reclaim_insecure". All the others are in status "reclaim_ok", "new_ok" or "none" (the other MONs).
Here's the full output of a...
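If it helps anyone, the offending sessions can be filtered out like this (a sketch, assuming jq is available; mon.vm10 is just one of our MONs):
# dump the sessions of one MON and keep only the insecure clients
ceph tell mon.vm10 sessions | jq '.[] | select(.global_id_status == "reclaim_insecure")'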
Hello,
So, first, yes, the warnings are back, but only a few at a time:
Right after upgrading, I got a dozen of them. I didn't count, but it was probably one per VM plus one or two per hypervisor.
24h later, I got absolutely none.
~48h after the upgrade, I got a few (4 or fewer).
That's already the case at time...
OK, I just opened a new specific thread here: https://forum.proxmox.com/threads/ceph-15-2-11-upgrade-insecure-client-warning-disappear-and-reappearing.89059/
Hello,
Following up on https://forum.proxmox.com/threads/ceph-nautilus-and-octopus-security-update-for-insecure-global_id-reclaim-cve-2021-20288.88038/post-389914, I'm opening a new thread.
I was asked to check this:
# qm list prints the PID
qm list
# print all open files of that process, which...
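For reference, here's how I ran it (a sketch; 12345 stands for the PID reported by qm list, and the grep is just to narrow the output down to Ceph-related entries):
# list the open files of the KVM process (12345 = PID from qm list)
lsof -p 12345 | grep -i ceph
# alternatively, read the PID straight from the pidfile (VMID 100 as an example)
lsof -p $(cat /var/run/qemu-server/100.pid)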
Hello,
Yes, that's pretty odd for sure.
What was done:
Upgraded 9 nodes from 6.3-? to 6.4-5 with apt update && apt dist-upgrade
Restarted all MGRs, MDSs and OSDs sequentially (rough sketch below)
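Roughly, the restart part looked like this on each node (a sketch using the systemd targets, one node at a time; 'noout' avoids needless rebalancing while the OSDs bounce):
ceph osd set noout
systemctl restart ceph-mgr.target
systemctl restart ceph-mds.target
systemctl restart ceph-osd.target
ceph osd unset noout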
At this stage, I got a LOT of "client is using insecure global_id reclaim" warnings and one "mons are allowing...
Sorry, it seems I've not been clear enough: I didn't live-migrate the virtual machines. AFAIK, the running KVM processes have not been restarted for the large majority of our KVM machines. I moved a few of them (3 of 120, actually).
That's what surprises me (and could save painful work for others...
Hello,
We just upgraded our cluster to 6.4 (and Ceph 15.2.11) yesterday. I restarted all OSDs, MONs and MGRs. Everything went fine.
I was starting to live-migrate all the VMs when I noticed that I no longer have the "client is using insecure global_id reclaim" warning:
# ceph health detail...