Recent content by Altinea

  1. Combining custom cloud init with auto-generated

    Great news! Thanks for pointing this out. We will finally be able to enroll new VMs directly into automation systems at boot
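    The enrollment-at-boot idea above can be sketched as a custom cloud-init user-data snippet that Proxmox merges with its auto-generated config via the `--cicustom` option of `qm set`. The endpoint and token below are hypothetical placeholders, not values from the thread:

    ```yaml
    #cloud-config
    # Hypothetical snippet, e.g. stored as local:snippets/enroll.yaml and attached with:
    #   qm set <vmid> --cicustom "user=local:snippets/enroll.yaml"
    # The enrollment URL and token are placeholders for your automation system.
    runcmd:
      - curl -fsS https://automation.example.com/enroll -d "token=CHANGE_ME"
    ```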
  2. [SOLVED] Can I mix Proxmox 6 and Proxmox 7 in the same Cluster?

    For the record: we encountered another limitation today. If you're using storage replication between two nodes, a sync from a PVE7 to a PVE6 node will fail with an 'Unknown option: snapshot' error. The '-snapshot' parameter was added to pvesm in PVE7 and is used by PVE7 during sync. Not really a big deal...
  3. [SOLVED] Can I mix Proxmox 6 and Proxmox 7 in the same Cluster?

    We observed the same behavior here: VMs can be live-migrated from PVE6 to PVE7 and back AS LONG AS THEY HAVE NOT BEEN STARTED ON A PVE7 node! You can't, for example, start a VM on a PVE7 node and live-migrate it to PVE6; AFAIK that's the only limitation. Note: the VM won't crash, it will...
  4. Combining custom cloud init with auto-generated

    That's great news! Does someone have an approximate idea of the delay between a patch being submitted to the pve-devel list and general availability? (There's perhaps a large variation depending on the complexity of and interest in the patch.) Thanks for submitting this patch @mira!
  5. Ceph 15.2.11 upgrade : insecure client warning disappear and reappearing

    And as I'm writing, no more AUTH_INSECURE_GLOBAL_ID_RECLAIM warning... # ceph health detail HEALTH_WARN mons are allowing insecure global_id reclaim [WRN] AUTH_INSECURE_GLOBAL_ID_RECLAIM_ALLOWED: mons are allowing insecure global_id reclaim mon.vm10 has...
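    The health check named above can be spotted programmatically by parsing `ceph health detail -f json`, which reports active warnings under a `checks` object. A minimal sketch, assuming that output shape; the sample JSON below is illustrative, not captured from the cluster in the post:

    ```python
    import json

    # Illustrative sample of `ceph health detail -f json` output (not real cluster data).
    sample = json.loads("""
    {
      "status": "HEALTH_WARN",
      "checks": {
        "AUTH_INSECURE_GLOBAL_ID_RECLAIM_ALLOWED": {
          "severity": "HEALTH_WARN",
          "summary": {"message": "mons are allowing insecure global_id reclaim"}
        }
      }
    }
    """)

    # Collect any active checks related to the insecure global_id reclaim CVE.
    insecure_checks = [name for name in sample["checks"]
                       if name.startswith("AUTH_INSECURE_GLOBAL_ID_RECLAIM")]
    print(insecure_checks)
    ```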
  6. Ceph 15.2.11 upgrade : insecure client warning disappear and reappearing

    I played a bit with the Ceph tools and found the command ceph tell mon.\* sessions. I tried to get some info from the MONs and I got 2 clients with "global_id_status": "reclaim_insecure". All the others are in status "reclaim_ok", "new_ok" or "none" (the other MONs). Here's the full output of a...
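    Filtering the sessions dump for offending clients can be sketched as below. The session structure is assumed from the field quoted in the post ("global_id_status"); the names in the sample list are made up for illustration:

    ```python
    import json

    # Illustrative sample of `ceph tell mon.\* sessions` output (structure assumed,
    # entries invented for the example).
    sessions = json.loads("""
    [
      {"name": "client.admin",   "global_id_status": "reclaim_insecure"},
      {"name": "client.libvirt", "global_id_status": "reclaim_ok"},
      {"name": "mon.vm11",       "global_id_status": "none"}
    ]
    """)

    # Clients still using insecure global_id reclaim need an upgrade or restart.
    insecure = [s["name"] for s in sessions
                if s.get("global_id_status") == "reclaim_insecure"]
    print(insecure)
    ```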
  7. Ceph 15.2.11 upgrade : insecure client warning disappear and reappearing

    Hello, So, first, yes, warnings are back, but only a few at a time: Right after upgrading, I got a dozen of them. I didn't count, but it was probably one per VM plus one or two per hypervisor. 24h later, I got absolutely none. ~48h after upgrade, I got a few (4 or less). That's already the case at time...
  8. Ceph Nautilus and Octopus Security Update for "insecure global_id reclaim" CVE-2021-20288

    OK, just opened a new specific thread here : https://forum.proxmox.com/threads/ceph-15-2-11-upgrade-insecure-client-warning-disappear-and-reappearing.89059/
  9. Ceph 15.2.11 upgrade : insecure client warning disappear and reappearing

    Hello, Following up on https://forum.proxmox.com/threads/ceph-nautilus-and-octopus-security-update-for-insecure-global_id-reclaim-cve-2021-20288.88038/post-389914, I'm opening a new thread. I was asked to check this: # qm list prints the PID qm list # print all open files of that process, which...
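    The check described above (take a KVM process PID from `qm list`, then inspect its open files) can be sketched via /proc on Linux. This is a generic illustration, not the exact commands from the thread; it is demonstrated on the current process's own PID so it runs anywhere:

    ```python
    import os

    def open_files(pid: int) -> list[str]:
        """Return the paths behind a process's open file descriptors via /proc."""
        fd_dir = f"/proc/{pid}/fd"
        files = []
        for fd in os.listdir(fd_dir):
            try:
                files.append(os.readlink(os.path.join(fd_dir, fd)))
            except OSError:
                pass  # fd was closed between listdir() and readlink()
        return files

    # On a Proxmox node you would pass the KVM PID printed by `qm list`;
    # here we inspect our own process as a runnable stand-in.
    print(open_files(os.getpid())[:5])
    ```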
  10. Ceph Nautilus and Octopus Security Update for "insecure global_id reclaim" CVE-2021-20288

    Hello, Yes, that's pretty odd for sure. What has been done: upgraded 9 nodes from 6.3-? to 6.4-5 with apt update && apt dist-upgrade, then restarted all MGRs, MDSs and OSDs sequentially. At this stage, I got a LOT of "client is using insecure global_id reclaim" warnings and one "mons are allowing...
  11. Ceph Nautilus and Octopus Security Update for "insecure global_id reclaim" CVE-2021-20288

    Am I the only one to see this unexpected (good) behavior after upgrading a Ceph cluster to 15.2.11?
  12. Ceph Nautilus and Octopus Security Update for "insecure global_id reclaim" CVE-2021-20288

    Nope: rbd: ceph-ssd-fast content images krbd 0 pool ceph-ssd-fast
  13. Ceph Nautilus and Octopus Security Update for "insecure global_id reclaim" CVE-2021-20288

    Sorry, it seems I've not been clear enough: I didn't live-migrate the virtual machines. AFAIK, the running KVM processes have not been restarted for the large majority of our KVM machines. I moved a few of them (3 of 120, actually). That's what is surprising me (and could save painful work for others...
  14. Ceph Nautilus and Octopus Security Update for "insecure global_id reclaim" CVE-2021-20288

    Hello, We just upgraded our cluster to 6.4 (and Ceph 15.2.11) yesterday. I restarted all OSDs, MONs and MGRs. Everything went fine. I was starting to live-migrate all VMs when I saw that I no longer have the "client is using insecure global_id reclaim" warning: # ceph health detail...
  15. [SOLVED] Is pvelocalhost still needed?

    Simple, brief, clear ;-) Thanks!
