Search results

  1.

    Combining custom cloud init with auto-generated

    Great news! Thanks for pointing this out. We will finally be able to enroll new VMs directly into automation systems at boot.
  2.

    [SOLVED] Can I mix Proxmox 6 and Proxmox 7 in the same Cluster?

    For the record: we encountered another limitation today. If you're using 'storage replication' between 2 nodes, a sync from a PVE7 to a PVE6 node will fail with 'Unknown option: snapshot'. The '-snapshot' parameter was added to pvesm in PVE7 and is used for syncing by PVE7. Not really a big deal...
  3.

    [SOLVED] Can I mix Proxmox 6 and Proxmox 7 in the same Cluster?

    We observed the same behavior here: VMs can be live-migrated from PVE6 to PVE7 and back AS LONG AS THEY'VE NOT BEEN STARTED ON A PVE7 node! You can't, for example, start a VM on a PVE7 node and live-migrate it to PVE6; AFAIK that's the only limitation. Note: the VM won't crash, it will...
  4.

    Combining custom cloud init with auto-generated

    That's great news! Does someone have an approximate idea of the delay between a patch being submitted to the pve-devel list and general availability? (There's perhaps a large variation depending on the complexity of, and interest in, the patch.) Thanks for submitting this patch @mira!
  5.

    Ceph 15.2.11 upgrade : insecure client warning disappear and reappearing

    And as I'm writing this, no more AUTH_INSECURE_GLOBAL_ID_RECLAIM warning ... # ceph health detail HEALTH_WARN mons are allowing insecure global_id reclaim [WRN] AUTH_INSECURE_GLOBAL_ID_RECLAIM_ALLOWED: mons are allowing insecure global_id reclaim mon.vm10 has...
  6.

    Ceph 15.2.11 upgrade : insecure client warning disappear and reappearing

    I played a bit with the Ceph tools and found the command ceph tell mon.\* sessions. I tried to get some info from the MONs and found 2 clients with "global_id_status": "reclaim_insecure". All the others are in status "reclaim_ok", "new_ok" or "none" (the other MONs). Here's the full output of a...
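The per-MON session dump mentioned above can be filtered for the problematic clients. A minimal sketch, using a trimmed-down, hypothetical sample of what one MON returns (the real `ceph tell mon.\* sessions` JSON has many more fields per session):

```shell
# Abbreviated, made-up sample of one MON's `sessions` output;
# only the field we care about here is kept.
sessions='[
  {"name": "client.admin", "global_id_status": "reclaim_ok"},
  {"name": "client.34512", "global_id_status": "reclaim_insecure"},
  {"name": "client.34518", "global_id_status": "reclaim_insecure"},
  {"name": "mon.vm11", "global_id_status": "none"}
]'

# Count the sessions still using insecure global_id reclaim
echo "$sessions" | grep -c '"global_id_status": "reclaim_insecure"'
# prints 2
```

On a real cluster you would pipe `ceph tell mon.\* sessions` into the same `grep` to spot which clients still need a restart.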
  7.

    Ceph 15.2.11 upgrade : insecure client warning disappear and reappearing

    Hello, So, first, yes, the warnings are back, but only a few at a time: right after upgrading, I got a dozen of them. I didn't count, but it was probably one per VM + one or two per hypervisor. 24h later, I got absolutely none. ~48h after the upgrade, I got a few (4 or less). That's already the case at time...
  8.

    Ceph Nautilus and Octopus Security Update for "insecure global_id reclaim" CVE-2021-20288

    OK, just opened a new specific thread here: https://forum.proxmox.com/threads/ceph-15-2-11-upgrade-insecure-client-warning-disappear-and-reappearing.89059/
  9.

    Ceph 15.2.11 upgrade : insecure client warning disappear and reappearing

    Hello, Following up on https://forum.proxmox.com/threads/ceph-nautilus-and-octopus-security-update-for-insecure-global_id-reclaim-cve-2021-20288.88038/post-389914, I'm opening a new thread. I was asked to check this: # qm list prints the PID qm list # print all open files of that process, which...
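The check being referred to — find the VM's PID, then look at what the running KVM process still holds open — can be sketched like this. The PID handling is an assumption: on a PVE node you would take the PID from `qm list`; the current shell is used below only so the pipeline runs anywhere.

```shell
# On a PVE node, `qm list` prints one line per VM with its PID:
#   qm list
#
# Given that PID, list the shared libraries the running process has
# mapped; an entry suffixed '(deleted)' means the file was replaced
# on disk (e.g. by the Ceph upgrade) while the process still uses
# the old copy.
pid=$$   # stand-in for a real KVM PID taken from `qm list`
grep '\.so' "/proc/$pid/maps" | awk '{print $NF}' | sort -u
```

A VM whose QEMU process still maps a deleted librbd would be one of the clients the MONs flag, which is why restarting (or live-migrating) it clears the warning.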
  10.

    Ceph Nautilus and Octopus Security Update for "insecure global_id reclaim" CVE-2021-20288

    Hello, Yes, that's pretty odd for sure. What has been done: upgrade 9 nodes from 6.3-? to 6.4-5 with apt update && apt dist-upgrade, then restart all MGRs, MDSs and OSDs sequentially. At this stage, I got a LOT of "client is using insecure global_id reclaim" warnings and one "mons are allowing...
  11.

    Ceph Nautilus and Octopus Security Update for "insecure global_id reclaim" CVE-2021-20288

    Am I the only one to see this unexpected (good) behavior after upgrading a Ceph cluster to 15.2.11?
  12.

    Ceph Nautilus and Octopus Security Update for "insecure global_id reclaim" CVE-2021-20288

    Nope: rbd: ceph-ssd-fast content images krbd 0 pool ceph-ssd-fast
  13.

    Ceph Nautilus and Octopus Security Update for "insecure global_id reclaim" CVE-2021-20288

    Sorry, it seems I've not been clear enough: I didn't live-migrate virtual machines. AFAIK, the running KVM processes have not been restarted for a large majority of our KVM machines. I moved a few of them (3 of 120, actually). That's what is surprising me (and could save painful work for others...
  14.

    Ceph Nautilus and Octopus Security Update for "insecure global_id reclaim" CVE-2021-20288

    Hello, We just upgraded our cluster to 6.4 (and Ceph 15.2.11) yesterday. I restarted all OSDs, MONs and MGRs. Everything went fine. I was starting to live-migrate all the VMs when I saw that I no longer get the "client is using insecure global_id reclaim" warning: # ceph health detail...
  15.

    [SOLVED] Is pvelocalhost still needed ?

    Simple, brief, clear ;-) Thanks!
  16.

    [SOLVED] Is pvelocalhost still needed ?

    Hello, I'm not sure why, but in the past an entry containing pvelocalhost in the /etc/hosts file was mandatory to get a node working properly. It's not specified anymore here: https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_Buster So, do you think this entry is still needed? (PVE6...
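For reference, the entry being asked about was the `pvelocalhost` alias on the node's own address in `/etc/hosts`. A sketch of what such a line looked like (IP, FQDN and hostname are placeholders):

```
192.168.1.10  pve1.example.com  pve1  pvelocalhost
```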
  17.

    [SOLVED] Is verify task needed with ZFS-backed datastore ?

    Shame on me, you're absolutely right. I only tested from the VM backup panel and didn't know about this limitation. I don't think it's even needed to start the VM to test the restoration, but that's better, yeah. This thread can now be closed, thanks again for sharing your thoughts.
  18.

    [SOLVED] Is verify task needed with ZFS-backed datastore ?

    Thanks for the advice. That's another subject, but restorations can't be done to another VMID, so testing them is a bit painful (aka complicated). Perhaps in a future version? Best regards
  19.

    [SOLVED] Is verify task needed with ZFS-backed datastore ?

    Hello, On PBS, when using a ZFS datastore, should we really enable a verification job? AFAIK, when there's some kind of redundancy (multiple copies, RAIDZ or mirroring), checksums are not disabled (why would you do that?) AND ZFS scrubbing is done on a regular basis (once a week, once a month?), ZFS is...
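For context, the "regular basis" part is easy to arrange; a sketch of a cron entry for a weekly scrub (the pool name `rpool` and the file path are assumptions — on Debian, the zfsutils-linux package already ships a similar periodic scrub job):

```
# /etc/cron.d/zfs-scrub -- hypothetical weekly scrub, Sundays at 03:00
0 3 * * 0  root  /usr/sbin/zpool scrub rpool
```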
  20.

    Combining custom cloud init with auto-generated

    Hello, Here's a small patch that adds support for a vendor-data config file, which is currently not used by Proxmox in the cloud-init generated defaults. So you can keep the generated network-data, user-data and meta-data and add your own config. Note: this patch applies against qemu-server version...
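To illustrate what the patch enables: a vendor-data snippet like the following could be attached while keeping all the auto-generated parts. The file path (`/var/lib/vz/snippets/vendor.yaml`), storage name and VMID below are assumptions, not part of the patch:

```yaml
#cloud-config
# Hypothetical vendor-data snippet, stored on a snippets-enabled storage
packages:
  - qemu-guest-agent
runcmd:
  - systemctl enable --now qemu-guest-agent
```

It would then be referenced with something like `qm set 100 --cicustom "vendor=local:snippets/vendor.yaml"`, leaving the generated user-data, network-data and meta-data untouched.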