Recent content by guillaume34500

  1. [SOLVED] snapshot stopping VM

    Hello there. I can confirm I have the exact same problem. I recently updated to pve-manager/5.4-13/aee6f0ec (running kernel: 4.15.18-16-pve) and now I am hitting the CentOS 7 backup bug.
  2. QEMU CVE

    Hello. Any information about: Package: qemu; CVE ID: CVE-2018-11806, CVE-2018-12617, CVE-2018-16872, CVE-2018-17958, CVE-2018-18849, CVE-2018-18954, CVE-2018-19364, CVE-2018-19489, CVE-2019-3812, CVE-2019-6778, CVE-2019-9824, CVE-2019-12155. Multiple...
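One way to check locally whether a given package's Debian changelog (e.g. from `zcat /usr/share/doc/pve-qemu-kvm/changelog.Debian.gz`) already mentions the CVEs in an advisory is to grep for the IDs. A minimal sketch — the sample changelog line below is made up for illustration, not a real pve-qemu-kvm entry:

```python
import re

# CVE IDs from the advisory (subset shown for brevity)
CVES = ["CVE-2018-11806", "CVE-2018-12617", "CVE-2019-12155"]

def cves_mentioned(changelog_text: str, cve_ids):
    """Return the subset of cve_ids that appear verbatim in the changelog text."""
    return [c for c in cve_ids if re.search(re.escape(c), changelog_text)]

# Hypothetical changelog excerpt, for demonstration only
sample = "pve-qemu-kvm (3.0.1-4) ... fixes CVE-2018-11806 and CVE-2019-12155"
print(cves_mentioned(sample, CVES))  # -> ['CVE-2018-11806', 'CVE-2019-12155']
```

In practice you would feed the real decompressed changelog text into `cves_mentioned` instead of the sample string.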
  3. ZombieLand / RIDL / Fallout (CVE-2018-12126, CVE-2018-12130, CVE-2018-12127, CVE-2019-11091)

    Dear, CVE-2018-12126 aka 'Fallout, microarchitectural store buffer data sampling (MSBDS)' * Mitigated according to the /sys interface: NO (Vulnerable: Clear CPU buffers attempted, no microcode; SMT vulnerable) * CPU supports the MD_CLEAR functionality: NO * Kernel supports using MD_CLEAR...
  4. ZombieLand / RIDL / Fallout (CVE-2018-12126, CVE-2018-12130, CVE-2018-12127, CVE-2019-11091)

    Found: https://www.intel.com/content/dam/www/public/us/en/documents/corporate-information/SA00233-microcode-update-guidance_05132019.pdf No patch for Westmere EP-...
  5. ZombieLand / RIDL / Fallout (CVE-2018-12126, CVE-2018-12130, CVE-2018-12127, CVE-2019-11091)

    Dear, I have a problem. pveversion -v proxmox-ve: 5.4-1 (running kernel: 4.15.18-14-pve) pve-manager: 5.4-5 (running version: 5.4-5/c6fdb264) pve-kernel-4.15: 5.4-2 pve-kernel-4.13: 5.2-2 pve-kernel-4.15.18-14-pve: 4.15.18-39 pve-kernel-4.15.18-11-pve: 4.15.18-34 pve-kernel-4.15.18-2-pve...
  6. Ceph conf

    Hi root@CEPH-01:~/cephdeploy# ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok config show | grep rbd_cache "rbd_cache": "true", "rbd_cache_writethrough_until_flush": "true", "rbd_cache_size": "33554432", "rbd_cache_max_dirty": "25165824", "rbd_cache_target_dirty"...
  7. Ceph conf

    Hi, My ceph.conf is [global] auth client required = cephx auth cluster required = cephx auth service required = cephx cluster network = 10.10.10.0/24 filestore xattr use omap = true fsid = 26f11204-d540-455e-869c-0d43ab729d6b keyring =...
  8. Ceph conf

    Ok, thanks. But I have really bad write performance: 17 MB/s in QEMU, while LXC performs well. Any idea?
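A common reason a QEMU guest sees much lower Ceph write throughput than an LXC container on the same pool is the virtual disk's cache mode, since RBD writeback caching only engages for the VM when the disk is set to writeback. A hedged sketch — the VM ID `100`, disk `scsi0`, storage name `ceph-pool`, and volume name are placeholders to substitute with your own:

```shell
# Sketch: switch a Proxmox VM's Ceph-backed disk to writeback caching.
# "100", "scsi0", "ceph-pool" and the volume name are placeholders.
qm set 100 --scsi0 ceph-pool:vm-100-disk-0,cache=writeback

# Then re-test sequential writes inside the guest, e.g.:
# dd if=/dev/zero of=testfile bs=1M count=1024 oflag=direct
```

If throughput is still low afterwards, comparing `rados bench` on the pool against the in-guest numbers helps separate cluster-side from virtualization-side bottlenecks.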
  9. Ceph conf

    Hello, I'm creating a Ceph cluster and would like to know the recommended configuration on Proxmox (size, min_size, pg_num, crush). I want a single replication (I want to consume the least amount of space while still having redundancy, like RAID 5?). I have, for now, 3 servers, each with 12 OSDs of 4TB SAS...
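For sizing pg_num, the usual rule of thumb from the Ceph documentation is roughly (number of OSDs × 100) / replica count, rounded up to the next power of two. A small sketch of that arithmetic for the setup above (3 servers × 12 OSDs = 36 OSDs); note that `size=1` would mean no redundancy at all, so `size=2, min_size=1` is the closest space-saving analogue to what is asked, and the RAID 5-like option in Ceph is erasure coding rather than replication:

```python
def suggested_pg_num(num_osds: int, replicas: int, target_pgs_per_osd: int = 100) -> int:
    """Rule-of-thumb placement-group count: (OSDs * target) / replicas,
    rounded up to the next power of two."""
    raw = num_osds * target_pgs_per_osd / replicas
    pg = 1
    while pg < raw:
        pg *= 2
    return pg

# 36 OSDs with size=2 replication
print(suggested_pg_num(36, 2))  # -> 2048
```

This is only a starting point; the actual pg_num should also account for the number of pools sharing the OSDs.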