Search results

  1.

    [SOLVED] Incentive to upgrade Ceph to Pacific 16.2.6

    We upgraded several clusters to PVE7 + Ceph Pacific 16.2.5 a couple of weeks back. We received zero performance or stability reports but did observe storage utilisation increasing consistently. After upgrading the Ceph Pacific packages to 16.2.6 on Thursday, long-running snaptrim operations have...
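
    A quick, generic way to check whether snaptrim is still working through a backlog (a plain Ceph CLI sketch, not taken from the thread):

      ceph pg dump pgs_brief 2>/dev/null | grep -c snaptrim   # PGs currently in a snaptrim state
      ceph df                                                 # overall pool and cluster utilisation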
  2.

    tpmstate0: property is not defined in schema and the schema does not allow additional properties

    Hi, we have a PVE7 + Ceph Pacific cluster with an enterprise subscription, where we have switched one of the cluster nodes to the no-subscription repository to see the new vTPM support options. When attempting to add TPM state we receive the following error: tpmstate0: property is not defined in...
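
    For reference, once every node runs packages that know about the new property, adding a TPM state volume from the CLI looks roughly like this (the VMID and storage name are placeholders):

      qm set 100 --tpmstate0 local-lvm:1,version=v2.0   # allocate a TPM 2.0 state volume for VM 100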
  3.

    PVE Enterprise subscription status checks - Firewalling

    We have restricted our PVE nodes so that they can only communicate with the following hosts: [0-3].pool.ntp.org, download.proxmox.com, enterprise.proxmox.com, ftp.debian.org and security.debian.org. This now predictably leads to our nodes not being able to check the subscription status. What additional...
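
    A quick reachability check against the HTTP(S) hosts in the allow-list (sketch; NTP uses UDP port 123 and is not covered here):

      for h in download.proxmox.com enterprise.proxmox.com ftp.debian.org security.debian.org; do
        for p in 80 443; do
          nc -zw3 "$h" "$p" >/dev/null 2>&1 && echo "$h:$p reachable" || echo "$h:$p blocked"
        done
      done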
  4.

    [SOLVED] Ceph - Schedule deep scrubs to prevent service degradation

    Updated to support Python3 in PVE 7 (Debian 11 (bullseye)).
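
    The underlying commands such a script builds on look roughly like this (the PG ID and hours are placeholders; the thread's Python3 script itself is not reproduced here):

      ceph pg deep-scrub 2.1a                       # manually trigger a deep scrub of one PG
      ceph config set osd osd_scrub_begin_hour 22   # optionally confine automatic scrubs
      ceph config set osd osd_scrub_end_hour 6      # to a quiet window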
  5.

    Proxmox installation via PXE: solution

    The instructions are similar to previous PVE releases, with the exception that the initial ramdisk (initramfs) is now compressed with zstd instead of gzip: mount -o loop /home/samba/public/IMAGES/Virtual\ Machine/Proxmox/proxmox-ve_7.0-1.iso /media/cdrom; mkdir /tftpboot/pxe/images/proxmox/7.0...
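
    A sketch of the extraction step (the /boot/linux26 and /boot/initrd.img paths are assumptions based on the installer ISO layout; adjust the ISO path to your own):

      mount -o loop proxmox-ve_7.0-1.iso /media/cdrom
      mkdir -p /tftpboot/pxe/images/proxmox/7.0
      cp /media/cdrom/boot/linux26 /media/cdrom/boot/initrd.img /tftpboot/pxe/images/proxmox/7.0/
      # if the initramfs ever has to be unpacked, decompress with zstd rather than gunzip
      # (run in an empty working directory):
      zstd -dc /tftpboot/pxe/images/proxmox/7.0/initrd.img | cpio -idmv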
  6.

    Kernel 5.11

    The system stays up for about 20 minutes and then appears to lose network connectivity: Jul 3 11:23:00 kvm2 systemd[1]: Starting Proxmox VE replication runner... Jul 3 11:23:01 kvm2 systemd[1]: pvesr.service: Succeeded. Jul 3 11:23:01 kvm2 systemd[1]: Started Proxmox VE replication runner...
  7.

    Kernel 5.11

    Tried latest pve-kernel-5.11.21 and it's working thus far. CephFS does however spew out tons of errors and Ceph reports the MDS as being unavailable. You may simply need to wait, but it appeared to start working once we cleared the blacklisted client/port combinations. Cluster health: [admin@kvm1 ~]# ceph -s...
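
    For reference, clearing stale entries uses the Octopus-era command names below (the address is a placeholder; Pacific renames "blacklist" to "blocklist"):

      ceph osd blacklist ls                        # list blacklisted client addr:port/nonce entries
      ceph osd blacklist rm 10.0.0.1:0/1234567890  # remove one entry once the client is healthy again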
  8.

    Kernel 5.11

    It appears my issue with Ceph and kernel 5.11 relates to max_osd being greater than the number of OSDs. That is most probably why this can't be reproduced in a lab environment. I presume the following patch won't be backported to 5.11; any chance of cherry-picking it? NB: Having max OSD larger...
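
    A quick way to compare the two values and bring max_osd back in line (sketch; the value is a placeholder and should only be lowered when the higher OSD IDs are genuinely unused):

      ceph osd getmaxosd     # e.g. "max_osd = 16 in epoch ..."
      ceph osd ls | wc -l    # number of OSD IDs actually in use
      ceph osd setmaxosd 12  # shrink the OSD map back down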
  9.

    Kernel 5.11

    Ceph Octopus: our office cluster uses older hardware and functions perfectly with kernel 5.11, with the same OvS, KRBD and Ceph configuration. From the node that can't operate as a client when booting 5.11: [admin@kvm6a ~]# pveversion -v...
  10.

    Kernel 5.11

    We tried Lenovo R350 (Intel E5-2630, aka Sandy Bridge EP) servers and couldn't get Ceph to act as a client. The OSDs came up and regained health, but CephFS wouldn't map and RBD images were inaccessible. PVE 6.4 with all updates and kernel 5.11.17-1-pve. Messages in /var/log/syslog: Jun 4...
  11.

    Problem when copying template with 2+ discs

    We have a template which has two discs: When we clone this template the destination disc names are inconsistent. They are however attached in the correct order and everything works as expected; the problem as such is purely cosmetic but has led to confusion in the past: PS: This doesn't...
  12.

    Proxmox - PVEAuditor does not grant access to storage

    Should have clarified, the above works with:

      [admin@kvm1d ~]# grep inventory /etc/pve/user.cfg
      user:inventory@pve:1:0:Inventory:Collector::::
      token:inventory@pve!audit-report:0:0:Connection from pve-test.acme.com:
      acl:1:/:inventory@pve:PVEAuditor:
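
    For reference, entries like these are typically created along the following lines (a sketch reconstructed from the grep output above, not copied from the thread):

      pveum user add inventory@pve --firstname Inventory --lastname Collector
      pveum user token add inventory@pve audit-report --privsep 0 --comment "Connection from pve-test.acme.com"
      pveum acl modify / --users inventory@pve --roles PVEAuditor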
  13.

    Proxmox - PVEAuditor does not grant access to storage

    Thanks Fabian, I would very much like to recommend that PVEAuditor gain the ability to list images in storage. We rewrote our inventory collection / audit report script to run through the ide, scsi and virtio images associated with a VM and to report the sum of all disks as a combined value...
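
    A rough node-side shell equivalent of that idea (VMID 100 is a placeholder; only sizes expressed in gigabytes are counted, and a CD-ROM line carrying a size attribute would be included too):

      qm config 100 \
        | grep -E '^(ide|scsi|virtio)[0-9]+:' \
        | grep -o 'size=[0-9]*G' \
        | tr -dc '0-9\n' \
        | awk '{sum += $1} END {print sum " GiB"}'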
  14.

    Proxmox - PVEAuditor does not grant access to storage

    We actually just want a list of the drive sizes so that we can cross-check other references; is there a way to obtain a list of a QEMU instance's disks? We don't appear to have access when trying your suggestion: Perhaps a bug? Reading the documentation also leads me to believe that...
  15.

    Proxmox - PVEAuditor does not grant access to storage

    I have a relatively simple Python script which receives no results when connecting with an account that has a PVEAuditor ACL applied to '/'. I presume the issue relates to storage not being accessible when logging in to the WebUI as that account: [admin@kvm1e ~]# grep inventory@pve...
  16.

    Ceph Nautilus and Octopus Security Update for "insecure global_id reclaim" CVE-2021-20288

    We run KRBD by default, so my understanding is that we do not have to do anything besides updating and then restarting Ceph on each node, after which we can then cut off non-compliant clients. Running KRBD?

      cat /etc/pve/storage.cfg
      rbd: rbd_ssd
          content rootdir,images
          krbd 1
      ...
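
    The sequence described boils down to something like the following once every mon, mgr and OSD has been restarted on the patched packages (sketch):

      ceph health detail | grep -i global_id                            # any AUTH_INSECURE_GLOBAL_ID_RECLAIM* warnings left?
      ceph config set mon auth_allow_insecure_global_id_reclaim false   # cut off non-compliant clients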
  17.

    [SOLVED] windows 2016 Optimize-Volume ReTrim

    Perhaps try with the following VirtIO drivers: https://fedorapeople.org/groups/virt/virtio-win/direct-downloads/archive-virtio/virtio-win-0.1.190-1/virtio-win-0.1.190.iso We experienced some inconsistent behaviour on Windows Server 2019 with the latest stable (185).
  18.

    Ceph Octopus upgrade notes - Think twice before enabling auto scale

    I would recommend that Proxmox consider advising users to reduce 'mon_osdmap_full_prune_min' from 10,000 to 1,000 to reduce space utilisation:

      ceph config set mon mon_osdmap_full_prune_min 1000
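
    To confirm the override is active afterwards (sketch):

      ceph config dump | grep mon_osdmap_full_prune_min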
  19.

    UPS - Shutdown entire cluster

    The cluster is using Ceph to provide storage to VMs. I'm essentially trying to find a way to stop all guests cluster-wide, and only then shut down the nodes with fencing disabled. I presume Proxmox possibly needs a sort of maintenance mode where the high availability service needs to temporarily be...
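
    The manual sequence looks something like this (node names are placeholders; it assumes HA resources have already been disabled or set to stopped so fencing cannot kick in, and it is not the maintenance mode being asked about):

      ceph osd set noout                    # keep Ceph from rebalancing while nodes are down
      for node in kvm1 kvm2 kvm3; do
        pvesh create /nodes/$node/stopall   # stop all guests on that node
      done
      for node in kvm1 kvm2 kvm3; do
        ssh root@$node 'shutdown -h now'
      done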