Search results

  1. Ceph Dashboard (RADOS GW management problem)

    Hi, We have been using the Ceph MGR Dashboard to successfully manage S3 buckets and user accounts since Octopus (https://forum.proxmox.com/threads/pve-6-3-with-ha-ceph-iscsi.81991/). This continued to work with us migrating to Ceph Pacific and exchanging civetweb for beast, although we...
  2. Loss of connectivity - OvS (apt-get -y dist-upgrade)

    PVE 7.3 results in loss of connectivity when OvS (Open vSwitch) is upgraded. OvS setup where 2 x 10G interfaces are bonded, vlan 1 is untagged for the node itself and vlan 100 is for Ceph and cluster communication: [root@kvm1a ~]# cat /etc/network/interfaces auto lo iface lo inet loopback auto... (an OvS config sketch follows this list)
  3. Ceph Pacific 16.2.7

    Hi, We are affected by a bug whose fix has been merged and backported to Ceph Pacific, to be included in the next v16.2.7 release, which was published for testing as RC1 on the 3rd of December. There is additionally a data corruption bug which has led to the recommendation that people not upgrade to Pacific until...
  4. Ceph Pacific (16.2.6) - Some OSDs fail to activate at boot

    We have uncovered a problem with Ceph pacific OSDs not always starting automatically after a node is restarted. This is relatively prevalent with nodes exhibiting a single OSD with this problem approximately 70% of the time. We had one occurrence where a node had two OSDs in this state, whilst...
  5. PVE 7.1 - U2F broken, confused about WebAuthn

    U2F multi-factor authentication is unfortunately broken after upgrading to PVE 7.1: We had previously set the U2F AppID to point at a JSON document on a redundant web hosting service 'https://u2f.company.co.za/kvm1-appid', where this file contained a list of the possible facets that it would...
  6. EFI and TPM removed from VM config when stopped, not when shutdown

    We have had good success with the Secure Boot capable EFI disks and TPM v2.0 emulation. Tested on latest no-subscription with Ceph Pacific 16.2.6. Live migrate works with Windows 11 with full disk encryption (BitLocker) and everything works just perfectly as long as one selects the...
  7. [SOLVED] Incentive to upgrade Ceph to Pacific 16.2.6

    We upgraded several clusters to PVE7 + Ceph Pacific 16.2.5 a couple of weeks back. We received zero performance or stability reports but did observe storage utilisation increasing consistently. After upgrading the Ceph Pacific packages to 16.2.6 on Thursday, long-running snaptrim operations have...
  8. tpmstate0: property is not defined in schema and the schema does not allow additional properties

    Hi, We have a PVE7 + Ceph Pacific cluster with enterprise subscription where we have set one of the cluster nodes to the no-subscription repository to see the new options for vTPM support. When attempting to add TPM state we receive the following error: tpmstate0: property is not defined in...
  9. PVE Enterprise subscription status checks - Firewalling

    We have restricted PVE nodes to only being able to communicate with the following hosts: [0-3].pool.ntp.org download.proxmox.com enterprise.proxmox.com ftp.debian.org security.debian.org This now predictably leads to our nodes not being able to check the subscription status. What additional...
  10. Problem when copying template with 2+ disks

    We have a template which has two disks: When we clone this template the destination disk names are inconsistent. They are however attached in the correct order and everything works as expected; the problem as such is purely cosmetic but has led to confusion in the past: PS: This doesn't...
  11. Proxmox - PVEAuditor does not grant access to storage

    I have a relatively simple Python script which receives no results when connecting with an account that has an ACL applying the PVEAuditor role to '/'. I presume the issue relates to storage not being accessible when logging in to the WebUI as that account (a permission-check sketch follows this list): [admin@kvm1e ~]# grep inventory@pve...
  12. UPS - Shutdown entire cluster

    We have nut-server successfully monitoring a UPS, with nut-client running on all nodes. When power goes away it correctly and simultaneously initiates 'init 0' on all nodes, but this then causes problems (a shutdown-ordering sketch follows this list). Nodes that only provide Ceph storage shut down before VMs are given a chance (yes, qemu...
  13. PVE 6.3 with HA Ceph iSCSI

    Hi, To start, I would not recommend that people use this to somehow cook together PVE using a remote cluster via iSCSI as storage for VMs. In our case we have a secondary cluster which used to host a multi-tenant internet based backup service and comprised 6 servers with 310 TiB available...
  14. Show SSD wearout - SAS connected SSDs

    Hi, Please may I ask that disk health displays show the media wearout indicator for SAS-connected SSDs? I presume the 'Disks' information is parsed via smartctl and subsequently displays N/A due to SAS-connected SSDs not showing raw value data (a manual smartctl check is sketched after this list). Herewith a snippet of the SSDs which connect via...
  15. Ceph Octopus - Monitor sometimes inconsistent

    We have an inconsistent experience with one of the monitors, which sometimes appears to misbehave. Ceph health shows a warning with slow operations: [admin@kvm6b ~]# ceph -s cluster: id: 2a554db9-5d56-4d6a-a1e2-e4f98ef1052f health: HEALTH_WARN 17 slow ops, oldest...
  16. Ceph Octopus upgrade notes - Think twice before enabling auto scale

    Hi, We've been working through upgrading individual nodes to PVE 6.3 and have extended this to now include hyper-converged Ceph cluster nodes. The upgrades themselves went very smoothly, but the recommendation around setting all storage pools to autoscale can cause some headaches (the relevant autoscaler commands are sketched after this list). The last paragraph...
  17. Ceph RBD space reclamation

    Is the following a known bug in Ceph Nautilus v14.2.9? Running a kernel RBD guest with partitions aligned to 1 MiB boundaries. If I create a 10 GiB file in a VM, delete it and then issue fstrim, I get inconsistent feedback on the image space allocation (the checks are sketched after this list): After having run 'dd if=/dev/urandom...
  18. PVE 6.2 - CephFS - Problems mounting /var/lib/vz via fstab

    We really appreciate the flexibility Ceph provides and typically set up our clusters to use sparse RBD images, with templates residing in a Ceph file system concurrently mounted on all our nodes. Since PVE 6.2 we are unable to mount CephFS via fstab as it says 'nonempty' is an unknown parameter... (an fstab sketch without that option follows this list)
  19. [SOLVED] PVE 6.2 - Unable to start nested virtualisation guest

    We have a nested virtualisation PVE guest that has stopped working since upgrading to PVE 6.2: [admin@kvm1d ~]# cat /sys/module/kvm_intel/parameters/nested Y I temporarily remove the 'args' line from the VM configuration file, start the guest to record the 'cpu' parameters passed to the VM, shut... (a qm showcmd sketch follows this list)
  20. [SOLVED] Problem with new OvS commands in libpve-common-perl 6.1-1

    Hi, The updated libpve-common-perl package contains a restructured /usr/share/perl5/PVE/Network.pm script which has problems when a VM's network interface is tagged and trunked. Error message: () ovs-vsctl: "trunks" is not a valid integer or range can't add ovs port 'tap101i0' - command...
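
For result 2, a minimal /etc/network/interfaces sketch of the layout described there (2 x 10G interfaces bonded under OvS, the node itself on the untagged VLAN, Ceph and cluster traffic tagged on VLAN 100). Interface names, addresses and bond options are illustrative assumptions, not taken from the thread:

    auto bond0
    iface bond0 inet manual
        ovs_type OVSBond
        ovs_bridge vmbr0
        ovs_bonds enp65s0f0 enp65s0f1
        ovs_options bond_mode=balance-tcp lacp=active

    # untagged traffic terminates on the bridge itself (node management IP)
    auto vmbr0
    iface vmbr0 inet static
        ovs_type OVSBridge
        ovs_ports bond0 vlan100
        address 192.0.2.11/24
        gateway 192.0.2.1

    # Ceph and cluster communication on tagged VLAN 100
    auto vlan100
    iface vlan100 inet static
        ovs_type OVSIntPort
        ovs_bridge vmbr0
        ovs_options tag=100
        address 10.0.100.11/24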
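
For result 11, the effective permissions can be reviewed and the role re-applied from the shell. The user 'inventory@pve' comes from the snippet; the path and propagate flag are assumptions about what the script expects to see:

    # list current ACL entries to confirm what inventory@pve was granted
    pveum acl list

    # re-apply PVEAuditor on '/' with propagation so /storage and
    # /nodes/<node>/storage are covered as well
    pveum acl modify / --users inventory@pve --roles PVEAuditor --propagate 1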
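
For result 12, one possible arrangement (not the thread's confirmed solution) is to let upsmon call a small wrapper that stops the guests on a node before powering it off, so that storage-only nodes are not the first to disappear. The script path is an assumption and the hostname must match the PVE node name:

    # /etc/nut/upsmon.conf
    SHUTDOWNCMD "/usr/local/sbin/ups-shutdown"

    # /usr/local/sbin/ups-shutdown (sketch)
    #!/bin/sh
    # stop all guests on this node first, then power off the node
    pvesh create /nodes/$(hostname)/stopall
    /sbin/shutdown -h now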
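
For result 14, SAS/SCSI SSDs do not expose the ATA-style raw wearout attribute, but smartctl usually reports an equivalent percentage which can be checked by hand (the device name is an example):

    # SCSI/SAS SSDs report wear in the device log pages rather than ATA attributes
    smartctl -a /dev/sdb | grep -i 'Percentage used endurance indicator'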
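
For result 16, the PG autoscaler's decisions can be reviewed per pool and, where they would cause large data movement, switched to warn-only (the pool name is a placeholder):

    # show per-pool PG counts, targets and the autoscaler's intent
    ceph osd pool autoscale-status

    # keep the autoscaler advisory for a pool instead of letting it change pg_num
    ceph osd pool set <pool> pg_autoscale_mode warn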
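
For result 17, the allocation checks described there roughly correspond to the commands below; pool and image names are examples, and the VM disk needs the discard option enabled for fstrim to reach the RBD layer:

    # inside the guest: hand freed blocks back to the storage layer
    fstrim -v /

    # on a Ceph node: compare provisioned size with actual usage of the image
    rbd du rbd_hdd/vm-101-disk-0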
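
For result 18, 'nonempty' is a FUSE client option; a kernel-client fstab entry along these lines avoids it altogether. Monitor addresses, the secret file location and the client name are assumptions:

    # /etc/fstab - kernel CephFS mount without the 'nonempty' option
    10.0.100.1,10.0.100.2,10.0.100.3:/ /var/lib/vz ceph name=admin,secretfile=/etc/ceph/admin.secret,noatime,_netdev 0 0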
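
For result 19, the full QEMU command line that PVE generates for a guest, including the cpu flags mentioned in the snippet, can be printed without starting it (the VMID is an example):

    # confirm nested virtualisation is enabled on the host
    cat /sys/module/kvm_intel/parameters/nested

    # print the qemu command PVE would run for VM 101
    qm showcmd 101 --pretty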
