Hi,
We have been using the Ceph MGR Dashboard to successfully manage S3 buckets and user accounts since Octopus (https://forum.proxmox.com/threads/pve-6-3-with-ha-ceph-iscsi.81991/). This continued to work as we migrated to Ceph Pacific and exchanged civetweb for beast, although we...
PVE 7.3 results in loss of connectivity when OvS (Open vSwitch) is upgraded.
OvS setup where 2 x 10G interfaces are bonded; VLAN 1 is untagged for the node itself and VLAN 100 carries Ceph and cluster communication:
[root@kvm1a ~]# cat /etc/network/interfaces
auto lo
iface lo inet loopback
auto...
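The truncated configuration above might look roughly like the following sketch (interface names, addresses and the exact VLAN 1 tagging are assumptions based on the description, not taken from the original post):

```
# /etc/network/interfaces sketch; names, addresses and tagging are assumptions
auto lo
iface lo inet loopback

allow-vmbr0 bond0
iface bond0 inet manual
    ovs_bridge vmbr0
    ovs_type OVSBond
    ovs_bonds enp1s0f0 enp1s0f1
    ovs_options bond_mode=balance-tcp lacp=active

auto vmbr0
iface vmbr0 inet manual
    ovs_type OVSBridge
    ovs_ports bond0 vlan1 vlan100

allow-vmbr0 vlan1
iface vlan1 inet static
    ovs_bridge vmbr0
    ovs_type OVSIntPort
    address 192.0.2.11/24
    gateway 192.0.2.1

allow-vmbr0 vlan100
iface vlan100 inet static
    ovs_bridge vmbr0
    ovs_type OVSIntPort
    ovs_options tag=100
    address 10.0.100.11/24
```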
Hi,
We are affected by a bug whose fix has been merged and backported to Ceph Pacific, to be included in the upcoming v16.2.7 release, which was published for testing as RC1 on the 3rd of December. There is additionally a data corruption bug which leads to the recommendation that people not upgrade to Pacific until...
We have uncovered a problem with Ceph Pacific OSDs not always starting automatically after a node is restarted. This is relatively prevalent: a node exhibits a single OSD with this problem approximately 70% of the time. We had one occurrence where a node had two OSDs in this state, whilst...
U2F multi-factor authentication is unfortunately broken after upgrading to PVE 7.1:
We had previously set the U2F AppID to point at a JSON document on a redundant web hosting service 'https://u2f.company.co.za/kvm1-appid', where this file contained a list of the possible facets that it would...
We have had good success with the Secure Boot capable EFI disks and TPM v2.0 emulation. Tested on latest no-subscription with Ceph Pacific 16.2.6. Live migrate works with Windows 11 with full disk encryption (BitLocker) and everything works just perfectly as long as one selects the...
We upgraded several clusters to PVE7 + Ceph Pacific 16.2.5 a couple of weeks back. We received zero performance or stability reports but did observe storage utilisation increasing consistently. After upgrading the Ceph Pacific packages to 16.2.6 on Thursday, long-running snaptrim operations have...
Hi,
We have a PVE7 + Ceph Pacific cluster with enterprise subscription where we have set one of the cluster nodes to the no-subscription repository to see the new options for vTPM support.
When attempting to add TPM state we receive the following error:
tpmstate0: property is not defined in...
We have restricted PVE nodes to only being able to communicate with the following hosts:
[0-3].pool.ntp.org
download.proxmox.com
enterprise.proxmox.com
ftp.debian.org
security.debian.org
This now predictably leads to our nodes not being able to check the subscription status. What additional...
We have a template which has two disks:
When we clone this template the destination disk names are inconsistent. They are, however, attached in the correct order and everything works as expected; the problem as such is purely cosmetic but has led to confusion in the past:
PS: This doesn't...
I have a relatively simple Python script which receives no results when connecting with an account that has a PVEAuditor ACL applied to '/'. I presume the issue relates to storage not being accessible when logging in to the WebUI as that account:
[admin@kvm1e ~]# grep inventory@pve...
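For reference, a minimal stdlib-only sketch of querying the API with a token (host name and token are hypothetical; with a PVEAuditor ACL on '/', the /cluster/resources endpoint would be expected to include storage entries):

```python
# Sketch: query the PVE cluster resources endpoint with an API token.
# Host, port and token are hypothetical; only the Python standard library is used.
import json
import ssl
import urllib.request

def resources_url(host, resource_type=None, port=8006):
    """Build the /cluster/resources endpoint URL, optionally filtered by type."""
    url = f"https://{host}:{port}/api2/json/cluster/resources"
    if resource_type:
        url += f"?type={resource_type}"
    return url

def fetch_resources(host, token, resource_type=None):
    """Fetch resources using PVE API token auth.

    The header format is 'PVEAPIToken=USER@REALM!TOKENID=UUID'.
    """
    req = urllib.request.Request(
        resources_url(host, resource_type),
        headers={"Authorization": f"PVEAPIToken={token}"})
    # Accept the self-signed certificate most lab clusters use; tighten for production.
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    with urllib.request.urlopen(req, context=ctx) as resp:
        return json.load(resp)["data"]
```

Something like `fetch_resources("kvm1e.example.net", "inventory@pve!mytoken=<uuid>", "storage")` should then return the storage entries the account is permitted to audit.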
We have nut-server successfully monitoring a UPS with nut-client running on all nodes.
When power goes away it correctly and simultaneously initiates 'init 0' on all nodes, but this then causes problems.
Nodes that only provide Ceph storage shut down before VMs are given a chance (yes, qemu...
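One workaround sketch, using NUT's standard upsmon.conf SHUTDOWNCMD directive on the Ceph-only nodes so that VM-hosting nodes get time to stop their guests first (the 120 s delay is an assumed value, not from the original post):

```
# /etc/nut/upsmon.conf fragment on Ceph-only nodes (sketch; delay value assumed)
# Give VM-hosting nodes time to shut their guests down before storage goes away.
SHUTDOWNCMD "/bin/sh -c 'sleep 120; /sbin/shutdown -h +0'"
```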
Hi,
To start, I would not recommend that people use this to somehow cook together PVE using a remote cluster via iSCSI as storage for VMs. In our case we have a secondary cluster which used to host a multi-tenant internet-based backup service, comprising 6 servers with 310 TiB available...
Hi,
Please may I ask that the disk health displays show the media wearout indicator for SAS-connected SSDs? I presume the 'Disks' information is parsed via smartctl and subsequently displays N/A because SAS-connected SSDs do not expose raw value data.
Herewith a snippet of the SSDs which connect via...
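For what it's worth, SAS SSDs typically report a 'Percentage used endurance indicator' line in smartctl output rather than a raw wearout attribute; a sketch of extracting it (the sample output below is abbreviated and hypothetical):

```python
# Sketch: pull the SAS endurance indicator out of `smartctl -a` output.
# The sample output is abbreviated and hypothetical.
import re

SAMPLE = """\
Vendor:               TOSHIBA
Product:              PX05SMB080Y
Percentage used endurance indicator: 4%
"""

def endurance_used(smartctl_output):
    """Return the 'Percentage used endurance indicator' as an int, or None."""
    m = re.search(r"Percentage used endurance indicator:\s*(\d+)%",
                  smartctl_output)
    return int(m.group(1)) if m else None
```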
We have an inconsistent experience where one of the monitors sometimes appears to misbehave. Ceph health shows a warning with slow operations:
[admin@kvm6b ~]# ceph -s
  cluster:
    id:     2a554db9-5d56-4d6a-a1e2-e4f98ef1052f
    health: HEALTH_WARN
            17 slow ops, oldest...
Hi,
We've been working through upgrading individual nodes to PVE 6.3 and have extended this to now include hyper-converged Ceph cluster nodes. The upgrades themselves went very smoothly, but the recommendation around setting all storage pools to autoscale can cause some headaches.
The last paragraph...
Is the following a known bug in Ceph Nautilus v14.2.9?
Running kernel RBD guest with partitions aligned to 1 MiB boundaries. If I create a 10 GiB file in a VM, delete it and then issue fstrim I get inconsistent feedback on the image space allocation:
After having run 'dd if=/dev/urandom...
We really appreciate the flexibility Ceph provides and typically setup our clusters to use sparse RBD images with templates residing in a Ceph file system concurrently mounted on all our nodes.
Since PVE 6.2 we are unable to mount CephFS via fstab as it says 'nonempty' is an unknown parameter...
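For the kernel client, 'nonempty' is a ceph-fuse-only option; a kernel-mount fstab line without it might look like the following sketch (monitor addresses, mountpoint and secret file path are assumptions):

```
# /etc/fstab sketch; addresses and paths are assumptions
10.0.100.1,10.0.100.2,10.0.100.3:/  /mnt/cephfs  ceph  name=admin,secretfile=/etc/ceph/admin.secret,noatime,_netdev  0  0
```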
We have a nested-virtualisation PVE guest that has stopped working since upgrading to PVE 6.2:
[admin@kvm1d ~]# cat /sys/module/kvm_intel/parameters/nested
Y
I temporarily remove the 'args' line from the VM configuration file, start the guest to record the 'cpu' parameters passed to the VM, shut...
Hi,
The updated libpve-common-perl package contains a restructured /usr/share/perl5/PVE/Network.pm which has problems when a VM's network interface is both tagged and trunked.
Error message:
()
ovs-vsctl: "trunks" is not a valid integer or range
can't add ovs port 'tap101i0' - command...