We fixed it by updating our Ansible playbooks to the new names and now hope it doesn't happen again any time soon. Luckily, we noticed before doing a reboot. If it happens more often in the future, we will pin the names via .link files.
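Pinning would look roughly like this (a minimal sketch; the MAC address, file name and target interface name are placeholders, not our real config):

    # /etc/systemd/network/10-lan10g0.link
    [Match]
    MACAddress=aa:bb:cc:dd:ee:ff

    [Link]
    Name=lan10g0

If I remember correctly, the initramfs also needs to be regenerated afterwards (update-initramfs -u -k all) so the rename is applied early during boot.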
Same problem here; we had to fix the config of 3 production clusters, appending np0 or np1 to the 10G i40e interface names. While I understand that this is not caused by Proxmox, it's still a hassle in a production environment.
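In our case the fix boiled down to renaming the interface in /etc/network/interfaces on every node, roughly like this (interface, bridge and addresses are just examples):

    auto enp65s0f0np0
    iface enp65s0f0np0 inet manual

    auto vmbr0
    iface vmbr0 inet static
            address 192.0.2.10/24
            gateway 192.0.2.1
            bridge-ports enp65s0f0np0
            bridge-stp off
            bridge-fd 0

followed by ifreload -a (or a reboot) on each node.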
Hey,
we have a Proxmox cluster with backups configured globally (in the datacenter section). In some cases, users from a group that has PVEVMAdmin access to a pool need to reboot VMs during a backup. This is not possible while a backup run is in progress. Currently they need to ask a global admin with...
Hey, after our upgrade to Proxmox 8 with Ceph 17.2.7 we observed a strange change in /var/log/ceph/ceph.log. In this file the format of the log timestamps changed to Unix timestamps, and we have no idea why; we couldn't find anything in the Ceph docs nor in the release notes. All other Ceph log...
Now I see this is a per-job limit in the API. I think it would be great to have a global limit for that, like the one for backup restore in the datacenter options.
I also see a bwlimit in vzdump.conf, and now I'm a little confused: is this the global value?
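My current understanding, please correct me if this is wrong: the bwlimit in /etc/vzdump.conf is a per-node default for backup jobs (in KiB/s), while Datacenter -> Options holds separate cluster-wide limits, e.g. for restore. Roughly:

    # /etc/vzdump.conf (per node, KiB/s; 51200 ≈ 50 MiB/s)
    bwlimit: 51200

    # /etc/pve/datacenter.cfg (cluster-wide defaults, KiB/s)
    bwlimit: restore=51200,migration=102400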
Hey,
we have a problem with our PBS setup regarding traffic limits. We run PBS 2.3.2 and PVE 7.3.4 with Ceph 16.2.9 as the storage backend. We have configured a limit of 120 MiB/s for backups from all networks. This works great on the PBS side, but sometimes we observe much higher rates in...
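For completeness, the limit on the PBS side is a traffic-control rule created roughly like this (the rule name is arbitrary and the exact rate syntax may need double checking):

    proxmox-backup-manager traffic-control create limit-backups \
        --network 0.0.0.0/0 \
        --rate-in 120MB --rate-out 120MB \
        --comment "global limit for all backup clients"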
I set up an up-to-date PBS with an up-to-date PVE server. I've limited the incoming rate to 50 MiB/s. Looking at the system metrics, the results look quite correct; that's the good part. But I'm confused because the values shown in the backup job on the PVE server are much higher than the limit (see...
We found another issue with this command, so thanks! But unfortunately we have the problem on all our SSD pools, not only on the affected one. So for the fstrim problem it looks like everything is performing as expected.
Applied it and now we will see if something changes.
We will try this...
Hey,
we observe major performance issues while running fstrim on VMs backed by an SSD pool (3 replicas, 50 OSDs) with Ceph (16.2.7) on Proxmox. We have a workload that leads to larger data fluctuations on our VMs (CentOS 7). Therefore we have enabled the discard option for the disks and run...
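For context, enabling discard on a disk and scheduling fstrim inside the guest looks roughly like this (VM ID, storage and volume names are just examples; the timer assumes the guest ships fstrim.timer):

    # enable discard (and SSD emulation) on the virtual disk
    qm set 101 --scsi0 ceph-ssd:vm-101-disk-0,discard=on,ssd=1

    # inside the CentOS 7 guest: trim on a schedule instead of one big manual run
    systemctl enable fstrim.timer
    systemctl start fstrim.timer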
Hey,
I have two SSD device classes in my Proxmox Ceph cluster. One is the default SSD class. The other one contains SSDs of a specific size and is called SSD4T. At the moment it looks like the device class option for new OSDs (see screenshot) is hardcoded to the three default types. Am I right...
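On the CLI the custom class can at least be set explicitly; something like this should work (device path, OSD id and the exact option name are from memory, so please verify):

    # create the OSD with the custom device class right away
    pveceph osd create /dev/sdX --crush-device-class SSD4T

    # or change the class of an existing OSD afterwards
    ceph osd crush rm-device-class osd.42
    ceph osd crush set-device-class SSD4T osd.42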
As we did not get the help we wished for, we decided to disable the HA manager. We removed all VM resources from the HA manager and also deleted the HA group. Now there is still one question: is it normal that the status is still active for the LRM, or do we need to do a further step to fully disable...
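If the LRM never goes idle on its own, the fallback I would try is stopping the HA daemons on every node (a sketch, assuming we really want the whole HA stack off):

    ha-manager status                          # confirm no resources or groups are left
    systemctl disable --now pve-ha-lrm pve-ha-crm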
Short question about the meaning of two options:
In the web interface under Cluster -> Options there is the option "Maximal Workers/bulk-action". From observation, this value defines the maximum number of parallel bulk migrations. But in the dialog for bulk migrations there is the option "Parallel...
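If I read the docs correctly, that web interface option corresponds to max_workers in datacenter.cfg, e.g.:

    # /etc/pve/datacenter.cfg
    max_workers: 4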