While waiting for the issue to be understood and the patch to be merged, I implemented this workaround:
systemctl edit pvestatd.service
Add the following in the override editor:
[Service]
Restart=on-failure
Save and exit, then run:
systemctl daemon-reload...
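To verify the override took effect (a quick check I'd add myself, not part of the original workaround):
systemctl show pvestatd.service --property=Restart
which should now print Restart=on-failure.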
We're pleased to announce the release of Proxmox Backup Server 4.1.
This version is based on Debian 13.2 (“Trixie”), uses Linux kernel 6.17.2-1 as the new stable default, and comes with ZFS 2.3.4 for reliable, enterprise-grade storage and...
After months of hard work and collaboration with our community, we are thrilled to release the beta version of Proxmox Datacenter Manager. This version is based on the great Debian 13 "Trixie" and comes with a 6.14.11 kernel as the stable default and...
There is a new QEMU 10.1 package available in the pve-test and pve-no-subscription repositories for Proxmox VE 9.
After internally testing QEMU 10.1 for over a month and having this version available on the pve-test repository almost as long, we...
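If you want to double-check which build you're actually running after the upgrade (my own habit, assuming the package is still called pve-qemu-kvm):
dpkg -s pve-qemu-kvm | grep Version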
We're proud to present the next iteration of our Proxmox Virtual Environment platform. This new version 9.1 is the first point release since our major update and is dedicated to refinement.
This release is based on Debian 13.2 "Trixie" but we're...
To use giant pages (1 GiB hugepages) you need to:
- explicitly set a fixed number of such pages on the kernel command line (/etc/default/grub or /etc/kernel/cmdline), as sketched below
- set hugepages: 1024 in the VM config file (manually)
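Roughly like this (the page count of 16 and VM ID 100 are my own illustrative picks, size them for your workload):
# /etc/default/grub on a GRUB system, then run update-grub and reboot
GRUB_CMDLINE_LINUX_DEFAULT="quiet default_hugepagesz=1G hugepagesz=1G hugepages=16"
# /etc/pve/qemu-server/100.conf
hugepages: 1024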
I would also recommend setting up a NUMA topology...
With a limited budget but still expecting some kind of HA from your solution, I assume you also have limited manpower to maintain it and fix problems as they appear. So you should look for a solution which you are familiar with and in...
Proxmox uses generic Ceph; there is no "other" version.
"Copy redundancy" ≠ availability. There is a limit to how much time I want to spend on this subject. I'd suggest you read and understand what Ceph is, how it works, and why the limitations...
In case you are using additional DKMS modules like r8168, you need to install proxmox-headers-6.17 too, so:
apt install proxmox-kernel-6.17 proxmox-headers-6.17
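After rebooting into the new kernel you can confirm the module was rebuilt (my usual sanity check, not from the original instructions):
dkms status
which should list r8168 as installed against the 6.17 kernel.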
tested on my smol 3x Lenovo Tiny M920q cluster, with i5-8500T/32GB/512GB NVMe and...
We recently uploaded the 6.17 kernel to our repositories. The current default kernel for the Proxmox VE 9 series is still 6.14, but 6.17 is now an option.
We plan to use the 6.17 kernel as the new default for the Proxmox VE 9.1 release later in...
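For anyone wanting to opt in ahead of 9.1 (the package name matches what's used elsewhere in this thread; the verification step is my suggestion):
apt install proxmox-kernel-6.17
then reboot and confirm with uname -r.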
We are pleased to announce the first stable release of Proxmox Mail Gateway 9.0 - immediately available for download!
Twenty years after its first release, the new version of our email security solution is based on Debian 13.1 "Trixie", but...
I'm happy to report that using the latest version of Squid (19.2.3) the command ceph daemon {monId} config set mon_cluster_log_level info now does reduce the logging output. You have to execute this on every server hosting a monitor.
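If you want the setting to persist across monitor restarts, the centralized config database should also take it, though I've only verified the daemon variant myself (treat this as an assumption):
ceph config set mon mon_cluster_log_level info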
What version of Checkmk are you running?
Starting with 2.4 my extension was incorporated upstream and does not need to be installed separately any more.
The mk_ceph.py agent plugin (for Python 3) needs to be deployed to...
For those interested, there are other quirks when using the X710 on Proxmox (including on the MS-01, my baseline homelab!):
- VLAN stripping on SR-IOV VFs
- LLDP offload not reporting to linux kernel
- Asymmetric speed due to TX checksum offload
See...
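For the VLAN-stripping and checksum items, the usual workaround pattern is toggling offloads with ethtool; a sketch only, the interface name is a placeholder, and for the SR-IOV case you'd run it against the VF inside the guest:
ethtool -K enp2s0f0 tx off      # disable TX checksum offload
ethtool -K enp2s0f0 rxvlan off  # disable RX VLAN stripping
Whether either actually helps depends on the NIC firmware in use.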
I just ran into the same problem.
After doing a
systemctl reset-failed ceph-mgr@%YOUR-NODE-NAME-HERE%.service
I was able to start the managers again.
I'm on Ceph Pacific 16.2.9 & Proxmox 7.3-3.
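(For context, as far as I understand it: reset-failed clears systemd's start-limit state, which is what keeps refusing to start a unit after too many rapid failures. You can inspect it first with
systemctl status ceph-mgr@%YOUR-NODE-NAME-HERE%.service
using the same node-name placeholder as above.)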
The updated packages are in the process of being uploaded to the no-subscription repos, so expect them to be available rather soon. I can't give an exact ETA on how long it will take, but it should be done by the end of today, most likely.