To use 1 GiB huge pages (gigantic pages) you need to:
- explicitly reserve a fixed number of such pages on the kernel command line (/etc/default/grub or /etc/kernel/cmdline)
- set hugepages: 1024 in the VM config file (manually)
I would also recommend setting up the NUMA topology...
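For reference, a minimal sketch of both settings (the page count of 16 and the VMID are placeholder values; size the reservation for your workload):

```
# /etc/default/grub (GRUB systems) – append to the existing line, then run update-grub:
GRUB_CMDLINE_LINUX_DEFAULT="quiet default_hugepagesz=1G hugepagesz=1G hugepages=16"

# /etc/kernel/cmdline (systemd-boot systems) – same parameters on the single
# command line, then run: proxmox-boot-tool refresh

# /etc/pve/qemu-server/<vmid>.conf – request 1 GiB pages for this guest:
hugepages: 1024
```

After a reboot the reserved pages are visible in /proc/meminfo (HugePages_Total).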
With a limited budget but the expectation of some kind of HA, I assume you also have limited manpower to maintain the solution and fix problems as they appear. So you should look for a solution which you are familiar with and in...
Proxmox uses generic Ceph; there is no "other" version.
"Copy redundancy" ≠ availability. There is a limit to how much time I want to spend on this subject. I'd suggest you read and understand what Ceph is, how it works, and why the limitations...
In case you are using additional DKMS modules like r8168, you need to install proxmox-headers-6.17 too, so:
apt install proxmox-kernel-6.17 proxmox-headers-6.17
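After rebooting into the new kernel, a quick sanity check (a sketch; dkms is only present on systems that actually use DKMS modules):

```shell
# Confirm which kernel you are actually running now
uname -r
# If DKMS is in use, confirm the modules were rebuilt for the new kernel
if command -v dkms >/dev/null 2>&1; then
  dkms status
fi
```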
Tested on my small 3x Lenovo Tiny M920q cluster, with i5-8500T/32GB/512GB NVMe and...
We recently uploaded the 6.17 kernel to our repositories. The current default kernel for the Proxmox VE 9 series is still 6.14, but 6.17 is now an option.
We plan to use the 6.17 kernel as the new default for the Proxmox VE 9.1 release later in...
We are pleased to announce the first stable release of Proxmox Mail Gateway 9.0 - immediately available for download!
Twenty years after its first release, the new version of our email security solution is based on Debian 13.1 "Trixie", but...
I'm happy to report that using the latest version of Squid (19.2.3) the command ceph daemon {monId} config set mon_cluster_log_level info now does reduce the logging output. You have to execute this on every server hosting a monitor.
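Since the admin socket only exists on the host running a given monitor, the command has to be repeated per monitor host. A small sketch that prints the per-monitor commands (the monitor IDs mon1..mon3 are placeholders for your own):

```shell
# Print the command to run on each monitor's host (IDs are placeholders)
for mon in mon1 mon2 mon3; do
  echo "ceph daemon mon.${mon} config set mon_cluster_log_level info"
done
```

If memory serves, `ceph config set mon mon_cluster_log_level info` should persist the setting cluster-wide from a single host, but verify that against your release.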
What version of Checkmk are you running?
Starting with 2.4 my extension was incorporated upstream and does not need to be installed separately any more.
The mk_ceph.py agent plugin (for Python 3) needs to be deployed to...
For those interested, there are other quirks using X710 on proxmox (including on MS-01, my baseline homelab!):
- VLAN stripping on SR-IOV VFs
- LLDP offload not reporting to the Linux kernel
- Asymmetric speed due to TX checksum offload
See...
I just ran into the same problem.
After running
systemctl reset-failed ceph-mgr@%YOUR-NODE-NAME-HERE%.service
I was able to start the managers again.
I'm on Ceph Pacific 16.2.9 & Proxmox 7.3-3
The updated packages are in the process of being uploaded to the no-subscription repos, so expect them to be available rather soon. I can't give an exact ETA on how long it will take, but it should be done by the end of today, most likely.
following output:
balloon: actual=8192 max_mem=8192 last_update=1756638096
Interesting: last update August 31, 01:51. Around that time I did a minor PVE update + reboot and updated the virtio drivers to 0.1.271.
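The last_update field is a plain Unix epoch timestamp, so it can be decoded directly (GNU date assumed; local time will differ from the UTC shown):

```shell
# Decode the balloon last_update timestamp in UTC
date -u -d @1756638096 +'%Y-%m-%d %H:%M:%S UTC'
# → 2025-08-31 11:01:36 UTC
```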
Checked on a different VM where...
pve-manager/9.0.6/49c767b70aeb6648 (running kernel: 6.14.11-1-pve)
One of the Windows VMs shows 100.xx% memory usage in the PVE VM summary.
Windows 11 guest, memory 16384 MB, ballooning enabled, minimum memory 16384 MB.
virtio 0.1.271 (updated from...
See https://pve.proxmox.com/wiki/Upgrade_from_8_to_9#VM_Memory_Consumption_Shown_is_Higher
Is the ballooning device enabled and is the ballooning service running?
What you are seeing is a very early hang during initrd load on the Proxmox installer, and since it happens on both Proxmox 9.x and 8.x across multiple identical Supermicro H13 systems, it strongly suggests a firmware or compatibility issue rather...
The ACME support has a large number of plugins for DNS challenges, but there does not seem to be an option to run a custom script or make a custom HTTP(S) request.
I run my own DNS servers, and already wrote a script which certbot is able to...
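For comparison, certbot's hook interface is just two environment variables, so a custom-script integration can be tiny. A sketch (the default values and the TXT TTL are made up; actually publishing the record to your DNS servers, e.g. via nsupdate, is site-specific and left out):

```shell
#!/bin/sh
# Minimal certbot --manual-auth-hook sketch: certbot exports CERTBOT_DOMAIN
# and CERTBOT_VALIDATION to the hook before requesting validation.
CERTBOT_DOMAIN="${CERTBOT_DOMAIN:-example.com}"         # placeholder default
CERTBOT_VALIDATION="${CERTBOT_VALIDATION:-dummy-token}" # placeholder default
# Emit the TXT record that would need to be published for the dns-01 challenge
echo "_acme-challenge.${CERTBOT_DOMAIN}. 120 IN TXT \"${CERTBOT_VALIDATION}\""
```

Invoked via `certbot certonly --manual --preferred-challenges dns --manual-auth-hook /path/to/this-script ...`; as far as I can tell, nothing comparable is exposed in the Proxmox ACME configuration.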