On recent kernels (5.2+) all mitigations can be disabled with the single "mitigations=off" kernel parameter, but since Proxmox 6 uses kernel 5.0 I've disabled them individually with this GRUB config:
GRUB_CMDLINE_LINUX="net.ifnames=0 biosdevname=0 pti=off spectre_v2=off nospec_store_bypass_disable mds=off"
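For completeness, the steps to apply this (assuming the stock Debian/Proxmox GRUB layout) are roughly:

```shell
# Edit /etc/default/grub and set GRUB_CMDLINE_LINUX as above, then:
update-grub          # regenerates /boot/grub/grub.cfg
reboot

# After the reboot, confirm the flags actually reached the kernel
cat /proc/cmdline

# and check what the kernel now reports for each vulnerability:
grep . /sys/devices/system/cpu/vulnerabilities/*
```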
For better Ceph performance, I need to disable all kinds of kernel protections against CPU vulnerabilities.
Is there any guide on how to ask the kernel in Proxmox to do so?
The servers are used only for Ceph, so I have no security concerns about disabling these mitigations.
I set up an OVS switch on 10 nodes interconnected via GRE tunnels, but Proxmox will not let us create more than 4094 VLANs: tag IDs larger than 4094 fail validation in both the Proxmox interface and the API.
Is there any restriction in the GRE overlay that prevents us from having more VLANs?
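For what it's worth, the 4094 ceiling comes from 802.1Q itself, not from the GRE overlay: the VLAN ID is a 12-bit field, and IDs 0 and 4095 are reserved, which leaves 1-4094 usable tags regardless of the transport underneath.

```shell
# 12-bit VLAN ID field: 2^12 = 4096 values, minus the two reserved IDs (0 and 4095)
max_usable_tag=$(( (1 << 12) - 2 ))
echo "$max_usable_tag"   # 4094
```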
The Proxmox documentation advises setting rstp_designated_path_cost for physical ports. Since I want to build a mesh network with Open vSwitch, with 20 nodes all interconnected via GRE, is it necessary to set a path cost for the GRE ports too?
I've tried the command below, but it...
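In case it helps, setting an explicit RSTP path cost on an OVS port looks something like this (the port name gre1 and the cost value 150 here are just examples, not values from my setup):

```shell
# Set a path cost on a GRE tunnel port; "gre1" and 150 are placeholders
ovs-vsctl set Port gre1 other_config:rstp-path-cost=150

# Inspect the RSTP state OVS has computed
ovs-appctl rstp/show
```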
I wonder if there is any exporter that gathers guest usage information from inside the VMs via the QEMU Guest Agent and exports it to Prometheus?
If not, is it reasonable to use the Guest Agent for gathering VM resource usage?
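If no such exporter exists, polling the agent from the host side seems straightforward at least; a rough sketch of what a collector could call (VM ID 101 is a placeholder, and qemu-guest-agent must be running inside the guest):

```shell
# Query guest-side state through the agent from the Proxmox host:
qm agent 101 ping         # check the agent is reachable at all
qm agent 101 get-fsinfo   # filesystem usage as seen inside the guest
qm agent 101 get-osinfo   # guest OS details
```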
I've used a simple iptables rule to test an idea, but I can see it's not matching any packets on the tap interface.
Is that normal?
root@node01:~# iptables -I FORWARD -i tap101i0
root@node01:~# iptables -L FORWARD -v -n
Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target...
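My guess (an assumption on my part, not something I've verified on this node) is that bridged traffic bypasses the iptables FORWARD chain unless bridge netfilter is enabled; the knobs involved would be:

```shell
# Bridged frames only traverse iptables when br_netfilter is loaded
# and the corresponding bridge-nf-call sysctl is enabled:
modprobe br_netfilter
sysctl net.bridge.bridge-nf-call-iptables=1

# Re-check the rule counters afterwards:
iptables -L FORWARD -v -n
```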
I have a cluster of 40 Proxmox nodes and I want to start offering private networks to my clients.
Each of my clients may have a few VMs spread across different nodes, and the layout is not static; they may add or remove VMs.
My question is: if I want to set up OVS-based private networking, is it necessary to add a GRE tunnel for each two...
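One thing worth noting when planning this: a full GRE mesh scales quadratically, since every pair of nodes needs its own tunnel.

```shell
# A full mesh of n nodes needs n*(n-1)/2 point-to-point tunnels
n=40
tunnels=$(( n * (n - 1) / 2 ))
echo "$tunnels"   # 780 tunnels for 40 nodes
```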
I've built a cloned VM using a linked clone from the Proxmox interface. But consider the case where I want to rebuild that VM: I will need to remove its primary disk and add another one based on a different base image (for example, changing the VM's OS from Ubuntu to CentOS).
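What I have in mind, roughly, is that a rebuild amounts to destroying the linked clone and re-cloning from the other base (the IDs here are hypothetical: 101 is the client VM, 9001 a CentOS template):

```shell
qm stop 101
qm destroy 101                       # removes the VM and its linked disk
qm clone 9001 101 --name client-vm   # new linked clone from the CentOS template
qm start 101
```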
After running "modprobe ib_umad" on both Proxmox nodes, OpenSM detects the ports, but the link is still stuck in the Initializing state:
service opensm status
● opensm.service - LSB: Start opensm subnet manager.
Loaded: loaded (/etc/init.d/opensm; generated)
Active: active (exited) since Fri...
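For reference, the port state can be inspected with the standard tools from the infiniband-diags package (a port stuck in Initializing usually means no subnet manager has brought it to Active):

```shell
# Show local HCA port state (look for "State: Initializing" vs "Active")
ibstat

# Query which subnet manager, if any, the port can see
sminfo
```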