Search results

  1. Proxmox VE 9.1 kernel stack

    Hello. With the new Proxmox VE 9.1, running kernel 6.17.2-1-pve #1 SMP PREEMPT_DYNAMIC PMX 6.17.2-1 (2025-10-21T11:55Z), we constantly see process stack traces like: INFO: task iou-wrk-730901:813300 <reader> blocked on an rw-semaphore likely owned by task khugepaged:225...
  2. Opt-in Linux 6.11 Kernel for Proxmox VE 8 available on test & no-subscription

    Thank you for the info; I just found the clarification here: https://pve.proxmox.com/wiki/Proxmox_VE_Kernel
  3. Opt-in Linux 6.11 Kernel for Proxmox VE 8 available on test & no-subscription

    Hello. I hope the next LTS kernel for Proxmox VE will be 6.12, as it has been declared the new official LTS kernel, as shown at: https://www.kernel.org/category/releases.html Best regards, Hrvoje.
  4. Preferred Method to Make ethtool Changes Persistent Across Reboots and Updates?

    <-- Already posted a new thread on this topic: https://forum.proxmox.com/threads/new-feature-proposal-network-interfaces-advanced-option-s.155432/
  5. New feature proposal: Network interfaces advanced option(s)

    Hello. After years of network tuning on many hypervisors (OpenStack, Proxmox VE, and now also Red Hat/Oracle Virtualization; Linux-based ones in general), I can conclude that there is a constant need for tuning in the network section of hypervisors. Here I am talking about the situations...
  6. Preferred Method to Make ethtool Changes Persistent Across Reboots and Updates?

    Hello. Maybe consider adding at least a few of the most common options to the GUI form for network interfaces, like: 1. Multiqueue numbers - info with ethtool -l eth0. 2. RSS (part of multiqueue for RX in most cases) - info with ethtool -x eth0. 3. LRO - info with ethtool -k eth0 | grep...
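    The options listed above can be inspected and changed from the shell with ethtool; a minimal sketch (the interface name eth0 and the values are placeholders, not recommendations):

    ```shell
    # Inspect current values first.
    ethtool -l eth0                                # channel (multiqueue) counts
    ethtool -x eth0                                # RSS indirection table and hash key
    ethtool -k eth0 | grep large-receive-offload   # LRO on/off

    # Example changes (illustrative values only):
    ethtool -L eth0 combined 8                     # set 8 combined queues
    ethtool -K eth0 lro off                        # disable LRO
    ```

    Note that settings applied this way do not survive a reboot on their own, which is exactly the persistence problem the thread title raises.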
  7. Menu: Datacenter --> Storage --> Add

    Hello. I have one small proposal for changing the Storage drop-down menu at Datacenter --> Storage --> Add. In the current state we have all the storage types in one place, no matter whether they are local, NAS, SAN, or other types. For older and advanced users this is not an issue, but for...
  8. Unofficial proxmox-backup-client RPM builds for RHEL-based distros

    It also works for Rocky Linux 9. Thank you for the job done. To the Proxmox team: is it possible to include it in the official repo?
  9. How do SDN IPAM plugins assign addresses to VMs?

    Related to DHCP and OpenStack: DHCP on OpenStack. More details may be seen in: 1. OpenStack --> DHCP (dnsmasq) 2. Networking-Nova 3. Details on design (dnsmasq + network namespaces inside OpenStack)
  10. How do SDN IPAM plugins assign addresses to VMs?

    DHCP support would be great (inside a VLAN, for example); it is crucial in an environment where network segmentation is done the easy way (with VLANs). In my case, simple dnsmasq is great for that job. A DNS service inside the VLAN is also welcome. As I remember, on OpenStack it is done in a...
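    As a rough sketch of the dnsmasq-per-VLAN idea mentioned above (the interface name and address range are assumptions for illustration, not from the thread):

    ```shell
    # Serve DHCP on a single VLAN interface with dnsmasq, leaving other
    # interfaces untouched. vlan100 / 10.0.100.0/24 are made-up examples.
    dnsmasq \
      --interface=vlan100 \
      --bind-interfaces \
      --dhcp-range=10.0.100.50,10.0.100.150,255.255.255.0,12h \
      --dhcp-option=option:router,10.0.100.1
    ```

    dnsmasq also answers DNS on the same interface by default, which covers the "DNS service inside the VLAN" wish in one daemon.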
  11. Reserving HW resources for Hypervisor (physical machine) itself

    An interesting post for further investigation on this topic: https://null-src.com/posts/qemu-optimization/post.php
  12. Reserving HW resources for Hypervisor (physical machine) itself

    Hello. One easy solution for isolating CPU cores from the task/process scheduler is a kernel option that must be defined in GRUB(2): isolcpus. For example, if you want to reserve CPU cores 0 and 1, just add this option at the end of the grub config file...
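    A minimal sketch of the isolcpus setup described above, assuming a Debian/Proxmox-style /etc/default/grub:

    ```shell
    # In /etc/default/grub, append isolcpus to the kernel command line so the
    # general scheduler keeps ordinary tasks off cores 0 and 1:
    GRUB_CMDLINE_LINUX_DEFAULT="quiet isolcpus=0,1"

    # Then regenerate the grub config and reboot:
    update-grub
    ```

    After the reboot, only explicitly pinned tasks (e.g. via taskset or VM CPU affinity) will run on the isolated cores.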
  13. Purpose of the internal interfaces

    Hello. I just want to understand the standard Open vSwitch/Linux native interfaces used in Proxmox VE when using SDN/VNets. VNetINT is the name of my VNet. Looking at the Open vSwitch level: ovs-vsctl list interface | egrep "^name|^status|^type" name : vmbr0 --> This is a Bridge...
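    A short sketch of the Open vSwitch commands useful for this kind of inspection (the bridge name vmbr0 is taken from the snippet; the actual output depends on the host):

    ```shell
    # Whole topology at a glance: bridges, their ports, and port types.
    ovs-vsctl show

    # Per-interface detail, filtered to the interesting fields.
    ovs-vsctl list interface | egrep "^name|^status|^type"

    # Just the ports attached to one bridge.
    ovs-vsctl list-ports vmbr0
    ```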
  14. Rocky Linux Template

    Hello. I think the latest Rocky Linux images can be found here: https://uk.lxd.images.canonical.com/images/rockylinux/
  15. KVM: VMXNET3 vs. VirtIO

    Maybe too late, but (answer from VMware): it looks like VMXNET3 is not emulated but uses some kind of paravirtualization (at least on ESXi; not sure how it works in a KVM/QEMU environment --> it may be emulated there?). On the other side, VirtIO is entirely paravirtualized (no...
  16. VirtIO-IOMMU

    So in the future it will be available as an (advanced) option [I did not read the diff/changes in the code]? P.S. I assume the IOMMU part of the GRUB2 kernel configuration will also be added (for Intel: intel_iommu=on and eventually iommu=pt)?
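    A minimal sketch of the GRUB2 change being asked about, assuming a Debian/Proxmox-style /etc/default/grub on an Intel host:

    ```shell
    # In /etc/default/grub: enable the Intel IOMMU and use passthrough mode
    # (iommu=pt) so host-owned devices skip DMA remapping overhead.
    GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"

    # Regenerate the grub config, then reboot for the new cmdline to apply:
    update-grub
    ```

    After the reboot, `dmesg | grep -i iommu` should show whether the IOMMU was actually enabled.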
  17. VirtIO-IOMMU

    Hello. Since the new kernel (5.14) supports VirtIO-IOMMU, maybe it would be nice to have it as an advanced enable/disable option. Linux kernel 5.14 and VirtIO-IOMMU. More details can be found here: QEMU - VT-d (IOMMU); VirtIO-IOMMU explained. Br, Hrvoje.
  18. SDN Multicast for VXLANs

    For now I have no problems, but reading that it is in use in VMware and OpenStack, I was thinking there must be a bigger reason for it (for example, it is "lighter" on MAC learning, etc.)
  19. SDN Multicast for VXLANs

    Multicast within VXLAN is a very good option; nicely described here: https://www.slideshare.net/enakai/how-vxlan-works-on-linux And here (on the VMware side, but it should be the same in Proxmox VE) is a good introduction to the issues with/without VXLAN multicast...
  20. SDN Multicast for VXLANs

    Hello. As I found out, some examples related to VXLAN and multicast can be found here (didn't test it): https://git.proxmox.com/?p=pve-docs.git;a=blob_plain;f=vxlan-and-evpn.adoc;hb=HEAD As I understand the config, the difference is (inside the VXLAN interface section): in unicast: iface vxlan2...
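    As a rough illustration of the unicast-vs-multicast difference, here is the same VXLAN expressed with plain iproute2 commands (device names, VNI, and addresses are made up; the linked doc itself uses /etc/network/interfaces syntax):

    ```shell
    # Unicast mode: every remote VTEP is listed explicitly, so BUM traffic
    # (broadcast/unknown-unicast/multicast) is head-end replicated to each one.
    ip link add vxlan2 type vxlan id 2 dstport 4789 local 192.168.0.1 nolearning
    bridge fdb append 00:00:00:00:00:00 dev vxlan2 dst 192.168.0.2

    # Multicast mode: BUM traffic is sent once to a multicast group instead,
    # and the underlay network fans it out to subscribed VTEPs.
    ip link add vxlan2m type vxlan id 2 dstport 4789 group 239.1.1.1 dev eth0
    ```

    This is why multicast is "lighter": the sender does not need a per-peer FDB entry or N copies of each BUM frame, at the cost of requiring multicast routing in the underlay.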