Search results

  1. spirit

    NICs inoperative in ProxMox

    You can thank systemd: NIC naming is based on PCI slot ordering. Sometimes, when you add a PCIe device (or an NVMe drive), the internal ordering can change (it depends on the motherboard). PVE 9 has a new feature to assign a static name "nicX" based on the MAC address, like 10 years ago before this mess...
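    The MAC-based pinning described above can also be done manually on any systemd host with a .link file; a minimal sketch, where the MAC address and name are placeholders (substitute the real MAC from `ip link`):

    ```
    # /etc/systemd/network/10-nic0.link
    # Match the interface by its hardware MAC and pin a stable name,
    # so PCI re-enumeration no longer renames it.
    [Match]
    MACAddress=aa:bb:cc:dd:ee:ff

    [Link]
    Name=nic0
    ```

    On Debian-based hosts, running `update-initramfs -u` (or rebooting) may be needed so the rename applies during early boot.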
  2. spirit

    A large number of dropped packets

    Do you use bonding on your Proxmox node? If yes, which mode? Dropped traffic could be multicast, or unicast flooding where the destination IP is not the IP of your VM. (Also check that the MAC address ageing timeout on your physical switch is not too low.)
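    Both questions above can be answered directly from the host; an illustrative check, where bond0 and vmbr0 are placeholder interface names:

    ```
    # Show the bonding mode and per-slave status
    cat /proc/net/bonding/bond0

    # Linux bridge MAC ageing time, in centiseconds (default 30000 = 300 s)
    cat /sys/class/net/vmbr0/bridge/ageing_time
    ```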
  3. spirit

    A large number of dropped packets

    Always use virtio; e1000 doesn't have any acceleration.
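    Switching an existing VM's NIC model to virtio is a one-liner with `qm`; a sketch, where the VMID, bridge, and MAC are placeholders (reuse the VM's existing MAC so the guest keeps its network config):

    ```
    # Replace net0 with a paravirtualized virtio NIC on bridge vmbr0
    qm set 100 --net0 virtio=AA:BB:CC:DD:EE:FF,bridge=vmbr0
    ```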
  4. spirit

    Ceph rbd du shows usage 2-4x higher than inside VM

    I know that ext4 had problems with discard in the past (not about fragmentation, but discard not always working). Personally, I'm using xfs in production, and I've never had this problem (across 4000 VMs).
  5. spirit

    Ceph rbd du shows usage 2-4x higher than inside VM

    Do you have any snapshots on these VMs? (Because trimming an image that has snapshots will take more space instead of reducing it.)
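    Both points above can be checked from the CLI; an illustrative sketch, where the pool and image names are placeholders:

    ```
    # On the Ceph side: list snapshots on the RBD image backing the VM disk
    rbd snap ls <pool>/vm-100-disk-0

    # Inside the guest: trim all mounted filesystems and report freed space
    fstrim -av
    ```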
  6. spirit

    SDN overlay network in routed mesh setup

    There is an option on the zone: "exit nodes local routing".
  7. spirit

    ZFS mirror on 2x Crucial T705 (PCIe 5.0) causing txg_sync hangs under write load – no NVMe errors in dmesg

    (Small reminder: don't use ZFS on consumer SSD/NVMe drives. They can't handle a lot of fsync because they don't have PLP/power-loss capacitors, and ZFS does a lot of syncs. It's really something like 200-1000 IOPS max with this kind of drive.)
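    The sync-write ceiling of a drive can be measured directly with fio; a sketch of such a test, where the file path and size are placeholders (the reported write IOPS is what matters for ZFS sync workloads):

    ```
    # 4k writes with one fsync per write -- the pattern that exposes
    # consumer drives lacking power-loss protection
    fio --name=fsync-test --filename=/tank/fio-test --size=1G \
        --rw=write --bs=4k --fsync=1 --iodepth=1 --numjobs=1 \
        --runtime=60 --time_based
    ```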
  8. spirit

    Network Optimization for High-Volume UDP Traffic in PVE

    yes, I was thinking exactly the same
  9. spirit

    Network Optimization for High-Volume UDP Traffic in PVE

    This is normal. Don't use vmxnet3 or e1000; they are full software emulation. You need to use virtio, which uses vhost-net offloading on the PVE host. Your CPU is quite old, and it's possible that the Spectre/Meltdown/... mitigations impact performance. nano /etc/default/grub to...
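    The GRUB change hinted at above would look roughly like this (disabling mitigations is a security trade-off; only consider it on trusted, isolated hosts, and benchmark before keeping it):

    ```
    # /etc/default/grub -- append mitigations=off to the kernel cmdline,
    # then run `update-grub` and reboot
    GRUB_CMDLINE_LINUX_DEFAULT="quiet mitigations=off"
    ```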
  10. spirit

    Network Optimization for High-Volume UDP Traffic in PVE

    250,566 pps is quite low; I mean, you should reach 1-2 Mpps at any packet size. I remember easily reaching 7-9 Gbit/s with 1 core/thread at the standard 1500 MTU (with an EPYC v3 at 3.5 GHz and the CPU forced to max frequency).
  11. spirit

    Network Optimization for High-Volume UDP Traffic in PVE

    As far as I remember, virtio-net is limited to around 2 million pps per core (depending on CPU frequency). The only way is to increase the number of queues on the virtio NIC. (If you are CPU-limited, you should see a vhost-net process at 100% on the PVE host.) Doing iperf with big packets will not help to...
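    Adding queues to the virtio NIC, as suggested above, is done on the net device line; a sketch, where the VMID, bridge, and queue count are placeholders (the queue count should not exceed the VM's vCPU count):

    ```
    # Give net0 four RX/TX queue pairs so interrupt load spreads across vCPUs
    qm set 100 --net0 virtio,bridge=vmbr0,queues=4

    # Inside a Linux guest, confirm/enable the channels
    ethtool -L eth0 combined 4
    ```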
  12. spirit

    evpn? network segmentation?

    I think it could be done with a dedicated interface in each zone/VRF (not sure whether a VLAN-tagged interface would work to avoid needing dedicated interfaces). That's why I'm currently doing it with my physical router/switch; with a lot of zones, it's simpler.
  13. spirit

    Does EVPN Zone support 'pve' IPAM to trigger PowerDNS updates?

    Currently IPAM/DNS only work with DHCP, and DHCP only works with simple zones. Work is in progress to add the feature to other zone types.
  14. spirit

    Proxmox SDN Traffic breakout Interface and routing

    If you're talking about the VXLAN tunnels themselves or the BGP peers, they simply use the routing table to reach the remote peer IPs, so you can add simple routes on your host if needed. Or do you want PBR specifically for the VXLAN UDP port on a different NIC?
  15. spirit

    Proxmox SDN Traffic breakout Interface and routing

    Do you have an example of what you need to do with manual routes, so I'm sure I understand what you need? On the underlay, EVPN/VXLAN uses the peer address list to establish the VXLAN tunnels, and the tunnels work in the default VRF only. On the overlay, in EVPN, if you define an...
  16. spirit

    Proxmox VE 9.1.1 dnsmasq issue

    Currently the gateway IP from the GUI is only applied to simple zones and EVPN zones, where the IP is pushed to the vnet on all hosts. (Otherwise you could have IP conflicts on layer-2 zones like VLAN, VXLAN, ...) You can add an IP manually in /etc/network/interfaces on the node: iface vnet100 address...
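    A sketch of the manual stanza mentioned above, using ifupdown2 syntax as shipped with PVE (vnet100 and the address are placeholders for your own vnet and gateway IP):

    ```
    # /etc/network/interfaces on the node
    iface vnet100
            address 10.0.100.1/24
    ```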
  17. spirit

    evpn? network segmentation?

    Each zone is a different VRF in EVPN with its own routing table. (Only using an exit node does a VRF route leak between the zone and the default zone; if you use multiple zones, I'd recommend using physical routers as EVPN exit nodes.) So yes, you can have overlapping IP ranges, you have firewall...
  18. spirit

    [SOLVED] VM slow IO read from historian DB

    Your fio results show no problem with random IO reads.
  19. spirit

    [SOLVED] VM slow IO read from historian DB

    So, it's a Windows VM? (0.1.271 driver: try to keep this one, because the more recent ones have known bugs.) fio seems to push the hardware to its max (500 MB/s seems to be the PCI bus limit of the RAID controller). Try to avoid writeback on the VM side, as you already have writeback on your controller. Do you...
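    Turning off VM-side writeback, as suggested above, is a per-disk cache option; a sketch, where the VMID, storage, and volume names are placeholders:

    ```
    # Set the disk cache mode to none, so only the RAID controller caches
    qm set 100 --scsi0 local-lvm:vm-100-disk-0,cache=none
    ```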
  20. spirit

    Upgrade from 6.3-9 to 7.x

    My oldest cluster has been upgraded from PVE 4->5->6->7->8->9 without any reinstall ;)