Search results

  1. EVPN SDN issues after Upgrade Proxmox VE from 7 to 8

    please share your configurations (/etc/pve/sdn/*) and /etc/network/interfaces, plus the output of pveversion -v. I can't help without them. (The frr bug from this thread is already fixed in the official repo with the 8.5.2 release.) [see the command sketch after this list]
  2. Ceph: Balancing disk space unequally!?!?!?!

    how many PGs do you have? [see the command sketch after this list]
  3. does PVE SDN support multiple identical subnets in multiple zones?

    yes, currently it's done with a simple "post-up iptables ...", so the same command is re-executed each time. It needs to be polished. (Maybe managed by a service similar to pve-firewall.) It would be great to know exactly which change is not working without editing the VM device. (Maybe it's a bug, maybe not)...
  4. does PVE SDN support multiple identical subnets in multiple zones?

    For evpn, a different vrf with a different routing table is used for each zone, so I think it could work (but I have never tested it). Have you tried enabling 1 exit-node (where this node has physical access to 192.168.10.1) plus enabling SNAT on the subnet? I'm not sure about how the...
  5. Custom IPAM plugins - NIPAP

    so you need to create a plugin file in /usr/share/perl5/PVE/Network/SDN/Ipams/ (you can look at /usr/share/perl5/PVE/Network/SDN/Ipams/NetboxPlugin.pm and /usr/share/perl5/PVE/Network/SDN/Ipams/PhpIpamPlugin.pm to see how it works, nothing complex). Then this plugin needs to be loaded at the... [see the command sketch after this list]
  6. SDN broken after underlying network change

    It's in the sdn code: we look up the ifaces in the bridge with a specific regex (eth*, en*, bond*), because we need to exclude other virtual interfaces, etc. We really can't get it to work with custom names. /usr/share/perl5/PVE/Network/SDN/Zones/Plugin.pm sub get_bridge_ifaces { my ($bridge) =...
  7. Unable to remove ghost thinpool after a drive failure.

    you can edit /etc/pve/storage.cfg manually and remove the entry.
  8. Multiple Nics and SDN

    evpn is an overlay network running over an underlay TCP network, so it does not itself manage the NICs (or the aggregation/balancing). It simply uses the network IP to reach the peer IPs you have defined. If you want to use multiple NICs, what you can do: 1) create a bond with the 2... [see the bond example after this list]
  9. Upgrade Proxmox without Internet.

    https://forum.proxmox.com/threads/proxmox-offline-mirror-released.115219/
  10. Ceph cluster full

    Also, note that ceph has a safeguard to not reach 100%: it goes read-only when you reach around 95% (ceph osd set-full-ratio 0.95). Maybe you can try to increase it to 98% to be able to write a little bit again: ceph osd set-full-ratio 0.98 [see the command sketch after this list]
  11. Nvidia vGPU mdev and live migration

    oh great :) I know somebody with a cluster with Tesla cards for testing. I'll try next month to do tests again. @badji ping. Time to test gpu migration again ^_^
  12. Ceph cluster full

    ceph osd pool set <poolname> size 2 (but you can also do it through the Proxmox GUI, editing the pool) [see the command sketch after this list]
  13. Integration with OVS/OVN

    yes, I can try to look at it (I'll have time next week). You can send it to my work email: alexandre.derumier@groupe-cyllene.com If you want to submit it to the Proxmox dev team directly, you need to follow these rules: https://pve.proxmox.com/wiki/Developer_Documentation
  14. Ceph cluster full

    if your pool uses size=3 (replication x3), you can go down to size=2 (or from size 2 to size 1, but be careful).
  15. Nvidia vGPU mdev and live migration

    1-3: try to follow this guide https://gitlab.com/polloloco/vgpu-proxmox to unlock it and add mdev support. For 4: yes, it needs to be rebuilt with NV_KVM_MIGRATION_UAP=1 in the makefile. (Note that this flag was changing last year; nvidia was making changes between versions, adding/removing it...
  16. Nvidia vGPU mdev and live migration

    yes, I was able to do it last year with the 5.15 kernel and a specific nvidia driver version (patched for mdev support, because nvidia locks the drivers without a license). I'll try to rework it for Proxmox 8, but I'm not sure that nvidia supports kernel 6.2 yet. I'll integrate it into the...
  17. SDN broken after underlying network change

    arf, damned systemd. Maybe simply set GRUB_CMDLINE_LINUX="net.ifnames=0" in /etc/default/grub to revert to the old native kernel ethX names without needing to use link files. [see the command sketch after this list]
  18. SDN broken after underlying network change

    Also, maybe you can try to use ethX instead of lanX; maybe some parsers are not working correctly with custom interface names.
  19. SDN broken after underlying network change

    Maybe try to restart the node? If the SDN doesn't show any error in the GUI, that means the config is correctly applied and running. Can you send the content of /etc/network/interfaces.d/sdn? [see the command sketch after this list]
  20. Cloud-init doesn't configure Netplan network in VM with Debian12

    afaik, cloud-init on Debian generates config for ifupdown only, in /etc/network/interfaces.d/cloud-init.cfg; netplan config is only generated for Ubuntu. [see the command sketch after this list]
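
For item 1, a minimal sketch of how to collect the requested information, run as root on the affected node; the paths and the pveversion command are the ones named in the reply itself.

    # dump the SDN and network configuration plus the package versions
    cat /etc/pve/sdn/*
    cat /etc/network/interfaces
    pveversion -v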
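
For item 2, a hedged sketch of how one might check the PG counts being asked about; ceph osd pool ls detail and ceph osd df are standard Ceph commands and are an addition, not part of the excerpt.

    # list pools together with their pg_num settings
    ceph osd pool ls detail
    # show how PGs and data are spread across the OSDs
    ceph osd df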
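
For item 5, a sketch of inspecting the plugin directory described in the reply; only the directory and the two example plugins come from the post, and NipapPlugin.pm is a hypothetical file name for the new plugin.

    # look at the existing IPAM plugins mentioned in the reply
    ls /usr/share/perl5/PVE/Network/SDN/Ipams/
    less /usr/share/perl5/PVE/Network/SDN/Ipams/NetboxPlugin.pm
    # start a new plugin from an existing one (NipapPlugin.pm is a placeholder name)
    cp /usr/share/perl5/PVE/Network/SDN/Ipams/NetboxPlugin.pm \
       /usr/share/perl5/PVE/Network/SDN/Ipams/NipapPlugin.pm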
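
For item 8, a sketch of the bond option in /etc/network/interfaces (ifupdown2 syntax); the NIC names eno1/eno2, the bond mode and the address are assumptions to adapt to your hardware, not values from the post.

    # /etc/network/interfaces - underlay bond carrying the EVPN peer traffic
    auto bond0
    iface bond0 inet static
        bond-slaves eno1 eno2
        bond-mode active-backup    # or 802.3ad if your switches support LACP
        bond-miimon 100
        address 10.0.0.10/24       # underlay IP used to reach the EVPN peers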
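
For item 10, the full-ratio command quoted in the reply together with two standard Ceph usage checks added here; raising the ratio is only a temporary measure until space is freed.

    # check how full the cluster and the individual OSDs really are
    ceph df
    ceph osd df
    # temporarily raise the full ratio to get write access back (from the reply)
    ceph osd set-full-ratio 0.98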
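
For item 12, the pool-size command from the reply plus a standard check of the current value; note that going from size 3 to size 2 reduces redundancy.

    # check the current replication size of the pool
    ceph osd pool get <poolname> size
    # reduce it to 2 copies, as suggested in the reply
    ceph osd pool set <poolname> size 2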
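
For item 17, a sketch of applying the grub change from the reply; the update-grub and reboot steps are the usual Debian follow-up and are an addition.

    # in /etc/default/grub, set (line from the reply):
    #   GRUB_CMDLINE_LINUX="net.ifnames=0"
    # then regenerate the grub configuration and reboot to apply it
    update-grub
    reboot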
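
For item 19, a sketch of checking the generated SDN config; the path is from the reply, while ifreload -a (ifupdown2) is an assumed way to re-apply the network configuration on a PVE node.

    # show what the SDN layer generated
    cat /etc/network/interfaces.d/sdn
    # re-apply the network configuration (ifupdown2)
    ifreload -a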
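
For item 20, a hedged sketch of verifying the claim inside the Debian 12 guest; the cloud-init.cfg path is the one given in the reply, and the netplan check is an addition.

    # inside the guest: config generated by cloud-init for ifupdown
    cat /etc/network/interfaces.d/cloud-init.cfg
    # the netplan directory should not be populated by cloud-init on Debian
    ls /etc/netplan/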
