Search results

  1. [SOLVED] SDN VLAN communication with physical Network

    no, you don't (that's only needed if you want to add an IP for your Proxmox host in this vlan); your setup is fine. Are you sure that the physical switch port for Proxmox is correctly in trunk mode and allows vlan 66?
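
    A quick way to verify from the Proxmox side (a sketch, not from the thread; the uplink name bond0 is an assumption) is to watch for tagged frames on the uplink:

        # show vlan-66-tagged frames arriving on the physical uplink
        tcpdump -e -n -i bond0 vlan 66
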
  2. PVE 8.2.2 default route goes missing after reboot.

    be careful: if you use a vlan-aware vmbr0 and tag a vlan on bond0.X directly, you can't use the same vlan for VMs, because the traffic will never reach vmbr0 (it'll be forced to go to bond0.X).
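
    To illustrate the conflict, a minimal /etc/network/interfaces sketch (vlan 200 and the address are assumptions):

        # this stanza pulls vlan 200 away from the vlan-aware bridge,
        # so VMs tagged 200 on vmbr0 lose connectivity:
        auto bond0.200
        iface bond0.200 inet static
            address 192.168.200.2/24
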
  3. Multiple bridges to one bond?

    you can also use the SDN feature at datacenter level: create a vlan zone, then create a vnet for each vlan.
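
    For illustration, a sketch of the config this generates (zone and vnet names, and the tags, are assumptions):

        # /etc/pve/sdn/zones.cfg
        vlan: myzone
            bridge vmbr0

        # /etc/pve/sdn/vnets.cfg
        vnet: vnet10
            zone myzone
            tag 10

        vnet: vnet20
            zone myzone
            tag 20

    Changes take effect after Datacenter -> SDN -> Apply (or pvesh set /cluster/sdn).
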
  4. SDN problems with Netbox as IPAM

    can you open a bug on bugzilla.proxmox.com? I'll try to add support for IP range creation.
  5. RE-IP Proxmox VE and Ceph

    AFAIK, the clean official way is: you create new monitors and delete the old ones afterwards (and both need to be able to communicate during the transition). But I think it's also possible to dump, modify && reinject the monmap, though it's not easy...
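
    For reference, a sketch of the monmap route using the standard Ceph tooling (the mon ID "a" and the new address are assumptions):

        ceph mon getmap -o /tmp/monmap                 # dump the current monmap
        monmaptool --print /tmp/monmap                 # inspect it
        monmaptool --rm a /tmp/monmap                  # remove the monitor's old entry
        monmaptool --add a 10.0.0.10:6789 /tmp/monmap  # re-add it with the new IP
        systemctl stop ceph-mon@a                      # stop the mon before injecting
        ceph-mon -i a --inject-monmap /tmp/monmap      # inject the edited map, then restart
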
  6. PVE 8.2.2 default route goes missing after reboot.

    if you use a vlan-aware bridge, you should use vmbr0.X instead of bond0.X for your vlan IP addresses:

        auto vmbr0
        iface vmbr0 inet manual
            bridge-ports bond0
            bridge-stp off
            bridge-fd 0
            bridge-vlan-aware yes
            bridge-vids 2-4094

        auto vmbr0.200
        iface vmbr0.200 inet static
            ...
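
    The thread's snippet is truncated; for illustration only (address and gateway are assumptions), the static stanza would continue along these lines:

        auto vmbr0.200
        iface vmbr0.200 inet static
            address 192.168.200.10/24
            gateway 192.168.200.1
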
  7. Proxmox generate 2 mac address visibile on the switch not allowed by the data center

    Here's a summary to help people: bridge-disable-mac-learning 1 disables unicast flooding && bridge learning on the bridge (Proxmox registers the VM MACs on the ports manually). That means traffic incoming to the server is not forwarded to the VMs if the destination MAC is different than the VM...
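
    For reference, a sketch of where the option lives (ifupdown2 on PVE is assumed; the port name is an assumption):

        auto vmbr0
        iface vmbr0 inet manual
            bridge-ports eno1
            bridge-stp off
            bridge-fd 0
            bridge-disable-mac-learning 1
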
  8. Proxmox generate 2 mac address visibile on the switch not allowed by the data center

    "bridge-disable-mac-learning 1" still apply for bridged setup at hetzner + if you use proxmox firewall, dont use "reject", use "drop"
  9. SDN problems with Netbox as IPAM

    ok, got it! As far as I remember, we currently don't create the range in the NetBox IPAM (or other external IPAMs); we only create the subnet if it doesn't already exist in the IPAM. I think it's because we don't have a specific API call when adding/deleting a range (it's just an option value of the subnet), so we should...
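
    Until that lands, a range could be created in NetBox by hand; a sketch against the stock NetBox REST API (URL, token, and addresses are assumptions):

        curl -X POST https://netbox.example.com/api/ipam/ip-ranges/ \
            -H "Authorization: Token $NETBOX_TOKEN" \
            -H "Content-Type: application/json" \
            -d '{"start_address": "10.0.100.10/24", "end_address": "10.0.100.200/24"}'
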
  10. SDN: BGP controllers share routes to IPs of VMs incorrectly

    >> We need to access a variety of VMs from the host system and the other way around, e.g. for the monitoring system, local apt repository, LDAP server and some more.

    do you really need to access the VMs from each node of your PVE cluster? I mean, it's really a problem from the exit-node...
  11. SDN: BGP controllers share routes to IPs of VMs incorrectly

    you can add "nf_conntrack_allow_invalid: 1" in host.fw, and also add "net.ipv4.conf.default.rp_filter=0" in sysctl.conf.
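
    A sketch of where those settings live on a standard PVE node (surrounding file content is assumed):

        # /etc/pve/local/host.fw
        [OPTIONS]
        nf_conntrack_allow_invalid: 1

        # /etc/sysctl.conf  (apply with: sysctl -p)
        net.ipv4.conf.default.rp_filter=0
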
  12. SDN problems with Netbox as IPAM

    @shanreich maybe related to this recent commit?
  13. OVS IntPort equivalent for Linux bridge? (SDN bridges refers)

    it can be done in /etc/network/interfaces:

        no vlan
        -------
        iface vmbr0
            address ....

        vlan aware, on a specific vlan
        ------------------------------
        iface vmbr0.X
            address ....

        on a sdn vnet directly
        ----------------------
        iface <vnetid>
            address .....
  14. Number of disks on Ceph storage?

    if you have min_size=2 for your pool (and size=3), and you lose 2 disks (ON DIFFERENT SERVERS), you're going to have read-only PGs (until they have been replicated again). If you use min_size=1, it still works with 1 disk.
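
    For reference, a sketch of inspecting and setting those values (the pool name is an assumption):

        ceph osd pool get mypool size
        ceph osd pool get mypool min_size
        ceph osd pool set mypool min_size 2
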
  15. [SOLVED] CEPH Reef osd still shutdown

    I don't think it's related to NUMA. I have some virtual Ceph clusters where NUMA is not present; I have exactly the same warning message, ceph osd numa-status is also empty, and everything is working fine. Maybe you could try to increase the debug level in ceph.conf: debug_osd = 20 for...
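
    A sketch of raising the debug level, persistently or at runtime (the OSD id 0 is an assumption):

        # /etc/ceph/ceph.conf
        [osd]
        debug_osd = 20

        # or at runtime, without restarting the daemon:
        ceph tell osd.0 config set debug_osd 20
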
  16. [SOLVED] CEPH Reef osd still shutdown

    It's not related. set_numa_affinity is done once at OSD service start. It seems that your OSD is restarting multiple times in a loop; after 5 restarts it goes into protection, to avoid an infinite loop and impact on the cluster. Do you have logs in /var/log/ceph/ceph-osd.*.log?
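
    For illustration, ways to pull those logs (the OSD id 3 is an assumption):

        grep -iE 'error|abort|assert' /var/log/ceph/ceph-osd.3.log
        journalctl -u ceph-osd@3 --since '1 hour ago'
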
  17. Proxmox SDN, openvswitch, and linux bridges

    Well, technically it's possible to implement an OVS plugin (I'm currently helping a user to create an OVN plugin). But from my point of view, OpenFlow SDN controllers have been dead since the end of the 201x's, because they are centralized controllers and generally are not "standard" and can't be integrated...
  18. SDN (software defined networking) vlans and jumbo frames (mtu 9000)?

    this patch has been sent: https://bugzilla.proxmox.com/show_bug.cgi?id=5324 but it is not yet applied. You can use a vlan-aware vmbr0; SDN + MTU is working fine with it currently.
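
    A sketch of the vlan-aware vmbr0 with jumbo frames (the interface names are assumptions):

        auto vmbr0
        iface vmbr0 inet manual
            bridge-ports bond0
            bridge-stp off
            bridge-fd 0
            bridge-vlan-aware yes
            bridge-vids 2-4094
            mtu 9000
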
  19. Random 6.8.4-2-pve kernel crashes

    no crash for 4 days with 6.8.4-3-pve, seems to be fixed for my bug! :)
