Search results

  1. M

    Best choice for Datastore filesystem

    Thank you for your answer. Yes, sure, but it's not really viable in a cloud setup, so I'll go with ext4. Thanks.
  2. M

    Best choice for Datastore filesystem

    Hi, I'm installing Proxmox Backup Server on a virtual machine. The VM resides on a cloud PVE Cluster with ZFS storage. I was wondering which could be the best filesystem for PBS datastores. Nesting ZFS could lead to high memory overhead. What do you think? Thank you. Massimo
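
    A minimal sketch of the ext4 route chosen in result 1 above, assuming an extra virtual disk at /dev/sdb and a datastore named "store1" (both placeholders, not from the thread):

        # Format the extra disk with ext4 and mount it (add an fstab entry for persistence)
        mkfs.ext4 /dev/sdb
        mkdir -p /mnt/datastore/store1
        mount /dev/sdb /mnt/datastore/store1

        # Register the mounted path as a PBS datastore
        proxmox-backup-manager datastore create store1 /mnt/datastore/store1
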
  3. M

    VXLAN encryption

    Hi Spirit, in my humble opinion WireGuard is definitely the way to go. Way easier to set up, and very resilient connections even on poor networks. In the meantime my dirty IPsec setup has been up and running since July.
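
    A rough sketch of that WireGuard approach, assuming a wg0 interface between the two nodes and the VXLAN peer addresses then pointed at the tunnel IPs (keys, addresses and ports below are placeholders):

        # /etc/wireguard/wg0.conf on node A
        [Interface]
        PrivateKey = <node-A-private-key>
        Address    = 10.99.0.1/24
        ListenPort = 51820

        [Peer]
        PublicKey           = <node-B-public-key>
        Endpoint            = <node-B-public-ip>:51820
        AllowedIPs          = 10.99.0.2/32
        PersistentKeepalive = 25

    Bring it up with "wg-quick up wg0" and use the 10.99.0.x addresses as the VXLAN peers, so the VXLAN UDP traffic rides the encrypted link.
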
  4. M

    VXLAN encryption

    After some tests I saw no performance difference between 1450 and lower MTUs, so I'm sticking with 1450. To be honest, I haven't verified whether fragmentation occurs, but from an operational point of view everything seems fine with MTU 1450.
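
    One simple way to verify that no fragmentation occurs at MTU 1450 is a don't-fragment ping from a guest on the VXLAN vnet, with a payload that exactly fills the MTU (1450 minus 28 bytes of IP + ICMP headers); the target address is a placeholder:

        # From a VM attached to the vnet (MTU 1450)
        ping -M do -s 1422 <other-vm-ip>
        # Replies come back          -> packets fit, no fragmentation on the path
        # "Frag needed" / no replies -> the effective path MTU is smaller than expected
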
  5. M

    VXLAN encryption

    I manage a 4-node Proxmox cluster. The nodes are located in two different datacenters and connected through the public network. Until SDN there was no L2 shared between the nodes' private (aka host-only) networks. Using SDN (e.g. VXLAN zones) it's possible to distribute interconnected bridges, allowing a bunch...
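
    Stripped of the SDN tooling, the mechanism behind such a zone is a plain VXLAN interface enslaved to a bridge on each node. A two-node sketch with placeholder VNI, names and addresses, just to illustrate the L2-over-L3 idea:

        # On each node (swap local/remote on the other side)
        ip link add vxlan100 type vxlan id 100 dstport 4789 \
            local <this-node-public-ip> remote <other-node-public-ip>
        ip link add br100 type bridge
        ip link set vxlan100 master br100
        ip link set vxlan100 up
        ip link set br100 up
        # VMs plugged into br100 on both nodes now share one L2 segment over the public network
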
  6. M

    proxmox 7.0 sdn beta test

    Arghh..... sorry :-) You meant MACsec ON TOP of vxlan. Yes, this is an option!
  7. M

    proxmox 7.0 sdn beta test

    Thank you very much for your answer. I fear that MACsec is not an option since it is a layer 2 protocol, and my 2 boxes sit in different datacenters. I'll try the IPsec way. If I find an elegant solution I could post a small guide.
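
    A rough sketch of that IPsec way, assuming strongSwan with a pre-shared key and transport mode limited to the VXLAN UDP port (addresses and secret are placeholders; not the guide mentioned in the post):

        # /etc/ipsec.conf (strongSwan, legacy stroke configuration)
        conn vxlan-encrypt
            type=transport
            authby=secret
            left=<node-A-public-ip>
            right=<node-B-public-ip>
            leftprotoport=udp/4789
            rightprotoport=udp/4789
            auto=start

        # /etc/ipsec.secrets
        <node-A-public-ip> <node-B-public-ip> : PSK "<shared-secret>"
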
  8. M

    proxmox 7.0 sdn beta test

    Well, connecting 2 remote Proxmox boxes, for example using VXLAN tunnels, works really well. So far so good. I was wondering: what options are there to add a security/crypto layer? Obviously I mean without external devices/apps.
  9. M

    proxmox 7.0 sdn beta test

    Thank you Thomas. I confirm the patch fixes the problem.
  10. M

    proxmox 7.0 sdn beta test

    pve-manager 6.2-5 (same with 6.2-6) seems to introduce a bug with SDN. When a zone is created, it is listed under the node's items (without an icon). Up to 6.2-4, clicking on it showed a list of vnets and also allowed setting permissions. Since 6.2-5, clicking on it breaks the ExtJS interface. Is this bug known?
  11. M

    proxmox 7.0 sdn beta test

    Fine, thanks! IMHO this is the right way. I can confirm that adding 'auto ...' fixes the problem. BTW: all the other tests are going fine! (OVS + VXLAN)
  12. M

    proxmox 7.0 sdn beta test

    Fine, thanks! Yes, sure, but this is not required with other NICs. Anyway, it's easy enough to remember to flag Autostart in the GUI.
  13. M

    proxmox 7.0 sdn beta test

    Found another problem. libpve-network-perl_0.4-6 breaks LXC; containers don't start. It was working with libpve-network-perl_0.4-4.
  14. M

    proxmox 7.0 sdn beta test

    Maybe I found a problem. After updating ifupdown2, some NICs don't come up at boot time anymore. This is the setup:

        auto lo
        iface lo inet loopback

        iface eno1 inet manual

        iface eno2 inet manual

        auto unt
        iface unt inet static
            address 192.168.200.232/24
            gateway 192.168.200.254
        ...
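
    As result 11 above notes, the workaround that fixed this was adding an 'auto' line so ifupdown2 brings the interface up at boot. An illustrative stanza only; the truncated excerpt doesn't show which interfaces actually needed the flag:

        auto eno1
        iface eno1 inet manual

        auto eno2
        iface eno2 inet manual
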
  15. M

    proxmox 7.0 sdn beta test

    Thanks for the answer. IMHO it's not worth wasting time improving this kind of dynamic switch; what we have now is more than enough. A warning that switching from OVS to Linux bridge may require a reboot will definitely be fine. I'll keep on testing.
  16. M

    proxmox 7.0 sdn beta test

    Ok, I know that this is not strictly related to the new SDN features, but since ifupdown2 was modified... first question: is switching from a Linux bridge to an OVS bridge supposed to work without a reboot? I've made these tests: 1) FROM: Linux bridge (single NIC) with assigned IP -> TO: OVS bond...
  17. M

    proxmox 7.0 sdn beta test

    Fine! I'll take a look right now...
  18. M

    OVS the right solution ?

    Negligible. To be honest, our customers typically run storage-critical loads more than network-intensive ones. The most common setup is a couple of 10 GbE NICs in an OVS bond with balance-slb and some VLANs. Across a fair number of VMs, load balancing is quite good.
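
    For reference, a sketch of that kind of setup in /etc/network/interfaces using the OVS ifupdown integration; NIC, bond and bridge names are placeholders, and guest VLAN tags would be set on the VM NICs:

        auto bond0
        iface bond0 inet manual
            ovs_type OVSBond
            ovs_bridge vmbr0
            ovs_bonds eno1 eno2
            ovs_options bond_mode=balance-slb

        auto vmbr0
        iface vmbr0 inet manual
            ovs_type OVSBridge
            ovs_ports bond0
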
  19. M

    OVS the right solution ?

    Hi, in production we use OVS extensively, mainly for the balance-slb feature. Is it somehow possible to use Linux bridges to bond interfaces and obtain fault tolerance AND load balancing with unmanaged switches (no LACP)? I know that LACP is definitely a best practice; sadly it's often not possible...
  20. M

    proxmox 7.0 sdn beta test

    Hi, first of all: thank you very much for this fantastic job!! I'll be starting extensive testing in my lab, particularly focused on VXLAN and Open vSwitch. For now let me report a small typo:

        root@munich ~ # apt info libpve-network-perl
        ...
        Description: Proxmox VE storage management library...
