Hi,
I'm installing Proxmox Backup Server on a virtual machine.
The VM resides on a cloud PVE Cluster with ZFS storage.
I was wondering which filesystem would be best for PBS datastores.
Nesting ZFS (i.e. running ZFS inside a guest that already sits on ZFS) could lead to high memory overhead.
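If ZFS is overkill here, what I have in mind is roughly this (just a sketch, assuming the datastore sits on a dedicated virtual disk that shows up as /dev/sdb; names and paths are placeholders):

# format and mount a dedicated virtual disk, e.g. with XFS
mkfs.xfs /dev/sdb
mkdir -p /mnt/datastore/store1
mount /dev/sdb /mnt/datastore/store1
# create the PBS datastore on top of it
proxmox-backup-manager datastore create store1 /mnt/datastore/store1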
What do you think?
Thank you.
Massimo
Hi Spirit,
in my humble opinion WireGuard is definitely the way to go.
It is way easier to set up, and connections stay very resilient even on poor networks.
In the meantime my quick-and-dirty IPsec setup has been up and running since July.
After some tests I saw no performance difference between an MTU of 1450 and lower values.
So I stick with 1450, but to be honest I haven't verified whether fragmentation occurs.
From an operational point of view, everything seems ok with MTU 1450.
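If I ever want to verify it, a quick check from one endpoint should do (sketch; the peer address is a placeholder). With an MTU of 1450 the largest ICMP payload that fits without fragmentation is 1450 - 20 (IP header) - 8 (ICMP header) = 1422 bytes:

ping -M do -s 1422 10.10.10.2   # should succeed
ping -M do -s 1423 10.10.10.2   # should fail if 1450 is really the limit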
I manage a 4-node Proxmox cluster.
The nodes are located in two different datacenters and connected through the public network.
Until SDN there was no shared L2 between the nodes' private (aka host-only) networks.
Using SDN (e.g. VXLAN zones) it's possible to distribute interconnected bridges, allowing a bunch...
Thank you very much for your answer.
I fear that MACsec is not an option since it is a layer 2 protocol.
My 2 boxes sit in different datacenters.
I'll try the IPsec way.
If I find an elegant solution I could post a small guide.
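Probably something along these lines, i.e. strongSwan in transport mode covering only the VXLAN traffic (a rough, untested sketch; addresses and the PSK are placeholders):

# /etc/ipsec.conf
conn vxlan-crypt
        type=transport
        authby=psk
        auto=route
        left=198.51.100.10
        right=203.0.113.20
        # 4789 = VXLAN UDP port (adjust if your setup uses a different one)
        leftprotoport=udp/4789
        rightprotoport=udp/4789

# /etc/ipsec.secrets
198.51.100.10 203.0.113.20 : PSK "changeme"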
Well, connecting 2 remote Proxmox boxes, for example using VXLAN tunnels, works really fine.
So far so good.
I was wondering: what options are there to add a security/crypto layer?
Obviously I mean without external devices/apps.
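Just to give an idea, the kind of setup I mean is a manual VXLAN interface in /etc/network/interfaces on each box, roughly like this (sketch; IDs and addresses are placeholders):

auto vxlan42
iface vxlan42 inet manual
        vxlan-id 42
        vxlan-local-tunnelip 198.51.100.10
        vxlan-remoteip 203.0.113.20
        mtu 1450

auto vmbr42
iface vmbr42 inet manual
        bridge-ports vxlan42
        bridge-stp off
        bridge-fd 0
        mtu 1450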
pve-manager 6.2-5 (same with 6.2-6) seems to introduce a bug with SDN.
When a zone is created, it is listed under each node's items (without an icon).
Up to 6.2-4, clicking on it showed the vnet list and also allowed setting permissions.
From 6.2-5 on, clicking on it breaks the ExtJS interface.
Is this bug known?
Maybe I found a problem.
After updating ifupdown2, some NICs don't come up at boot time anymore.
This is the setup:
auto lo
iface lo inet loopback

iface eno1 inet manual

iface eno2 inet manual

auto unt
iface unt inet static
        address 192.168.200.232/24
        gateway 192.168.200.254...
Thanks for the answer.
IMHO it's not worth wasting time improving this kind of dynamic switch.
What we have now is more than enough.
Definitely, just add a warning that switching from OVS to a Linux bridge may require a reboot, and we'll be fine.
I'll keep on testing.
Ok, I know that this is not strictly related to the new SDN features, but since ifupdown2 was modified....
First question: is switching from a Linux bridge to an OVS bridge supposed to work without a reboot?
I've made these tests:
1)
FROM: Linux bridge (single NIC) with an assigned IP -> TO: OVS bond...
Negligible.
To be honest, our customers typically run storage-critical loads rather than network-intensive ones.
The most common setup is a couple of 10 GbE NICs in an OVS bond with balance-slb and some VLANs.
Across a fair number of VMs the load balancing is quite good.
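A typical node looks more or less like this (simplified sketch; NIC names are placeholders):

auto bond0
iface bond0 inet manual
        ovs_bridge vmbr0
        ovs_type OVSBond
        ovs_bonds eno1 eno2
        ovs_options bond_mode=balance-slb

auto vmbr0
iface vmbr0 inet manual
        ovs_type OVSBridge
        ovs_ports bond0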
Hi,
in production we use OVS extensively, mainly for the balance-slb feature.
Is it somehow possible to use Linux bridges on a bonded interface and obtain fault tolerance AND load balancing with unmanaged switches (no LACP)?
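Something like this is what I had in mind, roughly (untested sketch; NIC names and addresses are placeholders, and I'm not sure balance-alb behaves well underneath a bridge, which is part of my doubt):

auto bond0
iface bond0 inet manual
        bond-slaves eno1 eno2
        bond-mode balance-alb
        bond-miimon 100

auto vmbr0
iface vmbr0 inet static
        address 192.168.0.10/24
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0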
I know that LACP is definitely a best practice; sadly it's often not possible...
Hi,
first of all: thank you very much for this fantastic job!!
I'll be starting extensive testing in my lab, particularly focused on VXLAN and Open vSwitch.
For now let me report a small typo:
root@munich ~ # apt info libpve-network-perl
...
Description: Proxmox VE storage management library...