As you will see from my post, I am not a network expert at all. I started my Proxmox journey with just one of the built-in NICs (RJ45, 2.5GbE), which was configured automatically during installation on all three nodes.
But since every node also has two 10Gb SFP+ interfaces, I bought a 10Gb switch and a bunch of cables and would like to make the best use of them. The only problem: I have no idea how to configure this. I do at least have an idea of how it could work in principle:
- use link aggregation to get the maximum possible speed
- have a separate bridge with a separate IPv4 network without a gateway (since I only have a gateway on my main network)
- use this new bridge for HA traffic (replication) and VM migration, later hopefully also for Ceph, and perhaps other cluster-internal (management) traffic
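Roughly, this is what I have pieced together from the docs so far for /etc/network/interfaces — just a sketch, I have no idea if it is correct, and the interface names (enp1s0f0/enp1s0f1 for the SFP+ ports, enp2s0 for the 2.5GbE port) and the 10.10.10.0/24 subnet are only placeholders for my setup:

```
# loopback section left as installed

# existing 2.5GbE port and default bridge for VM traffic (unchanged)
auto enp2s0
iface enp2s0 inet manual

auto vmbr0
iface vmbr0 inet static
        address 192.168.1.11/24
        gateway 192.168.1.1
        bridge-ports enp2s0
        bridge-stp off
        bridge-fd 0

# both 10Gb SFP+ ports bonded with LACP (the switch would have to support 802.3ad, I assume)
auto bond0
iface bond0 inet manual
        bond-slaves enp1s0f0 enp1s0f1
        bond-miimon 100
        bond-mode 802.3ad
        bond-xmit-hash-policy layer3+4

# new bridge on top of the bond: IPv4 address but no gateway
auto vmbr1
iface vmbr1 inet static
        address 10.10.10.11/24
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0
```

Does that look like the right direction, or am I completely off?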
At the moment I have 4 NVMe drives in each of my nodes. Two of these are already set up as a ZFS mirror, since I got the advice that this is the most reliable option. But since I have the disks and the network speed, I would like to at least experiment with Ceph. So the intended configuration should support both the existing ZFS HA/replication traffic (moved away from the existing bridge) and future Ceph traffic. The existing 2.5GbE connection would then be used only for the VMs.
Can anyone help me set this up? Is there a sample /etc/network/interfaces file for such a configuration? Are there other config files I will have to deal with?
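One thing I stumbled over while searching: it looks like the migration network can be pinned in /etc/pve/datacenter.cfg with something like

```
migration: secure,network=10.10.10.0/24
```

(using the placeholder subnet from the sketch above). Is that the right place for it, or are there more files involved?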