Proxmox configuration

tlc

New Member
Jan 7, 2026
I have two Dell R750 servers and I have installed Proxmox on them. However, I currently have no license.

I want to configure a Ceph storage cluster and storage pool, 10G network card bonding, and VM-level HA.

How can I do this?
 
Your issue is not the license but your server count: you normally need 3 servers to set up a Ceph cluster. Ceph is installable without a paid license.
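For what it's worth, both the cluster and Ceph can be set up entirely from the free no-subscription repositories. A minimal sketch, assuming a third node gets added, with example names and IPs (the --repository flag exists on newer Proxmox VE releases; on older ones it is just pveceph install):

Code:
# on the first node: create the Proxmox cluster (the name is an example)
pvecm create mycluster

# on each additional node: join the cluster (IP of the first node is an example)
pvecm add 192.168.1.100

# on every node: install Ceph from the no-subscription repository
pveceph install --repository no-subscription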
 
How to do the bonding depends on your switch features, but I also recommend using separate VLANs in this setup: one for the cluster, another for Ceph public, another for Ceph data, and another for the management network (access to the Proxmox GUI).
 
For bonding you need to remove vmbr0, first create a bond, recreate vmbr0 on top of this bond, and then create the VLANs I mentioned before on top of vmbr0. DO NOT APPLY THE CHANGES BEFORE CREATING THE MANAGEMENT (access to the Proxmox GUI) SETTINGS, or you will lose access to manage Proxmox. (I recommend making a copy of the network config file, interfaces, in the /etc/network folder.)
You can do this using a Linux Bridge (the default in Proxmox) or by installing Open vSwitch, which has better performance.
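For example, a quick way to keep that fallback copy and apply the new config afterwards (the .bak name is just an example; ifupdown2 with ifreload is the default on current Proxmox VE):

Code:
# keep a copy of the current network configuration
cp /etc/network/interfaces /etc/network/interfaces.bak

# after editing /etc/network/interfaces, apply the changes
ifreload -a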

The bonding settings depend on how many switches you have and how they are configured. For example, if you are able to set up MLAG (a LAG spanning two switches), you can use LACP with balance-tcp; other modes are available depending on your switch configuration.

If you have only one switch, you can use:
Code:
# LACP (802.3ad) bond over the two 10G ports - adjust the NIC names to your hardware
auto bond0
iface bond0 inet manual
        bond-slaves eno0 eno1
        bond-miimon 100
        bond-mode 802.3ad
        bond-xmit-hash-policy layer2

# VLAN-aware bridge on top of the bond; the untagged address is the management network (Proxmox GUI)
auto vmbr0
iface vmbr0 inet static
        address 192.168.1.100/24
        gateway 192.168.1.1
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094

# VLAN 100, e.g. for the Ceph public network
auto vmbr0.100
iface vmbr0.100 inet static
        address 10.0.100.100/24

# VLAN 200, e.g. for the Ceph cluster (replication) network
auto vmbr0.200
iface vmbr0.200 inet static
        address 10.0.200.100/24
 
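With addresses like these in place, the Ceph side can then be configured from the CLI. A minimal sketch, assuming VLAN 100 is used as the Ceph public network and VLAN 200 as the Ceph cluster network, and that the OSD disk path and pool name are only examples:

Code:
# initialise Ceph with separate public and cluster networks (run once)
pveceph init --network 10.0.100.0/24 --cluster-network 10.0.200.0/24

# on every node: create a monitor, a manager and the OSDs (disk is an example)
pveceph mon create
pveceph mgr create
pveceph osd create /dev/nvme1n1

# create an RBD pool and add it as Proxmox storage in one step
pveceph pool create vmpool --add_storages

Once that pool shows up as shared storage, VM-level HA is then just a matter of running ha-manager add vm:<vmid> --state started for each guest, but as noted above this really needs 3 nodes for quorum.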
If you use Open vSwitch with MLAG, then you can use these settings:

Code:
# LACP bond in balance-tcp mode - needs MLAG/LACP configured on the switch side
auto bond0
iface bond0 inet manual
    ovs_bonds eno1 eno2
    ovs_type OVSBond
    ovs_bridge vmbr0
    ovs_options lacp=active bond_mode=balance-tcp

# OVS bridge carrying the bond and the internal VLAN ports
auto vmbr0
iface vmbr0 inet manual
    ovs_type OVSBridge
    ovs_ports bond0 cluster management cephcluster cephdata

# management network (access to the Proxmox GUI), VLAN 20
auto management
iface management inet static
    address 10.0.20.100/24
    gateway 10.0.20.1
    ovs_type OVSIntPort
    ovs_bridge vmbr0
    ovs_options tag=20

# Proxmox cluster network, VLAN 21
auto cluster
iface cluster inet static
    address 10.0.21.100/28
    ovs_type OVSIntPort
    ovs_bridge vmbr0
    ovs_options tag=21

# Ceph cluster network, VLAN 22
auto cephcluster
iface cephcluster inet static
    address 10.0.22.100/28
    ovs_type OVSIntPort
    ovs_bridge vmbr0
    ovs_options tag=22

# Ceph data (public) network, VLAN 23
auto cephdata
iface cephdata inet static
    address 10.0.23.100/28
    ovs_type OVSIntPort
    ovs_bridge vmbr0
    ovs_options tag=23
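After applying the config with ifreload -a, you can check that the bond actually negotiated LACP with the switch, for example:

Code:
# show bond/LACP status as seen by Open vSwitch
ovs-appctl bond/show bond0

# list the ports attached to the bridge
ovs-vsctl list-ports vmbr0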
 