Is my network setup correct?

Saxophone

New Member
Sep 3, 2024
I am trying to build a Proxmox cluster that uses the built-in Ceph storage to create a pool where I can store all of my files, photos, videos, movies, music and the lot. Basically one giant hard drive that I can connect my home computers to over a 10G network. I would also like to be able to reach the pool remotely to watch movies on a tablet or laptop when away from home. I am new to Proxmox and Ceph and have never worked with VMs before. My guess is that there will be one VM for serving out the Ceph pool and another VM for hosting Plex, but I really don't know what would be considered best practice.
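To make the "serving out" part concrete, here is a rough sketch of what I imagine the file-serving VM doing, assuming the pool ends up exposed as CephFS and the VM gets a NIC on the 10.10.30.x frontend bridge (the monitor address, client name and keyring path are placeholders, not real values from my cluster):

# inside the file-serving VM: install the Ceph client tools
apt install ceph-common

# mount the CephFS pool over the frontend (public) network
# placeholder monitor IP, client name and secret file - adjust to the real cluster
mount -t ceph 10.10.30.35:6789:/ /mnt/media -o name=admin,secretfile=/etc/ceph/admin.secret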

I have 4 Supermicro servers, each with 12 HDDs, a Supermicro motherboard, 128 GB of RAM, and 2 SSDs for the OS.

I am using the 1G motherboard NIC for administering the Proxmox cluster through the web interface on 192.168.1.x. Proxmox calls this NIC eno1, and it is bridged as vmbr0.

I have a 2-port 10G NIC and am using one of its ports to connect to 10.10.30.x through the Proxmox bridge vmbr1; this is my Ceph frontend (public) network, as suggested in the Ceph documentation.

The second port is currently not used.

I have a second 2-port 10G NIC whose ports are bonded in Proxmox as bond0, connecting to 10.10.40.x through bridge vmbr2; this is my Ceph backend (cluster) network, also suggested in the Ceph documentation.
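If I am reading the Ceph docs right, those two subnets should end up in /etc/pve/ceph.conf roughly like this (just the two network lines, not my full config):

[global]
    # frontend / public network (clients and monitors) on vmbr1
    public_network = 10.10.30.0/24
    # backend / cluster network (OSD replication) on vmbr2
    cluster_network = 10.10.40.0/24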

I want to make sure this is correct, because I want 10G networking to my home computer.
 
# network interface settings; autogenerated
# Please do NOT modify this file directly, unless you know what
# you're doing.
#
# If you want to manage parts of the network configuration manually,
# please utilize the 'source' or 'source-directory' directives to do
# so.
# PVE will preserve these directives, but will NOT read its network
# configuration from sourced files, so do not attempt to move any of
# the PVE managed interfaces into external files!

auto lo
iface lo inet loopback

iface eno1 inet manual
#1G for PVE & Ceph management

iface eno2 inet manual
#1g not used

iface enxbe3af2b6059f inet manual
#motherboard ipmi

auto enp65s0f0
iface enp65s0f0 inet manual
#10g for ceph backend

auto enp65s0f1
iface enp65s0f1 inet manual
#10g for ceph backend

auto enp66s0f0
iface enp66s0f0 inet manual
#10g for ceph frontend

iface enp66s0f1 inet manual
#10g not used

auto bond0
iface bond0 inet manual
    bond-slaves enp65s0f0 enp65s0f1
    bond-miimon 100
    bond-mode balance-alb
#ceph backend bond

auto vmbr0
iface vmbr0 inet static
    address 192.168.1.25/24
    gateway 192.168.1.1
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0

auto vmbr1
iface vmbr1 inet static
    address 10.10.30.35/24
    bridge-ports enp66s0f0
    bridge-stp off
    bridge-fd 0

auto vmbr2
iface vmbr2 inet static
    address 10.10.40.45/24
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0

source /etc/network/interfaces.d/*
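Once it is all up, my plan for checking that the links really run at 10G is roughly this, assuming iperf3 is installed on both ends (the interface name and address are just the ones from my config above):

# confirm the frontend port negotiated 10Gb/s
ethtool enp66s0f0 | grep Speed

# throughput test from a home computer to this node's frontend address
# (run "iperf3 -s" on the Proxmox node first)
iperf3 -c 10.10.30.35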
 
