VLAN / Networking

Ceph stores 3 copies of everything and tries to spread them across as many disks and nodes as possible for redundancy. It always tries to keep those 3 copies: if one node or OSD goes down, you lose one copy of everything stored on that node/OSD, so Ceph has to recreate everything lost from the remaining 2 copies on the remaining nodes/OSDs. This is also something you have to plan for: all nodes always have to keep enough free space to compensate when a node fails.
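You can check the replica count and the remaining headroom with the standard Ceph CLI; the pool name "rbd" below is just an example, use your own pool name:

# number of copies Ceph keeps per object in this pool (3 by default)
ceph osd pool get rbd size
# minimum number of copies required for the pool to stay writable
ceph osd pool get rbd min_size
# overall and per-pool usage, to see how much free space is left
ceph df
# per-OSD fill level, to spot OSDs that are already close to full
ceph osd df tree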
 

If I have a VM with ID 100 on node 1, its disk on the Ceph storage will be distributed between the 3 nodes.
If node 1 goes down, the disk is still on nodes 2 & 3, so the VM will migrate itself (if configured to) to node 2 or 3, and it'll be immediate, right?
 
Yes, but then you only have 2 copies of the disk and Ceph will still target 3 copies, so it will try to rebalance and store an additional third copy on nodes 2 and 3.

I'm no Ceph expert, but I think for proper self-healing and rebalancing you need 4+ Ceph nodes. With 3 nodes and 1 failing, it will stay in a degraded state.
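You can see both of these effects with the standard Ceph status commands (run on any node):

# cluster health; after a node failure this shows degraded/undersized placement groups
ceph -s
ceph health detail
# the CRUSH rule in use; with the default "host" failure domain and size=3,
# a 3-node cluster has nowhere to put a third copy after losing a node, so it stays degraded
ceph osd crush rule dump
# how your OSDs are grouped per host
ceph osd tree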
 
Hey again!

Let's say I now have 2 SFP+ 10G NICs on each node.
Should I:
- put one of them (on each node) on VLAN 5 for the Ceph public network and the second one on VLAN 6 for the Ceph private network, and use the 1G NIC for the VMs and corosync?
- put Ceph public and private on the same SFP+ 10G NIC, use the 1G NIC for clustering, and use the second SFP+ 10G NIC for the VMs?
- interconnect the 3 nodes directly with the two SFP+ 10G NICs (full mesh) and use the 1G NIC for the VMs and corosync?
 
You don't need two bridges. A single VLAN-aware bridge is enough, with a VLAN interface (like "vmbr1.10") on top where you set your gateway and IP.
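As a rough sketch in /etc/network/interfaces, that pattern would look something like this (the physical NIC name eno1, the VLAN ID 10 and the addresses are just placeholders):

auto eno1
iface eno1 inet manual

# single VLAN-aware bridge carrying all VLANs
auto vmbr1
iface vmbr1 inet manual
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094

# VLAN interface on top of the bridge; this is where the host gets its IP and gateway
auto vmbr1.10
iface vmbr1.10 inet static
        address 192.168.10.11/24
        gateway 192.168.10.1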

Would it look like this?
And then I attach the vmbr0 bridge to my VMs and tag them in their own VLAN?

With ens34 and ens35 being my SFP+ 10G NICs for the Ceph private and public networks:

[Screenshot of the proposed network configuration]
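The two 10G ports themselves don't need a bridge, since only the host talks on the Ceph networks; a minimal sketch (addresses are placeholders, loosely matching VLANs 5 and 6 from the earlier question) could be:

# Ceph cluster (private/replication) network on one SFP+ NIC
auto ens34
iface ens34 inet static
        address 10.10.6.11/24

# Ceph public network on the other SFP+ NIC
auto ens35
iface ens35 inet static
        address 10.10.5.11/24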
 