Hello,
I was looking into a Proxmox setup with Ceph on my all-NVMe servers. At first I was considering 40GbE, but that wasn't enough bandwidth for the NVMe SSDs.
I used the following documents as guidelines, but wanted to get some feedback on my setup/settings (not implemented yet):
https://forum.proxmox.com/threads/proxmox-ve-ceph-benchmark-2020-09-hyper-converged-with-nvme.76516/
https://pve.proxmox.com/wiki/Full_Mesh_Network_for_Ceph_Server
Here is the config:
OSD nodes (1-5)
ens3f0 & eno3f1 - Bond0 - 192.168.1.101 to 105 (Ceph mesh - 2 x 100GbE) - connected directly to the other nodes
eno1 - 10.123.123.101 to 105 (Proxmox mgmt - 1GbE) - connected to 1GbE switch
ens5f0np0 & ens5f0np0 - Bond1 - 10.123.122.101 to 105 (Proxmox cluster - 2 x 10GbE) - connected to 10GbE switch
ens1f0np0 & ens1f1np1 - Bond2 - 10.123.0.0/24 (VM network - 2 x 10GbE) - connected to 10GbE switch
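For the mesh itself, this is roughly what I plan to put in /etc/network/interfaces on OSD node 1, following the broadcast-bond variant from the full-mesh wiki page linked above (untested, and the interface names may still need adjusting):

```
# Planned mesh bond on OSD node 1 (broadcast bond variant from the wiki)
auto ens3f0
iface ens3f0 inet manual

auto eno3f1
iface eno3f1 inet manual

auto bond0
iface bond0 inet static
    address 192.168.1.101/24
    bond-slaves ens3f0 eno3f1
    bond-miimon 100
    bond-mode broadcast
# the other OSD nodes would use 192.168.1.102-105 on the same subnet
```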
MON nodes (1-3)
eno3 - 10.123.123.201 to 203 (Proxmox mgmt - 1GbE) - connected to 1GbE switch
eno1 & eno2 - Bond0 - 10.123.122.201 to 203 (Proxmox cluster - 2 x 10GbE) - connected to 10GbE switch
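For Ceph itself, my assumption is to keep the replication traffic on the 100GbE mesh and put the public network on the 10GbE subnet, since that is the only subnet the separate MON nodes can reach. Roughly like this in ceph.conf (untested, happy to hear if this split makes sense):

```
# Planned ceph.conf networks (assumption, not implemented yet):
# public network on 10.123.122.0/24 so the dedicated MON nodes can reach it,
# cluster/replication traffic on the 192.168.1.0/24 100GbE mesh (OSD nodes only)
[global]
    public_network = 10.123.122.0/24
    cluster_network = 192.168.1.0/24
```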