Thank you so much. Worked like a charm. Separate switch for the ceph cluster network is definitely the way to go. Combining the proxmox and ceph public networks means I can keep everything on 10gbe. Perfect.
Background:
4 nodes
3 nodes have 2x10gbe and 2x1gbe
1 node has 1x10gbe and 2x1gbe
1x UniFi 16 XG
NEW: 1x MikroTik 4-port 10gbe switch
I had everything setup and working:
10.0.90.0/28 Proxmox corosync on 1gbe interfaces
10.0.95.0/28 Ceph cluster AND public network on 10gbe interfaces
10.0.50.0/24...
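For anyone landing here with the same layout: the public/cluster split lives in ceph.conf (under Proxmox that's /etc/pve/ceph.conf). A minimal sketch using the subnets above — the 10.0.96.0/28 cluster subnet is an assumption for the new MikroTik switch, adjust to taste:

```ini
[global]
    # client + Proxmox traffic on the 10gbe UniFi switch
    public_network  = 10.0.95.0/28
    # OSD replication/heartbeat traffic on the new MikroTik switch
    # (10.0.96.0/28 is illustrative -- pick whatever subnet you cable up)
    cluster_network = 10.0.96.0/28
```

OSDs pick up the split on restart; corosync stays on its own 1gbe subnet as above.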
Right now I have a three node cluster. Two of the nodes have ECC memory and the other one is a NUC.
Is there any way to preference writes such that all incoming data hits one of the 2 ECC nodes before being replicated to the NUC node?
I’ve used primary-affinity on the OSDs attached to the NUC...
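In case it helps anyone: primary affinity is set per OSD, and since clients talk to the primary first (which then replicates to the rest), zeroing it on the NUC's OSDs keeps the ECC nodes in front. A sketch — the osd.6/osd.7 IDs are illustrative, and the mon flag is only needed on older releases:

```shell
# Older Ceph releases need this flag before non-default
# primary-affinity values are accepted (harmless otherwise)
ceph config set mon mon_osd_allow_primary_affinity true

# Make the NUC's OSDs ineligible to be primary, so client I/O
# lands on an ECC node first and replicates outward from there
ceph osd primary-affinity osd.6 0
ceph osd primary-affinity osd.7 0

# Verify via the PRI-AFF column
ceph osd tree
```

Note this steers which OSD acts as primary, not where replicas live — with size=3 the NUC still holds a full copy.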
Sorted.
Didn’t actually have to install the driver - it’s already part of the kernel.
All I needed to do was authorise the thunderbolt device and it shows up.
If anyone needs the instructions I will post them here.
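Short version, since a couple of people asked — it's just sysfs (device path below is illustrative, yours will differ):

```shell
# See what the kernel's Thunderbolt bus has enumerated
ls /sys/bus/thunderbolt/devices/

# 0 = not authorised; write 1 to authorise the device
cat /sys/bus/thunderbolt/devices/0-1/authorized
echo 1 > /sys/bus/thunderbolt/devices/0-1/authorized

# The adapter's NIC should now appear on the PCI bus
lspci | grep -i ethernet
```

If you have the bolt daemon installed, `boltctl list` / `boltctl authorize <uuid>` does the same thing and can remember the device across reboots.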
Hi,
I have a NUC with a Thunderbolt 3 port to which I have connected a QNA-T310G1S. It's not showing up in lspci.
00:00.0 Host bridge: Intel Corporation Xeon E3-1200 v6/7th Gen Core Processor Host Bridge/DRAM Registers (rev 03)
00:02.0 VGA compatible controller: Intel Corporation Iris Plus...
Hi,
I’m in the planning stages of setting up a three node cluster. 2 of the nodes will have dual 10gbe and dual 1gbe NICs, and the other has a single 10gbe Nic and a dual 1gbe Nic.
Reading through the documentation and doing searches it seems that Proxmox wants 2 NICs - 1 for Corosync and one...
Alternatively, I could just leave the one drive in the NUC connected to SATA. But that would leave an asymmetric OSD setup with 2 nodes having 3-4 OSDs per node and the NUC only having a single OSD.
I think I’ll just have to bite the bullet. Trial it all and see if it bites me in the b**t.
Yeah, the USB-C enclosure isn’t an elegant solution, but a stopgap until I can get another node up. It’s a 4-disk enclosure that appears as single disks. Bandwidth is limited to 5Gbps but I’m only running 3x 3TB disks. I’m not expecting performance, but do want reliability. I’ve already got a...
So I’m trying to cobble together a homelab cluster. It's a mixed bag with an E3-1230 v3 16GB ECC node, an Intel NUC 32GB non-ECC (USB 3 HDD external enclosure) and soon an AMD 3700X with 32GB ECC RAM.
How does Ceph play with such an asymmetric setup? Each node will have 3x3TB hdd and 10gbe (the...
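To partly answer my own question while I wait: CRUSH weights default to disk size, so three 3TB OSDs per node should balance naturally, and with size=3 and host as the failure domain each node gets one replica regardless. Worth sanity-checking, e.g. (the "nuc" bucket name and weight are illustrative):

```shell
# Per-OSD and per-host weights and utilisation in one view
ceph osd df tree

# If the slower node should carry less data, reweight its whole
# host bucket (only matters once there are >3 hosts, since with
# 3 hosts and size=3 every host holds a replica anyway)
ceph osd crush reweight-subtree nuc 0.5
```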