add "mtu 9000" to the iface definitions of the interfaces:
iface enp4s0 inet manual
mtu 9000
iface enp6s0 inet manual
mtu 9000
iface enx8cae4cee5150 inet manual
mtu 9000
You also have to change the MTU inside the VMs and on the local PCs, and _all_ your switches have to support it.
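You can verify that jumbo frames really pass end-to-end with a ping that forbids fragmentation (a quick check, assuming Linux with the iputils ping; 8972 = 9000 minus 28 bytes of IP + ICMP headers, and the target IP is just an example):

# fails with "message too long" if any hop has MTU < 9000
ping -M do -s 8972 192.168.1.20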
I...
Ceph is very dependent on latency. It works very well with at least a 10 GbE network and fast SSD OSDs, or HDD OSDs backed by SSD journals, even in modest-size clusters.
Have a look at the BIOS settings for USB; maybe some change is possible there, as it tries to boot the USB stick as a CD.
Also, maybe you have a real CD drive to test with, or a USB CD drive?
You can also try this method: https://forum.proxmox.com/threads/proxmox-installation-via-pxe-solution.8484/
If capacity rather than speed is the problem, you can also use SATA enterprise disks; we use many in 2 TByte+ sizes.
SAS is a must in an HA filer configuration, where you want to fail over disk pools between two heads.
SAS is also faster for high-speed SSDs, as there is 12 GBit/s (and 24 GBit/s coming...
There is no advantage to Raspberry Pis in your configuration.
No, you will not need to restore; just get one of the failed nodes up again. If your hardware is so flaky that you fear that many failures, the hardware should go to the trash bin before you ever put a system on it.
With...
This is completely unnecessary, and it will not help with Ceph anyway.
If you have 3 nodes running Ceph, you will need 2 of them up anyway, as the usual rule for pools is size 3 / min_size 2.
That means at least two copies of an object have to be available, and they need to be on different hosts.
And be...
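If you want to check or set these values on a pool, the standard Ceph commands look like this (a sketch; the pool name "rbd" is just an example):

ceph osd pool get rbd size        # number of copies, usually 3
ceph osd pool get rbd min_size    # minimum copies needed to keep serving I/O, usually 2
ceph osd pool set rbd size 3
ceph osd pool set rbd min_size 2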
Solution 3: you should _never_ run a virtualisation host with DHCP; always use a fixed IP where possible.
BTW, it is not necessary to give the virtualisation host an IP in a network at all, as long as there is no special reachability requirement. Only the VMs need access to the LAN.
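A minimal static setup in /etc/network/interfaces could look like this (a sketch; addresses, interface and bridge names are just examples):

auto vmbr0
iface vmbr0 inet static
    address 192.168.1.10
    netmask 255.255.255.0
    gateway 192.168.1.1
    bridge-ports enp4s0
    bridge-stp off
    bridge-fd 0

The VMs attach to vmbr0 and get their own addresses in the LAN.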
1 node can fail.
An additional witness box does not help, as you need n/2 + 1 nodes for quorum to avoid split-brain situations (which would be really bad).
So a 4-node cluster will need 3 nodes alive, and the witness box does not help in this case.
The witness box is a good idea to run a 2-node...
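For a 2-node Proxmox cluster, such a witness can be added as a corosync QDevice (a sketch, assuming a current Proxmox version with the corosync-qdevice package installed; the IP is just an example):

pvecm qdevice setup 192.168.1.5

That gives the 2-node cluster a third vote without running a full third node.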
Ahh, sorry, that was a misreading on my part; of course the first disk should be OS only, so:
2 x 2TB SATA + journal SSD, or the alternative with 3 disks and the PCIe NVMe adapter (Delock has cheap NVMe adapters).
I would not recommend installing the OS on a USB stick; it would not live long enough...
Instead of the disk configuration you mentioned:
3 x 2TB SATA + 1 journal SSD, or alternatively: 4 x 2TB SATA + 1 M.2 NVMe SSD (on a cheap PCIe M.2 adapter).
For the network: 1 GbE is a little bit slow for Ceph; better to use 10 GbE.
Maybe the following: 2 x 1 GbE ports with LACP for VM...
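For the LACP part, a bond in /etc/network/interfaces could look like this (a sketch; interface names and the hash policy are just examples, and the switch ports have to be configured for 802.3ad as well):

auto bond0
iface bond0 inet manual
    bond-slaves enp4s0 enp6s0
    bond-miimon 100
    bond-mode 802.3ad
    bond-xmit-hash-policy layer2+3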
No real idea, but some thoughts:
-> the HP P800 is an oldish 3 GBit SAS controller -> maybe not very well suited for SSDs
-> a RAID 0 should be fast, but do you really want to live with the risk?
If one of your SSDs goes bad, you will lose everything.
-> Better try to get a cheap SAS HBA (maybe...
You probably have no journal / block.db on SSD for your OSDs?
It would greatly help to add SSDs for a journal (aka block.db with BlueStore). You can combine all journals for one host on one SSD.
E.g. I use a Samsung 960 EVO NVMe SSD (256 GBytes) with a cheap PCIe -> NVMe adapter board with a...
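Creating an OSD with its block.db on the NVMe device can be done with ceph-volume (a sketch; the device names are just examples, and the DB partition has to exist already):

ceph-volume lvm create --data /dev/sdb --block.db /dev/nvme0n1p1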
The combination of IP + port number is the address of a single service!
See it like house number + flat number (or do you want people coming into your flat erratically?).
Of course it is possible to do some load balancing for some services (probably in one VM), but all VMs behind the balancer...
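As an illustration, a fragment of an haproxy.cfg where one IP + port fronts several VMs (a sketch; names and addresses are just examples):

frontend web_in
    bind 192.168.1.10:80
    default_backend web_vms

backend web_vms
    balance roundrobin
    server vm1 192.168.1.21:80 check
    server vm2 192.168.1.22:80 check

Each backend VM still has its own IP; only the balancer's IP + port is published.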