If you have the time and storage to do full backups, do a clean install. Fewer issues this way. You can back up the VMs to Proxmox Backup Server.
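For reference, a backup to PBS from the CLI looks roughly like this; the VMID (100) and the storage name ("pbs") are placeholders for whatever you've defined:

  # back up VM 100 to a PBS-backed storage called "pbs"
  vzdump 100 --storage pbs --mode snapshot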
Proxmox Ceph allows you to update one node at a time. Just follow those instructions carefully.
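Roughly, the per-node sequence looks like this (a sketch, not the official docs; check the upgrade guide for your versions before running anything):

  # stop Ceph from rebalancing while the node is down
  ceph osd set noout
  # update and reboot that node
  apt update && apt full-upgrade
  reboot
  # once the node and its OSDs are back up, allow rebalancing again
  ceph osd unset noout
  # confirm HEALTH_OK before moving to the next node
  ceph -s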
It's considered best practice to put Corosync, Ceph public, and Ceph private traffic on separate networking infrastructure.
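In ceph.conf that separation ends up looking something like this (the subnets here are made-up examples):

  # /etc/pve/ceph.conf
  [global]
      public_network  = 10.10.20.0/24   # client-facing Ceph traffic
      cluster_network = 10.10.10.0/24   # OSD replication traffic

Corosync then gets its own link(s) in corosync.conf, ideally on yet another subnet.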
I didn't set up my 5-node cluster this way but have a primary 10GbE link in a fault-tolerant (active-backup) configuration. There's a second NIC on standby to step in if the primary fails. It's...
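For reference, an active-backup bond in /etc/network/interfaces looks roughly like this (interface names and addresses are placeholders):

  auto bond0
  iface bond0 inet static
      address 10.10.20.11/24
      bond-slaves eno1 eno2      # primary + standby NIC
      bond-mode active-backup    # only one link carries traffic at a time
      bond-primary eno1
      bond-miimon 100            # link monitoring interval in ms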
If you don't really want to deal with a switch, I suggest a 4 x 10GbE network card.
I run a 3-node Ceph cluster on 12-year-old servers using a full-mesh 4 x 1GbE broadcast network https://pve.proxmox.com/wiki/Full_Mesh_Network_for_Ceph_Server#Broadcast_Setup
I use the last two ports on the...
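The broadcast setup from that wiki page boils down to a bond in broadcast mode over the mesh-facing ports; a rough sketch (port names and the address are placeholders, each node gets its own IP in the mesh subnet):

  auto bond0
  iface bond0 inet static
      address 10.15.15.1/24
      bond-slaves eno3 eno4    # the two ports cabled directly to the other nodes
      bond-mode broadcast      # every frame goes out both links
      bond-miimon 100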
I run a 3-node and a 5-node all-SAS-HDD Ceph cluster. The 3-node is on 12-year-old servers using a full-mesh 4 x 1GbE broadcast network. The 5-node Ceph cluster is on Dell 12th-gen servers using 2 x 10GbE networking to ToR switches.
Not considered best practice but the Corosync, Ceph Public &...
Do you have a Linux SME (subject matter expert) on call? If not, may want to contract for 3rd-party Proxmox support or stick with what you know, which is Windows.
If you decide to continue with Proxmox, I highly recommend homogeneous hardware, meaning the same CPU, RAM, and storage across nodes.
As for the...
It's considered best practice to separate storage into OS and data (VMs, etc) filesystems. I don't use SSDs but if I did I would make sure they are enterprise quality for the intensive writes. In my case, the clusters are enterprise servers using SAS HDDs.
Also considered best practice to...
I tried installing PVE using UEFI once. I think I had the same issues you had, so I just set it up to boot in BIOS mode.
Also, I'm using an H310 flashed to IT mode using https://fohdeesha.com/docs/perc.html
Best practice for Ceph is lots of nodes with fewer OSDs versus fewer nodes with more OSDs.
This is to spread out the I/O load.
In order to avoid split-brain issues and have quorum, you'll want an odd number of nodes.
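For example, with 3 nodes one can fail and the remaining two still form a majority, whereas 2 nodes can deadlock at 1-vs-1. You can check quorum state at any time with:

  # shows expected votes, total votes, and whether the cluster is quorate
  pvecm status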
Best bang for the currency will be used enterprise servers, specifically 13th-gen Dells. They have a built-in drive controller which can be used in either IR or IT mode (you need IT mode for ZFS/Ceph) and a built-in rNDC (rack network daughter card) upgradable to 10GbE networking (fiber or copper or both)...
I use the following optimizations in a 5-node 12th-gen Dell cluster using SAS drives (example commands follow the list):
Set write cache enable (WCE) to 1 on SAS drives
Set VM cache to none
Set VM to use VirtIO-single SCSI controller and enable IO thread and discard option
Set VM CPU type to 'host'
Set VM CPU...
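As mentioned above, here's roughly how those settings translate into commands; the device path, VMID (100), and storage name are placeholders for your own setup:

  # enable the on-disk write cache on a SAS drive (--save also writes the saved mode page)
  sdparm --set=WCE --save /dev/sdX
  # VirtIO SCSI single controller and host CPU type
  qm set 100 --scsihw virtio-scsi-single --cpu host
  # disk with IO thread and discard enabled, cache set to none
  qm set 100 --scsi0 <storage>:vm-100-disk-0,iothread=1,discard=on,cache=none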
Granted, flash is the way to go, but I do back up two Ceph clusters with a Dell R200 and a Dell R620 using SAS drives.
These Dells are decommissioned and still functional so they make great PBS servers.
Not the fastest but it does backup/restore just fine.
PBS benchmarks...
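If you want to compare numbers yourself, PBS ships a built-in benchmark (the repository string here is a placeholder for your user, host, and datastore):

  # measures TLS throughput, chunk hashing, compression speed, etc.
  proxmox-backup-client benchmark --repository root@pam@<pbs-host>:<datastore>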
It's true that you need a minimum of 3 nodes for Ceph, but it's highly recommended to get more nodes.
With that being said, I do run a 3-node full-mesh broadcast bonded 1GbE Ceph Quincy cluster on 14-year-old servers using 8 x SAS drives per node (2 of them are used for OS boot drives using ZFS...
May want to read this https://forum.proxmox.com/threads/2-node-cluster-w-shared-disc.109269
If you really want 2-node "shared" storage, you can use ZFS replication and a 3rd non-cluster RPi/VM/PC as a QDevice for quorum.
I run a full-mesh broadcast bonded 1GbE 3-node Ceph cluster on 14-year...
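Setting up the QDevice mentioned above is fairly quick; a rough sketch (the IP is a placeholder for your external RPi/VM/PC):

  # on the external host that will provide the third vote
  apt install corosync-qnetd
  # on each of the two cluster nodes
  apt install corosync-qdevice
  # then, from one cluster node, point the cluster at the QDevice host
  pvecm qdevice setup <qdevice-ip>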
Just created a Proxmox Offline Mirror instance.
I've noticed the setup wizard for creating a Ceph mirror does not include an option to mirror the Quincy release of Ceph.
How can I manually create it?
May want to use that SSD for the Ceph DB as well. Will help with writes to the SAS drives.
The SSD is enterprise class, correct? Ceph eats consumer SSDs like nobody's business.
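If you go that route, the OSD gets created with its DB on the SSD; roughly like this (device paths are placeholders, and double-check the option names against man pveceph for your version):

  # create an OSD on a SAS drive with its RocksDB/WAL on the SSD
  pveceph osd create /dev/sdX --db_dev /dev/sdY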
Here are the Ceph VM optimizations I use:
Set write cache enable (WCE) to 1 on SAS drives
Set VM cache to none...