I do know that flashing a Dell HBA to IT mode changes the order of the disks, per https://forums.servethehome.com/index.php?threads/guide-flashing-h310-h710-h810-mini-full-size-to-it-mode.27459/page-2#post-255082 and...
This is what I use to increase IOPS on a Ceph cluster using SAS drives, YMMV:
Set write cache enable (WCE) to 1 on SAS drives
Set VM cache to none
Set VM CPU type to 'host'
Set RBD pool to use the 'krbd' option
Use the VirtIO SCSI single controller and enable the IO thread and discard options (example qm command below)
On Linux...
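The VM-side settings can also be applied from the CLI with qm. A rough sketch, assuming a VM ID of 100 and an RBD storage named ceph-vm (both made up, adjust to your setup):

# Hypothetical VM ID and storage/volume names
qm set 100 --cpu host --scsihw virtio-scsi-single
qm set 100 --scsi0 ceph-vm:vm-100-disk-0,cache=none,discard=on,iothread=1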
In a split-brain situation, each node only has its own vote, so neither side can reach a majority and you get a deadlock. In a 2-node cluster, a QDevice provides the extra vote for one of the nodes, breaking the tie.
To avoid split-brain issues in the future, the number of nodes needs to be odd.
You can always set up a quorum device on an RPi or a VM on a non-cluster host: https://pve.proxmox.com/wiki/Cluster_Manager#_corosync_external_vote_support
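The rough steps from that wiki page, assuming the QDevice host is reachable at 192.168.1.50 (a made-up address):

# On the external QDevice host (RPi or VM):
apt install corosync-qnetd
# On every cluster node:
apt install corosync-qdevice
# Then, from any one cluster node:
pvecm qdevice setup 192.168.1.50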
May want to look at various Ceph cluster benchmark papers online like this one https://forum.proxmox.com/threads/proxmox-ve-ceph-benchmark-2020-09-hyper-converged-with-nvme.76516/
Will give you an idea on design.
Another option is a full-mesh Ceph cluster https://pve.proxmox.com/wiki/Full_Mesh_Network_for_Ceph_Server
It's what I use on 13-year-old servers. I bonded the 1GbE NICs and used broadcast mode. Works surprisingly well.
I used an IPv4 link-local address of 169.254.x.x/24 for both Corosync and Ceph...
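For reference, the bonded broadcast link looks roughly like this in /etc/network/interfaces (a sketch; the NIC names and the 169.254.1.11 address are placeholders):

auto bond0
iface bond0 inet static
    address 169.254.1.11/24
    bond-slaves eno1 eno2
    bond-miimon 100
    bond-mode broadcast
# Corosync and the Ceph public/cluster networks then point at 169.254.1.0/24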
Updated a Ceph cluster to PVE 7.2 without any issues.
I've just noticed I'm using the wrong network/subnet for the Ceph public, private and Corosync networks.
It seems my searching skills are failing me on how to re-IP Ceph & Corosync networks.
Any URLs to research this issue?
Thanks for...
I suggest setting the processor type to "host" and the hard disk cache to "none". I also use the SCSI single controller with discard and IO thread set to "on". Also set the Linux IO scheduler to "none/noop".
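To check and change the scheduler inside the guest, something like this as root (assumes the virtual disk shows up as /dev/sda; older kernels call it noop instead of none):

cat /sys/block/sda/queue/scheduler          # current scheduler is shown in [brackets]
echo none > /sys/block/sda/queue/scheduler  # takes effect immediately, not persistent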
Yeah, dracut is like "sysprep" for Linux.
Good deal on figuring out how to import the virtual disks.
Since all my Linux VMs are BIOS-based, I don't use UEFI. I guess Proxmox enables Secure Boot when using UEFI.
Linux is kinda indifferent to base hardware changes as long as you run "dracut -fv --regenerate-all --no-hostonly" prior to migrating to a new virtualization platform.
If choosing UEFI for the firmware, then I think you need a GPT disk layout on the VM being migrated. If using BIOS as the...
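A quick way to check which layout the source VM has before picking the firmware type (assuming the boot disk is /dev/sda):

parted /dev/sda print | grep 'Partition Table'   # prints gpt or msdos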
Since it seems you are going with Ceph, I suggest the following optimizations to get better IOPS:
1. Set VM cache to none
2. VirtIO SCSI Single controller with discard and IO thread enabled
3. On Linux VMs, set the IO scheduler to none or noop (persistent udev rule example below)
4. Turn on write-cache enable (WCE) on SAS drives...
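For #3, a udev rule makes the scheduler setting persistent across reboots. A sketch, using a made-up file name:

# /etc/udev/rules.d/60-io-scheduler.rules
ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/scheduler}="none"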
May want to change the VM disk cache to none. I got a significant increase in IOPS compared to writeback.
I also have WCE (write-cache enabled) on the SAS drives. Set it with "sdparm --set WCE --save /dev/sd[x]"
Don't know the answer to your question but I thought you needed an odd number of nodes for quorum?
For example, I had a 4-node Ceph cluster but used a QDevice for quorum: https://pve.proxmox.com/wiki/Cluster_Manager#_corosync_external_vote_support
It's considered best practice to have 2 physically separate cluster (Corosync) links, obviously connected to 2 different switches. Corosync wants low latency, not bandwidth, so 2 x 1GbE is fine.
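Both links can be defined when the cluster is created, roughly like this (made-up addresses on two separate subnets):

pvecm create mycluster --link0 10.10.10.11 --link1 10.10.20.11
# and when joining another node:
pvecm add 10.10.10.11 --link0 10.10.10.12 --link1 10.10.20.12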
If you have SAS drives, you can run the following command as root: sdparm --set=WCE --save /dev/sd[X].
To confirm write cache is enabled, run as root: sdparm --get=WCE /dev/sd[X]
1 = on
0 = off
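If you have several SAS drives, a quick loop saves some typing (a sketch; adjust the device range to your system):

for d in /dev/sd[a-f]; do
    sdparm --set=WCE --save "$d"
    sdparm --get=WCE "$d"
done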
I currently have 2 Proxmox Ceph clusters.
One is 3 x 1U 8-bay SAS servers using a full-mesh network (2 x 1GbE bonded). 2 of the drive bays are ZFS mirrored for Proxmox itself and the rest of the drive bays are OSDs (18 total). Works very well for 12-year-old hardware. This is a stage cluster...
I have the following network setup:
192.168.1.0/24 VLAN 10
pve1.host.local 192.168.1.11/24
pve2.host.local 192.168.1.12/24
pve3.host.local 192.168.1.13/24
192.168.2.0/24 VLAN 20
pbs.guest.local 192.168.2.254/24
Each VLAN is protected by a firewall.
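For reference, the matching /etc/hosts entries on each node would look something like this (the short aliases are my own addition):

192.168.1.11  pve1.host.local pve1
192.168.1.12  pve2.host.local pve2
192.168.1.13  pve3.host.local pve3
192.168.2.254 pbs.guest.local pbs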
Per...
If you are open to used servers, head on over to labgopher.com
Best bang for the buck are the Dell 12th-generation servers, e.g., R620/R720.
However, I run Proxmox Ceph on 10-year-old server hardware. Works very well.