To avoid split-brain issues in the future, the number of nodes needs to be odd.
You can always set up a quorum device on an RPi or a VM on a non-cluster host: https://pve.proxmox.com/wiki/Cluster_Manager#_corosync_external_vote_support
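The arithmetic behind the odd-node advice is quick to sketch: quorum is floor(n/2) + 1 votes, so an even-sized cluster tolerates no more failures than the odd cluster one node smaller.

```shell
# Quorum math: a cluster of n votes needs floor(n/2) + 1 to stay quorate.
for n in 2 3 4 5 6 7; do
  quorum=$(( n / 2 + 1 ))
  echo "nodes=$n quorum=$quorum tolerated_failures=$(( n - quorum ))"
done
# e.g. nodes=4 quorum=3 tolerated_failures=1  (same as a 3-node cluster)
```

That's why 4 nodes alone buys you nothing over 3 — the QDevice supplies the tie-breaking vote.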
You may want to look at the various Ceph cluster benchmark papers online, like this one: https://forum.proxmox.com/threads/proxmox-ve-ceph-benchmark-2020-09-hyper-converged-with-nvme.76516/
It will give you an idea of the design.
Another option is a full-mesh Ceph cluster https://pve.proxmox.com/wiki/Full_Mesh_Network_for_Ceph_Server
It's what I use on 13-year-old servers. I bonded the 1GbE NICs and used broadcast mode. Works surprisingly well.
I used an IPv4 link-local address of 169.254.x.x/24 for both Corosync, Ceph...
Updated a Ceph cluster to PVE 7.2 without any issues.
I've just noticed I'm using the wrong network/subnet for the Ceph public, private and Corosync networks.
It seems my searching skills are failing me on how to re-IP Ceph & Corosync networks.
Any URLs to research this issue?
Yeah, dracut is like "sysprep" for Linux.
Good deal on figuring out how to import the virtual disks.
Since all my Linux VMs are BIOS-based, I don't use UEFI. I guess Proxmox enables Secure Boot when using UEFI.
Linux is kinda indifferent to base hardware changes as long as you run "dracut -fv --regenerate-all --no-hostonly" prior to migrating to a new virtualization platform.
If choosing UEFI for the firmware, then I think you need a GPT disk layout on the VM being migrated. If using BIOS as the...
Since it seems you are going with Ceph, I suggest the following optimizations to get better IOPS:
1. Set VM cache to none
2. VirtIO SCSI Single controller with discard and IO thread enabled
3. On Linux VMs, set the IO scheduler to none or noop
4. Turn on write-cache enable (WCE) on SAS drives...
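For #3, a sketch of how I'd make the scheduler setting survive reboots — a udev rule inside the guest (the rule filename and the sd* match are assumptions; adjust for your device names):

```
# /etc/udev/rules.d/60-io-scheduler.rules  (filename is an assumption)
# Persist the "none" IO scheduler for sd* disks on virtio-scsi.
# Kernels without blk-mq expose "noop" instead, so use that if "none" is rejected.
ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/scheduler}="none"
```

After dropping the file in place, "udevadm control --reload" and "udevadm trigger" apply it without a reboot.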
You may want to change the VM disk cache to none. I got a significant increase in IOPS after switching from writeback.
I also have WCE (write-cache enabled) on the SAS drives. Set it with "sdparm --set WCE --save /dev/sd[x]"
Don't know the answer to your question but I thought you needed an odd number of nodes for quorum?
For example, I have a 4-node Ceph cluster, but I use a QDevice for quorum: https://pve.proxmox.com/wiki/Cluster_Manager#_corosync_external_vote_support
I currently have 2 Proxmox Ceph clusters.
One is 3 x 1U 8-bay SAS servers using a full-mesh network (2 x 1GbE bonded). 2 of the drive bays are ZFS-mirrored for Proxmox itself, and the rest of the drive bays are OSDs (18 total). Works very well for 12-year-old hardware. This is a stage cluster...
I have the following network setup:
192.168.1.0/24 VLAN 10
192.168.2.0/24 VLAN 20
Each VLAN is protected by a firewall.
If you are open to used servers, head on over to labgopher.com
The best bang for the buck is the Dell 12th-generation servers, i.e., the R620/R720.
However, I run Proxmox Ceph on 10-year-old server hardware. Works very well.
According to this post, https://forum.proxmox.com/threads/virtio-scsi-vs-virtio-scsi-single.28426, VirtIO uses a single controller per disk, just like VirtIO SCSI single.
As to which one is "faster", no idea.
This is fixed.
There were several issues.
First was updating Ansible to the latest version to use the Proxmoxer pip module, which supports the Proxmox 6.x APIs for creating VMs.
Second was that the behavior of "connection: local" being used playbook-wide has changed since Ansible 2.8.5. I had to add...
I forgot the command-line kung-fu to tell urllib3 to ignore the self-signed certs on the Proxmox hosts. I have the same error as this post https://learn.redhat.com/t5/Automation-Management-Ansible/Ansible-proxmox-kvm-module/td-p/3935
Anyone remember the command-line or environment variable to...
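In case it jogs anyone's memory, two approaches I've seen — both are assumptions to verify against your Ansible/Proxmoxer versions. Python's PYTHONWARNINGS environment variable can silence urllib3's warning playbook-wide:

```shell
# Quiet urllib3's "Unverified HTTPS request" warning for the whole playbook run.
export PYTHONWARNINGS="ignore:Unverified HTTPS request"
echo "$PYTHONWARNINGS"   # prints: ignore:Unverified HTTPS request
```

Per-task, setting `validate_certs: no` on the proxmox_kvm module (if your module version supports that option) is the tidier fix.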