Search results

  1. 4 Node Cluster fail after 2 node are offline

    To avoid split-brain issues in the future, the number of nodes needs to be odd. You can always set up a quorum device on an RPi or a VM on a non-cluster host
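On Proxmox VE, adding such an external quorum device comes down to a couple of commands; a sketch, where the QNetd host address is a placeholder:

```shell
# On the external host (RPi or VM outside the cluster) that will vote:
apt install corosync-qnetd

# On the Proxmox nodes (the qdevice package must be present on all of them):
apt install corosync-qdevice

# From any one cluster node, point the cluster at the QNetd host:
pvecm qdevice setup 192.168.1.50   # example address of the RPi/VM

# Verify the extra vote is counted:
pvecm status
```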
  2. Design - Network for CEPH

    You may want to look at various Ceph cluster benchmark papers online, like this one. It will give you an idea on design.
  3. Proxmox + Ceph 3 Nodes cluster and network redundancy help

    Another option is a full-mesh Ceph cluster. It's what I use on 13-year-old servers. I did bond the 1GbE and used broadcast mode. Works surprisingly well. I used an IPv4 link-local address of 169.254.x.x/24 for both Corosync and Ceph...
  4. Re-IP Ceph cluster & Corosync

    Updated a Ceph cluster to PVE 7.2 without any issues. I've just noticed I'm using the wrong network/subnet for the Ceph public, private and Corosync networks. It seems my searching skills are failing me on how to re-IP Ceph & Corosync networks. Any URLs to research this issue? Thanks for...
  5. Proxmox Cluster, ceph and VM restart takes a long time

    You may want to search online for blog posts on how other people set up a 2-node Proxmox cluster with a witness device (qdevice).
  6. SSD performance way different in PVE shell vs Debian VM

    I suggest setting the processor type to "host" and hard disk cache to "none". I also use the VirtIO SCSI single controller with discard and iothread set to "on". Also set the Linux IO scheduler to "none/noop".
  7. hardware selection for modern OSes

    Yeah, dracut is like "sysprep" for Linux. Good deal on figuring out how to import the virtual disks. Since all my Linux VMs are BIOS based, I don't use UEFI. Guess Proxmox enables secure boot when using UEFI.
  8. hardware selection for modern OSes

    Linux is kinda indifferent to base hardware changes as long as you run "dracut -fv --regenerate-all --no-hostonly" prior to migrating to a new virtualization platform. If choosing UEFI for the firmware, then I think you need a GPT disk layout on the VM being migrated. If using BIOS as the...
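The dracut invocation quoted above rebuilds a generic (non-host-only) initramfs for every installed kernel, so the guest carries drivers for hardware it may only see after migration; a sketch, run inside the VM before moving it:

```shell
# -f force overwrite, -v verbose; --no-hostonly includes modules
# beyond the current hardware, --regenerate-all covers every kernel
dracut -fv --regenerate-all --no-hostonly

# Sanity check: the initramfs images should have fresh timestamps
ls -l /boot/initramfs-*.img
```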
  9. 3 nodes cluster with local shared storage and HA

    Since it seems you are going with Ceph, I suggest the following optimizations to get better IOPS: 1. Set VM cache to none 2. VirtIO SCSI Single controller with discard and IO thread enabled 3. On Linux VMs, set the IO scheduler to none or noop 4. Turn on write-cache enable (WCE) on SAS drives...
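Points 1–3 above map onto `qm set` options on the Proxmox host plus one command in the guest; a sketch assuming VM ID 100 and a disk on `local-lvm` (both placeholders):

```shell
# 2. VirtIO SCSI single controller
qm set 100 --scsihw virtio-scsi-single

# 1./2. cache=none, plus discard and iothread on the disk
qm set 100 --scsi0 local-lvm:vm-100-disk-0,cache=none,discard=on,iothread=1

# 3. Inside the Linux guest: switch the IO scheduler (sda as an example)
echo none > /sys/block/sda/queue/scheduler
```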
  10. 3 nodes cluster with local shared storage and HA

    This. I run a full-mesh 3-node Ceph cluster on 12-year-old servers. Since it's running Debian, I have zero issues.
  11. Enterprise SSD, Dell R730xd servers, 20Gbps link, still Ceph iops is too low

    You may want to change the VM disk cache to none. I got a significant increase in IOPS compared to writeback. I also have WCE (write cache enable) on the SAS drives. Set it with "sdparm --set WCE --save /dev/sd[x]"
  12. 12 Node CEPH How Many Nodes Can Be Fail ?

    Don't know the answer to your question, but I thought you needed an odd number of votes for quorum? For example, I have a 4-node Ceph cluster, but I use a QDevice for quorum
  13. Proxmox/Ceph hardware and setup

    It's considered best practice to have 2 physically separate cluster (Corosync) links, obviously connected to 2 different switches. Corosync wants low latency, not bandwidth, so 2x1GbE is plenty.
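On Proxmox VE 6 and later, both redundant Corosync links can be declared when the cluster is created (or added later via /etc/pve/corosync.conf); a sketch with example addresses on two separate 1GbE networks:

```shell
# Create the cluster with two Corosync links on separate networks/switches
pvecm create mycluster --link0 10.10.0.1 --link1 10.10.1.1

# Each joining node gives its own addresses on the same two networks
pvecm add 10.10.0.1 --link0 10.10.0.2 --link1 10.10.1.2
```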
  14. Set write_cache for ceph disks persistent

    If you have SAS drives, you can run the following command as root: sdparm --set=WCE --save /dev/sd[X]. To confirm write cache is enabled, run as root: sdparm --get=WCE /dev/sd[X] (1 = on, 0 = off)
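Laid out as commands, with a placeholder device name:

```shell
# Enable the SAS drive's write cache and persist it across power cycles
sdparm --set=WCE --save /dev/sdb    # replace sdb with your device

# Confirm the setting: WCE 1 means write cache on, 0 means off
sdparm --get=WCE /dev/sdb
```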
  15. Advice for new Hyper converged platform

    I currently have 2 Proxmox Ceph clusters. One is 3 x 1U 8-bay SAS servers using a full-mesh network (2 x 1GbE bonded). 2 of the drive bays are ZFS mirrored for Proxmox itself and the rest of the drive bays are OSDs (18 total). Works very well for 12-year-old hardware. This is a stage cluster...
  16. Firewall ports to allow when Proxmox and PBS on different networks?

    I have the following network setup: VLAN 10, VLAN 20, pbs.guest.local. Each VLAN is protected by a firewall. Per...
  17. need hardware recommendation for 3 Node Cluster

    If you are open to used servers, head on over to... Best bang for the buck are the Dell 12th-generation servers, i.e., R620/R720. However, I run Proxmox Ceph on 10-year-old server hardware. Works very well.
  18. VirtIO vs SCSI

    According to this post, VirtIO uses a single controller per disk, just like VirtIO SCSI single. As to which one is "faster", no idea.
  19. [SOLVED] Proxmoxer proxmox_kvm ignore urllib3 self-signed cert error?

    This is fixed. There were several issues. First was updating Ansible to the latest version so the Proxmoxer pip module supports the Proxmox 6.x APIs for creating VMs. Second was that the behavior of "connection: local" being used playbook-wide has changed since Ansible 2.8.5. I had to add...
  20. [SOLVED] Proxmoxer proxmox_kvm ignore urllib3 self-signed cert error?

    I forgot the command-line kung-fu to tell urllib3 to ignore the self-signed certs on the Proxmox hosts. I have the same error as this post. Anyone remember the command-line or environment variable to...
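If the goal is just to silence urllib3's "Unverified HTTPS request" warning for self-signed certs (rather than fixing verification), one approach is the PYTHONWARNINGS environment variable or urllib3's own helper; a sketch, not specific to Proxmoxer:

```shell
# Suppress the warning for any Python process started from this shell
export PYTHONWARNINGS="ignore:Unverified HTTPS request"

# Or disable it from inside Python before making requests:
python3 - <<'EOF'
import urllib3
urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)
EOF
```

With Ansible's proxmox_kvm module, `validate_certs: no` on the task avoids the verification error in the first place.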

