Search results

  1. J

    C6320, LSI2008 HBA and disk order

    I do know that flashing a Dell HBA to IT mode changes the order of the disks, per https://forums.servethehome.com/index.php?threads/guide-flashing-h310-h710-h810-mini-full-size-to-it-mode.27459/page-2#post-255082 and...
  2. J

    How to get better performance in ProxmoxVE + CEPH cluster

    This is what I use to increase IOPS on a Ceph cluster using SAS drives, YMMV (command sketches after these results):
      - Set write cache enable (WCE) to 1 on SAS drives
      - Set VM cache to none
      - Set VM CPU type to 'host'
      - Set RBD pool to use the 'krbd' option
      - Use the VirtIO SCSI single controller and enable the IO thread and discard options
      - On Linux...
  3. J

    4 Node Cluster fail after 2 node are offline

    In split-brain situations, each node only has its own vote, so neither side reaches quorum and you get a deadlock. A QDevice will vote for one of the nodes in a 2-node cluster, breaking the tie.
  4. J

    4 Node Cluster fail after 2 node are offline

    To avoid split-brain issues in the future, the number of nodes needs to be odd. You can always set up a quorum device (QDevice) on an RPi or a VM on a non-cluster host (setup sketched after these results): https://pve.proxmox.com/wiki/Cluster_Manager#_corosync_external_vote_support
  5. J

    Design - Network for CEPH

    You may want to look at the various Ceph cluster benchmark papers online, like this one: https://forum.proxmox.com/threads/proxmox-ve-ceph-benchmark-2020-09-hyper-converged-with-nvme.76516/ It will give you an idea of the design.
  6. J

    Proxmox + Ceph 3 Nodes cluster and network redundancy help

    Another option is a full-mesh Ceph cluster: https://pve.proxmox.com/wiki/Full_Mesh_Network_for_Ceph_Server It's what I use on 13-year-old servers. I bonded the 1GbE NICs and used broadcast mode (interfaces sketch after these results). Works surprisingly well. I used an IPv4 link-local address of 169.254.x.x/24 for both Corosync, Ceph...
  7. J

    Re-IP Ceph cluster & Corosync

    I updated a Ceph cluster to PVE 7.2 without any issues. I've just noticed I'm using the wrong network/subnet for the Ceph public, Ceph private, and Corosync networks. It seems my searching skills are failing me on how to re-IP the Ceph & Corosync networks. Any URLs to research this issue? Thanks for...
  8. J

    Proxmox Cluster, ceph and VM restart takes a long time

    You may want to search online for blog posts on how other people set up a 2-node Proxmox cluster with a witness device (QDevice).
  9. J

    SSD performance way different in PVE shell vs Debian VM

    I suggest setting the processor type to "host" and the hard disk cache to "none". I also use the VirtIO SCSI single controller with discard and IO thread set to "on". Also set the Linux IO scheduler to "none" (or "noop" on older kernels); see the sketch after these results.
  10. J

    hardware selection for modern OSes

    Yeah, dracut is like "sysprep" for Linux. Good deal on figuring out how to import the virtual disks. Since all my Linux VMs are BIOS-based, I don't use UEFI. I guess Proxmox enables Secure Boot when using UEFI.
  11. J

    hardware selection for modern OSes

    Linux is kinda indifferent to base hardware changes as long as you run "dracut -fv --regenerate-all --no-hostonly" prior to migrating to a new virtualization platform. If choosing UEFI for the firmware, then I think you need a GPT disk layout on the VM being migrated. If using BIOS as the...
  12. J

    3 nodes cluster with local shared storage and HA

    Since it seems you are going with Ceph, I suggest the following optimizations to get better IOPS (command sketches after these results):
      1. Set VM cache to none
      2. VirtIO SCSI single controller with discard and IO thread enabled
      3. On Linux VMs, set the IO scheduler to none or noop
      4. Turn on write-cache enable (WCE) on SAS drives...
  13. J

    3 nodes cluster with local shared storage and HA

    This. I run a full-mesh 3-node Ceph cluster on 12-year-old servers. Since it's running Debian, I have zero issues.
  14. J

    Enterprise SSD, Dell R730xd servers, 20Gbps link, still Ceph iops is too low

    You may want to change the VM disk cache to none. I got a significant increase in IOPS after switching away from writeback. I also have WCE (write cache enable) set on the SAS drives. Set it with "sdparm --set=WCE --save /dev/sd[x]"
  15. J

    12 Node CEPH How Many Nodes Can Be Fail ?

    I don't know the answer to your question, but I thought you needed an odd number of nodes for quorum? For example, I have a 4-node Ceph cluster, but I use a QDevice for quorum: https://pve.proxmox.com/wiki/Cluster_Manager#_corosync_external_vote_support
  16. J

    Proxmox/Ceph hardware and setup

    It's considered best practice to have 2 physically separate cluster (Corosync) links, ideally connected to 2 different switches. Corosync wants low latency, not bandwidth, so 2 x 1GbE is fine (see the pvecm sketch after these results).
  17. J

    Set write_cache for ceph disks persistent

    If you have SAS drives, you can run the following command as root: sdparm --set=WCE --save /dev/sd[X]. To confirm write cache is enabled, run as root: sdparm --get=WCE /dev/sd[X] (1 = on, 0 = off). A loop over all drives is sketched after these results.
  18. J

    Advice for new Hyper converged platform

    I currently have 2 Proxmox Ceph clusters. One is 3 x 1U 8-bay SAS servers using a full-mesh network (2 x 1GbE, bonded). 2 of the drive bays are a ZFS mirror for Proxmox itself and the rest of the drive bays are OSDs (18 total). Works very well for 12-year-old hardware. This is a stage cluster...
  19. J

    Firewall ports to allow when Proxmox and PBS on different networks?

    I have the following network setup:
      192.168.1.0/24 (VLAN 10)
        pve1.host.local  192.168.1.11/24
        pve2.host.local  192.168.1.12/24
        pve3.host.local  192.168.1.13/24
      192.168.2.0/24 (VLAN 20)
        pbs.guest.local  192.168.2.254/24
    Each VLAN is protected by a firewall. Per...
  20. J

    need hardware recommendation for 3 Node Cluster

    If you are open to used servers, head on over to labgopher.com. The best bang for the buck are the Dell 12th-generation servers, i.e., R620/R720. However, I run Proxmox Ceph on 10-year-old server hardware. Works very well.
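
Command sketches for the recurring advice above follow. First, the QDevice setup mentioned in several posts, following the Cluster_Manager wiki page linked above; the witness address 192.168.1.5 is only an example.

    # On the external witness host (an RPi or a VM that is not a cluster member):
    apt install corosync-qnetd

    # On every Proxmox cluster node:
    apt install corosync-qdevice

    # From any one cluster node, register the witness (IP is an example):
    pvecm qdevice setup 192.168.1.5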
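
For the full-mesh broadcast bond mentioned above, a sketch of one node's /etc/network/interfaces entry, assuming two spare 1GbE NICs named eno2 and eno3; the 169.254.x.x link-local address follows the scheme in the post, with a unique host part per node.

    auto bond0
    iface bond0 inet static
        # link-local address for Corosync/Ceph traffic on this node (assumption)
        address 169.254.1.11/24
        bond-slaves eno2 eno3
        bond-mode broadcast
        bond-miimon 100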
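
The VM-side tuning (CPU type host, cache none, VirtIO SCSI single with discard and IO thread, krbd on the RBD pool) expressed as qm/pvesm commands; VMID 100, the storage name ceph-rbd, and the disk volume are assumptions to adjust for your setup.

    # CPU type 'host' and VirtIO SCSI single controller
    qm set 100 --cpu host --scsihw virtio-scsi-single

    # Re-attach the disk with cache=none, discard and IO thread enabled
    qm set 100 --scsi0 ceph-rbd:vm-100-disk-0,cache=none,discard=on,iothread=1

    # Use the kernel RBD client (krbd) for the Ceph RBD storage
    pvesm set ceph-rbd --krbd 1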
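
The sdparm commands above, wrapped in a small loop to enable write cache on all drives at once; the /dev/sd? glob is an assumption and will also match SATA/boot disks, so exclude those first.

    for dev in /dev/sd?; do
        sdparm --get=WCE "$dev"          # show current setting (1 = on, 0 = off)
        sdparm --set=WCE --save "$dev"   # enable and persist write cache
    done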
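
Inside the Linux guests, a sketch for setting the IO scheduler to none (noop on older kernels) and persisting it; the device name sda and the udev rule filename are assumptions.

    # Check available schedulers and set 'none' at runtime
    cat /sys/block/sda/queue/scheduler
    echo none > /sys/block/sda/queue/scheduler

    # Persist across reboots via a udev rule
    cat > /etc/udev/rules.d/60-ioscheduler.rules <<'EOF'
    ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/scheduler}="none"
    EOF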
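
Finally, the two separate Corosync links recommended above, sketched with pvecm; the 10.10.10.0/24 and 10.10.20.0/24 subnets stand in for two NICs on two different switches.

    # Create the cluster with two Corosync links on separate subnets/switches
    pvecm create mycluster --link0 10.10.10.11 --link1 10.10.20.11

    # When joining another node, give that node's own addresses on both links
    pvecm add 10.10.10.11 --link0 10.10.10.12 --link1 10.10.20.12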
