Search results

  1. J

    What's Next?

    +1 ESXi already does this and I think XCP-NG.
  2. J

    disk IO slow down proxmox system

    This is what I use to optimize IOPS: set write cache enable (WCE) to 1 on SAS drives; set VM cache to none; set the VM to use the VirtIO SCSI single controller with the IO thread and discard options enabled; set the VM CPU type to 'host'; set VM VirtIO Multiqueue to the number of cores/vCPUs. If using Linux: set Linux...
  3. J

    Why are my VMs so slow?

    This is what I use to optimize IOPS: set write cache enable (WCE) to 1 on SAS drives; set VM cache to none; set the VM to use the VirtIO SCSI single controller with the IO thread and discard options enabled; set the VM CPU type to 'host'; set VM VirtIO Multiqueue to the number of cores/vCPUs. If using Linux: set Linux...
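    The tuning steps in the snippet above can be sketched as commands on a Proxmox host. The VM ID (100), disk device (/dev/sda), storage volume, and MAC address below are all assumptions for illustration; verify your own device and volume names before applying anything:

    ```shell
    # Enable the write cache (WCE) on a SAS drive -- /dev/sda is an assumed device.
    sdparm --set WCE /dev/sda
    sdparm --get WCE /dev/sda      # confirm the mode page now reports WCE = 1

    # VM-level settings for a hypothetical VM with ID 100.
    qm set 100 --cpu host                      # CPU type 'host'
    qm set 100 --scsihw virtio-scsi-single     # VirtIO SCSI single controller
    # cache=none, IO thread, and discard on the disk (volume name is assumed):
    qm set 100 --scsi0 local-lvm:vm-100-disk-0,cache=none,iothread=1,discard=on
    # Multiqueue on the VirtIO NIC, matching a 4-vCPU VM (MAC/bridge assumed):
    qm set 100 --net0 virtio=AA:BB:CC:DD:EE:FF,bridge=vmbr0,queues=4
    ```

    Note that `qm set --scsi0` and `--net0` replace the whole device definition, so carry over your existing volume name, MAC, and bridge rather than copying the placeholders above.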
  4. J

    Will a Dell PowerEdge R720 be up to the task?

    Per https://www.dell.com/support/manuals/en-us/poweredge-r720/720720xdom/system-memory?guid=guid-7550b0f0-b658-4f09-a3f8-668d9ced36ae&lang=en-us, LRDIMM not supported on R720s with 3.5 drives. I do know it works fine with regular RDIMMs.
  5. J

    [SOLVED] 10GbE IOPS different on host vs VM

    Thanks. That fixed the network IOPS issue.
  6. J

    [SOLVED] 10GbE IOPS different on host vs VM

    Yes, I'm using VirtIO for VM networking. What value should I use for the VM's VirtIO Multiqueue? I think I read it should match the number of CPU cores?
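    As a sketch of that advice, assuming a hypothetical VM with ID 100 and 8 vCPUs: the Multiqueue value is set with the `queues=` option on the VirtIO NIC, and the guest can confirm or adjust the queue count with ethtool:

    ```shell
    # On the Proxmox host: queues= should match the VM's vCPU count (8 here).
    # MAC address and bridge are placeholders -- keep your existing values.
    qm set 100 --net0 virtio=AA:BB:CC:DD:EE:FF,bridge=vmbr0,queues=8

    # Inside the Linux guest: inspect and set the number of combined channels.
    ethtool -l eth0                # show current and maximum queue counts
    ethtool -L eth0 combined 8     # use all 8 queues
    ```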
  7. J

    Advice on Ceph for Homelab

    I have a 3-node Ceph cluster running on 13-year-old server hardware. It uses full-mesh broadcast bonded 1GbE networking and works just fine. That's right: the Ceph public, private, and Corosync traffic all run over 1GbE networking just fine. There is even a 45Drives blog on using 1GbE networking in...
  8. J

    [SOLVED] 10GbE IOPS different on host vs VM

    I have a standalone Dell R620 ZFS host with 10GbE networking. I get the full network IOPS when doing a wget of a file on the R620 itself. However, when I run the same wget inside a VM, I do NOT get the same network IOPS; maybe I get about 60-75%. Anything on the host and/or VM settings...
  9. J

    10 Gb card : Broadcom 57810 vs Intel X520

    I use the Dell X540/I350 rNDC (rack network daughter card) in a 5-node Ceph cluster without issues. Since the R320 doesn't have an rNDC slot, you need to get a PCIe card. I do know that the Mellanox ConnectX-3 is well supported. I stick with Intel NICs, except for the X710; stay away from those.
  10. J

    C6320, LSI2008 HBA and disk order

    I do know that when flashing a Dell HBA to IT mode, it changes the order of the disks, per https://forums.servethehome.com/index.php?threads/guide-flashing-h310-h710-h810-mini-full-size-to-it-mode.27459/page-2#post-255082 and...
  11. J

    How to get better performance in ProxmoxVE + CEPH cluster

    This is what I use to increase IOPS on a Ceph cluster using SAS drives, YMMV: set write cache enable (WCE) to 1 on SAS drives; set VM cache to none; set the VM CPU type to 'host'; set the RBD pool to use the 'krbd' option; use the VirtIO SCSI single controller with the IO thread and discard options enabled. On Linux...
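    The Ceph-specific step above is a storage-level flag in Proxmox: `krbd` makes VM disks on that storage use the in-kernel RBD client instead of the userspace librbd path. A minimal sketch, assuming an RBD storage named 'ceph-rbd' (a hypothetical name):

    ```shell
    # Enable the kernel RBD client for the 'ceph-rbd' storage (name assumed).
    pvesm set ceph-rbd --krbd 1

    # Verify the flag landed in the storage configuration.
    grep -A 5 'ceph-rbd' /etc/pve/storage.cfg
    ```

    Running VMs need to be stopped and started (not just migrated) before their disks switch to the krbd path.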
  12. J

    4 Node Cluster fail after 2 node are offline

    In a split-brain situation, each half of the cluster holds the same number of votes, so neither side reaches quorum and you get a deadlock. A QDevice casts a tie-breaking vote for one side in a 2-node cluster, breaking the tie.
  13. J

    4 Node Cluster fail after 2 node are offline

    To avoid split-brain issues in the future, the number of nodes needs to be odd. You can always set up a quorum device on a Raspberry Pi or on a VM on a non-cluster host: https://pve.proxmox.com/wiki/Cluster_Manager#_corosync_external_vote_support
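    Per the linked wiki page, adding such a quorum device can be sketched as follows; the witness address 192.168.1.50 is an assumption for illustration:

    ```shell
    # On the external witness host (Raspberry Pi or VM, NOT a cluster node):
    apt install corosync-qnetd

    # On every cluster node:
    apt install corosync-qdevice

    # On one cluster node, register the witness with the cluster:
    pvecm qdevice setup 192.168.1.50

    # Confirm the extra QDevice vote appears in the quorum information.
    pvecm status
    ```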
  14. J

    Design - Network for CEPH

    You may want to look at the various Ceph cluster benchmark papers online, like this one: https://forum.proxmox.com/threads/proxmox-ve-ceph-benchmark-2020-09-hyper-converged-with-nvme.76516/ It will give you an idea for the design.
  15. J

    Proxmox + Ceph 3 Nodes cluster and network redundancy help

    Another option is a full-mesh Ceph cluster: https://pve.proxmox.com/wiki/Full_Mesh_Network_for_Ceph_Server It's what I use on 13-year-old servers. I bonded the 1GbE NICs and used broadcast mode. Works surprisingly well. I used an IPv4 link-local address of 169.254.x.x/24 for both Corosync, Ceph...
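    A sketch of that bonded broadcast-mode setup as an /etc/network/interfaces fragment (ifupdown2 syntax); the interface names eno1/eno2 and the .1 host address are assumptions, and each node needs a unique 169.254.x.y address on the shared subnet:

    ```text
    # /etc/network/interfaces fragment -- full-mesh broadcast bond over 1GbE
    auto bond0
    iface bond0 inet static
        address 169.254.1.1/24
        bond-slaves eno1 eno2
        bond-mode broadcast
        bond-miimon 100
    ```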
  16. J

    Re-IP Ceph cluster & Corosync

    I updated a Ceph cluster to PVE 7.2 without any issues. I've just noticed I'm using the wrong network/subnet for the Ceph public, private, and Corosync networks. It seems my search skills are failing me on how to re-IP the Ceph & Corosync networks. Any URLs to research this issue? Thanks for...
  17. J

    Proxmox Cluster, ceph and VM restart takes a long time

    You may want to search online for blog posts on how other people set up a 2-node Proxmox cluster with a witness device (QDevice).
  18. J

    SSD performance way different in PVE shell vs Debian VM

    I suggest setting the processor type to "host" and the hard disk cache to "none". I also use the VirtIO SCSI single controller with discard and IO thread set to "on". Also set the Linux IO scheduler to "none/noop".
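    The IO scheduler step above can be sketched inside the guest like this; /dev/sda is an assumed device name, and modern multi-queue kernels expose "none" where older single-queue kernels had "noop":

    ```shell
    # Check which schedulers the disk supports; the active one is in brackets.
    cat /sys/block/sda/queue/scheduler

    # Switch the running system to 'none'.
    echo none > /sys/block/sda/queue/scheduler

    # Persist across reboots with a udev rule (applies to all sd* disks).
    cat > /etc/udev/rules.d/60-ioscheduler.rules <<'EOF'
    ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/scheduler}="none"
    EOF
    ```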
  19. J

    hardware selection for modern OSes

    Yeah, dracut is like "sysprep" for Linux. Good deal on figuring out how to import the virtual disks. Since all my Linux VMs are BIOS-based, I don't use UEFI. I guess Proxmox enables Secure Boot when using UEFI.
  20. J

    hardware selection for modern OSes

    Linux is fairly indifferent to base hardware changes as long as you run "dracut -fv --regenerate-all --no-hostonly" prior to migrating to a new virtualization platform. If choosing UEFI for the firmware, then I think you need a GPT disk layout on the VM being migrated. If using BIOS as the...
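    The pre-migration check can be sketched as commands run inside the VM before moving it; the /dev/sda device path is an assumption:

    ```shell
    # Rebuild all initramfs images without host-only driver pruning, so the
    # images carry the storage/network drivers the new platform will need.
    dracut -fv --regenerate-all --no-hostonly

    # Check whether this guest booted via UEFI (directory exists) or BIOS.
    [ -d /sys/firmware/efi ] && echo UEFI || echo BIOS

    # For a UEFI target, the disk should carry a GPT label; verify with parted.
    parted /dev/sda print | grep 'Partition Table'
    ```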