Proxmox runs on top of Debian, so if the hardware is supported on Debian, it should work on Proxmox.
Proxmox will wear out flash storage that isn't enterprise grade (that includes SD cards and consumer SSDs), largely because of its constant logging and cluster database writes.
I use 2 x SAS HDDs for the Proxmox OS itself, mirrored with ZFS RAID-1.
Then use...
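A quick sanity check after install (assuming the installer's default pool name, rpool) is to confirm that both SAS disks show up under a single mirror vdev:

# zpool status rpool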
Since Proxmox runs on top of Debian rather than a proprietary OS, I'm partial to Supermicro blade servers.
Can't get more generic than that. More info at https://www.supermicro.com/en/products/blade
May also want to check out their Twin series of servers, which packs multiple nodes into a single chassis.
If the nodes in your cluster have the same CPU family type, live migration should work with the VM CPU type set to 'host'.
For example, I can live migrate between an R720 and an R820 because they both have Sandy Bridge CPUs.
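As a rough sketch (VM ID 100 and the target node name are placeholders), setting the CPU type and then doing an online migration from the CLI looks something like this:

# qm set 100 --cpu host
# qm migrate 100 <target-node> --online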
For RHEL and derivatives, on the original VM run the following:
# dracut --force --verbose --no-hostonly
The above was required to migrate from ESXi to KVM; it rebuilds the initramfs with all drivers (including VirtIO) rather than only those for the current hardware, so it should work for KVM to KVM as well.
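If you want to double-check that the rebuilt initramfs actually contains the VirtIO drivers before cutting over, lsinitrd (which ships with dracut) can list its contents:

# lsinitrd /boot/initramfs-$(uname -r).img | grep -i virtio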
The root cause of this issue is that RHEL 9 and its derivatives are compiled for the x86-64-v2 microarchitecture level: https://developers.redhat.com/blog/2021/01/05/building-red-hat-enterprise-linux-9-for-the-x86-64-v2-microarchitecture-level#background_of_the_x86_64_microarchitecture_levels
More...
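As a side note, on a recent glibc (2.33+) the dynamic loader can report which microarchitecture levels the CPU it sees satisfies, which is a handy way to confirm whether a VM's CPU type exposes x86-64-v2:

# /lib64/ld-linux-x86-64.so.2 --help | grep x86-64-v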
This is what I use to optimize IOPS (example commands after the list):
Set write cache enable (WCE) to 1 on SAS drives
Set VM cache to none
Set VM to use VirtIO-single SCSI controller and enable IO thread and discard option
Set VM CPU type to 'host'
Set VM VirtIO Multiqueue to number of cores/vCPUs
If using Linux:
Set Linux...
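To make the list above concrete, here is roughly what those settings look like from the shell; VM ID 100, the storage/disk name, and /dev/sdX are placeholders, run the sdparm line once per SAS drive, and queues=4 assumes a 4-vCPU guest:

# sdparm --set=WCE=1 --save /dev/sdX
# qm set 100 --scsihw virtio-scsi-single
# qm set 100 --scsi0 local-zfs:vm-100-disk-0,cache=none,iothread=1,discard=on
# qm set 100 --cpu host
# qm set 100 --net0 virtio,bridge=vmbr0,queues=4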
Per https://www.dell.com/support/manuals/en-us/poweredge-r720/720720xdom/system-memory?guid=guid-7550b0f0-b658-4f09-a3f8-668d9ced36ae&lang=en-us, LRDIMMs are not supported on R720s with 3.5" drives. I do know it works fine with regular RDIMMs.
Yes, I'm using VirtIO for VM networking.
What value should I use for the VM's VirtIO Multiqueue? I think I read it should match the number of CPU cores?
I have a 3-node Ceph cluster running on 13-year-old server hardware. It uses full-mesh broadcast bonded 1GbE networking and works just fine. That's right: the Ceph public, Ceph private, and Corosync traffic all run over 1GbE just fine. There is even a 45Drives blog on using 1GbE networking in...
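For reference, the broadcast bond itself is only a few lines in /etc/network/interfaces. A minimal sketch (interface names and the address are made up; the per-node addressing comes from whichever full-mesh layout you follow):

auto bond0
iface bond0 inet static
        address 10.15.15.1/24
        bond-slaves eno2 eno3
        bond-mode broadcast
        bond-miimon 100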
I have a standalone Dell R620 ZFS host with 10GbE networking.
I get the full network throughput when doing a wget of a file directly on the R620.
However, when I run the same wget inside a VM, I do NOT get the same throughput. Maybe I get about 60-75% of it.
Anything on the host and/or VM settings...
I use the Dell X540/I350 rNDC (rack network daughter card) in a 5-node Ceph cluster without issues.
Since the R320 doesn't have a rNDC slot, you need to get a PCIe one.
I do know that the Mellanox ConnectX-3 is well supported.
I stick with Intel NICs, with the exception of the X710. Stay away from those.