I use SAS HDDs in production. Since they are meant to be used on HW RAID controllers with a BBU, their write cache is turned off. May want to check if the HDDs have their cache enabled. VMs range from databases to DHCP/PXE servers. Not hurting for IOPS.
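A quick way to check is with sdparm; something like this (sketch, /dev/sdX is a placeholder for the SAS drive):
# Show whether the write cache enable (WCE) bit is set
sdparm --get=WCE /dev/sdX
# Turn the drive's write cache on (only if you're OK with the trade-off of no BBU-backed cache)
sdparm --set=WCE /dev/sdX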
I use the following optimizations learned...
Migrated off TrueNAS SCALE to Proxmox because it didn't have full CLI functionality. Used the LXC *Arr scripts from here: https://tteck.github.io/Proxmox
I am using privileged containers because I didn't want to configure UID/GID remapping. Using Homarr as the jumping point to the other *Arr LXCs.
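For reference, if you later want to go unprivileged, the UID/GID remapping lives in the CT config plus /etc/subuid and /etc/subgid. Rough sketch (CT ID 101 and the uid/gid 1000 mapping are just examples):
# /etc/pve/lxc/101.conf -- map container uid/gid 1000 to host uid/gid 1000, shift everything else
lxc.idmap: u 0 100000 1000
lxc.idmap: g 0 100000 1000
lxc.idmap: u 1000 1000 1
lxc.idmap: g 1000 1000 1
lxc.idmap: u 1001 101001 64535
lxc.idmap: g 1001 101001 64535
# /etc/subuid and /etc/subgid -- allow root to map host uid/gid 1000
root:1000:1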
Boot...
I use POM in production on stand-alone bare-metal servers (which also double as PBS servers) and in a Debian 12 VM. It works. Just make sure to change the /etc/apt repo files to point to the POM server.
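Something along these lines in the sources files, pointing at your mirror (sketch; pom.example.lan and the repo paths are placeholders that depend on how you laid out the mirror):
# /etc/apt/sources.list
deb http://pom.example.lan/debian bookworm main contrib
# /etc/apt/sources.list.d/pve.list
deb http://pom.example.lan/pve bookworm pve-no-subscription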
When I migrated 13th-gen Dells to Proxmox Ceph, I swapped out the PERC for a Dell HBA330 (which is a pure IT-mode controller).
Then I used ZFS RAID-1 to mirror Proxmox. Rest of drives are OSDs.
I don't think the HW RAID PERC can be configured in both HW RAID mode and pass-through mode at the...
I've used the Proxmox ESXi migration tool on production VMs with no issues. You'll need to disable session timeouts on the ESXi host(s). Make sure the VMs are powered off and have NO snapshots.
ZFS works fine on SATA drives. ZFS provides snapshots, rollbacks, compression, and error checking of data and...
You need to disable maxSessionCount and sessionTimeout on the ESXi host per https://pve.proxmox.com/wiki/Migrate_to_Proxmox_VE#Automatic_ESXi_Import:_Step_by_Step
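From memory, it's a tweak to /etc/vmware/hostd/config.xml on each ESXi host, roughly like this (sketch; double-check the exact elements and placement against the wiki page):
<!-- inside the <soap> section of /etc/vmware/hostd/config.xml -->
<sessionTimeout>0</sessionTimeout>
<maxSessionCount>0</maxSessionCount>
Then restart the management agent: /etc/init.d/hostd restart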
Yeah, really. By default, SAS HDDs ship with write cache disabled. The reason is that these drives are meant to be used on a HW RAID controller with a BBU, and the HW RAID does the caching on behalf of the drives.
So, when I converted the Dells over to Proxmox Ceph and replaced the HW RAID...
See my reply at https://forum.proxmox.com/threads/debian-11-not-booting-with-virtio-scsi-single-but-works-with-vmware-pvscsi.144806/post-652170
It's still valid for Proxmox 8.2.x
For a proof-of-concept, 3 nodes will suffice.
For production, you really, really want a minimum of 5 nodes. That way, 2 nodes can fail and you still have the 3 nodes needed for quorum (floor(5/2) + 1 = 3 votes).
I converted a fleet of 13th-gen Dells which used to run VMware vSphere over to Proxmox Ceph.
Made sure all the nodes had the...
Since Proxmox is Debian with a custom Ubuntu-based kernel, it pretty much runs on any 64-bit CPU with Intel VT-x/AMD-V hardware virtualization.
I have it running on Intel Sandy Bridge, Haswell, and Broadwell CPU generations with high core counts, with no issues.
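Easy to check on a candidate box before installing (sketch):
# Any output means the CPU advertises Intel VT-x (vmx) or AMD-V (svm)
grep -E 'vmx|svm' /proc/cpuinfo | head -1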
Supposedly someone over at r/Proxmox got it working with ESXi 6.0 and 5.5.
You can always manually copy over the .vmdk descriptor and -flat.vmdk files and do a 'qemu-img convert'.
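Roughly like this (sketch; the file names, VMID 100, and storage name local-lvm are placeholders):
# The descriptor .vmdk references the -flat.vmdk, so copy both, then convert the descriptor
qemu-img convert -f vmdk -O qcow2 myvm.vmdk myvm.qcow2
# Or import it straight into a VM's storage on the Proxmox host
qm importdisk 100 myvm.vmdk local-lvm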
I manage several production 5- and 7-node Proxmox Ceph clusters. Why 5 or 7 nodes? Well, with 3 nodes you can only tolerate a single node failure; I believe that if a second node goes down, the lack of quorum means no writing of data will occur. Strongly suggest 5 nodes at minimum; that way one can tolerate 2 node...
Yes, that will work.
I don't deal with tower servers, but with rack-mounted 4-drive servers I use RAID-10 for both Proxmox and VMs. Not considered best practice, but I wanted the IOPS.
The VMs are backed up to a separate bare-metal Proxmox Backup Server.
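The PBS side is just another storage entry on the PVE cluster, roughly like this (sketch; the storage name, hostname, datastore, user, and fingerprint are placeholders):
# /etc/pve/storage.cfg
pbs: pbs-backups
        server pbs.example.lan
        datastore vm-backups
        username backup@pbs
        fingerprint <PBS certificate fingerprint>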
Seeing that you have non-identical hardware, using the HPE as a SAN should work. May want to use ESOS [Enterprise Storage OS] (esos-project.com) for the SAN software.
Then set up the IBMs as a PVE cluster.
If the hardware was identical, I would recommend Ceph. You really, really want identical...
Just migrated a half-dozen RHEL-clone Linux VMs from ESXi 7.x to Proxmox 8.1.x.
The steps are (rough sketch of the guest-side commands after the list):
1) Remove open-vm-tools from ESXi VM
2) Install qemu-guest-agent on ESXi VM
3) Remove ESXi networking from the ESXi Linux VM
4) Remove ESXi Linux VM networking config file
5) Run as root 'dracut -fv -N...
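On the guest side, steps 1-4 boil down to something like this (sketch, run inside the VM before shutting it down; the dnf package manager and the ens192 interface name are just examples for a RHEL clone that ran on ESXi):
# 1) and 2): swap the guest tools
dnf remove -y open-vm-tools
dnf install -y qemu-guest-agent
systemctl enable qemu-guest-agent
# 3) and 4): drop the old ESXi NIC config so the virtio NIC gets a fresh config
# (the path depends on the release: ifcfg-* under network-scripts or a NetworkManager .nmconnection file)
rm /etc/sysconfig/network-scripts/ifcfg-ens192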
May have to do a full system reset.
Go into the Lifecycle Controller via F10 and go to Hardware Configuration -> Re-purpose or Retire.
This may "unstick" the rNDC.
After this, boot with Arch Linux and see what /var/log/messages and dmesg say.
Also check to see what the iDRAC web interface...