When VMware/Dell dropped official support for 12th-gen Dells, I migrated the machines to Proxmox after I flashed the PERCs to IT-mode.
Zero issues.
The only hardware failures have been SAS HDDs, but those are easy to deal with in ZFS/Ceph.
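On the ZFS side, swapping a failed disk is basically a one-liner once the replacement is in the bay. A rough sketch (pool name and device paths are just examples, not from my boxes):

Code:
# see which vdev is faulted
zpool status tank
# swap the failed disk for the replacement
zpool replace tank /dev/disk/by-id/scsi-OLDDISK /dev/disk/by-id/scsi-NEWDISK
# watch the resilver progress
zpool status -v tank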
For used, you can't beat enterprise servers with built-in IPMI and optional 10GbE. There's a curated list at labgopher.com. A Dell R230 with hot-swap drives would be a good choice.
For new, Supermicro motherboards in mini-ITX through ATX form factors. They come with IPMI and optional 10GbE and a SAS controller...
I don't use any AMD CPUs at work or at home (Intel exclusively, with zero issues), but I have read about Debian/Proxmox issues with obscure AMD settings.
If you don't plan to create a cluster (I highly recommend homogeneous hardware for clustering), get as much memory for the server as...
If you need high IOPS for transactional workloads, Ceph will not be the answer. It really wants lots of nodes of homogeneous hardware.
That being said, I did convert a fleet of 12th-gen Dells (VMware/Dell dropped official ESXi/vSphere 7 support for them) with built-in 2 x 10GbE to create a...
I use Dell and Supermicro in production. Supermicros are less expensive than Dells.
These systems don't particularly need high IOPS, so they are using SAS HDDs. They do have the max RAM installed.
I wouldn't bother with 1GbE. 10GbE or higher is what you want.
Since databases do require high IOPS if...
I use H330s in a R630 Ceph cluster.
Flashed with the latest firmware version?
Did you delete any virtual disks before switching the H330 to HBA mode? If you don't do this step, you won't see the physical drives.
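A quick sanity check after switching to HBA mode is to confirm the OS actually sees the raw disks. Something like this (generic check, nothing Dell-specific):

Code:
# in HBA mode the physical drives should show up as plain block devices
lsblk -o NAME,SIZE,MODEL,SERIAL,TYPE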
I get write IOPS in the hundreds, and read IOPS are 2x-3x the write IOPS, using 10K RPM SAS HDDs. This is with a 5-node cluster of 12th-gen Dell 16-drive-bay servers using 10GbE.
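For a comparable number, this is the kind of fio run I'd do inside a test VM or against a mounted RBD volume; the file path, size, and job settings are examples, not my exact test:

Code:
# 4K random-write test with direct I/O so the page cache doesn't flatter the result
fio --name=randwrite-test --filename=/mnt/test/fio.bin --size=4G \
    --rw=randwrite --bs=4k --iodepth=32 --numjobs=1 \
    --ioengine=libaio --direct=1 --runtime=60 --time_based --group_reporting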
I am guessing you are using consumer SSDs? They bottleneck very quickly once their internal cache fills. You'll want enterprise SSD...
If this is production, I would wait until 8 matures. All .0 releases, regardless of software vendor, are "buggy". You've still got another year of support for PVE 7 anyhow.
On 12th-gen Dells, you can flash the PERC to IT-mode via https://fohdeesha.com/docs/perc.html
On 13th-gen Dells, configure the PERC (flashed to the latest firmware) to HBA/IT mode. Delete any existing RAID volumes.
I would ZFS mirror (RAID-1) two drives for Proxmox and use the rest as Ceph OSDs...
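Roughly, that layout looks like this after install; the Proxmox installer handles the ZFS RAID-1 for the OS, and the device names below are just examples:

Code:
# verify the mirrored OS pool created by the installer
zpool status rpool
# add each remaining data disk as a Ceph OSD
pveceph osd create /dev/sdc
pveceph osd create /dev/sdd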
I used this guide https://unixcop.com/migrate-virtual-machine-from-vmware-esxi-to-proxmox-ve
The -flat.vmdk is the actual virtual disk. The .vmdk is the descriptor/metadata file, which is what 'qm importdisk' needs in order to find the -flat.vmdk file.
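So the import is run against the descriptor, not the -flat file. A minimal example (the VM ID, path, and storage name are placeholders):

Code:
# point importdisk at the descriptor .vmdk; it pulls in the -flat.vmdk data behind it
qm importdisk 100 /mnt/esxi/myvm/myvm.vmdk local-lvm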
Seems like the issue is RAM.
I have a bunch of 12th and 13th-gen Dells using E5 CPUs. They have between 256GB and 512GB RAM. No issues.
I do recommend a clean install. Backup first though.
Yeah, yeah, I know. EOL CPU.
Ceph was working fine under Proxmox 7 using the same CPU.
I did pve7to8 upgrade and a clean install of Proxmox 8.
Both situations got the 'Caught signal (illegal instruction)' when attempting to start up a Ceph monitor.
It's either pointing to a bad binary or...
Did a clean install of Proxmox 8 and using the no-sub Quincy repository.
Still got the 'Caught signal (illegal instruction)' message.
It points to either a bad build or Ceph monitor binaries that are no longer supported on AMD Opteron 2427 CPUs.
Ceph was working fine under Proxmox 7.
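One quick way to narrow that down is to compare what instruction-set extensions the Opteron actually reports against what newer builds tend to assume. This is just a generic CPU check, not anything from the Ceph docs:

Code:
# list the instruction-set flags this CPU exposes
lscpu | grep -i flags
# more targeted: these older Opterons (K10) predate SSE4.1/SSE4.2 and AVX
grep -oE 'sse4_1|sse4_2|avx[0-9]*' /proc/cpuinfo | sort -u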
My next step was to re-create the monitors manually by disabling the service and removing the /var/lib/ceph/mon/<hostname> directory.
Then I ran 'pveceph mon create'. After a while it timed out. Running 'journalctl' on the failed monitor service shows the following:
Jun 25 13:29:03 pve-test-7-to-8...
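For context, the manual re-create sequence was roughly this (hostname is a placeholder; adjust the mon data directory name to whatever is actually on disk):

Code:
# stop and disable the broken monitor unit
systemctl stop ceph-mon@<hostname>
systemctl disable ceph-mon@<hostname>
# remove the old monitor data directory
rm -rf /var/lib/ceph/mon/<hostname>
# let Proxmox re-create the monitor
pveceph mon create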
I ran 'pve7to8 --full' on a 3-node Ceph Quincy cluster; no issues were found.
Both PVE and Ceph were upgraded and 'pve7to8 --full' mentioned a reboot was required.
After the reboot, I got a "Ceph got timeout (500)" error.
"ceph -s" shows nothing.
No monitors, no managers, no mds.
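For reference, these are roughly the per-node checks that show that state (hostname is a placeholder):

Code:
# overall cluster state (this is what returned nothing for me)
ceph -s
# is the monitor unit on this node even running?
systemctl status ceph-mon@<hostname>
journalctl -u ceph-mon@<hostname> -b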
Any suggestions...
I used this guide https://unixcop.com/migrate-virtual-machine-from-vmware-esxi-to-proxmox-ve for migrating ESXi VMs to Proxmox.
You still need to run dracut to include all the drivers before migration.
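On a RHEL-family guest, for example, that dracut step looks something like this before you shut the VM down on ESXi; the driver list is just an example of what the VM will need under KVM:

Code:
# rebuild the initramfs with the virtio drivers the VM will need under Proxmox/KVM
dracut --force --add-drivers "virtio_blk virtio_scsi virtio_net virtio_pci" \
    /boot/initramfs-$(uname -r).img $(uname -r)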
I use the 'qm importdisk' command on the .vmdk file itself, which is the metadata file that...