This thread at https://forums.servethehome.com/index.php?threads/enterprise-ssd-small-deals.48343 has a list of enterprise SSDs for sale.
Another option is getting a server that supports U.2/U.3 flash drives. They can be surprisingly cheap...
I compile this utility to format SAS drives to 512-byte sectors or do a low-level format: https://github.com/ahouston/setblocksize
You'll need to install the sg3_utils package.
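Roughly what that looks like once sg3_utils is installed; /dev/sdX below is just a placeholder for the SAS drive, and sg_format wipes the drive, so triple-check the device name first:

    # Debian/Proxmox
    apt install sg3_utils

    # check the current logical block size
    sg_readcap --long /dev/sdX

    # reformat to 512-byte sectors (destructive, can take hours on large drives)
    sg_format --format --size=512 /dev/sdX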
While doing that, might as well install the sdparm package and enable the write...
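Something along these lines, with /dev/sdX again a placeholder (exact mode page support varies by drive):

    apt install sdparm

    # check the current write cache enable (WCE) bit
    sdparm --get WCE /dev/sdX

    # enable write cache and save it so it survives a power cycle
    sdparm --set WCE=1 --save /dev/sdX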
As I mentioned before, there's so much drama with PERC controllers in HBA-mode. Just swap them out for a Dell HBA330, a true IT-mode storage controller. Your future self will thank you. They are cheap to get.
I use the Dell Intel X550 rNDC in production without issues, both the 2x1GbE + 2x10GbE and 4x10GbE versions.
The 10GbE ports use the ixgbe driver and the 1GbE ports use the igb driver.
Use 'dmesg -t' to confirm. Obviously flash the rNDC to the latest firmware...
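For example, something like this (interface names are placeholders and will differ on your box):

    # confirm which driver claimed each port
    dmesg -t | grep -Ei 'ixgbe|igb'

    # or per interface
    ethtool -i eno1 | grep ^driver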
It's these one-off situations with the megaraid_sas driver; just installing a Dell HBA330, which uses the much simpler mpt3sas driver, will avoid all this drama. LOL.
In addition, the Dell HBA330 is very cheap to get.
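If you want to sanity-check which driver a given controller is actually bound to, something like this does the trick (the PCI device names in the output will vary):

    # show storage controllers and the kernel driver in use
    lspci -nnk | grep -iA3 'sas\|raid'

    # confirm the module is loaded
    lsmod | grep mpt3sas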
While it's true that 3 nodes is the minimum for a Ceph cluster, you can only lose 1 node before losing quorum. You'll really want 5 nodes: with 5, you can lose 2 nodes and still have quorum. Ceph is a scale-out solution, so more nodes/OSDs = more IOPS.
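If you want to see the quorum state on a live cluster, ceph will show you directly:

    # which monitors are currently in quorum
    ceph quorum_status --format json-pretty

    # quick overall health/mon/OSD summary
    ceph -s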
You really need to confirm that write cache enable is turned on via the 'dmesg -t' output for each drive. If the write/read cache is disabled, it really kills the IOPS.
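The kernel logs the cache state for every sd device at attach time, so something like this is enough (the example line is roughly what to expect, device names will differ):

    dmesg -t | grep -i 'write cache'
    # expect lines like: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, ...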
While technically 3 nodes is indeed the bare minimum for Ceph, I don't consider...
Been migrating 13G Dell VMware vSphere clusters over to Proxmox Ceph using SAS drives and 10GbE networking on isolated switches. Ceph is a scale-out solution, so more nodes = more IOPS. Not hurting for IOPS on 5-, 7-, 9-, and 11-node clusters. Just...
Disclaimer: I do NOT use ZFS shared storage in production. I use Ceph for shared storage. I do use ZFS on standalone servers. I use a ZFS RAID-1 mirror for Proxmox on small 76GB SAS drives.
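Checking on that boot mirror is just a zpool status; rpool is the pool name the Proxmox ZFS installer uses by default, so adjust if yours differs:

    zpool status rpool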
With that being said, your best bet is ZFS over iSCSI per...
I actually had to pin the 6.17.2-1 kernel on an R530 BOSS-S1 PBS instance. It locks up with the 6.17.2-2 kernel. Something obviously changed in 6.17.2-2.
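For anyone needing to do the same, pinning is a one-liner with proxmox-boot-tool; the exact kernel package version string is whatever 'kernel list' reports on your system:

    # list installed kernels, then pin the known-good one
    proxmox-boot-tool kernel list
    proxmox-boot-tool kernel pin 6.17.2-1-pve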
I use the Proxmox VE Helper-Scripts (LXC) to provide NFS/CIFS/Samba file sharing: https://community-scripts.github.io/ProxmoxVE
No issues sharing ZFS pools. There are other scripts to manage media and other services.
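Getting a ZFS dataset into one of those file-sharing containers is just a bind mount from the host; the CT ID (101) and paths below are only examples:

    # bind-mount a host ZFS dataset into the LXC
    pct set 101 -mp0 /tank/media,mp=/mnt/media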
Running the latest kernel (6.17.2-2) without issues on 13G Dells. It's also running fine on 12G, 11G, and 10G Dells.
These 13G Dells do have the latest firmware and BIOS. UEFI, Secure Boot, X2APIC, and IOAT DMA are enabled. I do NOT have SR-IOV enabled.
A long time ago, in a galaxy far, far away, I used to run a media server under Windows, but the constant patching kept breaking my instance.
Migrated over to Arch Linux, mounted the NTFS drive read-only under Linux, and copied over the media to...
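If memory serves, the read-only mount was nothing fancier than this (device and mount point are placeholders, and the ntfs-3g package needs to be installed):

    mount -t ntfs-3g -o ro /dev/sdb1 /mnt/winmedia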
I'm still on Proxmox 8 at home.
I do NOT use a transceiver at home. I use a direct attach copper (DAC) cable from the NIC to the switch.
I believe the issues with the ConnectX-3 not working with Proxmox 9 may have to do with transceiver...