Search results

  1. J

    Broadcom or Intel network card support

    I use Intel NICs in production without issues, except for the 700-series on Dell servers. I do have Dell servers with Broadcom 1GbE which work fine, but I don't have any 10/25GbE Broadcoms in production. May want to ask your question on Reddit.
  2. J

    Migration from VMware - vSAN or Ceph?

    At work, migrated from VMware to Proxmox. The question was: which filesystem provides native snapshots? For standalone servers, it was ZFS. For clusters, it was Ceph. I think of Ceph as an open-source version of vSAN. Researched Ceph and found out it wants homogeneous hardware. So I made sure the...
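
    As a minimal sketch of the standalone-ZFS side (the dataset and snapshot names below are assumptions, not from the post):

    ```
    # Take, list, and roll back a native ZFS snapshot of a VM disk
    # (dataset name is hypothetical)
    zfs snapshot rpool/data/vm-100-disk-0@pre-migration
    zfs list -t snapshot rpool/data/vm-100-disk-0
    zfs rollback rpool/data/vm-100-disk-0@pre-migration
    ```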
  3. J

    Proxmox Ceph Cluster Optimizations?

    May also want to ask your question on Reddit's r/ceph
  4. J

    VMware to Proxmox migration

    Took a look at the SCv3020 manual and it seems this storage unit supports direct SAS connections, so Proxmox should be able to see each individual drive. I use ZFS in production on standalone servers for its data/metadata error checking, compression, and snapshots. For max IOPS, I use ZFS...
  5. J

    HW RAID or ZFS on Dell PowerEdge R630

    At work, I'm involved with migrating Dells from VMware to Proxmox. I just make sure all hardware and firmware are the same (CPU, NIC, Storage, RAM, etc). I do swap out the PERC for a Dell HBA330. I use two small drives in ZFS RAID-1 for Proxmox; the rest of the drives are for VMs/data. ZFS provides...
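
    A hedged sketch of that drive layout (device paths and the pool name are assumptions; the Proxmox installer builds the ZFS RAID-1 boot mirror itself):

    ```
    # Mirrored ZFS pool on the non-boot drives, registered as Proxmox storage
    zpool create -o ashift=12 tank \
        mirror /dev/disk/by-id/scsi-DRIVE1 /dev/disk/by-id/scsi-DRIVE2
    zfs set compression=lz4 tank
    pvesm add zfspool tank --pool tank --content images,rootdir
    ```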
  6. J

    Production server - advice - Asus/Supermicro

    I use Supermicros at work and at home without issues. Also use Dells at work without issues. Supermicro configurator at wiredzone.com
  7. J

    VMware to Proxmox migration

    Should just work since the 9300 uses the LSI SAS3008 chipset, which works with the mpt3sas driver. I use the Dell HBA330 in production, which has the same chipset and runs without issues.
  8. J

    10GBE cluster without switch

    I stood up a 3-node full-mesh broadcast Ceph Squid test cluster using these instructions https://pve.proxmox.com/wiki/Full_Mesh_Network_for_Ceph_Server#Broadcast_Setup Zero issues. I did use a 169.254.3.0/24 network for Ceph public, private, & Corosync traffic since 169.254.0.0/16 is an IPv4...
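
    For reference, the broadcast setup on that wiki page boils down to a bond stanza like this in /etc/network/interfaces (the NIC names are assumptions; the address follows the post and changes per node):

    ```
    auto bond0
    iface bond0 inet static
        address 169.254.3.1/24
        bond-slaves ens1f0 ens1f1
        bond-mode broadcast
        bond-miimon 100
    ```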
  9. J

    Dell PERC IT Mode To IR Mode

    ZFS provides bit-rot protection, snapshots, and compression. I use a H200 (flashed to IT-mode) in a Dell R200 as a bare-metal Proxmox Backup Server using ZFS RAID-1. No issues.
  10. J

    Exploring High-Availability in Proxmox: Centralized Storage with Local SSD Failover

    Another option is ZFS replication at intervals. It won't be real-time replication like Ceph though. I've run Ceph using a 1GbE full-mesh broadcast network on 14-year-old servers and it worked surprisingly well. I don't recommend it for production though.
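
    A minimal sketch of interval-based replication with Proxmox's built-in pvesr tool (the guest ID, target node, and schedule are assumptions):

    ```
    # Replicate guest 100 to node pve2 every 15 minutes
    pvesr create-local-job 100-0 pve2 --schedule "*/15"
    pvesr status    # check job state and last sync
    ```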
  11. J

    8 server and so what?

    It is, but I do have experience managing proprietary hyper-converged infrastructure (HCI) platforms at work like VMware vSAN and Nutanix AHV. I'm also involved with another project at work in which they are migrating to Hyper-V HCI. Can't beat Proxmox & Ceph as an HCI platform.
  12. J

    Hardware for a build

    Reason I brought up Supermicro at all was that they offer a range of motherboards from mini-ITX to ATX form factors. You can optionally get 10GbE networking (both copper & optical), a SAS controller (for more drives), and either an embedded or socketed CPU. I use a Supermicro 5019S-M-G1585L as a...
  13. J

    Hardware for a build

    I run Proxmox on 12th, 13th, & 14th-gen Dells at work. Can't beat used 13th-gen Dells for the value. For example, the R730xd. Can use the rear drives for mirroring Proxmox and the front drives for data/VMs. Can replace the PERC with an HBA330 for a true IT-mode storage controller. Can optionally get...
  14. J

    HPE and Dell Raid Controllers - No JBOD mode and CEPH

    While it's true PERCs do support HBA passthrough, they use the megaraid driver, whereas a Dell HBA, like the HBA330, uses the much simpler mpt3sas driver. I swapped out all PERCs from 13th & 14th-gen Dells and replaced them with HBA330s. No issues.
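
    A quick, illustrative way to confirm which kernel driver a controller is actually bound to (not from the post):

    ```
    # Lists each PCI storage controller with its in-use kernel driver
    # (megaraid_sas for PERC RAID mode, mpt3sas for a true HBA)
    lspci -k | grep -A 3 -i 'sas\|raid'
    ```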
  15. J

    8 server and so what?

    I run a 10GbE Ceph in production. Since Ceph is a scale-out solution, I would make a 7-node Ceph cluster and use the other server as a bare-metal Proxmox Backup Server, or use it as a replacement server if another node fails. Just make sure all hardware is the same (RAM, CPU, NIC, Storage, firmware...
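
    Roughly, standing up Ceph on each node looks like this (the cluster network and device name are assumptions):

    ```
    pveceph install --repository no-subscription
    pveceph init --network 10.10.10.0/24   # once, from the first node
    pveceph mon create                     # on enough nodes for quorum
    pveceph osd create /dev/sdb            # per data drive, per node
    ```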
  16. J

    New Ceph cluster advice

    I run a 5-node R720 Ceph cluster in production. I made sure to flash the PERC to IT-mode using this guide https://fohdeesha.com/docs/perc.html Ceph is scale-out and loves lots of OSDs. Not hurting for IOPS with workloads ranging from DBs to DHCP servers. I use the following optimizations...
  17. J

    PVE 8.2 on Dell R420

    Way better off flashing the H310 to IT-mode and using ZFS RAID. H310 RAID by default has a very low queue depth, which leads to terrible IOPS. Flashing it to IT-mode gets you the maximum queue depth. Flash using this guide https://fohdeesha.com/docs/perc.html I have a fleet of 12th-gen Dells with PERCs flashed to...
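
    The queue-depth difference is easy to check from Linux (device name is an assumption; exact numbers depend on controller and firmware):

    ```
    # Per-device queue depth as seen by the kernel; stock H310 RAID mode
    # is commonly reported around 25, IT-mode in the hundreds
    cat /sys/block/sda/device/queue_depth
    ```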
  18. J

    Why Does ZFS Hate my Server

    Typically with enterprise servers, SAS HDDs have write cache disabled because they are configured to be used with HW RAID and a BBU. The HW RAID has built-in RAM cache. I too had terrible IOPS, especially with Ceph, when used with an IT-mode disk controller, i.e., a Dell H310 flashed to IT-mode or Dell...
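
    To check and, where appropriate, enable the drive write cache (WCE bit) with sdparm (device name is an assumption; only sensible behind ZFS/Ceph, not behind HW RAID with a BBU):

    ```
    sdparm --get=WCE /dev/sda          # 0 = write cache disabled
    sdparm --set=WCE --save /dev/sda   # enable and persist across power cycles
    ```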
  19. J

    Dell VxRail converted to ProxMox+Ceph?

    Technically it should work. May want to ask your question at the Proxmox sub-Reddit. I do run production Proxmox Ceph clusters at work on Dells with no issues. I just made sure all hardware (RAM, CPU, NIC, Storage) is the same and that each node runs an IT-mode disk controller.
  20. J

    Building Proxmox HCI Proof of Concept - Dell R240

    A 3-node Proxmox Ceph PoC does work. Don't recommend it for production, where you want a minimum of 5 nodes so you can lose 2 nodes and still be in business. I do have a BOSS-S1 set up in ZFS RAID-1 to boot Proxmox. I use a Dell HBA330 for "true" IT-mode functionality. While 64GB will work for a...
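
    The 5-node sizing maps to the usual 3/2 replicated pool; as a sketch (the pool name is an assumption):

    ```
    # 3 replicas, writes still accepted with 2 present; with 5 nodes you
    # can lose 2 and keep both monitor quorum and min_size
    pveceph pool create vm-pool --size 3 --min_size 2
    ```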