Search results

  1. J

    HELP! - Almost quitting to VMware...

    May want to look at using an IT/HBA-mode controller. I avoid HW RAID controllers when possible.
  2. J

    Request for Hardware Recommendations

    You may want to look at embedded-CPU motherboards with an Intel Xeon D or Atom. They optionally come with 10GbE (fiber and/or copper) and/or a SAS controller. Supermicro and ASRock Rack make them. I use the Supermicro configurator at wiredzone.com
  3. J

    New Proxmox box w/ ZFS - R730xd w/ PERC H730 Mini in “HBA Mode” - BIG NO NO?

    Not running any H730s in HBA mode yet, but I am running H330s in HBA mode in production. No issues with SAS drives. I don't use any SSDs, though.
  4. J

    Dell R430

    No issues with Proxmox on Dell 13th-gen servers. Just configure the PERC to HBA mode for use with ZFS/Ceph. Delete any existing virtual disks before switching the PERC to HBA mode, otherwise the PERC will NOT see the drives. Needless to say, update all firmware before clean installing...
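
    A rough sketch of that sequence via iDRAC's racadm, assuming an integrated PERC at FQDD RAID.Integrated.1-1; the FQDDs, attribute name, and job options below are examples that vary by controller and firmware, so check them against Dell's RACADM reference before running anything:

      # list the controller and any leftover virtual disks (FQDDs are examples)
      racadm storage get controllers
      racadm storage get vdisks
      # delete old virtual disks first, otherwise HBA mode will not expose the drives
      racadm storage deletevd:Disk.Virtual.0:RAID.Integrated.1-1
      racadm jobqueue create RAID.Integrated.1-1 -r pwrcycle -s TIME_NOW
      # then request HBA personality and schedule another config job/reboot
      racadm set Storage.Controller.1.RequestedControllerMode HBA
      racadm jobqueue create RAID.Integrated.1-1 -r pwrcycle -s TIME_NOW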
  5. J

    Combine a hyper-converged proxmox cluster with a separate ceph cluster - anybody done that?

    I don't see why not. Just don't run any VMs on the dedicated Ceph cluster. I've never tried it myself, since I run a hyper-converged (HCI) Ceph cluster without issue.
  6. J

    3 node mesh Proxmox and Ceph

    I set up my 3-node Ceph cluster using a full mesh broadcast topology https://pve.proxmox.com/wiki/Full_Mesh_Network_for_Ceph_Server#Broadcast_Setup with 169.254.x.x/24 (IPv4 link-local addresses). Obviously it does not use any switches, and it can't be expanded unless you add a switch. I run both...
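
    For reference, the broadcast setup on that wiki page boils down to bonding the two mesh-facing NICs with bond-mode broadcast in /etc/network/interfaces on each node; the NIC names and 169.254.1.x addressing below are examples:

      auto bond0
      iface bond0 inet static
              address 169.254.1.1/24       # use .2 and .3 on the other two nodes
              bond-slaves ens19 ens20      # the two NICs cabled directly to the other nodes
              bond-miimon 100
              bond-mode broadcast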
  7. J

    Clean Proxmox & Ceph Install issues

    Sounds like your issue is an EOL CPU, like mine was: https://forum.proxmox.com/threads/proxmox-8-ceph-quincy-monitor-no-longer-working-on-amd-opteron-2427.129613 I've since decommissioned the Opteron servers and replaced them with 12th-gen Dells.
  8. J

    Hard Disk recommendation

    Backblaze issues quarterly reports on failure rates for their fleet of HDDs: https://www.backblaze.com/blog/backblaze-drive-stats-for-q2-2023/ I prefer Hitachi. If no Hitachi, then WD Purple HDDs.
  9. J

    Install Ceph on newly built 5-node Proxmox VE 8.0.3

    I used the following steps on Proxmox 7; they should be the same for Proxmox 8. 1. Make sure the disk controller is in IT/HBA mode 2. Put the servers in a cluster 3. Click the "Install Ceph" button on each node 4. Create OSDs, Monitors, MDS as needed 5. $$$
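
    The GUI steps above map roughly onto the pveceph CLI; a minimal sketch, assuming a 10.10.10.0/24 Ceph network and a spare /dev/sdb on each node (both are example values):

      pveceph install                         # on every node (step 3)
      pveceph init --network 10.10.10.0/24    # once, on the first node
      pveceph mon create                      # on each node that should run a monitor
      pveceph mgr create
      pveceph osd create /dev/sdb             # repeat per spare data disk, per node (step 4)
      pveceph pool create vmpool              # RBD pool for VM disks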
  10. J

    Proxmox in production

    When VMware/Dell dropped official support for 12th-gen Dells, I migrated the machines to Proxmox after I flashed the PERCs to IT mode. Zero issues. The only hardware failures have been SAS HDDs, but that's easily handled by ZFS/Ceph.
  11. J

    Hardware recommendation

    For used gear, you can't beat enterprise servers with built-in IPMI and optional 10GbE; there's a curated list at labgopher.com. A Dell R230 with hot-swap drives would be a good choice. For new gear, Supermicro motherboards from mini-ITX to ATX form factors come with IPMI and optional 10GbE and a SAS controller...
  12. J

    Hardware recommendation

    Since I don't use any AMD CPUs at work or at home (Intel exclusively, with zero issues), I've only read about issues with Debian/Proxmox and obscure AMD settings. If you don't plan to create a cluster (I highly recommend homogeneous hardware for clustering), get as much memory for the server as...
  13. J

    New HA Cluster Storage Hardware

    If you need high IOPS for transactional workloads, Ceph will not be the answer; it really wants lots of nodes of homogeneous hardware. That being said, I did convert a fleet of 12th-gen Dells (VMware/Dell dropped official support for ESXi/vSphere 7) with built-in 2 x 10GbE to create a...
  14. J

    Need suggestion for setup

    I use Dell and Supermicro in production. Supermicros are less expensive than Dells. These systems don't particularly need high IOPS, so they use SAS HDDs. They do have max RAM installed. I wouldn't bother with 1GbE; 10GbE or higher is what you want. Since databases do require high IOPS, if...
  15. J

    r730xd H330 passthrough fail

    I use H330s in an R630 Ceph cluster. Is it flashed with the latest firmware version? Did you delete any virtual disks before switching the H330 to HBA mode? If you skip that step, you won't see the physical drives.
  16. J

    Ceph very slow rebalancing ~300Kib

    I get write IOPS in the hundreds, and read IOPS are 2x-3x the write IOPS, using 10K RPM SAS HDDs. This is with a 5-node cluster of Dell 12th-gen 16-drive-bay servers on 10GbE. I am guessing you are using consumer SSDs? They bottleneck very quickly once their internal cache fills. You'll want enterprise SSD...
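
    To put numbers on your own cluster, a quick benchmark sketch with rados bench; the pool name "bench" and its PG count are examples, and deleting pools requires mon_allow_pool_delete to be enabled:

      ceph osd pool create bench 64 64
      rados bench -p bench 60 write -b 4M -t 16 --no-cleanup   # 60-second write test
      rados bench -p bench 60 seq                              # sequential read test against the same objects
      rados -p bench cleanup                                   # remove the benchmark objects
      ceph osd pool delete bench bench --yes-i-really-really-mean-it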
  17. J

    Proxmox VE 8.0 released!

    If this is production, I would wait until 8 matures. All dot-zero releases, regardless of software vendor, are "buggy". There's still another year of support for PVE 7 anyhow.
  18. J

    Proxmox/Ceph Disk Layout

    On 12th-gen Dells, you can flash the PERC to IT mode via https://fohdeesha.com/docs/perc.html On 13th-gen Dells, configure the PERC (flashed to the latest firmware) to HBA/IT mode. Delete any existing RAID volumes. I would ZFS mirror (RAID-1) two drives for Proxmox and use the rest as Ceph OSDs...
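
    A minimal sketch of that layout after a clean install, assuming the installer was pointed at the first two disks as a ZFS RAID-1 target and /dev/sdc onward were left untouched for Ceph (device names are examples):

      zpool status rpool                       # confirm the Proxmox root pool is a two-disk mirror
      lsblk                                    # identify the remaining empty disks
      ceph-volume lvm zap /dev/sdc --destroy   # only if a disk still holds old partitions/data
      pveceph osd create /dev/sdc              # one OSD per leftover disk
      pveceph osd create /dev/sdd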
  19. J

    installing ceph can i use the same IP

    You can use the same one. It's preferable to have a separate physical network, but it will still work with a single physical network.
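
    In ceph.conf terms, a single network just means both options point at the same subnet; the subnets below are examples, and cluster_network only changes if you later add a dedicated replication network:

      [global]
          public_network  = 192.168.10.0/24   # monitors + client (VM) traffic
          cluster_network = 192.168.10.0/24   # OSD replication; point this at a separate subnet if you add one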
  20. J

    Migration from VMWare ESXi to Proxmox

    I used this guide: https://unixcop.com/migrate-virtual-machine-from-vmware-esxi-to-proxmox-ve The -flat.vmdk is the actual virtual disk; the .vmdk is the descriptor file that qm importdisk (or qemu-img convert) reads to locate the -flat.vmdk file.
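
    A rough sketch of the import on the Proxmox side, assuming the VM has already been created as VMID 100 and the target storage is named local-lvm (both example values); copy the .vmdk descriptor and the -flat.vmdk into the same directory first:

      # import straight into a PVE storage (reads the descriptor, pulls in the -flat data file)
      qm importdisk 100 myvm.vmdk local-lvm
      # or convert manually and attach the resulting image yourself
      qemu-img convert -f vmdk -O qcow2 myvm.vmdk myvm.qcow2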