Search results

  1. J

    New Server - which CPU?

2-node clusters, or any cluster with an even number of hosts, will need a QDevice. The QDevice provides a tie-breaking vote, since each host votes for itself. Ideally, you want an odd number of servers for quorum, i.e., 3, 5, 7, etc. The QDevice can be any device. For example, I use a bare-metal...
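The vote arithmetic behind that advice is simple majority; a minimal sketch (generic math, not a Proxmox command — the actual QDevice is added with `pvecm qdevice setup`):

```shell
# Majority needed for quorum with N total votes (each host = 1 vote).
votes_needed() { echo $(( $1 / 2 + 1 )); }

# 2 hosts: majority is 2, so a single failure loses quorum --
# hence the QDevice's extra tie-breaking vote.
echo "2 votes: majority is $(votes_needed 2)"
# 2 hosts + QDevice = 3 votes: majority is still 2, so one host can fail.
echo "3 votes: majority is $(votes_needed 3)"
```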
  2. J

    Advice on Server Purchase and implementation of system

Just make sure the solid-state storage has power-loss protection (PLP) [enterprise]; otherwise you're going to get really bad IOPS.
  3. J

    [SOLVED] Windows VM Poor Performance on Ceph

I don't run any production Ceph clusters on solid-state drives. Yup, it's all SAS 15K HDDs. Don't run Windows either, just all Linux VMs. If you're going to use solid-state drives, you want enterprise solid-state storage with power-loss protection (PLP). With that being said, I use the following VM...
  4. J

    [SOLVED] Network best practice?

    I use this config in production. YMMV. # Configure Dell rNDC X540/I350 Quad NIC card with 10GbE active and 1GbE as backup # VLAN 10 = Management network traffic # VLAN 20 30 40 = VM network traffic # auto lo iface lo inet loopback iface eno1 inet manual...
  5. J

    Recommendation for Datacenter grade switch for running Ceph

For new, Arista. For a less expensive option, Mikrotik. Pick your speed, media (copper and/or optic), and number of ports. I use both without issues at 10GbE. They do make higher-speed switches.
  6. J

    Migrate VMware VMs to Proxmox

I used these guides: https://unixcop.com/migrate-virtual-machine-from-vmware-esxi-to-proxmox-ve/ https://knowledgebase.45drives.com/kb/kb450414-migrating-virtual-machine-disks-from-vmware-to-proxmox/ You will need the .vmdk metafile and the -flat.vmdk file for the 'qemu-img convert' command.
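The conversion step those guides describe boils down to one command; the paths here are placeholders, and qcow2 is just one possible output format (raw also works for Ceph/ZFS):

```shell
# Point qemu-img at the .vmdk descriptor file; it finds and reads the
# matching -flat.vmdk on its own, so both files must sit together.
qemu-img convert -f vmdk -O qcow2 \
    /mnt/esxi/vm1/vm1.vmdk \
    /var/lib/vz/images/100/vm-100-disk-0.qcow2
```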
  7. J

    2U 4 nodes 24 drives suggestions

Depends on use case. I don't use HW RAID, so it's either ZFS for stand-alone or Ceph for a cluster (3-node minimum, but I use 5-node). In either case, I use 2 small storage devices to mirror Proxmox via ZFS RAID-1. The rest of the storage goes to data/VMs.
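At the pool level, the mirrored-boot layout described above looks roughly like this; device names are assumptions, and the Proxmox installer creates the boot mirror for you if you select ZFS RAID-1:

```shell
# Two small devices mirrored for the Proxmox OS
# (equivalent to the installer's ZFS RAID-1 option):
zpool create -f rpool mirror /dev/sda /dev/sdb

# Remaining devices carry data/VMs: a separate pool on a standalone host,
# or handed to Ceph as whole-disk OSDs on a cluster instead.
zpool create -f tank mirror /dev/sdc /dev/sdd
```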
  8. J

    Is a GlusterFS Hyperconverged Proxmox with Hardware Raid doable?

That's good info to know. I've changed the VM's disk cache to writeback and will monitor performance. I also have standalone ZFS servers and changed the cache to writeback there as well. Agree. You want enterprise solid-state storage with power-loss protection (PLP).
  9. J

    Is a GlusterFS Hyperconverged Proxmox with Hardware Raid doable?

My optimizations are for a SAS HDD environment. I don't use any SATA/SAS SSDs in the Proxmox Ceph clusters I manage. I also don't run any Windows VMs. All the VMs are Linux, a mix of CentOS 7 (EOL this year)/Rocky Linux (RL) 8 & 9/Alma Linux (AL) 8 & 9. For the RL/AL VMs, I set the...
  10. J

    Is a GlusterFS Hyperconverged Proxmox with Hardware Raid doable?

    I run a 3-node full-mesh broadcast (https://pve.proxmox.com/wiki/Full_Mesh_Network_for_Ceph_Server#Broadcast_Setup) Ceph cluster without issues. I flashed the PERCs to IT-mode. Corosync, Ceph public & private, Migration network traffic all go through the full-mesh broadcast network. IOPS for...
  11. J

    HELP! - Almost quiting to VMware...

    May want to look at using an IT/HBA-mode controller. I avoid HW RAID controllers when possible.
  12. J

    Request for Hardware Recommendations

May want to look at embedded CPU motherboards using an Intel Xeon-D or Atom. They optionally come with 10GbE (fiber and/or copper) and/or a SAS controller. Supermicro and ASRock Rack make them. I use the Supermicro configurator at wiredzone.com
  13. J

    New Proxmox box w/ ZFS - R730xd w/ PERC H730 Mini in “HBA Mode” - BIG NO NO?

Not running any H730 in HBA mode yet, but running H330 in HBA mode in production. No issues with SAS drives. Don't use any SSDs though.
  14. J

    Dell R430

    No issues with Proxmox on Dell 13th-gen servers. Just need to configure the PERC to HBA mode for use with ZFS/Ceph. Delete any existing virtual disks before switching the PERC to HBA mode otherwise the PERC will NOT see the drives. Needless to say, update all firmware before clean installing...
  15. J

    Combine a hyper-converged proxmox cluster with a separate ceph cluster - anybody done that?

Don't see why not. Just don't run any VMs on the dedicated Ceph cluster. Never tried it myself, since I use an HCI Ceph cluster without issue.
  16. J

    3 node mesh Proxmox and Ceph

I run my 3-node Ceph cluster with a full-mesh broadcast topology https://pve.proxmox.com/wiki/Full_Mesh_Network_for_Ceph_Server#Broadcast_Setup using 169.254.x.x/24 (IPv4 link-local addresses). Obviously it does not use any switches. Can't expand it either unless one adds a switch. I run both...
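A minimal /etc/network/interfaces fragment for the broadcast setup that wiki page describes might look like this; the interface names and the exact link-local subnet are assumptions:

```
auto bond0
iface bond0 inet static
    address 169.254.10.1/24
    bond-slaves ens18 ens19
    bond-mode broadcast
    bond-miimon 100
# Each node gets a unique address in the same /24. No switch involved:
# both NICs cable directly to the other two nodes.
```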
  17. J

    Clean Proxmox & Ceph Install issues

Sounds like your issue is an EOL CPU, like mine was: https://forum.proxmox.com/threads/proxmox-8-ceph-quincy-monitor-no-longer-working-on-amd-opteron-2427.129613 I've since decommissioned the Opteron servers and replaced them with 12th-gen Dells.
  18. J

    Hard Disk recommendation

    Backblaze issues quarterly reports on failure rates on their fleet of HDDs https://www.backblaze.com/blog/backblaze-drive-stats-for-q2-2023/ I prefer Hitachi. If no Hitachi, then WD Purple HDDs.
  19. J

    Install Ceph on newly built 5-node Proxmox VE 8.0.3

I used the following steps on Proxmox 7. Should be the same for Proxmox 8. 1. Make sure the disk controller is in IT/HBA-mode 2. Put the servers in a cluster 3. Click the "Install Ceph" button on each node 4. Create OSDs, Monitors, MDS as needed 5. $$$
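Those GUI steps have CLI equivalents via pveceph, sketched below; the network and device arguments are placeholders you'd adjust for your cluster:

```shell
pveceph install --repository no-subscription   # per node; same as the "Install Ceph" button
pveceph init --network 10.10.10.0/24           # once; sets the Ceph cluster network
pveceph mon create                             # on each intended monitor node
pveceph osd create /dev/sdb                    # per data disk (controller in IT/HBA mode)
pveceph fs create --add-storage                # only if you need CephFS (creates an MDS)
```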
  20. J

    Proxmox in production

When VMware/Dell dropped official support for 12th-gen Dells, I migrated the machines to Proxmox after flashing the PERCs to IT-mode. Zero issues. The only hardware failures have been SAS HDDs, but that's easily fixed in ZFS/Ceph.