Search results

  1. J

    Proxmox installation - hard disks and PowerEdge R620

    Dell BOSS-S1 cards are technically not supported on 13th-gen Dells, but they do work. This server was previously an ESXi host doing backups, and ESXi recognized the card during install. It does show up as an install target during Proxmox installation. The server being installed is a 2U LFF...
  2. J

    [SOLVED] Install Proxmox 8.1 on Boss-N1 and using Dell PERC H965i Controller

    Since Dell BOSS cards are used to mirror the OS, I can confirm that you can install Proxmox on a BOSS-S1 in a 13th-gen Dell. I did use the CLI to configure the mirror. It does show up as an install target during the Proxmox install. I used XFS as the file system. Before I did that, I...
  3. J

    Proxmox installation - hard disks and PowerEdge R620

    It's considered best practice to mirror the OS onto small drives. But with an 8-bay R620, yeah, you lose 2 drive bays. I previously used ZFS RAIDZ2 (the RAID-6 equivalent: you still lose 2 drives' worth of capacity, but any two drives can fail before data is lost). I went ahead and mirrored the OS and set up the ZFS equivalent of RAID-50 (striped RAIDZ vdevs). One last option...
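
    A rough sketch of what a RAID-50-style ZFS layout can look like on the six remaining bays; the pool name and device names are placeholders, not the poster's actual layout:

      # two 3-disk RAIDZ1 vdevs striped together (RAID-50 equivalent)
      zpool create -o ashift=12 tank \
          raidz1 /dev/sdc /dev/sdd /dev/sde \
          raidz1 /dev/sdf /dev/sdg /dev/sdh
      zpool status tank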
  4. J

    New Server - which CPU?

    A 2-node cluster (or any cluster with an even number of hosts) will need a QDevice. The QDevice provides the tie-breaking vote since each host will vote for itself. Ideally, you want an odd number of servers for quorum, i.e., 3, 5, 7, etc. The QDevice can be almost any device. For example, I use a bare-metal...
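
    A minimal sketch of the usual QDevice setup, assuming a spare Debian box at 192.0.2.10 (a placeholder address):

      # on the external QDevice host
      apt install corosync-qnetd
      # on every cluster node
      apt install corosync-qdevice
      # then, from one cluster node, register the QDevice
      pvecm qdevice setup 192.0.2.10
      pvecm status    # should now show the extra expected vote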
  5. J

    Advice on Server Purchase and implementation of system

    Just make sure the solid-state storage has power-loss protection (PLP), i.e., enterprise drives, otherwise you're going to get really bad IOPS.
  6. J

    [SOLVED] Windows VM Poor Performance on Ceph

    I don't run any production Ceph clusters on solid-state drives. Yup, it's all 15K SAS HDDs. I don't run Windows either, just Linux VMs. If you're going to use solid-state drives, you want enterprise solid-state storage with power-loss protection (PLP). With that being said, I use the following VM...
  7. J

    [SOLVED] Network best practice?

    I use this config in production. YMMV.

      # Configure Dell rNDC X540/I350 Quad NIC card with 10GbE active and 1GbE as backup
      # VLAN 10 = Management network traffic
      # VLAN 20 30 40 = VM network traffic
      #
      auto lo
      iface lo inet loopback

      iface eno1 inet manual
      ...
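
    For reference, a minimal sketch of how such an active/backup setup is often completed; this is an assumption, not the poster's full config, and the interface names, VLAN IDs, and addresses are placeholders:

      auto bond0
      iface bond0 inet manual
          bond-slaves eno1 eno3
          bond-mode active-backup
          bond-primary eno1
          bond-miimon 100
          # eno1 = 10GbE active, eno3 = 1GbE backup

      auto vmbr0
      iface vmbr0 inet manual
          bridge-ports bond0
          bridge-stp off
          bridge-fd 0
          bridge-vlan-aware yes
          bridge-vids 10 20 30 40

      auto vmbr0.10
      iface vmbr0.10 inet static
          address 192.0.2.11/24
          gateway 192.0.2.1
          # VLAN 10 = node management interface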
  8. J

    Recommendation for Datacenter grade switch for running Ceph

    For new gear, Arista. For a less expensive option, MikroTik. Pick your speed, medium (copper and/or optical), and number of ports. I use both without issues at 10GbE. Both vendors also make higher-speed switches.
  9. J

    Migrate VMware VMs to Proxmox

    I used these guides: https://unixcop.com/migrate-virtual-machine-from-vmware-esxi-to-proxmox-ve/ https://knowledgebase.45drives.com/kb/kb450414-migrating-virtual-machine-disks-from-vmware-to-proxmox/ You will need both the .vmdk descriptor file and the -flat.vmdk data file for the 'qemu-img convert' command.
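
    A rough sketch of the convert-and-import steps; the file names, VM ID 100, and storage name local-lvm are placeholders:

      # point qemu-img at the descriptor .vmdk (it references the -flat.vmdk data file)
      qemu-img convert -f vmdk -O qcow2 myvm.vmdk myvm.qcow2
      # import the converted disk into an existing Proxmox VM
      qm importdisk 100 myvm.qcow2 local-lvm
      # then attach the unused disk to the VM from the GUI or with 'qm set'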
  10. J

    2U 4 nodes 24 drives suggestions

    Depends on the use case. I don't use HW RAID, so it's either ZFS for stand-alone or Ceph for a cluster (3-node minimum, but I use 5-node). In either case, I use 2 small storage devices to mirror Proxmox via ZFS RAID-1. The rest of the storage goes to data/VMs.
  11. J

    Is a GlusterFS Hyperconverged Proxmox with Hardware Raid doable?

    That's good info to know. I've changed the VM's disk cache to writeback and will monitor performance. I also have standalone ZFS servers and changed the cache to writeback there as well. Agree. You want enterprise solid-state storage with power-loss protection (PLP).
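
    For reference, a minimal sketch of switching an existing VM disk to writeback cache; the VM ID, bus/slot, and volume name are placeholders:

      qm set 100 --scsi0 local-zfs:vm-100-disk-0,cache=writeback
      qm config 100 | grep scsi0   # verify the cache option took effect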
  12. J

    Is a GlusterFS Hyperconverged Proxmox with Hardware Raid doable?

    My optimizations are for a SAS HDD environment. I don't use any SATA/SAS SSDs in the Proxmox Ceph clusters I manage. I also don't run any Windows VMs. All the VMs are Linux, a mix of CentOS 7 (EOL this year), Rocky Linux (RL) 8 & 9, and AlmaLinux (AL) 8 & 9. For the RL/AL VMs, I set the...
  13. J

    Is a GlusterFS Hyperconverged Proxmox with Hardware Raid doable?

    I run a 3-node full-mesh broadcast Ceph cluster (https://pve.proxmox.com/wiki/Full_Mesh_Network_for_Ceph_Server#Broadcast_Setup) without issues. I flashed the PERCs to IT mode. Corosync, Ceph public & private, and migration network traffic all go through the full-mesh broadcast network. IOPS for...
  14. J

    HELP! - Almost quiting to VMware...

    May want to look at using an IT/HBA-mode controller. I avoid HW RAID controllers when possible.
  15. J

    Request for Hardware Recommendations

    May want to look at embedded-CPU motherboards using an Intel Xeon-D or Atom. They optionally come with 10GbE (fiber and/or copper) and/or a SAS controller. Supermicro and ASRock Rack make them. I use the Supermicro configurator at wiredzone.com
  16. J

    New Proxmox box w/ ZFS - R730xd w/ PERC H730 Mini in “HBA Mode” - BIG NO NO?

    Not running any H730s in HBA mode yet, but I am running H330s in HBA mode in production. No issues with SAS drives. I don't use any SSDs, though.
  17. J

    Dell R430

    No issues with Proxmox on Dell 13th-gen servers. You just need to configure the PERC to HBA mode for use with ZFS/Ceph. Delete any existing virtual disks before switching the PERC to HBA mode, otherwise the PERC will NOT see the drives. Needless to say, update all firmware before clean installing...
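
    A rough sketch of clearing the old virtual disks with PERCCLI before the mode switch; these commands are an assumption based on standard StorCLI/PERCCLI syntax, not the poster's exact steps, and the controller index /c0 is a placeholder. The HBA-mode switch itself is typically done afterwards in System Setup (Device Settings > controller > Controller Mode):

      perccli64 /c0 show              # list the controller, virtual disks, and drives
      perccli64 /c0/vall del force    # delete ALL existing virtual disks (destroys their data!)
      # reboot into System Setup and change the controller mode from RAID to HBA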
  18. J

    Combine a hyper-converged proxmox cluster with a separate ceph cluster - anybody done that?

    I don't see why not. Just don't run any VMs on the dedicated Ceph cluster. I've never tried it myself, since I use an HCI Ceph cluster without issue.
  19. J

    3 node mesh Proxmox and Ceph

    I've got my 3-node Ceph cluster running a full-mesh broadcast topology (https://pve.proxmox.com/wiki/Full_Mesh_Network_for_Ceph_Server#Broadcast_Setup) with 169.254.x.x/24 (IPv4 link-local addresses). Obviously it does not use any switches, and it can't be expanded either unless one uses a switch. I run both...
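
    A minimal sketch of the broadcast bond from the linked wiki page, using a link-local subnet as the poster describes; the NIC names and addresses are placeholders:

      auto bond0
      iface bond0 inet static
          address 169.254.10.1/24
          bond-slaves ens18 ens19
          bond-miimon 100
          bond-mode broadcast
          # repeat on the other two nodes with 169.254.10.2 and 169.254.10.3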
  20. J

    Clean Proxmox & Ceph Install issues

    Sounds like your issue is an EOL CPU, like mine was: https://forum.proxmox.com/threads/proxmox-8-ceph-quincy-monitor-no-longer-working-on-amd-opteron-2427.129613 I've since decommissioned the Opteron servers and replaced them with 12th-gen Dells.