Search results

  1. J

    Proxmox installation - hard disks and PowerEdge R620

Considered best practice to mirror the OS drives with small drives. But with an 8-bay R620, yeah, you lose 2 drive bays. I previously used ZFS RAID-6 (still losing two drives' worth of capacity, but any two drives can fail before data is lost). I went ahead and mirrored the OS and set up ZFS RAID-50. One last option...
  2. J

    New Server - which CPU?

A 2-node cluster (or any cluster with an even number of hosts) will need a QDevice. The QDevice provides a tie-breaking vote, since each host votes for itself. Ideally, you want an odd number of servers for quorum, i.e., 3, 5, 7, etc. The QDevice can be any device. For example, I use a bare-metal...
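A QDevice like the one described can be added from the CLI. A minimal sketch, assuming an always-on Debian box at the made-up address 192.0.2.10 (not from the post):

```shell
# On the external QDevice host (example: a Debian box at 192.0.2.10):
apt install corosync-qnetd

# On every Proxmox node in the cluster:
apt install corosync-qdevice

# From any one cluster node, register the QDevice's tie-breaking vote:
pvecm qdevice setup 192.0.2.10

# Confirm the extra vote and quorum state:
pvecm status
```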
  3. J

    Advice on Server Purchase and implementation of system

Just make sure the solid-state storage has power-loss protection (PLP) [enterprise]; otherwise you're going to get really bad IOPS.
  4. J

    [SOLVED] Windows VM Poor Performance on Ceph

I don't run any production Ceph clusters on solid-state drives. Yup, it's all SAS 15K HDDs. I don't run Windows either, just Linux VMs. If you're going to use solid-state drives, you want enterprise solid-state storage with power-loss protection (PLP). With that being said, I use the following VM...
  5. J

    [SOLVED] Network best practice?

    I use this config in production. YMMV. # Configure Dell rNDC X540/I350 Quad NIC card with 10GbE active and 1GbE as backup # VLAN 10 = Management network traffic # VLAN 20 30 40 = VM network traffic # auto lo iface lo inet loopback iface eno1 inet manual...
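The truncated config above can be fleshed out as a sketch. A hypothetical /etc/network/interfaces fragment, where the NIC names, addresses, and bond layout are assumptions; only the VLAN IDs and active/backup roles come from the post's comments:

```text
# /etc/network/interfaces -- sketch only; NIC names and addresses are assumed
auto lo
iface lo inet loopback

iface eno1 inet manual        # 10GbE, active
iface eno2 inet manual        # 1GbE, backup

auto bond0
iface bond0 inet manual
    bond-slaves eno1 eno2
    bond-mode active-backup
    bond-primary eno1
    bond-miimon 100

auto vmbr0
iface vmbr0 inet static
    address 10.0.10.2/24      # VLAN 10 = management (example address)
    gateway 10.0.10.1
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 10,20,30,40   # VLAN 20 30 40 = VM network traffic
```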
  6. J

    Recommendation for Datacenter grade switch for running Ceph

For new, Arista. For a less expensive option, Mikrotik. Pick your speed, medium (copper and/or optical), and number of ports. I use both without issues at 10GbE. They also make higher-speed switches.
  7. J

    Migrate VMware VMs to Proxmox

I used these guides: https://unixcop.com/migrate-virtual-machine-from-vmware-esxi-to-proxmox-ve/ https://knowledgebase.45drives.com/kb/kb450414-migrating-virtual-machine-disks-from-vmware-to-proxmox/ You will need the .vmdk metadata file and the -flat.vmdk file for the 'qemu-img convert' command.
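The convert step from those guides can be sketched roughly as follows; the filenames and target paths here are assumptions, not from the post:

```shell
# Both files must sit in the same directory; qemu-img reads the small
# metadata .vmdk, which points at the -flat.vmdk holding the actual data.
qemu-img convert -f vmdk -O qcow2 myvm.vmdk vm-100-disk-0.qcow2

# For ZFS- or Ceph-backed storage, raw output is the usual target instead:
qemu-img convert -f vmdk -O raw myvm.vmdk vm-100-disk-0.raw
```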
  8. J

    2U 4 nodes 24 drives suggestions

Depends on the use case. I don't use HW RAID, so it's either ZFS for stand-alone or Ceph for a cluster (3-node minimum, but I use 5-node). In either case, I use 2 small storage devices to mirror Proxmox via ZFS RAID-1. The rest of the storage goes to data/VMs.
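The mirrored-OS-plus-data-pool layout described above might look roughly like this. A sketch only: the device names and the pool name "tank" are assumptions, and the OS mirror itself is normally created by the Proxmox installer's "zfs (RAID1)" option rather than by hand:

```shell
# Inspect the installer-created OS mirror (rpool is the installer's default name):
zpool status rpool

# Build a data pool from the remaining bays, e.g. as striped mirrors:
zpool create -o ashift=12 tank mirror /dev/sdc /dev/sdd mirror /dev/sde /dev/sdf
```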
  9. J

    Is a GlusterFS Hyperconverged Proxmox with Hardware Raid doable?

That's good info to know. I've changed the VM's disk cache to writeback and will monitor performance. I also have standalone ZFS servers and changed their cache to writeback as well. Agreed. You want enterprise solid-state storage with power-loss protection (PLP).
  10. J

    Is a GlusterFS Hyperconverged Proxmox with Hardware Raid doable?

My optimizations are for a SAS HDD environment. I don't use any SATA/SAS SSDs in the Proxmox Ceph clusters I manage. I also don't run any Windows VMs. All the VMs are Linux, a mix of CentOS 7 (EOL this year)/Rocky Linux (RL) 8 & 9/Alma Linux (AL) 8 & 9. For the RL/AL VMs, I set the...
  11. J

    Is a GlusterFS Hyperconverged Proxmox with Hardware Raid doable?

    I run a 3-node full-mesh broadcast (https://pve.proxmox.com/wiki/Full_Mesh_Network_for_Ceph_Server#Broadcast_Setup) Ceph cluster without issues. I flashed the PERCs to IT-mode. Corosync, Ceph public & private, Migration network traffic all go through the full-mesh broadcast network. IOPS for...
  12. J

    HELP! - Almost quiting to VMware...

    May want to look at using an IT/HBA-mode controller. I avoid HW RAID controllers when possible.
  13. J

    Request for Hardware Recommendations

May want to look at embedded-CPU motherboards using an Intel Xeon-D or Atom. They optionally come with 10GbE (fiber and/or copper) and/or a SAS controller. Supermicro and ASRock Rack make them. I use the Supermicro configurator at wiredzone.com
  14. J

    New Proxmox box w/ ZFS - R730xd w/ PERC H730 Mini in “HBA Mode” - BIG NO NO?

Not running any H730s in HBA mode yet, but I run H330s in HBA mode in production. No issues with SAS drives. I don't use any SSDs, though.
  15. J

    Dell R430

    No issues with Proxmox on Dell 13th-gen servers. Just need to configure the PERC to HBA mode for use with ZFS/Ceph. Delete any existing virtual disks before switching the PERC to HBA mode otherwise the PERC will NOT see the drives. Needless to say, update all firmware before clean installing...
  16. J

    Combine a hyper-converged proxmox cluster with a separate ceph cluster - anybody done that?

Don't see why not. Just don't run any VMs on the dedicated Ceph cluster. I've never tried it myself, since I use an HCI Ceph cluster without issue.
  17. J

    3 node mesh Proxmox and Ceph

I got my 3-node Ceph cluster running using a full-mesh broadcast topology https://pve.proxmox.com/wiki/Full_Mesh_Network_for_Ceph_Server#Broadcast_Setup with 169.254.x.x/24 (IPv4 link-local addresses). Obviously it does not use any switches; it can't be expanded either unless you add a switch. I run both...
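Per the linked wiki page's broadcast setup, the two mesh NICs on each node are bonded in broadcast mode. A sketch, with assumed NIC names; each node gets its own link-local address:

```text
# /etc/network/interfaces fragment -- broadcast full-mesh sketch
auto bond0
iface bond0 inet static
    address 169.254.1.1/24        # unique per node: .1, .2, .3
    bond-slaves ens1f0 ens1f1     # the two direct links to the other nodes
    bond-mode broadcast
    bond-miimon 100
```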
  18. J

    Clean Proxmox & Ceph Install issues

    Sounds like your issue is an EOL CPU like mine was https://forum.proxmox.com/threads/proxmox-8-ceph-quincy-monitor-no-longer-working-on-amd-opteron-2427.129613 I've since decommissioned the Opteron servers and replaced them with 12th-gen Dells.
  19. J

    Hard Disk recommendation

Backblaze issues quarterly reports on failure rates across their fleet of HDDs https://www.backblaze.com/blog/backblaze-drive-stats-for-q2-2023/ I prefer Hitachi. If no Hitachi, then WD Purple HDDs.
  20. J

    Install Ceph on newly built 5-node Proxmox VE 8.0.3

I used the following steps on Proxmox 7. Should be the same for Proxmox 8. 1. Make sure the disk controller is in IT/HBA mode 2. Put the servers in a cluster 3. Click the "Install Ceph" button on each node 4. Create OSDs, Monitors, and MDS as needed 5. $$$
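The same steps can be driven from the CLI with pveceph instead of the GUI button. A sketch, where the Ceph network and device names are assumptions:

```shell
# Run on each node after clustering (step 3's GUI button, as a command):
pveceph install

# Once, on the first node -- define the Ceph network (example subnet):
pveceph init --network 10.10.10.0/24

# Per node, as needed (step 4); one OSD per data disk:
pveceph mon create
pveceph osd create /dev/sdb
```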
