Search results

  1. J

    4 identical nodes | I need some recommendations

    I have a 4-node R630 setup, but using SAS drives and no SSDs. The default Ceph replication of 3/2 works fine. I can do maintenance on one node while the other three still have quorum. Technically I should have a QDevice for production (there is a setup sketch after these results), but since this is a test cluster, I'm OK with just the 4 nodes.
  2. J

    Using a Proxmox role to restrict VMs to a specific network?

    Got some summer interns incoming and they will have access to a Proxmox cluster. Is there a Proxmox role to restrict VMs to a specific network? Obviously they will need access to the management network but VMs will be on a different network for Internet access. If no such option is...
  3. J

    Re-importing existing OSDs after Proxmox reinstall additional steps?

    After a 3-node test Ceph cluster refused to boot following a lengthy power outage, I reinstalled Proxmox. After creating the Ceph monitors on each host, I ran 'ceph-volume lvm activate --all' against the existing OSDs, which completed without errors. However, there are still no OSDs. I'm guessing the new Ceph monitors... (the re-activation steps are sketched after these results)
  4. J

    [SOLVED] Migration from vCenter 7 to Proxmox VE

    The old-school way is to use the qemu-img convert command (see the sketch after these results). The new-school way is to use Clonezilla over the network. There are plenty of blogs/videos on how to do it.
  5. J

    Hardware Compatibility List > Component Level?

    Proxmox uses Debian as the OS. The information over at debian.org should tell you what hardware is supported. Debian just runs the vanilla Linux kernel, IMO. Proxmox does offer the 5.x LTS kernel and the 6.x kernel for newer hardware.
  6. J

    Dell Server Hardware Advice

    If buying used, I prefer 13th-gen Dells, specifically the R730xd, on which you can use the rear drives as RAID-1 OS boot drives. If you're looking for something newer, then go with 14th-gen Dells, like the R740xd. I would not get 15th-gen Dells until the Linux drivers for that hardware have matured.
  7. J

    Ceph HDDs slow

    I use SAS HDDs in production. I use the following optimizations, learned through trial and error (sketched after these results): set write cache enable (WCE) to 1 on the SAS drives (sdparm -s WCE=1 -S /dev/sd[x]); set the VM cache to none; set the VM to use the VirtIO SCSI single controller and enable the IO thread and discard option...
  8. J

    vSphere migration to Proxmox

    When vSphere/ESXi 7 came out, VMware/Dell dropped official production support for 12th-gen Dells. I switched the entire fleet of 12th-gen Dells to Proxmox running ZFS and/or Ceph after flashing the disk controllers to IT mode. No issues. I did use "qemu-img convert" initially but just learned...
  9. J

    Btrfs vs ZFS on RAID1 root partition

    I've had ZFS RAID-1 fail on OS boot drives before. The zpool shows as degraded, but it still boots. BTRFS RAID-1 is another story: if a drive fails, you are toast if you're only using 2 drives. Use it when you don't care about booting.
  10. J

    micro server Cluster with Ceph

    The best bang for the buck is used enterprise servers. Take a look at used Dell 13th-gen servers. You can optionally upgrade the internal NICs to 10GbE (fiber, wired, or both). The PERC H330 can be configured for HBA/IT mode, or get the PERC HBA330 instead. I like the R730xd, where you can use the rear drives as...
  11. J

    Dell R730 UEFI boot installation issues

    Weirdly, I can install the latest Proxmox in UEFI mode on a Dell R630 but NOT on a Dell R730.
  12. J

    BTRFS RAID1, totally useless?

    I run BTRFS RAID-1 on the boot drives of 12-year-old servers to keep RAM usage down. Only run it on equipment you can recover from or don't care about.
  13. J

    cookbook for building proxmox-backup-client on CentOS/RHEL 7

    It would be great if you could add a SHA-256 checksum file for your RPMs to confirm download integrity (an example is sketched after these results).
  14. J

    CEPH or Cluster first

    For a 3-node cluster setup, I recommend a full-mesh broadcast setup: https://pve.proxmox.com/wiki/Full_Mesh_Network_for_Ceph_Server#Broadcast_Setup (a config sketch follows these results). It works quite well even on 12-year-old servers using 1GbE.
  15. J

    Best way to migrate large volume from Ceph to ZFS

    If you have a spare machine with capacity, another option is Proxmox Backup Server: Old -> PBS -> New (the commands are sketched after these results).
  16. J

    What filesystem should i Use

    You may want to search for "reducing ssd wear proxmox" or take a look at this: https://forum.proxmox.com/threads/minimizing-ssd-wear-through-pve-configuration-changes.89104/#post-391299 I've used both ZFS and BTRFS to mirror boot HDDs on servers. The rest of the drives are used with RAIDZ1/2 or Ceph.
  17. J

    Migrate VMware-VM to Proxmox

    I do it the old-school way: copy over the vmdk (metadata) and vmdk-flat (data) files, then use "qemu-img convert" on the new host to convert them to raw format (sketched after these results). I did this when I migrated from ESXi to Proxmox. There are plenty of videos and blogs on how to do this.
  18. J

    Linux VMs extremely slow performance

    I use the following trial-and-error optimizations for my 3- and 5-node Ceph clusters, all running Linux VMs (sketched after these results): set write cache enable (WCE) to 1 on the SAS drives (sdparm -s WCE=1 -S /dev/sd[x]); set the VM cache to none; set the VM to use the VirtIO SCSI single controller and enable the IO thread and discard...
  19. J

    Enterprise Hardware Suggestions

    You may want to try the Ceph subreddit. I do know that 45Drives builds and supports their own Ceph hardware options, and they do love Proxmox.
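
Command sketches referenced in the results above

For the QDevice mentioned in result 1: a minimal sketch, assuming an external host at 192.0.2.10 (hypothetical address) is reachable over SSH from the cluster nodes and will act as the tie-breaking vote.

  # on the external arbiter host (any small Debian machine that is not a cluster member)
  apt install corosync-qnetd

  # on every cluster node
  apt install corosync-qdevice

  # on one cluster node: register the QDevice so the even node count still keeps quorum
  pvecm qdevice setup 192.0.2.10
  pvecm status    # the QDevice should now show up with one vote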
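
For result 3: a rough sketch of the OSD re-activation checks, assuming the OSD LVM volumes survived the reinstall. The final auth step is only an assumption about why the OSDs stay invisible to the fresh monitors, and the capabilities shown may need adjusting.

  ceph-volume lvm list               # confirm the old OSD volumes are still detected
  ceph-volume lvm activate --all     # start OSD daemons from the existing LVM metadata
  ceph -s                            # do the OSDs register with the new monitors?
  ceph osd tree

  # assumption: the reinstalled monitors have no auth entries for the old OSDs,
  # so each OSD keyring may need to be re-added (example for osd.0)
  ceph auth add osd.0 osd 'allow *' mon 'allow rwx' -i /var/lib/ceph/osd/ceph-0/keyring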
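
For the drive-side tweak in results 7 and 18: a small sketch, assuming /dev/sdb is one of the SAS HDDs backing an OSD (the device name is hypothetical).

  sdparm --get=WCE /dev/sdb       # check the current write-cache setting
  sdparm -s WCE=1 -S /dev/sdb     # enable write cache; -S saves the page so it survives a power cycle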
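
For the checksum request in result 13: the usual sha256sum round trip, assuming a SHA256SUMS file published next to the RPMs (the file name is an assumption).

  # publisher side: generate the checksum file
  sha256sum proxmox-backup-client-*.rpm > SHA256SUMS

  # downloader side: verify the downloaded RPMs against it
  sha256sum -c SHA256SUMS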
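
For the full-mesh broadcast setup in result 14: a sketch of the /etc/network/interfaces stanza along the lines of the linked wiki page, assuming eth1 and eth2 are the two direct-attached links to the other two nodes and 10.15.15.0/24 is the Ceph network (interface names and addressing are assumptions; repeat on each node with a unique address). In broadcast mode every frame is sent out both links, so no switch is needed between the nodes.

  auto bond0
  iface bond0 inet static
          address 10.15.15.50/24
          bond-slaves eth1 eth2
          bond-miimon 100
          bond-mode broadcast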
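
For the Old -> PBS -> New path in result 15: a sketch assuming VM 100, a Proxmox Backup Server storage already added to both sides as 'pbs', and a ZFS target storage named 'local-zfs' (all IDs and names are hypothetical).

  # on the old Ceph-backed side: back the VM up to PBS
  vzdump 100 --storage pbs --mode snapshot

  # on the new side: find the exact backup volume ID, then restore onto the ZFS storage
  pvesm list pbs
  qmrestore pbs:backup/vm/100/<timestamp> 100 --storage local-zfs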
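
For the copy-and-convert path in results 4, 8, and 17: a sketch assuming the source files live on datastore1 on an ESXi host reachable as esxi1, and the target Proxmox VM 100 already exists with a storage named local-lvm (hosts, paths, and names are hypothetical).

  # copy both the descriptor (.vmdk) and the data file (-flat.vmdk) from the ESXi host
  scp root@esxi1:/vmfs/volumes/datastore1/vm1/vm1.vmdk      /var/tmp/
  scp root@esxi1:/vmfs/volumes/datastore1/vm1/vm1-flat.vmdk /var/tmp/

  # convert to raw; the descriptor references the -flat file, so point qemu-img at the descriptor
  qemu-img convert -p -f vmdk -O raw /var/tmp/vm1.vmdk /var/tmp/vm1.raw

  # attach the converted disk to the Proxmox VM, then enable it under the VM's hardware tab
  qm importdisk 100 /var/tmp/vm1.raw local-lvm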
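
For the VM-side settings in results 7 and 18: a sketch using qm, assuming VM 100 already has a disk vm-100-disk-0 on a Ceph storage named ceph-pool (the VM ID, storage, and volume names are hypothetical).

  qm set 100 --scsihw virtio-scsi-single                                        # VirtIO SCSI single controller
  qm set 100 --scsi0 ceph-pool:vm-100-disk-0,cache=none,iothread=1,discard=on   # cache=none, IO thread, discard
  qm config 100                                                                 # confirm the options took effect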
