Search results

  1. Ceph Cluster - Slow performance

    If this 3-node cluster is never going to be expanded, create a full-mesh broadcast network per https://pve.proxmox.com/wiki/Full_Mesh_Network_for_Ceph_Server#Example and https://pve.proxmox.com/wiki/Full_Mesh_Network_for_Ceph_Server#Broadcast_Setup. This setup removes the switch and puts the...
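    The linked broadcast setup boils down to bonding the two mesh-facing NICs in broadcast mode in /etc/network/interfaces on each node. A minimal sketch, assuming the interface names ens19/ens20 and the 10.15.15.0/24 mesh subnet from the wiki example (adjust both to your hardware):

        auto bond0
        iface bond0 inet static
            address 10.15.15.50/24
            bond-slaves ens19 ens20
            bond-miimon 100
            bond-mode broadcast

    Each node gets its own address in the mesh subnet, and Ceph's cluster/public network is then pointed at 10.15.15.0/24.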
  2. How many nodes in a single cluster with Ceph requirement?

    I use Dell R630s in production as Proxmox Ceph clusters. These were converted from VMware/vSphere. They all have the same hardware (CPU 2 x 2650v4, Storage SAS 10K, Storage controller HBA330, RAM 512GB, NIC Intel X550 10GbE) running latest firmware. Ceph & Corosync network traffic on isolated...
  3. About Proxmox VE & PBS features

    Maybe the megaraid_sas driver not being able to use a "mixed-mode" RAID configuration (RAID and HBA [passthrough] mode) is the root cause. No idea, since I don't use mixed-mode. I still stand by my recommendation to get a used Dell HBA330 controller. They are cheap to get. Got one...
  4. Server Suggestions

    If it matters, the Dells in production are 13th-generation with Intel Broadwell CPUs, still firmware-supported by Dell. They will eventually get EOL'd by Dell, but then again, I have 10th-, 11th-, and 12th-gen Dells running Proxmox. I recommend single-socket servers. If you want to save even more...
  5. Proxmox HA cluster with Ceph

    Yes, for live migration, whether manual or HA. Hence, you should really have the same amount of memory across nodes.
  6. Server Suggestions

    Migrating Dell VMware clusters at work to Proxmox. I just make sure all the hardware is the same (CPU, memory, storage, storage controller, networking, firmware, etc). Swapped out Dell PERCs for Dell HBA330s since ZFS & Ceph don't work with RAID controllers. Standalone Dells are running ZFS...
  7. Proxmox HA cluster with Ceph

    Ceph really, really wants homogeneous hardware, meaning the same CPU, memory, networking, storage, storage controller, firmware, etc. While it's true you can run a 3-node cluster, you can only tolerate a 1-node outage. With 5 nodes, you can tolerate a 2-node outage. So, for production, 5 nodes minimum. With that...
  8. About Proxmox VE & PBS features

    I use a Dell PERC H730P in a Proxmox Backup Server (Dell R530). I deleted the virtual disks before converting it to HBA-mode. It's also running the latest firmware from Dell. Proxmox uses the megaraid_sas driver. As for the other Dells in production, I swapped out the PERCs for Dell HBA330s...
  9. Proxmox installation failed on Dell R340

    The 2 x R530s in question are Proxmox Backup Servers. I did a clean install of PBS 4. It was previously running PBS 3. You may want to double-check the PERC settings in the BIOS again and confirm it's in HBA mode. Another option is to put in a single unrelated drive and see if at least Proxmox and...
  10. Proxmox installation failed on Dell R340

    Before converting a RAID controller to HBA mode, you'll want to delete any virtual disks first. If you miss this step, you can do the following:

        # wipefs -af /dev/sd[x]
        # sgdisk -Z /dev/sd[x]
        # wipefs -af /dev/sd[x]

    Where x is the drive letter(s). This should remove any metadata on the...
  11. PVE Best practices and White paper

    Like with any deployment, start with a proof-of-concept testbed. As you know, separate the OS from data. So, use small drives to mirror the OS; I use ZFS RAID-1. Use the rest of the bigger drives for VMs/data. I strongly suggest using an IT/HBA-mode storage controller. I use a Dell HBA330 in production with no...
  12. Proxmox installation failed on Dell R340

    Just get a used Dell HBA330 storage controller. Flash latest firmware and Proxmox will use the simpler mpt3sas driver. I run it in production with both ZFS and Ceph with no issues.
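    To verify which driver ends up bound to the controller, check lspci. The output below is an illustrative sketch for an HBA330 (a SAS3008-based card); your PCI slot, IDs, and device strings will differ:

        # lspci -nnk | grep -A 2 -i sas
        01:00.0 Serial Attached SCSI controller [0107]: Broadcom / LSI SAS3008 [1000:0097]
                Subsystem: Dell HBA330 Adapter
                Kernel driver in use: mpt3sas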
  13. proxmox install& best practice

    Best practice is to separate OS and data storage. So, install the OS on small, mirrored boot drives; I use ZFS RAID-1. Then use the rest of the drives for VMs/data. Your future self will thank you.
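    The resulting boot layout, sketched for a ZFS RAID-1 install to two small drives (disk and partition names are illustrative; the Proxmox installer names the boot pool rpool):

        # zpool status rpool
          pool: rpool
         state: ONLINE
        config:
                NAME         STATE     READ WRITE CKSUM
                rpool        ONLINE       0     0     0
                  mirror-0   ONLINE       0     0     0
                    sda3     ONLINE       0     0     0
                    sdb3     ONLINE       0     0     0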
  14. Backup Server compatibility between versions.

    Just used PBS 3 to back up PVE 9 hosts. No issues. Once the backup was done, I clean-installed PBS 4.
  15. Proxmox 9 in a 3 node cluster with CEPH VM migration use VMBR0 network? how to get faster VM migration speed?

    You can use either the GUI Datacenter option or edit /etc/pve/datacenter.cfg and change the migration network, i.e., migration: type=insecure,network=169.254.1.0/24. Obviously, you need a migration network for this to work. Also, if this is an isolated network, you can use the insecure option for...
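    Spelled out, the datacenter.cfg change from the snippet above (the 169.254.1.0/24 subnet is the example's; substitute your own migration subnet):

        # /etc/pve/datacenter.cfg
        migration: type=insecure,network=169.254.1.0/24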
  16. 10Gb network adapter

    I use Intel X550 and Intel X540 10GbE without issues in production. They are running latest firmware. For 1GbE, I use Intel i350 with latest firmware also without issues. I use Mellanox ConnectX-3 with latest firmware at home but obviously use a supported version in production.
  17. [SOLVED] Proxmox VE Incorrect Time

    It's always DNS. When given the chance, use static IPs.
  18. (Compatibility) Is there a problem restoring a pve8 VM backup (PBS) to a pve9?

    Shouldn't be. I was able to restore a PVE 7 VM to PVE 8 last year. Just did a PVE 8 VM to PVE 9 last week. I upgrade PVE first and do PBS last.
  19. Hardware question: Hardware Raid for OS (Proxmox ve)?

    True. BOSS-S1 is SATA only. I have it mirrored booting Proxmox using XFS.
  20. Hardware advice (or "questionable Proxmox performance on nice box")

    Been migrating Dell servers at work to Proxmox from VMware. While it's true the PERC RAID controller can be switched to HBA mode, it uses the megaraid_sas driver, which has caused issues in the past. I decided to replace the PERC with a Dell HBA330 controller, and it uses the much simpler mpt3sas...