Recent content by jdancer

  1. PVE Best practices and White paper

    Like with any deployment, start with a proof-of-concept testbed. As you know, separate the OS from data: use small drives to mirror the OS (I use ZFS RAID-1) and leave the rest of the bigger drives for VMs/data. I strongly suggest using an IT/HBA-mode storage controller. I use a Dell HBA330 in production with no...
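
    As a rough sketch of that layout (pool, storage, and device names below are placeholders, not taken from the post): the OS mirror is created by the PVE installer, and the bigger drives become a separate pool registered as a VM disk store.

      # Data drives: pool them separately for VMs, e.g. another mirror
      zpool create -o ashift=12 tank mirror /dev/sdc /dev/sdd
      # register the pool with Proxmox as a VM disk store
      pvesm add zfspool tank-vm --pool tank
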
  2. Proxmox installation failed on Dell R340

    Just get a used Dell HBA330 storage controller. Flash the latest firmware, and Proxmox will use the simpler mpt3sas driver. I run it in production with both ZFS and Ceph with no issues.
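
    A quick way to confirm the driver, assuming the pciutils package is installed (the grep pattern is just illustrative):

      # check which kernel driver has bound to the storage controller
      lspci -nnk | grep -iE -A3 'sas|raid'
      # an HBA330 should report: Kernel driver in use: mpt3sas
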
  3. proxmox install& best practice

    Best practice is to separate OS and data storage. Install the OS on small, mirrored boot drives (I use ZFS RAID-1), then use the rest of the drives for VMs/data. Your future self will thank you.
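
    To sanity-check the mirrored OS install afterwards (rpool is the PVE installer's default pool name):

      # verify the boot pool really is a two-disk mirror
      zpool status rpool
      # look for a mirror-0 vdev with both small OS disks ONLINE
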
  4. Backup Server compatibility between versions.

    Just used PBS 3 to back up PVE 9 hosts. No issues. Once the backup was done, I clean-installed PBS 4.
  5. Proxmox 9 in a 3 node cluster with CEPH VM migration use VMBR0 network? how to get faster VM migration speed?

    You can either use the GUI Datacenter option or edit /etc/pve/datacenter.cfg and change the migration network, i.e., migration: type=insecure,network=169.254.1.0/24. Obviously, you need a migration network for this to work. Also, if this is an isolated network, you can use the insecure option for...
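
    For reference, the datacenter.cfg entry quoted above looks like this in the file (the 169.254.1.0/24 subnet is just the example from the post; use your own migration subnet):

      # /etc/pve/datacenter.cfg
      migration: type=insecure,network=169.254.1.0/24
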
  6. 10Gb network adapter

    I use Intel X550 and Intel X540 10GbE NICs in production without issues; they run the latest firmware. For 1GbE, I use the Intel i350, also on the latest firmware, without issues. At home I use a Mellanox ConnectX-3 with the latest firmware, but obviously use a supported model in production.
  7. [SOLVED] Proxmox VE Incorrect Time

    It's always DNS. When given the chance, use static IPs.
  8. (Compatibility) Is there a problem restoring a pve8 VM backup (PBS) to a pve9?

    Shouldn't be. I was able to restore a PVE 7 VM to PVE 8 last year. Just did a PVE 8 VM to PVE 9 last week. I upgrade PVE first and do PBS last.
  9. Hardware question: Hardware Raid for OS (Proxmox ve)?

    True. The BOSS-S1 is SATA only. I have it mirrored, booting Proxmox with XFS.
  10. Hardware advice (or "questionable Proxmox performance on nice box")

    Been migrating Dell servers at work to Proxmox from VMware. While it's true the PERC RAID controller can be switched to HBA mode, it uses the megaraid_sas driver, which has caused issues in the past. I decided to replace the PERC with a Dell HBA330 controller, which uses the much simpler mpt3sas...
  11. 10G NIC HCL for Proxmox 9.0.3

    In production at work, I use the following without issues, all on the latest firmware: Intel X550, Intel X540, and Intel i350. At home, I use a Mellanox ConnectX-3 SFP+ 10GbE fiber NIC, also on the latest firmware, without issues.
  12. ZFS on rotative drives super bad performance

    I run ZFS on standalone servers and Ceph on clustered servers. Usually, on Dell servers, the write cache on hard drives is disabled because it is assumed they will be used behind a BBU RAID controller. Since ZFS & Ceph don't play nice with RAID controllers and only work with HBA controllers, you'll...
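
    The truncated post doesn't show the exact commands, but one way to check and enable a drive's write cache is with sdparm (device name is illustrative; verify per drive before changing anything):

      # show the Write Cache Enable (WCE) bit
      sdparm --get=WCE /dev/sdb
      # enable it; --save keeps the setting across power cycles
      sdparm --save --set=WCE /dev/sdb
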
  13. [Proxmox Cluster with Ceph Full-Mesh Network Design] Sanity Check & Advice for 3-node cluster with separate 10GbE/25GbE networks

    Nope. All network traffic is going over a single network cable per node, as physically described (1 -> 2, 3; 2 -> 1, 3; 3 -> 1,2) at https://pve.proxmox.com/wiki/Full_Mesh_Network_for_Ceph_Server#Introduction. So, yeah, you can say that is a single point-of-failure. Not noticing any latency...
  14. [Proxmox Cluster with Ceph Full-Mesh Network Design] Sanity Check & Advice for 3-node cluster with separate 10GbE/25GbE networks

    I run a 3-node Proxmox Ceph cluster using a full-mesh broadcast network per https://pve.proxmox.com/wiki/Full_Mesh_Network_for_Ceph_Server#Broadcast_Setup. Each node is directly connected to the others without a switch. Yes, since it's broadcast traffic, each node gets Ceph public, private, and...
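
    A minimal sketch of the broadcast bond from that wiki page, per node in /etc/network/interfaces (interface names and the mesh subnet are illustrative):

      auto bond0
      iface bond0 inet static
          address 10.15.15.1/24        # this node's mesh IP
          bond-slaves ens19 ens20      # the two direct-attach links to the other nodes
          bond-mode broadcast
          bond-miimon 100
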
  15. Disk cache none...safe or not safe?

    It was quite reproducible in my production environment, with Ceph migrations done every other week. Again, YMMV. All VMs are Linux with qemu-guest-agent installed. It was only through trial and error that I found the cache policies that work for ZFS & Ceph, and there's been no more corruption.
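
    The post doesn't say which cache policy the author settled on, so the value below is purely a placeholder; the point is that the policy is set per disk on the drive line (VM ID, storage, and volume names are illustrative):

      # inspect the current drive line, including any cache= option
      qm config 100 | grep scsi0
      # re-specify the drive with the desired cache policy
      # (carry over any other options already on that line)
      qm set 100 --scsi0 ceph-pool:vm-100-disk-0,cache=writeback
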