Recent content by jdancer

  1. [SOLVED] Proxmox VE Incorrect Time

    It's always DNS. When given the chance, use static IPs.
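
    A minimal sketch of how I'd check that on a node, assuming the stock chrony setup and the default Debian NTP pool (adjust names to your environment):

    ```
    # Is the clock actually synced?
    timedatectl

    # Which NTP sources is chrony reaching?
    chronyc sources -v

    # If the sources are unreachable, test name resolution; a broken or
    # DHCP-supplied resolver is the usual culprit.
    getent hosts 2.debian.pool.ntp.org
    ```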
  2. (Compatibility) Is there a problem restoring a pve8 VM backup (PBS) to a pve9?

    Shouldn't be. I was able to restore a PVE 7 VM to PVE 8 last year. Just did a PVE 8 VM to PVE 9 last week. I upgrade PVE first and do PBS last.
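
    For reference, a restore on the new node can be done from the CLI as well as the GUI; a rough sketch, assuming a PBS storage called "pbs" and a free VMID 101 (both placeholders -- copy the actual volume ID from the pvesm output):

    ```
    # List the backups PVE sees on the PBS storage
    pvesm list pbs --content backup

    # Restore one of them to a new VMID, onto whatever target storage you like
    qmrestore pbs:backup/vm/100/2025-08-01T02:00:00Z 101 --storage local-zfs
    ```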
  3. Hardware question: Hardware Raid for OS (Proxmox ve)?

    True. The BOSS-S1 is SATA only. I have it running as a mirrored pair and boot Proxmox from it with XFS.
  4. Hardware advice (or "questionable Proxmox performance on nice box")

    Been migrating Dell servers at work to Proxmox from VMware. While it's true the PERC RAID controller can be switched to HBA-mode, it uses the megaraid_sas driver, which has caused issues in the past. I decided to replace the PERC with a Dell HBA330 controller, which uses the much simpler mpt3sas...
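
    If you want to check which driver a controller is actually bound to before trusting it with ZFS/Ceph, something like this works (the grep patterns are just examples):

    ```
    # Show storage controllers and the kernel driver in use for each
    lspci -nnk | grep -iA3 -E 'raid|sas'

    # Or simply see which of the two modules is loaded
    lsmod | grep -E 'megaraid_sas|mpt3sas'
    ```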
  5. 10G NIC HCL for Proxmox 9.0.3

    In production at work, I use the following without issues (all on the latest firmware): Intel X550, Intel X540, Intel i350. At home, I use a Mellanox ConnectX-3 SFP+ 10GbE fiber NIC, also without issues on the latest firmware.
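
    If it helps, the driver and firmware version for a given NIC can be checked with ethtool (interface name is a placeholder):

    ```
    # Prints driver, driver version, and firmware-version for the interface
    ethtool -i enp3s0f0
    ```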
  6. ZFS on rotative drives super bad performance

    I run ZFS on standalone servers and Ceph on clustered servers. Usually, on Dell servers, the write cache on hard drives is disabled because it is assumed they will be used behind a BBU RAID controller. Since ZFS & Ceph don't play nice with RAID controllers, only with HBA controllers, you'll...
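
    If you end up needing to check or re-enable the on-drive write cache yourself once the disks sit behind an HBA, a sketch (device names are examples; whether you want the cache on depends on your setup and power-loss protection):

    ```
    # SATA drives: show / enable the volatile write cache
    hdparm -W /dev/sda
    hdparm -W 1 /dev/sda

    # SAS drives: query / set the WCE bit
    sdparm --get=WCE /dev/sdb
    sdparm --set=WCE=1 /dev/sdb
    ```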
  7. [Proxmox Cluster with Ceph Full-Mesh Network Design] Sanity Check & Advice for 3-node cluster with separate 10GbE/25GbE networks

    Nope. All network traffic is going over a single network cable per node, physically cabled as described (1 -> 2, 3; 2 -> 1, 3; 3 -> 1, 2) at https://pve.proxmox.com/wiki/Full_Mesh_Network_for_Ceph_Server#Introduction So, yeah, you can say that is a single point of failure. Not noticing any latency...
  8. [Proxmox Cluster with Ceph Full-Mesh Network Design] Sanity Check & Advice for 3-node cluster with separate 10GbE/25GbE networks

    I run a 3-node Proxmox Ceph cluster using a full-mesh broadcast network per https://pve.proxmox.com/wiki/Full_Mesh_Network_for_Ceph_Server#Broadcast_Setup Each node is directly connected to the other two without a switch. Yes, since it's broadcast traffic, each node gets Ceph public, private, and...
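
    For reference, the wiki's broadcast setup boils down to a bond stanza in /etc/network/interfaces roughly like this (interface names and the 10.15.15.x addressing are the wiki's example values; one address per node):

    ```
    auto ens19
    iface ens19 inet manual

    auto ens20
    iface ens20 inet manual

    # ens19/ens20 are the two direct links to the other two nodes
    auto bond0
    iface bond0 inet static
            address 10.15.15.50/24
            bond-slaves ens19 ens20
            bond-miimon 100
            bond-mode broadcast
    ```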
  9. Disk cache none...safe or not safe?

    It was quite reproducible in my production environment, with Ceph migrations done every other week. Again, YMMV. All VMs are Linux with qemu-guest-agent installed. It was only through trial and error that I found the cache policies that work for ZFS & Ceph, and there have been no more corruptions.
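
    If you want to confirm the agent is enabled and actually answering (VMID is a placeholder):

    ```
    # Agent option must be set in the VM config...
    qm config 100 | grep agent

    # ...and the guest must respond to it
    qm agent 100 ping
    ```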
  10. Disk cache none...safe or not safe?

    Through trial and error, I've settled on the following disk cache policies in production: for stand-alone servers using ZFS, writeback; for Ceph servers, none. Using anything other than none/no cache with Ceph causes VM migration corruption issues.
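
    In practice that maps to something like this per disk (VMID, storage, and volume names are placeholders; re-specify the existing volume when changing options, or just use the GUI):

    ```
    # Ceph/RBD-backed disk: keep the default of no cache
    qm set 100 --scsi0 ceph-pool:vm-100-disk-0,cache=none

    # Local ZFS-backed disk on a stand-alone box
    qm set 100 --scsi0 local-zfs:vm-100-disk-0,cache=writeback
    ```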
  11. PERC H345

    I'm using Dell HBA330 in production with no issues with ZFS & Ceph. No idea if that is an option for you.
  12. Is Dell PERC H710P supported??

    Better off flashing it to IT-Mode per https://fohdeesha.com/docs/perc.html Do NOT skip any steps and take your time. Make sure you record the SAS address. Don't forget to flash a BIOS and/or ROM on it if you want to boot from the drives. Been running it in production with no issues.
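
    As a rough idea of what "record the SAS address" means in practice (the guide has the exact steps; the address below is just a placeholder): once the card is running IT firmware, the LSI sas2flash utility can list the address and program it back if the crossflash wiped it:

    ```
    # Show adapters, firmware versions, and the current SAS address
    sas2flash -listall
    sas2flash -list

    # Re-program the original SAS address (advanced mode)
    sas2flash -o -sasadd 500605b0xxxxxxxx
    ```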
  13. Open VSwitch broadcast mode?

    Trying my luck asking my question in this forum. I have 3 nodes that are physically cabled (each node has 2 cables to each other node, six cables total, no switch) as a full mesh network per https://pve.proxmox.com/wiki/Full_Mesh_Network_for_Ceph_Server#Introduction (4. Broadcast: Each...
  14. Open VSwitch Broadcast bond mode?

    The 3 nodes are physically cabled (each node has 2 cables to each other node, six cables total, no switch) as a full mesh network per https://pve.proxmox.com/wiki/Full_Mesh_Network_for_Ceph_Server#Introduction (4. Broadcast: Each packet is sent to both other nodes). Node connections (from -> to): 1 --> 2, 3...
  15. Sanity check for new installation

    Corosync and Ceph's definitions of quorum are different, AFAIK. Corosync considers itself quorate with N/2+1 nodes, whereas with Ceph it's an odd number of nodes. So, I always run an odd number of Ceph nodes because, you know, split-brain blows chunks.
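
    A quick way to see both views on a running cluster (with 3 nodes, N/2+1 works out to 2 votes):

    ```
    # Corosync view: expected votes, total votes, and whether the cluster is quorate
    pvecm status

    # Ceph view: which monitors are currently in quorum
    ceph quorum_status --format json-pretty
    ```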