Search results

  1. 10G NIC HCL for Proxmox 9.0.3

    In production at work, I use the following without issues on the latest firmware: Intel X550, Intel X540, and Intel i350. At home, I use a Mellanox ConnectX-3 SFP+ 10GbE fiber NIC, also without issues on the latest firmware.
  2. ZFS on rotative drives super bad performance

    I run ZFS on standalone servers and Ceph on clustered servers. Usually, on Dell servers, the write cache on hard drives is disabled because it is assumed they will be used behind a BBU RAID controller. Since ZFS & Ceph don't play nice with RAID controllers, only with HBA controllers, you'll...
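    A minimal sketch of checking and re-enabling the drive write cache once the drives sit behind a plain HBA (the device name is a placeholder):

      # Show whether the drive's volatile write cache (WCE) is enabled
      sdparm --get WCE /dev/sda

      # Enable it; on an HBA, ZFS/Ceph issue their own cache flushes,
      # so the drive cache is generally safe to turn back on
      sdparm --set WCE=1 /dev/sda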
  3. [Proxmox Cluster with Ceph Full-Mesh Network Design] Sanity Check & Advice for 3-node cluster with separate 10GbE/25GbE networks

    Nope. All network traffic goes over a single network cable per node, physically cabled (1 -> 2, 3; 2 -> 1, 3; 3 -> 1, 2) as described at https://pve.proxmox.com/wiki/Full_Mesh_Network_for_Ceph_Server#Introduction So, yeah, you can say that is a single point of failure. Not noticing any latency...
  4. [Proxmox Cluster with Ceph Full-Mesh Network Design] Sanity Check & Advice for 3-node cluster with separate 10GbE/25GbE networks

    I run a 3-node Proxmox Ceph cluster using a full-mesh broadcast network per https://pve.proxmox.com/wiki/Full_Mesh_Network_for_Ceph_Server#Broadcast_Setup Each node is directly connected to the others without a switch. Yes, since it's broadcast traffic, each node gets Ceph public, private, and...
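    For reference, a minimal /etc/network/interfaces sketch of that broadcast setup on one node (interface names and the mesh subnet are examples loosely following the wiki; adjust per node):

      auto ens19
      iface ens19 inet manual

      auto ens20
      iface ens20 inet manual

      # Both mesh NICs in one broadcast bond: every packet is sent
      # out both cables, one to each peer node
      auto bond0
      iface bond0 inet static
          address 10.15.15.50/24
          bond-slaves ens19 ens20
          bond-miimon 100
          bond-mode broadcast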
  5. Disk cache none...safe or not safe?

    It was quite reproducible in my production environment, with Ceph migrations done every other week. Again, YMMV. All VMs are Linux with qemu-guest-agent installed. It was only through trial and error that I found the cache policies that work for ZFS & Ceph with no more corruptions.
  6. Disk cache none...safe or not safe?

    Through trial and error, I use the following disk cache policies in production: for stand-alone servers using ZFS, writeback; for Ceph servers, none. Using anything other than none/no cache with Ceph causes VM migration corruption issues.
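    A minimal sketch of applying those policies with qm (VM IDs, storage names, and volume names are placeholders):

      # Stand-alone node on local ZFS: writeback
      qm set 100 --scsi0 local-zfs:vm-100-disk-0,cache=writeback

      # Ceph-backed VM: no cache, to keep live migration safe
      qm set 101 --scsi0 ceph-pool:vm-101-disk-0,cache=none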
  7. PERC H345

    I'm using a Dell HBA330 in production with no issues with ZFS & Ceph. No idea if that is an option for you.
  8. Is Dell PERC H710P supported??

    Better off flashing it to IT mode per https://fohdeesha.com/docs/perc.html Do NOT skip any steps, and take your time. Make sure you record the SAS address. Don't forget to flash a BIOS and/or ROM onto it if you want to boot from the drives. Been running it in production with no issues.
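    The record/restore pair for the SAS address looks roughly like this (the address below is a placeholder; the fohdeesha guide has the exact per-step commands):

      # Before wiping the card, note the SAS address it reports
      sas2flash -list

      # After crossflashing to IT mode, program the saved address back
      sas2flash -o -sasadd 500605b0xxxxxxxx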
  9. Open VSwitch broadcast mode?

    Trying my luck asking my question in this forum. Have 3 nodes that are physically cabled (each node has 2 cables to each other node, six cables total, no switch) as a full-mesh network per https://pve.proxmox.com/wiki/Full_Mesh_Network_for_Ceph_Server#Introduction (4. Broadcast: Each...
  10. Open VSwitch Broadcast bond mode?

    The 3 nodes are physically cabled (each node has 2 cables to each other node, six cables total, no switch) as a full-mesh network per https://pve.proxmox.com/wiki/Full_Mesh_Network_for_Ceph_Server#Introduction (4. Broadcast: Each packet is sent to both other nodes) Nodes From To 1 --> 2, 3...
  11. Sanity check for new installation

    Corosync's and Ceph's definitions of quorum are different, AFAIK. Corosync considers itself quorate with N/2+1 nodes, whereas Ceph wants an odd number of nodes. So, I always have an odd number of Ceph nodes because, you know, split-brain blows chunks.
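    Worked example: with 3 nodes, corosync stays quorate with 3/2+1 = 2 nodes (integer division), so one node may fail; with 4 nodes you need 4/2+1 = 3, so you still only tolerate one failure. That is why the odd node count is the better deal.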
  12. Best storage solution for Proxmox Cluster ?

    At work, been migrating VMware clusters over to Proxmox Ceph clusters. These clusters are using 10K SAS drives. One of the issues was why IOPS was really bad. Found out the SAS drives were configured for a BBU HW RAID controller, with the SAS drive write cache disabled because it was connected...
  13. Open VSwitch Broadcast bond mode?

    Per https://pve.proxmox.com/wiki/Full_Mesh_Network_for_Ceph_Server#Broadcast_Setup, I have a 3-node full-mesh broadcast Proxmox cluster working with traditional Linux networking. I've migrated the nodes to Open VSwitch. The only non-LACP bond mode options are active-backup & balance-slb. Well...
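    Since OVS has no broadcast bond mode, any sketch has to fall back to one of the modes it does offer (bridge, bond, and interface names here are examples):

      # Active-backup bond on an OVS bridge; only one mesh link
      # carries traffic at a time, unlike the Linux broadcast bond
      ovs-vsctl add-bond vmbr1 bond0 ens19 ens20 bond_mode=active-backup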
  14. CentOS 7.3 cannot boot after migrate from VMware

    Did you prep the VM before migration? See my reply at https://forum.proxmox.com/threads/debian-11-not-booting-with-virtio-scsi-single-but-works-with-vmware-pvscsi.144806/#post-652170
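    One common prep step (an assumption here, not necessarily what the linked reply describes) is baking the virtio drivers into the guest's initramfs before cutting over:

      # CentOS/RHEL guests
      dracut --add-drivers "virtio_blk virtio_scsi" --force

      # Debian/Ubuntu guests
      printf 'virtio_blk\nvirtio_scsi\n' >> /etc/initramfs-tools/modules
      update-initramfs -u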
  15. Dell T140 with PERC H330 on Proxmox 8.4

    It does work. Have it running in production in a couple of Dell R530s running PBS. Make sure it's running the latest firmware, delete any virtual disks, and switch it to HBA mode. It uses the megaraid SAS driver. A better option is the Dell HBA330, which is a pure HBA controller. I use this on...
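    A quick sanity check after switching modes is to confirm which kernel driver the controller is bound to (output varies by system):

      # An H330 in HBA mode should show "Kernel driver in use: megaraid_sas";
      # an HBA330 shows mpt3sas
      lspci -nnk | grep -iA3 'raid\|sas'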
  16. Broadcom or Intel network card support

    I use Intel NICs in production, except for the 700-series, on Dell servers without issues. I do have Dell servers with Broadcom 1GbE which work fine, but I don't have any 10/25GbE Broadcoms in production. May want to ask your question on Reddit.
  17. Migration from VMware - vSAN or Ceph?

    At work, migrated from VMware to Proxmox. The question was which filesystem provides native snapshots. For standalone servers, it was ZFS. For clusters, it was Ceph. I think of Ceph as an open-source version of vSAN. Researched Ceph and found out it wants homogeneous hardware. So made sure the...
  18. Proxmox Ceph Cluster Optimizations?

    May also want to ask your question on Reddit's r/ceph
  19. VMware to Proxmox migration

    Took a look at the SCv3020 manual, and it seems this storage unit supports direct connection over SAS. Proxmox should be able to see each individual drive. I use ZFS in production on standalone servers for its data/metadata error checking, compression, and snapshots. For max IOPS, I use ZFS...
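    The snippet is cut off, but the classic max-IOPS ZFS layout is striped mirrors; a minimal sketch (disk names are placeholders; use /dev/disk/by-id paths in practice):

      # Two mirrored pairs striped together, RAID-10 style
      zpool create -o ashift=12 tank \
          mirror /dev/sda /dev/sdb \
          mirror /dev/sdc /dev/sdd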
  20. HW RAID or ZFS on Dell PowerEdge R630

    At work, I'm involved with migrating Dells from VMware to Proxmox. I just make sure all hardware and firmware are the same (CPU, NIC, storage, RAM, etc). I do swap out the PERC for a Dell HBA330. I use two small drives with ZFS RAID-1 for Proxmox; the rest of the drives are for VMs/data. ZFS provides...