Recent content by jdancer

  1. ESXi → Proxmox migration: Host CPU vs x86-64-v4-AES performance & 10 Gbit VirtIO speeds

    Per https://www.proxmox.com/en/services/training-courses/videos/proxmox-virtual-environment/whats-new-in-proxmox-ve-9-1 for Windows Server 2025 VMs, you'll want to enable the nested-virt flag under the Extra CPU Flags options.
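A minimal sketch of checking and enabling nested virtualization on the host itself, separate from the per-VM Extra CPU Flags checkbox (assuming an Intel CPU; substitute kvm_amd on AMD hosts):

```shell
# Check whether nested virtualization is enabled on the host (Y = enabled)
cat /sys/module/kvm_intel/parameters/nested

# Enable it persistently, then reload the module (do this with no VMs running)
echo "options kvm_intel nested=1" > /etc/modprobe.d/kvm-nested.conf
modprobe -r kvm_intel && modprobe kvm_intel
```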
  2. ddr4 intel optane

    Since Proxmox is Debian with an Ubuntu LTS kernel, it should work. If it were me, I would just go straight to flash storage and skip it. I do, however, use the Intel Optane P1600X as a ZFS RAID-0 OS drive for Proxmox without issues.
  3. VMware user here

    If you plan on using shared storage, your officially supported Proxmox options are Ceph & ZFS (they do NOT work with RAID controllers like the Dell PERC). Both require an IT/HBA-mode controller. I use a Dell HBA330 in production with no issues.
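A quick way to sanity-check that the controller really presents as an IT-mode HBA (a sketch; device names and PCI output will vary per system):

```shell
# An IT-mode HBA like the HBA330 binds to the mpt3sas driver,
# while a PERC in RAID mode binds to megaraid_sas
lspci -nnk | grep -iA3 'sas\|raid'

# Confirm which driver is actually loaded
lsmod | grep -E 'mpt3sas|megaraid_sas'
```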
  4. Dedicated Migration Network vs. High Speed Storage Network: Do I need two separate VLANs when Clustering?

    Technically, you do not if this is a home lab, which I am guessing it is. That said, it is considered best production practice to separate the various networks into their own VLANs, especially Corosync, which ideally gets its own isolated network switches. Notice, I said best practice. However, lots of people...
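As a hedged sketch, a dedicated Corosync VLAN in /etc/network/interfaces on a Proxmox node might look like this (VLAN ID 50, the interface name, and the address are made-up examples):

```text
# /etc/network/interfaces fragment: tagged VLAN 50 on eno1, Corosync only
auto eno1.50
iface eno1.50 inet static
        address 10.50.0.11/24
```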
  5. H330 sas controller failed after upgrading to Kernel 6.17.2-2-pve

    Better off with a Dell HBA330. It's an LSI 3008 IT-mode controller chip anyhow. Just make sure to update the firmware to the latest version at dell.com/support
  6. The SSD search continues ...

    As was mentioned, getting a new drive is "nice" but not really required. With a reputable enterprise flash drive, buying used is fine. I have used 5-year-old Intel enterprise SSDs and they still show 100% life. At home, I use Intel Optane drives, which pretty much have infinite lifetime but don't...
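Checking remaining life on a used enterprise SSD is easy with smartctl (a sketch; /dev/sda is a placeholder and the exact attribute names vary by vendor):

```shell
# smartmontools provides smartctl
apt install smartmontools

# Look for wear/endurance attributes such as
# "Percentage Used" (NVMe) or "Media_Wearout_Indicator" (Intel SATA)
smartctl -a /dev/sda | grep -iE 'percent|wear|media'
```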
  7. The SSD search continues ...

    This thread at https://forums.servethehome.com/index.php?threads/enterprise-ssd-small-deals.48343 has a list of enterprise SSDs for sale. Another option is getting a server that supports U.2/U.3 flash drives. They can be surprisingly cheap, often cheaper than SATA enterprise SSDs.
  8. H740p mini and SAS Intel SSD PX05SMB040

    I compile this utility to format SAS drives to 512-byte sectors or do a low-level format: https://github.com/ahouston/setblocksize You'll need to install the sg3_utils package. While doing that, you might as well install the sdparm package and enable the write cache on the SAS drive, as root, running the following...
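The steps above can be sketched roughly like this (run as root; /dev/sg1 and /dev/sdb are placeholder device names, and a low-level format destroys all data on the drive):

```shell
# sg3_utils provides sg_format; sdparm manages SCSI mode pages
apt install sg3-utils sdparm

# Reformat a SAS drive to 512-byte sectors (slow and destructive)
sg_format --format --size=512 /dev/sg1

# Enable the write cache (WCE) and save it so it survives power cycles
sdparm --set=WCE --save /dev/sdb

# Verify the setting took
sdparm --get=WCE /dev/sdb
```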
  9. H740p mini and SAS Intel SSD PX05SMB040

    Don't forget to flash the HBA330 to the latest firmware from dell.com/support
  10. H740p mini and SAS Intel SSD PX05SMB040

    As I mentioned before, so much drama with PERC controllers in HBA-mode. Just swap them out for a Dell HBA330, a true IT-mode storage controller. Your future self will thank you. They are cheap to get.
  11. Proxmox Offline Mirror Pick the Latest Snapshot

    I use this script by Thomas https://forum.proxmox.com/threads/proxmox-offline-mirror-released.115219/#post-506894
  12. [Help] Dell R740 + Broadcom BCM5720 NDC - Ports Active (Lights On) but Not Detected by Proxmox

    I use Dell Intel X550 rNDC in production without issues. Both the 2x1GbE-2x10GbE and 4x10GbE versions. The 10GbE ports use the ixgbe driver and the 1GbE ports use the igb driver. Use 'dmesg -t' to confirm. Obviously, flash the rNDC to the latest firmware version, which is currently v23.0.0 dated 20Sep2024...
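Confirming which driver actually claimed each port can be sketched as:

```shell
# List NICs along with the kernel driver bound to each one
lspci -nnk | grep -iA3 ethernet

# Or grep the boot log for the Intel drivers mentioned above
dmesg -t | grep -iE 'ixgbe|igb'
```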
  13. H330 sas controller failed after upgrading to Kernel 6.17.2-2-pve

    It's these one-off situations with the megaraid_sas driver; just installing a Dell HBA330, which uses the much simpler mpt3sas driver, will avoid all this drama. LOL. In addition, the Dell HBA330 is very cheap to get.
  14. Esxi migration to proxmox

    While it's true 3 nodes is the minimum for a Ceph cluster, you can only lose 1 node before losing quorum. You'll really want 5 nodes. Ceph is a scale-out solution: more nodes/OSDs = more IOPS. With 5 nodes you can lose 2 and still have quorum. While converting the PERC to HBA-mode does work, I've had...
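The quorum arithmetic is just majority voting: with N monitors, quorum needs a strict majority, so you can tolerate floor((N-1)/2) failures. A quick sketch:

```shell
# Failures tolerated while keeping a strict majority: (N - 1) / 2
for n in 3 5; do
    echo "$n nodes: tolerate $(( (n - 1) / 2 )) failure(s)"
done
# → 3 nodes: tolerate 1 failure(s)
# → 5 nodes: tolerate 2 failure(s)
```

On a live cluster, 'pvecm status' and 'ceph -s' show the current quorum state directly.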
  15. Storage for production cluster

    You really need to confirm that the write cache is enabled via 'dmesg -t' output for each drive. If the write/read cache is disabled, it really kills the IOPS. While technically 3 nodes is indeed the bare minimum for Ceph, I don't consider it production worthy due to the fact that if you lose 1...
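Checking the cache state can be sketched as (device names are placeholders):

```shell
# The kernel sd driver logs cache state when each disk attaches, e.g.:
#   sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, ...
dmesg -t | grep -i 'write cache'

# Or query the WCE bit on the drive's caching mode page directly
sdparm --get=WCE /dev/sda
```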