Search results

  1.

    [SOLVED] Proxmox on Dell P570F

    Seems the Dell P570F is nothing more than a Dell R740xd. I would get a Dell R740xd to future-proof it and make sure it doesn't get vendor-locked. Make sure you get the NVMe version of the R740xd, otherwise you'll get an R740xd with a PERC, which is NOT what you want. So as to NOT waste any NVMe...
  2.

    PVE 9.1 Installation on Dell R515

    Sounds good. I've moved on to 13th-gen Dells and swapped out the Dell PERCs for Dell HBA330s, which are true HBA/IT-mode controllers.
  3.

    PVE 9.1 Installation on Dell R515

    I use this, https://fohdeesha.com/docs/perc.html, to flash 12th-gen Dell PERCs to IT mode with no issues in production. Don't skip any steps and take your time. Don't forget to flash the BIOS/UEFI boot ROMs to allow booting Proxmox off it.
  4.

    Install Proxmox on Dell PowerEdge R6515 with RAID1

    That darn PERC and its HBA/IT-mode drama. Get a true HBA controller. I use Dell HBA330s in production with no issues.
  5.

    iSCSI/LVM RHEL guest disk scheduler selection

    I've used none/noop on Linux guests forever on virtualization platforms, including VMware and Proxmox, in production with no issues. Per that RH article, I don't use iSCSI/SR-IOV/passthrough. I let the hypervisor's I/O scheduler figure out I/O ordering.
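    Not from the original post, but a quick sketch of how checking and setting the guest scheduler might look. The device name `vda` is a placeholder; the helper just parses the sysfs scheduler line, where the bracketed entry is the active one.

```shell
# active_sched: print the active (bracketed) scheduler from a
# /sys/block/*/queue/scheduler line. Pure text helper, no hardware needed.
active_sched() {
    sed -n 's/.*\[\([^]]*\)\].*/\1/p' <<< "$1"
}

# On a real guest (run as root; /dev/vda is a placeholder device):
#   cat /sys/block/vda/queue/scheduler         # e.g. "[none] mq-deadline kyber bfq"
#   echo none > /sys/block/vda/queue/scheduler # set at runtime
```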
  6.

    Ceph performance

    Lack of power-loss protection (PLP) on those SSDs is the primary reason for the horrible IOPS. Read other posts on why PLP is important for SSDs. I get IOPS in the low thousands on a 7-node Ceph cluster using 10K RPM SAS drives in 16-drive-bay nodes. For Ceph, more OSDs/nodes = more IOPS.
  7.

    ESXi → Proxmox migration: Host CPU vs x86-64-v4-AES performance & 10 Gbit VirtIO speeds

    Try setting the VM's VirtIO network Multiqueue to 1. In my case, giving the NIC its own I/O thread helps with networking throughput.
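    A sketch of what that might look like with the Proxmox `qm` CLI. The VMID 100 and bridge vmbr0 are placeholders, and your actual net0 line will carry more options (MAC address etc.); the helper just composes the option string.

```shell
# build_net0: compose a --net0 value with an explicit multiqueue count
# (bridge and queue count passed as arguments).
build_net0() {
    printf 'virtio,bridge=%s,queues=%s' "$1" "$2"
}

# On a Proxmox host (100 is a placeholder VMID):
#   qm set 100 --net0 "$(build_net0 vmbr0 1)"
```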
  8.

    ESXi → Proxmox migration: Host CPU vs x86-64-v4-AES performance & 10 Gbit VirtIO speeds

    Per https://www.proxmox.com/en/services/training-courses/videos/proxmox-virtual-environment/whats-new-in-proxmox-ve-9-1 for Windows Server 2025 VMs, you'll want to enable the nested-virt flag under Extra CPU Flags options.
  9.

    ddr4 intel optane

    Since Proxmox is Debian with an Ubuntu LTS kernel, it should work. If it were me, I would just go straight to flash storage and skip it. I do, however, use the Intel Optane P1600X as a ZFS RAID-0 OS drive for Proxmox without issues.
  10.

    VMware user here

    If you plan on using shared storage, your officially supported Proxmox options are Ceph & ZFS (they do NOT work with RAID controllers like the Dell PERC). Both require an IT/HBA-mode controller. I use a Dell HBA330 in production with no issues.
  11.

    Dedicated Migration Network vs. High Speed Storage Network: Do I need two separate VLANs when Clustering?

    Technically, you do not, if this is a home lab, which I am guessing it is. Now, it is considered best production practice to separate the various networks into their own VLANs, especially Corosync, which should get its own isolated network switches. Notice, I said best practice. However, lots of people...
  12.

    H330 sas controller failed after upgrading to Kernel 6.17.2-2-pve

    Better off with a Dell HBA330. It's an LSI 3008 IT-mode controller chip anyhow. Just make sure to update the firmware to the latest version at dell.com/support.
  13.

    The SSD search continues ...

    As was mentioned, getting a new drive is "nice" but not really required. With a reputable enterprise flash drive, buying used is fine. I have used 5-year-old Intel enterprise SSDs and they still show 100% life. At home, I use Intel Optane drives, which pretty much have infinite endurance but don't...
  14.

    The SSD search continues ...

    This thread at https://forums.servethehome.com/index.php?threads/enterprise-ssd-small-deals.48343 has a list of enterprise SSDs for sale. Another option is getting a server that supports U.2/U.3 flash drives; they can be surprisingly cheap compared to SATA enterprise SSDs.
  15.

    H740p mini and SAS Intel SSD PX05SMB040

    I compile this utility, https://github.com/ahouston/setblocksize, to format SAS drives to 512-byte sectors or do a low-level format. You'll need to install the sg3_utils package. While doing that, you might as well install the sdparm package and enable the write cache on the SAS drive as root by running the following...
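    The snippet is cut off, but the sdparm portion might look roughly like this. The device `/dev/sdX` is a placeholder, and the output format the helper parses is an assumption based on sdparm's usual per-field lines.

```shell
# wce_bit: pull the WCE value out of `sdparm --get=WCE` output, whose
# per-field lines look like "WCE         1  [cha: y, def:  0, sav:  0]".
wce_bit() {
    awk '$1 == "WCE" { print $2 }' <<< "$1"
}

# On the host, as root (/dev/sdX is a placeholder):
#   apt install sg3-utils sdparm
#   sdparm --get=WCE /dev/sdX           # 1 = volatile write cache enabled
#   sdparm --set=WCE --save /dev/sdX    # enable and persist, if the drive supports saving
```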
  16.

    H740p mini and SAS Intel SSD PX05SMB040

    Don't forget to flash the HBA330 to the latest firmware from dell.com/support.
  17.

    H740p mini and SAS Intel SSD PX05SMB040

    As I mentioned before, so much drama with PERC controllers in HBA mode. Just swap them out for a Dell HBA330, a true IT-mode storage controller. Your future self will thank you. They are cheap to get.
  18.

    Proxmox Offline Mirror Pick the Latest Snapshot

    I use this script by Thomas https://forum.proxmox.com/threads/proxmox-offline-mirror-released.115219/#post-506894
  19.

    [Help] Dell R740 + Broadcom BCM5720 NDC - Ports Active (Lights On) but Not Detected by Proxmox

    I use Dell Intel X550 rNDCs in production without issues, both the 2x1GbE+2x10GbE and 4x10GbE versions. The 10GbE ports use the ixgbe driver and the 1GbE ports use the igb driver. Use 'dmesg -t' to confirm. Obviously, flash the rNDC to the latest firmware version, which is currently v23.0.0, dated 20Sep2024...
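    A sketch of confirming which driver bound to a NIC without grepping dmesg. The interface name `eno1` is a placeholder; the helper parses `ethtool -i` output, whose first line is typically "driver: ixgbe".

```shell
# nic_driver: extract the driver name from `ethtool -i <iface>` output.
nic_driver() {
    awk -F': ' '/^driver:/ { print $2 }' <<< "$1"
}

# On the host (eno1 is a placeholder interface):
#   ethtool -i eno1 | head -1
#   readlink /sys/class/net/eno1/device/driver   # symlink also names the driver
```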
  20.

    H330 sas controller failed after upgrading to Kernel 6.17.2-2-pve

    It's these one-off situations with the megaraid_sas driver; just installing a Dell HBA330, which uses the much simpler mpt3sas driver, will avoid all this drama. LOL. In addition, the Dell HBA330 is very cheap to get.