Search results

  1. Is it safe to use Proxmox VE (No-Subscription / Free) for small-scale production?

    Yes, the no-subscription repos are fine as long as you have Linux SME skills. Always have a Plan B, which is backups. I highly recommend testing updates on a separate server/cluster before pushing them to production.
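
    For example, enabling the free repo is a one-liner (a minimal sketch assuming PVE 8 on Debian 12 "bookworm"; adjust the codename to your release, and remember to comment out the pve-enterprise list):

        # /etc/apt/sources.list.d/pve-no-subscription.list
        deb http://download.proxmox.com/debian/pve bookworm pve-no-subscription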
  2. 10G NIC HCL for Proxmox 9.0.3

    I haven't migrated to PVE 9 yet. I'm still on PVE 8 at home. I wonder if a ConnectX-4 LX works?
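
    One quick sanity check is which kernel driver the card binds to (a sketch; the interface name enp1s0f0 is a placeholder). The ConnectX-4 Lx uses the in-kernel mlx5_core driver:

        lspci -nn | grep -i mellanox    # should list the ConnectX-4 Lx
        ethtool -i enp1s0f0             # "driver: mlx5_core" confirms the in-tree driver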
  3. Proxmox Install Failing on R740xd - PERC H730P

    Ditch the PERC HBA-mode drama and swap it for a Dell HBA330, a true IT/HBA-mode storage controller. Your future self will thank you. Plus, HBA330s are very cheap to get. Update to the latest firmware from dell.com/support
  4. [SOLVED] Proxmox VE namespaces?

    Just as Proxmox Backup Server supports namespaces to do hierarchical backups on the same backup pool, does Proxmox VE support namespaces for the creation of VMs/CTs as well on the same node/cluster? I really, really do NOT want to stand up an IaaS such as Apache Cloudstack. I want each local...
  5. HW-based RAID corrupted something, VMs won't start anymore

    Hopefully you have backups. I strongly recommend using a pure IT/HBA-mode storage controller and letting software-defined storage (ZFS, LVM, Ceph) handle your storage needs. I use an LSI 3008 IT-mode storage controller (Dell HBA330) in production with no issues.
  6. [SOLVED] Using RAID in HBA mode or remove RAID?

    Seriously, ditch the PERC HBA-mode drama and get a Dell HBA330, which is a true IT/HBA-mode controller. It uses the much simpler mpt3sas driver. Be sure to update to the latest firmware at dell.com/support. Super cheap to get and no more drama! LOL!
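
    A quick check that you're really running IT-mode (a sketch; output details vary by system): the HBA330 is an LSI/Broadcom SAS3008 and should bind to mpt3sas:

        lspci -nn | grep -i sas     # expect a SAS3008-based controller
        lsmod | grep mpt3sas        # the IT-mode driver should be loaded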
  7. Is a 3-node Full Mesh Setup For Ceph and Corosync Good or Bad

    While it's true that 3 nodes is the bare minimum for Ceph, losing a node and depending on the other 2 to pick up the slack would make me nervous. For best practice, start with 5 nodes. With Ceph, more nodes/OSDs = more IOPS. As has been said, better to have good backup and restore procedures...
  8. [SOLVED] Proxmox on Dell P570F

    Seems the Dell P570F is nothing more than a rebadged Dell R740xd. I would get a Dell R740xd instead to future-proof it and make sure it doesn't get vendor-locked. Make sure you get the NVMe version of the R740xd, otherwise you'll get an R740xd with a PERC, which is NOT what you want. So as to NOT waste any NVMe...
  9. New to this (Reflashing Dell PERC)

    Sounds good. I've moved on to 13th-gen Dells and swapped out the Dell PERCs for Dell HBA330s, which are true HBA/IT-mode controllers.
  10. New to this (Reflashing Dell PERC)

    I use this, https://fohdeesha.com/docs/perc.html, to flash 12th-gen Dell PERCs to IT-mode with no issues in production. Don't skip any steps, and take your time. Don't forget to flash the BIOS/UEFI boot ROMs so you can boot Proxmox off the controller.
  11. Install Proxmox on Dell PowerEdge R6515 with RAID1

    That darn PERC and its HBA/IT-mode drama. Get a true HBA controller. I use Dell HBA330s in production with no issues.
  12. iSCSI/LVM RHEL guest disk scheduler selection

    I've used none/noop on Linux guests since, like, forever on virtualization platforms, including VMware and Proxmox in production with no issues. Per that RH article, I don't use iSCSI/SR-IOV/passthrough. I let the hypervisor's I/O scheduler figure out I/O ordering.
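
    To make that persistent inside a guest, a udev rule is the usual route (a sketch; the file name is arbitrary and the KERNEL match assumes virtio/SCSI disk names):

        # /etc/udev/rules.d/60-io-scheduler.rules
        ACTION=="add|change", SUBSYSTEM=="block", KERNEL=="sd[a-z]*|vd[a-z]*", ATTR{queue/scheduler}="none"

        # verify: the active scheduler shows in brackets
        cat /sys/block/vda/queue/scheduler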
  13. Ceph performance

    Lack of power-loss protection (PLP) on those SSDs is the primary reason for the horrible IOPS. Read other posts on why PLP is important for SSDs. I get IOPS in the low thousands on a 7-node Ceph cluster using 10K RPM SAS drives in 16-drive-bay nodes. With Ceph, more OSDs/nodes = more IOPS.
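
    To put a number on it, rados bench gives a quick baseline (a sketch; "testbench" is a throwaway pool created just for the test):

        ceph osd pool create testbench 64 64
        rados bench -p testbench 60 write -b 4096 -t 16 --no-cleanup    # 4K writes expose missing PLP
        rados -p testbench cleanup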
  14. ESXi → Proxmox migration: Host CPU vs x86-64-v4-AES performance & 10 Gbit VirtIO speeds

    Try setting the VM's VirtIO network Multiqueue to 1. Giving the NIC its own I/O thread helped with networking throughput in my case.
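
    From the CLI, that's the queues option on the NIC definition (a sketch; VMID 100, the MAC, and vmbr0 are placeholders, and note that re-specifying --net0 without the existing MAC generates a new one):

        qm set 100 --net0 virtio=BC:24:11:AA:BB:CC,bridge=vmbr0,queues=1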
  15. ESXi → Proxmox migration: Host CPU vs x86-64-v4-AES performance & 10 Gbit VirtIO speeds

    Per https://www.proxmox.com/en/services/training-courses/videos/proxmox-virtual-environment/whats-new-in-proxmox-ve-9-1, for Windows Server 2025 VMs you'll want to enable the nested-virt flag under the Extra CPU Flags options.
  16. ddr4 intel optane

    Since Proxmox is Debian with an Ubuntu LTS kernel, it should work. If it were me, I would just go straight to flash storage and skip it. I do, however, use the Intel Optane P1600X as a ZFS RAID-0 OS drive for Proxmox without issues.
  17. VMware user here

    If you plan on using shared storage, your officially supported Proxmox options are Ceph & ZFS (they do NOT work with RAID controllers like the Dell PERC); both require an IT/HBA-mode controller. I use a Dell HBA330 in production with no issues.
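
    Once the disks sit behind a real HBA, handing them to ZFS is simple (a sketch; the pool name "tank" and the by-id paths are placeholders for your actual disk IDs):

        zpool create -o ashift=12 tank mirror \
            /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2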
  18. Dedicated Migration Network vs. High Speed Storage Network: Do I need two separate VLANs when Clustering?

    Technically, you do not if this is a home lab, which I am guessing it is. Now, it is considered best production practice to separate the various networks into their own VLANs, especially Corosync, which should get its own isolated network switches. Notice, I said best practice. However, lots of people...
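
    On the cluster side, Corosync can get its own dedicated links right at creation time (a sketch; the addresses are placeholders on separate NICs/VLANs):

        pvecm create mycluster --link0 10.10.10.1 --link1 10.20.20.1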
  19. H330 sas controller failed after upgrading to Kernel 6.17.2-2-pve

    You're better off with a Dell HBA330. It's an LSI 3008 IT-mode controller chip anyhow. Just make sure to update the firmware to the latest version at dell.com/support
  20. The SSD search continues ...

    As was mentioned, getting a new drive is "nice" but not really required. With a reputable enterprise flash drive, getting it used is fine. I have used 5-year-old Intel enterprise SSDs and they still show 100% life. At home, I use Intel Optane, which pretty much has infinite lifetime but doesn't...
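
    Checking remaining life on a used drive takes seconds with smartctl (a sketch; device paths are placeholders; Intel SATA SSDs report Media_Wearout_Indicator, NVMe drives report "Percentage Used"):

        smartctl -a /dev/sda | grep -i wearout              # SATA: 100 = like new
        smartctl -a /dev/nvme0 | grep -i "percentage used"  # NVMe: 0% = like new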