Search results

  1.

    Optimizing PERC Storage Configuration for Proxmox on the R640

    A better option is to avoid the PERC HBA-mode drama and get a Dell HBA330. It's a true IT-mode storage controller based on the LSI SAS3008 chip. Really cheap to get. Just make sure to remove any existing virtual disks prior to use and flash the latest firmware. No issues in production.
  2.

    Creating a set of Proxmox Servers for a class

    Currently running a full-mesh 3-node Proxmox Ceph cluster using Resource Pools. Created individual accounts and have them log in using the Proxmox VE authentication realm. Each account has PVEAdmin permissions on its Ceph pools and networking so they don't interfere with each other. Works...
  3.

    High latency on proxmox ceph cluster

    Use consumer SSDs at your peril. They fail, and fail spectacularly. The only real solution is enterprise SSDs with PLP (power-loss protection) and higher write endurance.
  4.

    best practices network for the PVE Cluster

    This. Since I follow the KISS principle, I use an active/passive setup. I believe even VMware recommended this, back before I migrated to Proxmox.
  5.

    Proxmox 6 mpt3sas Debian Bug ICO Report 926202 with Loaded Controller

    Don't know if it matters, but the latest firmware for the Dell HBA330 is 16.17.01.00 A08. May or may not help your issue. No issues on 16-drive-bay R730s in production.
  6.

    Dell R610’s good for PBS

    I use a Dell R200 to back up a Dell R320. The R200 has a PERC flashed to IT mode with 2 x 4TB SAS drives in a ZFS RAID-1 (mirror) and 8GB of RAM. No issues.
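    A minimal sketch of that ZFS mirror layout. The pool name and device paths are placeholders; on real hardware, prefer stable /dev/disk/by-id paths:

    ```shell
    # Create a two-disk ZFS mirror (RAID-1 equivalent) behind the IT-mode controller.
    # ashift=12 assumes 4K-sector drives; adjust for your disks.
    zpool create -o ashift=12 backup-pool mirror \
        /dev/disk/by-id/scsi-DISK1 /dev/disk/by-id/scsi-DISK2

    # Verify both mirror members show ONLINE
    zpool status backup-pool
    ```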
  7.

    Gen1 VM migrate from Hyper-V 2019 to ProxmoxVE9

    Prior to migrating, you may need to regenerate the initramfs to include all the drivers. Had to do this when migrating to Hyper-V. So, run the following as root: dracut -fv -N --regenerate-all
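    The same command with the flags spelled out. Run it as root inside the guest before migration; it assumes a dracut-based distro (RHEL/Fedora and friends):

    ```shell
    # -f  : overwrite existing initramfs images
    # -v  : verbose output
    # -N  : disable hostonly mode, so drivers for hardware not currently
    #       present (e.g. virtio for the KVM/Proxmox target) are included
    # --regenerate-all : rebuild the initramfs for every installed kernel
    dracut -fv -N --regenerate-all
    ```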
  8.

    Is it safe to use Proxmox VE (No-Subscription / Free) for small-scale production?

    Yes, the no-subscription repos are fine as long as you have Linux SME skills. Always have a Plan B, which is backups. I highly recommend testing updates on a separate server/cluster before pushing them to production.
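    For reference, the no-subscription repo is a one-line apt source. This fragment assumes Proxmox VE 8 on Debian 12 (bookworm); adjust the codename for your release, and disable the enterprise repo if you have no subscription:

    ```shell
    # /etc/apt/sources.list.d/pve-no-subscription.list
    deb http://download.proxmox.com/debian/pve bookworm pve-no-subscription

    # Also comment out the enterprise entry in
    # /etc/apt/sources.list.d/pve-enterprise.list to avoid apt errors.
    ```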
  9.

    10G NIC HCL for Proxmox 9.0.3

    I haven't migrated to PVE 9 yet. I'm still on PVE 8 at home. I wonder if a ConnectX-4 LX works?
  10.

    Proxmox Install Failing on R740xd - PERC H730P

    Ditch the PERC HBA-mode drama and swap it for a Dell HBA330, a true IT/HBA-mode storage controller. Your future self will thank you. Plus, HBA330s are very cheap to get. Update to the latest firmware from dell.com/support
  11.

    [SOLVED] Proxmox VE namespaces?

    Just as Proxmox Backup Server supports namespaces for hierarchical backups on the same backup pool, does Proxmox VE also support namespaces for creating VMs/CTs on the same node/cluster? I really, really do NOT want to stand up an IaaS such as Apache CloudStack. I want each local...
  12.

    HW-based RAID corrupted something, VMs won't start anymore

    Hopefully you have backups. I strongly recommend a pure IT/HBA-mode storage controller, with software-defined storage (ZFS, LVM, Ceph) handling your storage needs. I use an LSI SAS3008 IT-mode storage controller (Dell HBA330) in production with no issues.
  13.

    [SOLVED] Using RAID in HBA mode or remove RAID?

    Seriously, ditch the PERC HBA-mode drama and get a Dell HBA330, which is a true IT/HBA-mode controller. It uses the much simpler mpt3sas driver. Be sure to update to the latest firmware at dell.com/support. Super cheap to get and no more drama! LOL!
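    A quick sanity check that an HBA330 is visible and bound to mpt3sas (SAS3008 is the LSI chip the HBA330 reports on the PCI bus; exact output varies by system):

    ```shell
    # Show the controller and which kernel driver is bound to it
    lspci -nnk | grep -iA3 'SAS3008'

    # Show the loaded mpt3sas driver version
    modinfo mpt3sas | grep -E '^(version|filename):'
    ```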
  14.

    Is a 3-node Full Mesh Setup For Ceph and Corosync Good or Bad

    While it's true that 3 nodes is the bare minimum for Ceph, losing a node and depending on the other 2 to pick up the slack would make me nervous. For best practice, start with 5 nodes. With Ceph, more nodes/OSDs = more IOPS. As has been said, better to have good backup and restore procedures...
  15.

    [SOLVED] Proxmox on Dell P570F

    Seems the Dell P570F is nothing more than a Dell R740xd. I would get a Dell R740xd to future-proof it and make sure it doesn't get vendor-locked. Make sure you get the NVMe version of the R740xd; otherwise you'll get an R740xd with a PERC, which is NOT what you want. So as to NOT waste any NVME...
  16.

    PVE 9.1 Installation on Dell R515

    Sounds good. I've moved on to 13th-gen Dells and swapped out the Dell PERCs for Dell HBA330s, which are true HBA/IT-mode controllers.
  17.

    PVE 9.1 Installation on Dell R515

    I use this, https://fohdeesha.com/docs/perc.html, to flash 12th-gen Dell PERCs to IT mode with no issues in production. Don't skip any steps and take your time. Don't forget to flash the BIOS/UEFI boot ROMs to allow booting Proxmox off the controller.
  18.

    Install Proxmox on Dell PowerEdge R6515 with RAID1

    That darn PERC and its HBA/IT-mode drama. Get a true HBA controller. I use Dell HBA330s in production with no issues.
  19.

    iSCSI/LVM RHEL guest disk scheduler selection

    I've used none/noop on Linux guests since like forever on virtualization platforms, including VMware and Proxmox, in production with no issues. Per that RH article, I don't use iSCSI/SR-IOV/passthrough; I let the hypervisor's I/O scheduler figure out I/O ordering.
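    A sketch of checking and persisting the "none" scheduler inside a Linux guest. Virtio disks (vdX) are assumed; adjust the KERNEL match for your device names:

    ```shell
    # The active scheduler is shown in [brackets]
    cat /sys/block/vda/queue/scheduler

    # Persist "none" across reboots with a udev rule
    cat > /etc/udev/rules.d/60-io-scheduler.rules <<'EOF'
    ACTION=="add|change", KERNEL=="vd[a-z]", ATTR{queue/scheduler}="none"
    EOF
    ```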
  20.

    Ceph performance

    Lack of power-loss protection (PLP) on those SSDs is the primary reason for the horrible IOPS. Read other posts on why PLP is important for SSDs. I get IOPS in the low thousands on a 7-node Ceph cluster using 10K RPM SAS drives in 16-drive-bay nodes. For Ceph, more OSDs/nodes = more IOPS.
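    You can see the PLP effect yourself by benchmarking 4K synchronous writes, the pattern Ceph journaling stresses: drives without PLP must flush to flash on every sync and collapse to low IOPS. A rough fio probe (file path and sizes are examples; point it at a throwaway file on the SSD under test):

    ```shell
    # Single-threaded 4K sync random writes; worst case for non-PLP SSDs
    fio --name=sync-write-test --filename=/mnt/ssd/fio.test --size=1G \
        --rw=randwrite --bs=4k --iodepth=1 --numjobs=1 \
        --ioengine=libaio --direct=1 --sync=1 \
        --runtime=60 --time_based --group_reporting
    ```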