Yes, the no-subscription repos are fine as long as you have Linux SME skills.
Always have a Plan B, which is backups.
I highly recommend you test updates on a separate server/cluster before pushing to production.
Ditch the PERC HBA-mode drama and swap it for a Dell HBA330 true IT/HBA-mode storage controller.
Your future self will thank you. Plus, HBA330s are very cheap to get. Update to the latest firmware from dell.com/support
Just as Proxmox Backup Server supports namespaces for hierarchical backups on the same backup pool, does Proxmox VE also support namespaces for creating VMs/CTs on the same node/cluster?
I really, really do NOT want to stand up an...
Hopefully you have backups.
I strongly recommend using a pure IT/HBA-mode storage controller. Use software-defined storage (ZFS, LVM, Ceph) to handle your storage needs.
I use an LSI3008 IT-mode storage controller (Dell HBA330) in production...
Seriously, ditch the PERC HBA-mode drama and get a Dell HBA330, which is a true IT/HBA-mode controller. It uses the much simpler mpt3sas driver. Be sure to update to the latest firmware at dell.com/support
Super cheap to get and no more drama! LOL!
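A quick sanity check once the card is in, in case it helps: the controller should show up bound to mpt3sas, not megaraid_sas. The parsing helper below is my own, not a Dell/LSI tool, and the sample line is illustrative.

```shell
# List storage controllers and the kernel driver bound to each;
# an HBA330 should report "Kernel driver in use: mpt3sas".
if command -v lspci >/dev/null 2>&1; then
  lspci -nnk | grep -iA3 'sas\|raid'
fi

# Helper: pull the driver name out of an lspci "Kernel driver in use" line.
driver_in_use() { sed -n 's/.*Kernel driver in use: //p'; }

# Example against sample output (a PERC in RAID mode would say megaraid_sas):
echo 'Kernel driver in use: mpt3sas' | driver_in_use
```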
While it's true that 3 nodes is the bare minimum for Ceph, losing a node and depending on the other 2 to pick up the slack workload would make me nervous. As a best practice, start with 5 nodes. With Ceph, more nodes/OSDs = more IOPS.
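To put a number on that nervousness, here's a back-of-the-envelope sketch (my own rule of thumb, not a Ceph tool): with N nodes at U% average utilization, losing one node pushes the survivors to roughly N×U/(N−1) percent.

```shell
# Post-failure utilization: n nodes at u% average -> survivors at n*u/(n-1)%.
headroom() { awk -v n="$1" -v u="$2" 'BEGIN { printf "%.1f\n", n * u / (n - 1) }'; }

headroom 3 60   # 3 nodes at 60%: survivors hit 90.0% -- dangerously close to full
headroom 5 60   # 5 nodes at 60%: survivors hit 75.0% -- workable
```

That's why 3 nodes "works" on paper but leaves almost no room to actually absorb a failure.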
As been...
Seems the Dell P570F is nothing more than a Dell R740xd.
I would get a Dell R740xd to future-proof it and make sure it doesn't get vendor-locked.
Make sure you get the NVMe version of the R740xd; otherwise you'll get an R740xd with a PERC which...
I use this, https://fohdeesha.com/docs/perc.html, to flash 12th-gen Dell PERCs to IT-mode with no issues in production.
Don't skip any steps and take your time. Don't forget to flash the BIOS/UEFI ROMs to allow booting Proxmox off the controller.
I've used none/noop on Linux guests since like forever on virtualization platforms, including VMware and Proxmox in production, with no issues. Per that RH article, I don't use iSCSI/SR-IOV/passthrough. I let the hypervisor's I/O scheduler figure...
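For anyone who wants to check what their guests are actually using, a quick sketch (the sysfs paths are standard Linux; the parsing helper is mine, and sda is just an example disk):

```shell
# Show the scheduler list for each disk; the active one is in [brackets].
for f in /sys/block/*/queue/scheduler; do
  [ -e "$f" ] && echo "$f: $(cat "$f")"
done

# Helper: extract the active (bracketed) scheduler name from that output.
active_sched() { sed -n 's/.*\[\([a-z-]*\)\].*/\1/p'; }

# Switch an example disk to 'none' at runtime, only if the knob is
# writable (needs root); persist it with a udev rule if you like it.
sched_file=/sys/block/sda/queue/scheduler
if [ -w "$sched_file" ]; then
  echo none > "$sched_file"
fi
```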
Lack of power-loss protection (PLP) on those SSDs is the primary reason for horrible IOPS. Read other posts on why PLP is important for SSDs.
I get IOPS in the low thousands on a 7-node Ceph cluster using 10K RPM SAS drives on 16-drive bay...
Per https://www.proxmox.com/en/services/training-courses/videos/proxmox-virtual-environment/whats-new-in-proxmox-ve-9-1 for Windows Server 2025 VMs, you'll want to enable the nested-virt flag under the Extra CPU Flags options.
Since Proxmox is Debian with an Ubuntu LTS kernel, it should work.
If it was me, I would just go straight to flash storage and skip it.
I do, however, use the Intel Optane P1600X as a ZFS RAID-0 OS drive for Proxmox without issues.
If you plan on using shared storage, your officially supported Proxmox options are Ceph & ZFS (they do NOT work with RAID controllers like the Dell PERC).
Both require an IT/HBA-mode controller. I use a Dell HBA330 in production with no issues.
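For the ZFS route, a minimal sketch of what that looks like once the disks sit behind an IT-mode controller. The pool name and the by-id device paths are placeholders; the command only actually runs where ZFS is installed.

```shell
# Placeholder device paths -- always use /dev/disk/by-id so a cabling
# change doesn't rename your vdevs.
DISK_A=/dev/disk/by-id/ata-EXAMPLE_DISK_A
DISK_B=/dev/disk/by-id/ata-EXAMPLE_DISK_B

# ashift=12 matches 4K-sector drives; a mirror survives one disk failure.
CREATE_CMD="zpool create -o ashift=12 tank mirror $DISK_A $DISK_B"
echo "$CREATE_CMD"

# Only run it where ZFS is actually installed:
if command -v zpool >/dev/null 2>&1; then
  $CREATE_CMD && zpool status tank || true
fi
```

ZFS wants the raw disks the HBA exposes, never a PERC virtual disk, which is the whole point of the IT-mode requirement.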
Technically, you do not if this is a home lab, which I am guessing it is.
Now, it is considered best production practice to separate the various networks into their own VLANs, especially Corosync with its own isolated network switches...
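For the Corosync piece, a sketch of what a dedicated VLAN looks like in ifupdown terms on a Proxmox node. The VLAN ID and address are made-up examples, and it's written to a local example file here rather than the real /etc/network/interfaces.

```shell
# Dedicated Corosync VLAN on top of the default bridge -- VLAN ID 50 and
# the address are examples; adapt to your own addressing plan.
cat > interfaces.example <<'EOF'
auto vmbr0.50
iface vmbr0.50 inet static
    address 10.50.0.11/24
    # Corosync link0 lives here; keep VM and storage traffic off this VLAN.
EOF

cat interfaces.example
```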
Better off with a Dell HBA330. It's an LSI 3008 IT-mode controller chip anyhow. Just make sure to update the firmware to the latest version at dell.com/support
As was mentioned, getting a new drive is "nice" but not really required.
With a reputable enterprise flash drive, getting it used is fine. I have used 5-year-old Intel enterprise SSDs and they still show 100% life.
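How I check that: on older Intel drives the SMART attribute to watch is Media_Wearout_Indicator, whose normalized VALUE column counts down from 100. The parsing helper below is my own, and the sample line is illustrative (attribute names vary by vendor).

```shell
# Helper: grab the normalized VALUE column (100 = no wear) for the
# wear-out attribute from `smartctl -A` output.
smart_wearout() { awk '/Media_Wearout_Indicator/ { print $4 }'; }

# Real usage (needs root and smartmontools; /dev/sda is an example):
if command -v smartctl >/dev/null 2>&1; then
  smartctl -A /dev/sda | smart_wearout
fi

# Example against a sample smartctl attribute line:
echo '233 Media_Wearout_Indicator 0x0032 100 100 000 Old_age Always - 0' | smart_wearout
```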
At home, I use Intel Optane...