
How does it work with auto fan-speed regulation and all that?
I wrote a fan controller script that uses ipmitool to read the various onboard temperatures and dynamically adjusts the fan speeds via IPMI. So it's now temperature controlled between 30 and 100% PWM for the CPU fan and 40-80% PWM for the case fans, even though the BMC's firmware only lets you set the fans to a fixed 100%, 75% or 50% PWM.
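The core loop is basically this (a trimmed-down sketch, not the full script: the raw fan command follows the Supermicro convention, and the sensor name, zone and thresholds are placeholders — other BMCs use completely different raw bytes, so check your board's docs first):

```python
#!/usr/bin/env python3
# Minimal sketch of an ipmitool-based fan controller.
# The raw command bytes (0x30 0x70 0x66 ...) follow the Supermicro
# convention and are an assumption here -- other vendors use different
# raw commands, so verify against your BMC's documentation.
import re
import subprocess
import time

SENSOR = "CPU Temp"          # sensor name as shown by `ipmitool sensor list` (placeholder)
CPU_ZONE = 0x00              # fan zone to control (placeholder; case fans are a second zone)
MIN_PWM, MAX_PWM = 30, 100   # duty-cycle range for the CPU zone
T_LOW, T_HIGH = 40, 80       # map this temperature range onto the PWM range

def read_temp(sensor: str) -> float:
    """Read one temperature sensor via ipmitool."""
    out = subprocess.check_output(["ipmitool", "sdr", "get", sensor], text=True)
    m = re.search(r"Sensor Reading\s*:\s*([\d.]+)", out)
    if not m:
        raise RuntimeError(f"could not parse reading for {sensor!r}")
    return float(m.group(1))

def set_pwm(zone: int, duty: int) -> None:
    """Set a fan zone to a fixed duty cycle (vendor-specific raw command)."""
    subprocess.check_call(
        ["ipmitool", "raw", "0x30", "0x70", "0x66", "0x01", hex(zone), hex(duty)]
    )

while True:
    temp = read_temp(SENSOR)
    # linear interpolation between the two PWM endpoints
    frac = max(0.0, min(1.0, (temp - T_LOW) / (T_HIGH - T_LOW)))
    set_pwm(CPU_ZONE, int(MIN_PWM + frac * (MAX_PWM - MIN_PWM)))
    time.sleep(10)
```

The case fan zone works the same way, just with its own zone ID and the 40-80% range.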
It's been a power saving measure, I am sure ...
If I turn off the RAM's RGB, yes. That saves a few watts.

But of course I only bought them because of the additional cooling from the heat spreaders...
Yes, but now there are 4- and 8-port NIC MiniPCs that are quite tempting. But I personally won't buy anything without ECC anymore after having non-ECC RAM slowly fail on me three times, corrupting hundreds of GBs of data, which I always only noticed weeks later when the system got unstable...
I want to know right away when my machine is screwing up all my data, not weeks later when it's too late. Luckily I keep backups for months without overwriting them. But it's still annoying to lose weeks of data, because you won't know which files are healthy and which are corrupted; even ZFS checksumming won't help if the data got corrupted in RAM before it was written to disk... I had to overwrite/restore every file that had been modified in the last few weeks...
Without the extras, if it's just about having CPU+RAM+SSD, I would submit that a cluster of ordinary hardware is more reliable than any single piece of server gear, old or new.
Yes, I wouldn't run a single server. All the important stuff is redundant across different servers: OPNsense with HA via pfsync, Piholes via keepalived and gravity-sync, TrueNAS with replication of all data (this is where it gets expensive and power-inefficient, when you need to buy 18 disks to end up with the usable capacity of 3 disks...), PBS with sync jobs, ...
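To illustrate how quickly raw capacity melts away once everything is mirrored and replicated (the layout and disk size below are just an example, 3-way mirrors plus a full replica of the pool on the second NAS):

```python
# Example of how raw capacity melts away with mirroring + replication.
# Assumed layout (illustrative only): 3-way mirror vdevs on the primary
# pool, plus a full replica of that pool on a second box.
disks = 18
disk_size_tb = 4
mirror_width = 3          # 3-way mirrors -> only 1/3 of raw space is usable
copies = 2                # primary pool + replicated pool

raw_tb = disks * disk_size_tb
usable_tb = raw_tb / (mirror_width * copies)
print(f"{disks} disks / {raw_tb} TB raw -> {usable_tb:.0f} TB usable "
      f"(the capacity of {disks // (mirror_width * copies)} disks)")
# -> 18 disks / 72 TB raw -> 12 TB usable (the capacity of 3 disks)
```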
But as downtime isn't that important (it's still a homelab, where it doesn't really matter if it's offline for some hours), I could always remotely spin up one of the powered-down servers and restore some daily backups. The server I use just for backups, which is powered down most of the time, is for example quite overkill for what it is doing. The idea was that in case one of the other servers fails I could simply restore a bunch of important VMs there. Its hardware is also compatible with the big on-demand server, so I could always move some PCIe cards, CPU, disks and RAM to the backup server if I need to.
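Spinning them up remotely is nothing fancy, by the way: either `ipmitool chassis power on` against the BMC, or a plain Wake-on-LAN magic packet. A minimal sketch of the latter (the MAC address is a placeholder, and the target needs WoL enabled in its firmware):

```python
# Minimal Wake-on-LAN sender: a magic packet is 6x 0xFF followed by the
# target MAC address repeated 16 times, sent via UDP broadcast.
import socket

def wake(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    payload = bytes.fromhex("FF" * 6 + mac.replace(":", "").replace("-", "") * 16)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(payload, (broadcast, port))

wake("aa:bb:cc:dd:ee:ff")  # placeholder MAC of the backup server
```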
I would push these off-site, but in your case it's probably cost prohibitive for the amount of data to store.
Yes, I would also prefer that, but it's not affordable right now. I've got a third PBS and USB disks offsite for some important data, and also a fourth PVE server with a third TrueNAS VM in the basement. Not offsite, and it won't help much in case of lightning, fire or water damage, but at least it's a bit of protection against theft and against cats or drunken visitors knocking over the rack.
Got you, finally! So in the end, the really necessary stuff does not need all the PCIe cards.
Exactly. That one is stripped down as much as possible to save power. It doesn't need a GPU, it's not a problem if file transfers/backups take a bit longer with only Gbit instead of 10Gbit NICs, and it runs two mirrored SSDs for the virtual disks instead of dozens of SSDs and HDDs for cold data, backups, ... that I don't need to access all the time anyway. With fewer disks I also don't need the additional HBAs (which consume 10W each doing nothing), and HDDs that don't run 24/7 will also last longer.
The most important stuff here is webservers, a reverse proxy, Nextcloud for syncing contacts/calendars/bookmarks/todo lists and the password safe, WireGuard for VPN, OPNsense for routing, DokuWiki, Zabbix for monitoring, Graylog for central log collection, Pihole with Unbound as DNS, Home Assistant for smart home, Guacamole + Semaphore + a custom orchestrator VM for management, Firefly for finance management, paperless-ngx for digital document management, and Wazuh for SIEM. All stuff that isn't that demanding at a "small" homelab scale, as long as you've got some durable and fast SSDs for the endless sync writes to the DBs.
Thanks for the comprehensive reply, we just have very different use cases, but that's fine, OP has his own mind anyhow.
Yes. There is a big spectrum of homelabbers. For some it's a hobby: the more they learn and self-host, the bigger the homelab grows and the higher the standards and expectations become. On the other side of the spectrum, some people don't really care and it's just the annoying thing you have to do to get your smart home working. You run some one-liners to install turnkey appliances and forget about it for years, until PVE stops working because the disk filled up or the hardware fails.
That is one of the great things about the Proxmox products. You can run proper production clusters that companies rely on, but it's also a great entry point for beginners who simply want to tinker around a bit without spending much.
Not that I wouldn't love to run a ceph cluster at home... it's just not economically reasonable if you want to seriously make use of it instead of just tinkering around for learning purposes or a proof of concept. And that's coming from a person who hoarded a rack of servers and is still complaining that he's running out of resources.

To do it properly, I would need to run 3 or better 5 nodes 24/7. I would need to buy dozens of additional disks, as ceph wants 3 copies of everything spread over multiple OSDs per host, with enough spare capacity at all times to compensate for a failing node. I would want to upgrade my NICs from 10 to 40Gbit, or at least have 10Gbit NICs for every node. And I would want to get rid of every single point of failure: stacked switches, every NIC redundant, ...
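Back-of-the-envelope sizing shows why the disk count explodes; the node count, OSD sizes and replica count below are just example numbers, and ~85% is the usual nearfull threshold:

```python
# Rough Ceph sizing: size=3 replication, and the cluster must still hold
# everything (and stay under the ~85% nearfull ratio) after losing one
# whole node. All numbers are illustrative.
nodes = 5
raw_per_node_tb = 16          # e.g. 4x 4 TB OSDs per node (example)
replica_size = 3
nearfull = 0.85               # default nearfull warning threshold

raw_tb = nodes * raw_per_node_tb
# capacity that must absorb all data after one node fails:
surviving_raw = (nodes - 1) * raw_per_node_tb
usable_tb = surviving_raw * nearfull / replica_size

print(f"{raw_tb} TB raw across {nodes} nodes -> ~{usable_tb:.1f} TB usable")
# -> 80 TB raw across 5 nodes -> ~18.1 TB usable, well under a quarter of what you bought
```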
Not sure where on this spectrum the OP is, but planning a reliable ceph cluster with usable performance for daily use is aiming at the top end of that scale.
You need weak hardware without a bunch of power-hungry enterprise disks, NICs and switches to make it reasonable to run multiple ceph nodes 24/7 without selling some organs to pay the electricity bills. But on the other hand you really need all that enterprise stuff to get something that won't bottleneck your VMs with slow IO, that you can rely on, and that won't catastrophically fail if you do something stupid or some hardware fails. I personally still don't see how that should work if you want more than something just for learning.