Disk constellation for DXP4800

Bernd_909

New Member
Dec 20, 2025
The current NAS series from Ugreen seems to be a good platform for a small home setup, but I am struggling a bit with choosing the right disk constellation.
I am planning on the DXP4800 (Intel N100, 4 cores, 8 GB RAM) and installing PVE on it. The main consideration here is power efficiency, since this thing will be idle most of the time.
The setup will serve both as a backup target for family members (through sshfs) and host a few VMs or containers running things like Nextcloud, email and Plex.
There may be about 10 users eventually and the system should run with good performance when 2 or 3 are accessing the services simultaneously.

When it comes to the optimal disk setup for this purpose, however, I am unsure. I was considering going with 4x 4TB NAS HDDs, one NVMe SSD for VMs, another NVMe for read cache, and the OS installed on the internal flash. But I have no clue whether this would be a decent combination with the rest of the hardware. Would SATA SSDs for storage instead of HDDs make sense, or would that be a waste of money? Do I need a read cache, or will other components be the bottleneck? Would it be an option to have both read and write cache on a single NVMe? Since no UPS is used, should a write cache be avoided?
 
When it comes to the optimal disk setup for this purpose, however, I am unsure. I was considering going with 4x 4TB NAS HDDs, one NVMe SSD for VMs, another NVMe for read cache, and the OS installed on the internal flash.

Is the internal flash an SD (MMC) card? If so, you won't be able to install Proxmox VE with the regular ISO image, see https://forum.proxmox.com/threads/unable-to-get-device-for-partition-1-on-device-dev-mmcblk0.42348/

In theory you could first install Debian stable and afterwards Proxmox VE (see https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_13_Trixie for details), but unless you have a good reason I wouldn't do this. There is a good reason why installing PVE to SD cards is not supported: the Proxmox VE OS does a lot of writes (logging and metric data, plus the cluster file system database, which is also used on single nodes without a cluster), so SD cards will get trashed quickly. So: don't do this.

Regarding NVMe SSDs: usual consumer SSDs without power-loss protection (PLP) (which is true for most M.2 2280 SSDs) suffer the same issue as SD cards, but are still more reliable than them, and thanks to a ZFS mirror you should be able to replace one of them in case of a failure. Alternatively get the cheapest PLP SSDs ( https://geizhals.at/?cat=hdssd&xf=7161_Power-Loss Protection~7177_M.2 2280~7525_M.2 (PCIe)&offset=0&promode=true&hloc=at&hloc=de ). You could also use the boot device for VMs/LXCs if you can live with the higher wearout and with the OS and the VMs/LXCs sharing the same disk. In my homelab I did this, since my mini-PCs only have two storage slots (one NVMe, one SATA) and I want some redundancy. If that means I have to restore the VMs/LXCs from my backup in case of a reinstallation, so be it.
So in your case I would use the NVMe slots for the OS. If you can't afford SSDs with PLP, I would recommend getting two NVMe SSDs from different brands (to minimize the chance that both fail at the same time) and creating a ZFS mirror on them.
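Just to sketch what the mirror buys you (a hedged example: device names are placeholders and rpool is only the installer's default pool name): you can check the mirror state at any time and, should one NVMe die, replace it roughly as described in the ZFS chapter of the admin guide. On the default installer layout partition 2 is the ESP and partition 3 the ZFS partition.

Code:
# health of the boot pool
zpool status rpool

# after physically swapping the failed drive (example device names!):
sgdisk /dev/nvme0n1 -R /dev/nvme1n1        # copy partition table from the healthy disk
sgdisk -G /dev/nvme1n1                     # randomize the GUIDs on the new disk
zpool replace -f rpool <old-zfs-partition> /dev/nvme1n1p3
proxmox-boot-tool format /dev/nvme1n1p2    # make the new disk bootable again
proxmox-boot-tool init /dev/nvme1n1p2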

For the internal storage I personally wouldn't go with four HDDs, but with two used SATA server SSDs with PLP (they are quite affordable and still have higher endurance than consumer SSDs) and two HDDs with the largest capacity you can afford. Afterwards set up a mirror on the SSDs for VM/LXC storage and another mirror on the HDDs for raw data storage.
Alternatively you could create a mirror on the HDDs and add the SSDs as a special device mirror. The idea is to use the special device to store the metadata of all files and, additionally, very small files. By creating a custom dataset for VMs/LXCs you could even use the SSDs as VM/LXC storage while bulk data is still saved on the HDDs. See this post by LnxBil, who explains this better than me:
https://forum.proxmox.com/threads/zfs-metadata-special-device.129031/post-699290
If this sounds a little too elaborate to be comfortable with, stick with two SSDs for VM/LXC storage and two HDDs for raw data; it will probably still be fine for a home server.
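Purely as a sketch of what both layouts could look like on the command line (the pools can also be created in the GUI under Disks > ZFS); pool names, storage IDs and device paths below are placeholders:

Code:
# Variant 1: two separate mirrors, registered as PVE storages
zpool create -o ashift=12 ssdpool mirror /dev/disk/by-id/<ssd1> /dev/disk/by-id/<ssd2>
zpool create -o ashift=12 tank    mirror /dev/disk/by-id/<hdd1> /dev/disk/by-id/<hdd2>
pvesm add zfspool ssd-vmstore --pool ssdpool --content images,rootdir
pvesm add zfspool hdd-data    --pool tank    --content images,rootdir

# Variant 2: one HDD mirror with the SSDs as a special device mirror
zpool create -o ashift=12 tank \
    mirror  /dev/disk/by-id/<hdd1> /dev/disk/by-id/<hdd2> \
    special mirror /dev/disk/by-id/<ssd1> /dev/disk/by-id/<ssd2>
# also divert small data blocks (here: up to 16K) to the SSDs, see man zfsprops
zfs set special_small_blocks=16K tank
# the per-dataset tuning for a VM/LXC dataset is explained in the linked post
zfs create tank/vmstore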

Regarding the read cache: how do you want to configure it? This needs to be configured at the OS level; PVE doesn't do this by default. You can do this with ZFS ( https://pve.proxmox.com/wiki/ZFS_on_Linux ), but usually it's not worth it (having more RAM helps more). A special device, on the other hand, can be quite useful.
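If you ever want to check whether a read cache would help at all, look at the ARC hit rate first; only if that is poor and the RAM is already maxed out does an L2ARC start to make sense. Roughly (pool and device names are placeholders):

Code:
# ARC size and hit/miss statistics (both tools ship with zfsutils-linux)
arc_summary | less
arcstat 1 10          # one sample per second, ten samples

# only if really needed: add (or remove again) an L2ARC device on an existing pool
zpool add tank cache /dev/disk/by-id/<nvme-for-l2arc>
zpool remove tank /dev/disk/by-id/<nvme-for-l2arc>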

See also the following older forum threads:
- https://forum.proxmox.com/threads/zfs-ssd-pool-with-nvme-cache-zil-l2arc.45920/
- https://forum.proxmox.com/threads/l2arc-cache-with-or-w-o-swap-on-zfs.121156/
- https://forum.proxmox.com/threads/brauche-ich-slog-l2arc.134928/ (German, use something like deepl for translation)

Regarding the write cache, I have no idea what you mean; can you explain?

For backups I would recommend not using file sharing via sshfs but a dedicated backup tool like Kopia, restic, Borg or Duplicati instead. They can all be configured to work with a Linux server (so in the case of your home server a dedicated LXC or VM would probably be the best option), and some of them even have a GUI, so things don't get too complicated for non-techy users.
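As an example of how simple that can be with restic over a plain SSH/SFTP account (host, user and paths below are made up):

Code:
# one-time: create a repository on the server via SFTP
restic -r sftp:backup-anna@nas.example.com:/backups/anna init

# regular backup run on the client (e.g. from a scheduled task)
restic -r sftp:backup-anna@nas.example.com:/backups/anna backup ~/Documents ~/Pictures

# apply a retention policy and remove old data
restic -r sftp:backup-anna@nas.example.com:/backups/anna forget \
    --keep-daily 7 --keep-weekly 4 --keep-monthly 12 --prune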

Regarding your apps: how do you want to run email? I hope you don't plan to self-host a mail server, because you will need to be quite careful not to end up on anti-spam blacklists or get abused by spammers, or both. Hosting some webmail interface to an external mail provider should work, though.

If you just want to host some apps, please also consider using a NAS OS with apps (aka Docker containers) and VM support instead, like OpenMediaVault, TrueNAS or Unraid. OpenMediaVault could even be installed to an SD card, since it has a plugin to reduce writes and extend durability. Unraid can also be installed to SD cards as far as I know. TrueNAS, however, doesn't support this (like PVE).
IMHO, if your home server's main purpose is to provide network storage plus some self-hosted apps, a NAS OS is way less complex and has a more novice-friendly learning curve than Proxmox VE.
 
Thank you for the extensive advice.

Regarding the NVMe SSDs, I was already considering either the Kingston DC2000B or the Samsung PM9A3, since both have power-loss protection.
The TBW of the Kingston is rather low at 700 TB, but I am wondering how much write activity the disks will actually get, considering the setup will see very little load most of the time. If I want to have VMs on there, 480 GB or at most 960 GB should suffice, I think.

My reason for choosing 4x HDDs was to have them set up with RAID-Z1, so the 4x 4TB will get me 12 TB usable space with one disk for redundancy. That seems cheaper than getting two 12 TB HDDs.

The idea of a dedicated cache is something I no longer think useful in this case, since most guides seem to argue that this does not make sense unless the system has much more RAM than 8 or even 16 GB.

If having both OS and VMs on the NVMe is possible, that sounds like the better and, most importantly, less complicated option right now.
I was planning to have the two NVMe drives in a simple RAID 1 mirror, though, and probably with ext4, since I don't see much benefit from ZFS in this use case, especially with power-loss protection available. Does that make sense?

TrueNAS is something I also looked into, and you can apparently install it on the eMMC drive as well. The main reason against it is that I feel it is less transparent than PVE, since "Apps" are mostly predefined containers and I can't really audit much of their internal configuration. That said, it may be easier to run. Then again, the primary feature of TrueNAS is the SMB/NFS share, which I won't use since most clients will connect over the internet. And regarding backup, the idea was to run restic through SSH using rclone with the append-only feature, which in turn uses the sshfs protocol if I am not mistaken. I'm going to configure the clients anyway to ensure the user data is backed up the next time some ransomware rolls over them. So effectively I only need SSH access and a few dedicated users for the backup. And there is also the fact that I just like the idea of a solid PVE better than TrueNAS :)

 
Yes, more RAM (32 GB) and more cores, please read the Proxmox VE system requirements.

The Ugreen DXP4800 is only a NAS, not more!
With 8 GB RAM you can't get more.
 
The DXP4800 fulfills the listed requirements many times over. Based on other experience reports with the same or similar hardware, that's not only plenty but will still leave the system at 1% load most of the time. There will rarely be more than two users accessing those services at the same time.
 
Then good luck; you can tell us how it works for you.
I will use this Ugreen DXP4800 only as a NAS.
But my Ryzen 5 3500X NAS uses ZFS for boot and data and has 32 GB DDR4 RAM with 2 NICs.
 
Regarding your apps: how do you want to run email? I hope you don't plan to self-host a mail server, because you will need to be quite careful not to end up on anti-spam blacklists or get abused by spammers, or both. Hosting some webmail interface to an external mail provider should work, though.

As an alternate opinion, I have self-hosted mail for over 30 years. I have used Exchange, GroupWise, Zimbra, Smail, plain Postfix and even Sendmail. Most recently, I have been using MailCow. I find that spam is not a real problem if running SpamAssassin or some similar tool. Frankly, I get more spam in my Gmail account than I do in my self-hosted one. I know I'm in a minority, but I like to host my own mail so I have full visibility into the mail transport and storage.
 
When it comes to the optimal disk setup for this purpose, however, I am unsure.
"Optimal" isn't very meaningful without a use case. What you describe would be served by any combination, since you don't have any performance or fault tolerance requirements to speak of, so the only thing remaining is usable capacity.

I was considering going with 4x 4TB NAS HDDs, one NVMe SSD for VMs, another NVMe for read cache, and the OS installed on the internal flash.
That would work. You can have as much as 16 TB usable capacity without any fault tolerance, or up to 12 TB with single parity. I use a similar setup for my cold storage. I would say you won't need any "read cache" at all; you can either set up your boot device + VM store as a mirror, or just use one drive. A word about your virtualization payload: considering the meagre CPU availability, you're better off just running Docker containers for your applications instead of VMs (in LXC containers or without, although in the latter case you're better off not bothering with PVE at all and using TrueNAS directly).

Would SATA SSDs for storage instead of HDDs make sense, or would that be a waste of money?
For your use case, anything in excess of your required usable capacity is a "waste of money." SATA SSDs and HDDs consume more or less the same power, but NVMe drives consume more; otherwise, just go for the lowest price per gigabyte.

Regarding NVMe SSDs: usual consumer SSDs without power-loss protection (PLP) (which is true for most M.2 2280 SSDs) suffer the same issue as SD cards, but are still more reliable than them, and thanks to a ZFS mirror you should be able to replace one of them in case of a failure.
Not this again. Standard SSDs work JUST FINE. PLP is useful in situations where write performance is critical and when you don't want the possibility of "last write" loss. For the OP it's not going to make any difference at all.

So effectively I only need SSH access and a few dedicated users for the backup.
Consider not bothering with any "higher level" OS and just install Debian ;)

Yes, more RAM (32 GB) and more cores, please read the Proxmox VE system requirements.

The Ugreen DXP4800 is only a NAS, not more!
With 8 GB RAM you can't get more.
PVE can run on a Raspberry Pi, and certainly on an N100. When gauging the resources required, consider the APPLICATION, not the OS, which is really lightweight all told. The OP's use case can certainly be deployed on PVE. Or TrueNAS. Or straight Debian.

My own "home server" is a dell 1L box with 8GB of ram. it happily runs about a dozen containers and a windows VM.
 
"optimal" isnt very meaninful without a use case. what you describe would be served by any combination since you dont have any performance or fault tolerance requirements to speak of, so the only thing remaining is usable capacity.

Perhaps I should rephrase it like this: given the hardware of the DXP4800, which disk setup would match the performance of the rest of the system? I don't know what uses I will have for it in the future, and perhaps I end up with 20 containers in a year, but when choosing the disks I want something that is neither a bottleneck nor something that's limited by the rest of the hardware. So assuming I add another 8 GB of RAM and considering the N100 CPU, what disks would be the best match performance-wise?
 
It doesn't work like that :)

"performance" is a term thrown around a lot without any real meaning. How much performance do you get from your oven, or your sink? any performance would be with what you cook or clean with them, not the devices themselves. Similarly, your appliance will have a cpu, ram, storage, and network interfaces. None of those things do anything on their own; if you dont define what you are cooking, so to speak, its not really a relevant question to ask how well the oven works.

Since you seem to want someone to tell you what to do: quit worrying about it and do what you said in your initial post.
 
A word about your virtualization payload: considering the meagre CPU availability, you're better off just running Docker containers for your applications instead of VMs (in LXC containers or without, although in the latter case you're better off not bothering with PVE at all and using TrueNAS directly).

I agree, especially since TrueNAS not only supports the apps from the app store but also allows setting up containers with docker-compose files. Most applications in the self-hosted sphere already provide one for copy/paste. Or, like you said: use Debian or another Linux distribution directly with Docker or Podman. One thing to consider for TrueNAS: while installing to an SD card is possible (unlike with PVE), it's still not recommended, if I recall correctly.
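Just to illustrate how little is needed: a deliberately minimal, purely hypothetical compose file for something like Nextcloud (image tag, port and volume path are placeholders you would adapt), plus the commands to run and update it:

Code:
mkdir -p /opt/nextcloud && cd /opt/nextcloud
cat > docker-compose.yml <<'EOF'
services:
  nextcloud:
    image: nextcloud:stable
    restart: unless-stopped
    ports:
      - "8080:80"
    volumes:
      - ./data:/var/www/html
EOF

docker compose up -d                          # start the stack
docker compose pull && docker compose up -d   # later: update to newer images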

Not this again. Standard SSDs work JUST FINE. PLP is useful in situations where write performance is critical and when you don't want the possibility of "last write" loss. For the OP it's not going to make any difference at all.

The other benefit is that you stop worrying about wearout, which leads people to bother with third-party scripts or unsupported setups (like moving part of the root file system to RAM disks). To deal with wearout you have two options: live with it and buy a new SSD from time to time (a ZFS mirror is really practical for that), or shop for used SSDs with PLP. The reason I tend to recommend these options is that I want to encourage people not to rely on unsupported hacks, third-party scripts, or other "best practices" from the Reddit/YouTube homelabbing/self-hosting sphere.

PVE can run on a Raspberry Pi, and certainly on an N100. When gauging the resources required, consider the APPLICATION, not the OS, which is really lightweight all told. The OP's use case can certainly be deployed on PVE. Or TrueNAS. Or straight Debian.

It should be noted that PVE still has no official ARM support, though, and you can't expect much help with it. I agree that for the OP's use case an N100 should be OK, given he has enough RAM and doesn't run too many VMs. But one VM for mailcow and another one for the Docker containers, plus maybe some containers for anything needing access to the GPU (like Plex), should work. With more RAM it would work even better ;)

Thank you for the extensive advice.

Regarding the NVMe SSDs, I was already considering either the Kingston DC2000B or the Samsung PM9A3, since both have power-loss protection.

Both are fine.
The TBW of the Kingston is rather low at 700 TB, but I am wondering how much write activity the disks will actually get, considering the setup will see very little load most of the time. If I want to have VMs on there, 480 GB or at most 960 GB should suffice, I think.

My reason for choosing 4x HDDs was to have them set up with RAID-Z1, so the 4x 4TB will get me 12 TB usable space with one disk for redundancy. That seems cheaper than getting two 12 TB HDDs.

The most important thing is that you can live with the slower performance of RAIDZ. It will probably be OK for most of your bulk data (media files, documents, images), but you shouldn't host any VMs or performance-critical databases on it:
https://constantin.glez.de/posts/2010-01-23-home-server-raid-greed-and-why-mirroring-is-still-best/
https://jrs-s.net/2015/02/06/zfs-you-should-use-mirror-vdevs-not-raidz/
https://www.truenas.com/community/t...d-why-we-use-mirrors-for-block-storage.44068/
https://www.truenas.com/community/threads/the-path-to-success-for-block-storage.81165/

Although I will admit that it's rather unlikely that your home database will be performance-critical ;) But if you happen to care about performance, you should instead go with a striped mirror, which would result in a usable capacity of 8 TB. You can use the following calculators (also for different RAIDZ levels) to check possible configurations (a short striped-mirror sketch follows after the links):
https://www.raidz-calculator.com/
https://wintelguy.com/zfs-calc.pl
https://www.truenas.com/docs/references/zfscapacitycalculator/
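For comparison, the striped mirror with the same four 4 TB disks would look roughly like this (device names are placeholders):

Code:
# two mirrored pairs, striped together ("RAID10"-style)
zpool create -o ashift=12 tank \
    mirror /dev/disk/by-id/<hdd1> /dev/disk/by-id/<hdd2> \
    mirror /dev/disk/by-id/<hdd3> /dev/disk/by-id/<hdd4>
# usable capacity: 2 x 4 TB = 8 TB (vs. 3 x 4 TB = 12 TB with RAIDZ1),
# but with roughly the IOPS of two disks and much faster resilvering
zpool list tank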

Please note (as mentioned by @alexskysilk) that for your use case you probably don't need to worry too much about it and could just go with your original plan (although I would go with a mirror for the OS instead).
However you set up the HDD layout, you would use your NVMe or SATA SSDs for the VM/LXC OS/application install and add a virtual drive (or mount point) hosted on your HDDs for the data. Depending on the app, you will need to configure it to use the HDD storage.
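As a rough example with hypothetical storage names (ssd-vmstore for the guests, hdd-data for the bulk pool) and made-up guest IDs:

Code:
# container 101: OS/app already on the SSD storage, add a 500 GB mount point on the HDD pool
pct set 101 --mp0 hdd-data:500,mp=/srv/data

# VM 102: add a second 500 GB virtual disk on the HDD pool (format/mount it inside the guest)
qm set 102 --scsi1 hdd-data:500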

I was planning to have the two NVMe drives in a simple RAID 1 mirror, though, and probably with ext4, since I don't see much benefit from ZFS in this use case, especially with power-loss protection available. Does that make sense?

Not really ;) First, the Proxmox installer doesn't allow setting up a software RAID with ext4 ( https://pve.proxmox.com/wiki/Software_RAID#mdraid ); you would need to first install Debian and afterwards Proxmox VE. This, however, is nothing I would recommend to a newbie, for the reason explained in the official documentation:

Installing on top of an existing Debian installation looks easy, but it presumes that the base system has been installed correctly and that you know how you want to configure and use the local storage. You also need to configure the network manually.

In general, this is not trivial, especially when LVM or ZFS is used.

A detailed step by step how-to can be found on the wiki.
https://pve.proxmox.com/pve-docs/chapter-pve-installation.html#_install_proxmox_ve_on_debian

Second, power-loss protection is not so much about data-loss prevention as about performance and wearout prevention. If you write data to storage, it needs to actually be written so that you still have your data in case of a power outage. Since consumer SSDs don't have a large cache (if any), they need to flush data more often, resulting in higher wearout and also less performance (since the cache isn't that large). An SSD with PLP (basically some capacitors) can instead buffer more data and write it in larger chunks, reducing the necessary writes. In case of a power loss, the capacitors give the PLP drive enough power to still write the remaining data to flash. ZFS, however, has nothing to do with this: it doesn't protect against a loss of power but against bitrot (flipped bits due to background radiation, HW failure etc.) by detecting and (if you have redundancy) auto-healing errors. See also this piece by @UdoB: https://forum.proxmox.com/threads/f...y-a-few-disks-should-i-use-zfs-at-all.160037/

I disagree with Udo's point that you need SSDs with PLP, though: I run PVE on ZFS mirrors that also contain consumer SSDs/NVMes, but I'm prepared to have to replace them earlier. The reason is, as said, that I only have two storage slots and M.2 NVMes with PLP are not in my budget. Up to now the wearout indicator in SMART tells me that the NVMes without PLP have lost 14%, which is not so bad after one year of usage. The SATA SSDs (used, with PLP) have only lost around 3-5%, though. In both cases I expect that I will have enough time to save cash for a replacement ;)
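If you want to keep an eye on this yourself: PVE shows a wearout column under the node's Disks panel, and you can also query it directly with smartctl (device paths are examples):

Code:
# NVMe: "Percentage Used" starts at 0 and counts up towards (and past) 100
smartctl -a /dev/nvme0 | grep -i "percentage used"

# SATA SSD: most drives expose a wear/life attribute plus total host writes
smartctl -A /dev/sda | grep -Ei "wear|percent|total.*written"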

And regarding backup, the idea was to run restic through SSH using rclone with the append-only feature, which in turn uses the sshfs protocol if I am not mistaken. I'm going to configure the clients anyway to ensure the user data is backed up the next time some ransomware rolls over them. So effectively I only need SSH access and a few dedicated users for the backup. And there is also the fact that I just like the idea of a solid PVE better than TrueNAS :)
For restic you actually don't even need SSH. There is a server which offers a REST API via HTTP/S and also allows setups where people are only allowed to append new backups but not remove them: https://github.com/restic/rest-server
For housekeeping (pruning, verification etc.) you then need to set up jobs on the server, of course (or give SSH access that can only be used for these commands); I recommend something like resticprofile or autorestic for that. I use restic together with the rest-server and resticprofile for the backups of my notebook, the data on the NAS, and the Proxmox VE host backups (since I want to be able to restore them without Proxmox Backup Server; my VMs and LXCs are backed up to one local PBS and a PBS on a vserver, though).
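A rough sketch of such an append-only setup; host name, port, paths and user names are placeholders, and authentication (.htpasswd) is left out here, see the rest-server README:

Code:
# on the server (e.g. in its own LXC): clients may append snapshots but not delete anything
rest-server --path /srv/restic-repos --listen :8000 --append-only --private-repos

# on a client: initialise and back up against the REST backend
restic -r rest:http://nas.example.com:8000/anna init
restic -r rest:http://nas.example.com:8000/anna backup ~/Documents

# pruning then happens on the server side with direct filesystem access,
# e.g. from a scheduled resticprofile job
restic -r /srv/restic-repos/anna forget --keep-daily 7 --keep-monthly 12 --prune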

In your case I would set up the restic rest-server in a dedicated LXC or VM (depending on how much isolation you want and whether you want to use the docker-compose file or not).
 
The other benefit is that you stop worrying about wearout, which leads people to bother with third-party scripts or unsupported setups (like moving part of the root file system to RAM disks). To deal with wearout you have two options: live with it and buy a new SSD from time to time (a ZFS mirror is really practical for that), or shop for used SSDs with PLP. The reason I tend to recommend these options is that I want to encourage people not to rely on unsupported hacks, third-party scripts, or other "best practices" from the Reddit/YouTube homelabbing/self-hosting sphere.
Not sure where the connection between PLP and user scripts lies. Wearout on any SSD made in the last 10 years is such a remote concern as to not warrant any thought at all. I bet you don't have a single SSD that is ANYWHERE CLOSE to wearout; I know I don't. I have SSDs in BUSY SERVERS with over 10 years of power-on time that still have life left; for a home server I bet the SSD will outlive the usefulness of the server, even one without PLP. I honestly don't know why people have this obsession with it.

I disagree with Udo's point that you need SSDs with PLP, though: I run PVE on ZFS mirrors that also contain consumer SSDs/NVMes, but I'm prepared to have to replace them earlier.
You probably won't. Ever. The disks are more likely to fail than to reach 0% life. PLP doesn't actually help with that anyway; it's the spare area allocation that changes the TBW rating, not PLP.
 
I think we have a misunderstanding here: I basically agree with your take that one shouldn't worry about wearout too much. But in case somebody does (which often seems to be the case in Reddit's homelabbing/self-hosting subs), I want to state the available options:
- Moving stuff to RAM disks (bad idea IMHO)
- Buying used enterprise SSDs
- Accepting that you might need to replace the SSD earlier than expected
 
You probably won't. Ever. The disks are more likely to fail than to reach 0% life.

Yes, a lot of setups will reach end of life before a cheap (non-PLP) SSD may die. At least you may hope for that. To increase reliability drastically, my rule is to use redundancy and "with-PLP" devices.

Disclaimer: I do not follow my own rule everywhere, for the usual reasons: limited space and connectivity, price, power consumption. Not in a homelab, that is. But when I use cheap devices, I know the consequences.

My prime example from this year: a 2 TB Samsung "Pro" NVMe went from zero to "101 percent used" in five months. The obvious reason was a 100 GB MariaDB database for a Zabbix VM, with some really busy and continuous operation. Irony ahead: without monitoring I wouldn't have noticed this problem, produced by... monitoring ;-)

After analyzing the reason, the only actual surprise was that the other half of the ZFS mirror, with the very same type of device, reported "only" "86 percent used". I do not understand that difference. (And the same type of device went from zero to six percent on another, very similar system without that Zabbix-specific database.)

That example is probably not a "normal" use case, but I do not think a busy database is crazy extraordinary for a homelab.

My personal conclusion:
  • know your actual technical requirements
  • ...and use suitable hardware
  • for "simple" tasks you may absolutely buy cheap hardware
  • but my recommendation is to better be safe than sorry
:-)