Disk constellation for DXP4800

Bernd_909

New Member
Dec 20, 2025
The current NAS series from Ugreen seems to be a good platform for a small home setup, but I am struggling a bit with choosing the right disk constellation.
I am planning on the DXP4800 (Intel N100, 4 cores, 8 GB RAM) and installing PVE on it. The main consideration here is power efficiency, since this thing will be idle most of the time.
The setup will serve both as backup storage for family members (through sshfs) and for a few VMs or containers that will run things like Nextcloud, email and Plex.
There may eventually be about 10 users, and the system should perform well when 2 or 3 are accessing the services simultaneously.

When it comes to the optimal disk setup for this purpose I am unsure, however. I was considering going with 4x 4TB NAS HDDs, one NVMe SSD for VMs, another NVMe for read cache, and the OS installed on the internal flash. But I have no clue if this would be a decent combination with the rest of the hardware. Would SATA SSDs for storage instead of HDDs make sense, or would that be a waste of money? Do I need a read cache, or will other components be the bottleneck? Would it be an option to have both read and write cache on a single NVMe? Since no UPS is used, should a write cache be avoided?
 
When it comes to the optimal disk setup for this purpose I am unsure, however. I was considering going with 4x 4TB NAS HDDs, one NVMe SSD for VMs, another NVMe for read cache, and the OS installed on the internal flash.

Is the internal flash an SD (MMC) card? Then you won't be able to install Proxmox VE with the regular ISO image, see https://forum.proxmox.com/threads/unable-to-get-device-for-partition-1-on-device-dev-mmcblk0.42348/

In theory you could first install Debian stable and afterwards Proxmox VE (see https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_13_Trixie for details), but unless you have a good reason I wouldn't do this. There is a good reason why installing PVE to SD cards is not supported: since the Proxmox VE OS does a lot of writes (logging and metric data, plus the cluster file system database, which is also used on non-clustered single nodes), SD cards will get trashed soon. So: don't do this.

Regarding NVMe SSDs: usual consumer SSDs without power-loss protection (PLP) (which is true for most M.2 2280 SSDs) suffer the same issue as SD cards, but are still more reliable than them, and thanks to a ZFS mirror you should be able to replace one of them in case of a failure. Alternatively, get the cheapest PLP SSDs ( https://geizhals.at/?cat=hdssd&xf=7161_Power-Loss Protection~7177_M.2 2280~7525_M.2 (PCIe)&offset=0&promode=true&hloc=at&hloc=de ). You could also use the boot device for VMs/LXCs if you can live with the higher wearout and with the OS and VMs/LXCs sharing the same disk. In my homelab I did this, since my mini PCs just have two storage slots (one NVMe, one SATA) and I want to have some redundancy. If this means that in case of a reinstallation I have to restore the VMs/LXCs from my backup, so be it.
So in your case I would use the NVMe slots for the OS. If you can't afford SSDs with PLP, I would recommend getting two NVMe SSDs from different brands (to minimize the chance that both fail at the same time) and creating a ZFS mirror on them.

For the internal storage I personally wouldn't go with four HDDs, but with two used SATA server SSDs with PLP (they are quite affordable and still have larger endurance than consumer SSDs) and two HDDs with the largest capacity you can afford. Then set up a mirror on the SSDs for VM/LXC storage and another mirror on the HDDs for raw data storage.
Alternatively, you could create a mirror on the HDDs and add the SSDs as a special device mirror. The idea is to use the special device to store the metadata of all files, plus very small files. By creating a custom dataset for VMs/LXCs you could even use the SSDs as VM/LXC storage while bulk data is still saved on the HDDs. See this post by LnxBil, who explains this better than me:
https://forum.proxmox.com/threads/zfs-metadata-special-device.129031/post-699290
If this sounds a bit too elaborate to be comfortable with, stick with two SSDs for VM/LXC storage and two HDDs for raw data; it will probably still be fine for a home server.
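As a rough sketch, the HDD mirror plus SSD special device layout would be built with commands like these (pool name and device paths below are placeholders, adjust them to your actual /dev/disk/by-id entries):

```shell
# mirrored HDD pool for bulk data (device paths are placeholders)
zpool create tank mirror /dev/disk/by-id/ata-HDD_A /dev/disk/by-id/ata-HDD_B

# add the two SATA SSDs as a mirrored special vdev (metadata plus small blocks)
zpool add tank special mirror /dev/disk/by-id/ata-SSD_A /dev/disk/by-id/ata-SSD_B

# dataset whose small blocks (here: up to 64K) land on the SSDs
zfs create -o special_small_blocks=64K tank/vmdata
```

Note that the special vdev should have the same redundancy as the rest of the pool, hence the mirror.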

Regarding the read cache: how do you want to configure it? This needs to be configured at the OS level; PVE doesn't do this by default. You can do this with ZFS ( https://pve.proxmox.com/wiki/ZFS_on_Linux ), but usually it's not worth it (having more RAM helps more). A special device, on the other hand, can be quite useful.
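If you ever want to try an L2ARC anyway, it's a low-risk experiment, since a cache device can be added and removed at runtime (pool name and device path below are placeholders):

```shell
# add an NVMe partition as L2ARC read cache to pool "tank"
zpool add tank cache /dev/disk/by-id/nvme-EXAMPLE-part1

# ...and remove it again if it doesn't help
zpool remove tank /dev/disk/by-id/nvme-EXAMPLE-part1
```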

See also the following old forum threads:
- https://forum.proxmox.com/threads/zfs-ssd-pool-with-nvme-cache-zil-l2arc.45920/
- https://forum.proxmox.com/threads/l2arc-cache-with-or-w-o-swap-on-zfs.121156/
- https://forum.proxmox.com/threads/brauche-ich-slog-l2arc.134928/ (German, use something like DeepL for translation)

Regarding the write cache: I have no idea what you mean, can you explain?

For backups I would recommend not using file sharing via sshfs, but instead a dedicated backup tool like Kopia, restic, Borg or Duplicati. They can all be configured to work with a Linux server (so in the case of your home server, a dedicated LXC or VM would probably be the best option), and some of them even have a GUI, so things don't get too complicated for non-techy users.
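To illustrate how little server-side setup such a tool needs: restic's SFTP backend, for example, only requires an SSH account on the server (host name, user and repository path below are made up):

```shell
# initialize a repository on the home server over SFTP (paths are examples)
restic -r sftp:backup@homeserver.example.org:/srv/backups/alice init

# back up a directory; only changed data is uploaded on subsequent runs
restic -r sftp:backup@homeserver.example.org:/srv/backups/alice backup ~/Documents

# list existing snapshots
restic -r sftp:backup@homeserver.example.org:/srv/backups/alice snapshots
```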

Regarding your apps: how do you want to run email? I hope you don't plan to self-host a mail server, because you will need to be quite careful not to get added to anti-spam blacklists, get abused by spammers, or both. Hosting some webmail interface to an external mail provider should work, though.

If you just want to host some apps, please also consider using a NAS OS with apps (aka Docker containers) and VM support instead, like OpenMediaVault, TrueNAS or unRAID. OpenMediaVault could even be installed to an SD card, since it has a plugin to reduce writes and extend durability. unRAID can also be installed to SD cards, as far as I know. TrueNAS, however, doesn't support this (like PVE).
Imho, if your home server's main purpose is to provide network storage plus some self-hosted apps, a NAS OS is way less complex and has a more novice-friendly learning curve than Proxmox VE.
 
Thank you for the extensive advice.

Regarding the NVMe SSDs, I was already considering either the Kingston DC2000B or the Samsung PM9A3, since both have power-loss protection.
The TBW of the Kingston is rather low at 700 TB, but I am wondering how much write activity the disks will actually see, considering the setup will be under very little load most of the time. If I want to have VMs on there, 480 GB or at most 960 GB should suffice, I think.
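A quick back-of-envelope check suggests 700 TB TBW is a lot of headroom for an idle-most-of-the-time home server (assuming the typical 5-year warranty window):

```python
# rough endurance math for a 700 TB TBW drive over a 5-year warranty period
tbw_tb = 700                               # rated terabytes written
years = 5
gb_per_day = tbw_tb * 1000 / (years * 365)
print(round(gb_per_day))                   # -> 384 GB of writes per day to exhaust it in 5 years
```

Even a busy hypervisor with logging and a handful of VMs rarely sustains hundreds of gigabytes of writes per day.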

My reason for choosing 4x HDDs was to set them up as RAID-Z1, so the 4x 4TB will get me 12 TB usable space with one disk for redundancy. That seems cheaper than getting two 12 TB HDDs.

The idea of a dedicated cache is something I no longer consider useful in this case, since most guides seem to argue that it does not make sense unless the system has much more RAM than 8 or even 16 GB.

If having both OS and VMs on the NVMe is possible, that sounds like the better and, most importantly, less complicated option right now.
I was planning to have the two NVMe drives in a simple RAID1 mirror though, probably with ext4, since I don't see much benefit in ZFS for this use case, especially with power-loss protection available. Does that make sense?

TrueNAS is something I also looked into, and you can apparently install it on the eMMC drive as well. The main reason against it is that I feel it is less transparent than PVE, since "Apps" are mostly predefined containers and I can't really audit much of their internal configuration. That said, it may be easier to run. Then again, the primary feature of TrueNAS is the SMB/NFS share, which I won't use since most clients will connect over the internet. And regarding backup, the idea was to run restic through SSH using rclone with the append-only feature, which in turn uses the SFTP protocol if I am not mistaken. I'm going to configure the clients anyway, to ensure the user data is backed up the next time some ransomware rolls over them. So effectively I only need SSH access and a few dedicated users for the backup. And there is also the fact that I just like the idea of a solid PVE better than TrueNAS :)
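The append-only setup described here would roughly follow restic's documented rclone-over-SSH pattern: the server forces rclone into append-only serving mode for the backup user's key, and the client lets restic spawn rclone through SSH (user, host and paths below are made up):

```shell
# server side, in the backup user's ~/.ssh/authorized_keys (key shortened):
# command="rclone serve restic --stdio --append-only /srv/backups" ssh-ed25519 AAAA... client-key

# client side: restic starts rclone through that SSH account
restic -o rclone.program='ssh backup@homeserver.example.org' -r rclone: init
restic -o rclone.program='ssh backup@homeserver.example.org' -r rclone: backup ~/Documents
```

With --append-only enforced server-side, a compromised client can add snapshots but cannot delete or overwrite existing ones.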

 
The DXP4800 fulfills the listed requirements many times over. Based on other experience reports with the same or similar hardware, that's not only plenty but will still leave the system at 1% load most of the time. There will rarely be more than two users accessing those services at the same time.
 
Then good luck, and tell us how it works for you.
I will use this Ugreen DXP4800 only as a NAS.
But my Ryzen 5 3500X NAS uses ZFS for boot and data, and has 32 GB DDR4 RAM with 2 NICs.
 
Regarding your apps: How do you want to run email? I hope you don't plan to self-host an mail server because you will need to be quite careful to not be added to anti-spam blacklists or get abused by spammers or both. Hosting some webmail interface to an external mail provider should work though.
As an alternate opinion, I have self-hosted mail for over 30 years. I have used Exchange, GroupWise, Zimbra, Smail, plain Postfix and even Sendmail. Most recently, I have been using mailcow. I find that spam is not a real problem if running SpamAssassin or some similar tool. Frankly, I get more spam in my Gmail account than I do in my self-hosted one. I know I'm in a minority, but I like to host my own mail so I have full visibility into the mail transport and storage.
 
When it comes to the optimal disk setup for this purpose I am unsure, however.
"optimal" isnt very meaninful without a use case. what you describe would be served by any combination since you dont have any performance or fault tolerance requirements to speak of, so the only thing remaining is usable capacity.

I was considering going with 4x 4TB NAS HDDs, one NVMe SSD for VMs, another NVMe for read cache, and the OS installed on the internal flash.
That would work. You can have as much as 16 TB usable capacity without any fault tolerance, or up to 12 TB with single parity. I use a similar setup for my cold storage. I would say you won't need any "read cache" at all; you can either set up your boot device + VM store as a mirror, or just use one drive. A word about your virtualization payload: considering the meagre CPU availability, you're better off just running Docker containers for your applications instead of VMs (inside an LXC container or not, although in the latter case you're better off not bothering with PVE at all and using TrueNAS directly).

Would SATA SSDs for storage instead of HDDs make sense, or would that be a waste of money?
For your use case, anything in excess of your required usable capacity is a "waste of money." SATA SSDs and HDDs consume more or less the same power, while NVMe drives consume more; otherwise, just go for the smallest price per gig.

Regarding NVMe SSDs: usual consumer SSDs without power-loss protection (PLP) (which is true for most M.2 2280 SSDs) suffer the same issue as SD cards, but are still more reliable than them, and thanks to a ZFS mirror you should be able to replace one of them in case of a failure.
Not this again. Standard SSDs work JUST FINE. PLP is useful in situations where write performance is critical, and when you don't want the possibility of "last write" loss. For the OP it's not going to make any difference at all.

So effectively I only need SSH access and a few dedicated users for the backup.
Consider not bothering with any "higher level" OS and just installing Debian ;)

Yes, more RAM (32 GB) and more cores; please read the Proxmox VE system requirements.

The Ugreen DXP4800 is only a NAS, nothing more!
With 8 GB RAM you can't get more out of it.
PVE can run on a Raspberry Pi, and certainly on an N100. When gauging the resources required, consider the APPLICATION, not the OS, which is really lightweight, all things considered. The OP's use case can certainly be deployed on PVE. Or TrueNAS. Or straight Debian.

My own "home server" is a Dell 1L box with 8 GB of RAM. It happily runs about a dozen containers and a Windows VM.
 
"optimal" isnt very meaninful without a use case. what you describe would be served by any combination since you dont have any performance or fault tolerance requirements to speak of, so the only thing remaining is usable capacity.

Perhaps I should rephrase it like this: given the hardware of the DXP4800, which disk setup would match the performance of the rest of the system? I don't know what uses I will have for it in the future, and perhaps I'll end up with 20 containers in a year, but when choosing the disks I want something that is neither a bottleneck nor limited by the rest of the hardware. So, assuming I add another 8 GB of RAM, and considering the N100 CPU, which disks would be the best match performance-wise?
 