Deskmini X300 Proxmox Server Configuration

user332255

Hello,
I bought some hardware for my homeserver:

Deskmini X300
AMD Athlon 200GE
2x 16GB 3200MHz RAM
2x Intenso 128GB SSD
1x Crucial NVME 2TB

current status:
- 2x Intenso SSDs configured as a RAID1 pool; to be used for the Proxmox OS and the VMs & CTs
- 1x NVMe configured as a single disk; to be used as storage for data from the VMs & CTs (NAS, Plex...)
- Proxmox 7.0-8 installed on the RAID pool

for test purposes I installed:
1x Ubuntu VM
1x Ubuntu CT


What I want to run on Proxmox
24/7
- Pi-hole: just a simple Pi-hole server, which ran in Docker on my Raspberry Pi before
- Plex server: just to use it for my smart TV (4K); I hope I don't need transcoding (weak CPU)
- OMV (just for an exchange drive in my network)
- (maybe a VPN server)

for tests
- Win VM
- Ubuntu VM

Questions
- How can I configure Proxmox for DHCP? Right now it has a fixed IP, but I want my DHCP server (router) to assign it a static IP
- What is the fastest way to move Pi-hole into Proxmox? I currently have Pi-hole running in Docker on my Raspberry Pi.
- Plex server: I want to put the content for Plex on my NVMe drive, accessible to users on my network, so anyone can easily add or delete content from their own device and the Plex server picks up the changes automatically

I would appreciate any tutorials. Thank you
 
- How can I configure Proxmox for DHCP? Right now it has a fixed IP, but I want my DHCP server (router) to assign it a static IP
DHCP is not recommended for bridges on PVE. You can configure it via /etc/network/interfaces if needed (just like on regular Debian), but in general it is discouraged. Better to configure a static IP on PVE and additionally reserve it in your router's DHCP server, or assign a static IP outside the DHCP range, for example.
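For example, a minimal static setup in /etc/network/interfaces might look like this (the addresses are placeholders, adjust them to your network); with ifupdown2 installed you can apply the change with ifreload -a, otherwise reboot:

Code:
auto lo
iface lo inet loopback

iface enp1s0 inet manual

auto vmbr0
iface vmbr0 inet static
        address 192.168.1.10/24
        gateway 192.168.1.1
        bridge-ports enp1s0
        bridge-stp off
        bridge-fd 0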

- What is the fastest way to move Pi-hole into Proxmox? I currently have Pi-hole running in Docker on my Raspberry Pi.
PVE only (really) supports x86 VMs at the moment, so the easiest way would probably involve a fresh install of the x86 version of Pi-hole in a PVE VM and then moving over the relevant Pi-hole config files.
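A minimal sketch of that migration, assuming the official pihole/pihole Docker image on the Pi (the container name "pihole" is an assumption) and Pi-hole v5's Teleporter feature:

Code:
# on the raspberry: export the settings from the running container
docker exec pihole pihole -a -t
# the archive name contains hostname and timestamp; list it, then copy it out
docker exec pihole sh -c 'ls pi-hole*teleporter*'
docker cp "pihole:$(docker exec pihole sh -c 'ls pi-hole*teleporter*')" .

# install Pi-hole normally in the new x86 VM/CT, then import the archive
# via Settings -> Teleporter in the Pi-hole web UI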

- Plex server: I want to put the content for Plex on my NVMe drive, accessible to users on my network, so anyone can easily add or delete content from their own device and the Plex server picks up the changes automatically
Not sure how Plex itself handles it, but sharing data between multiple users ("users" in this scenario being the programs that access it, not people) generally requires special protocols, such as network file sharing protocols: NFS, SMB (Samba), SSHFS, etc. Sharing an allocated disk directly between VMs, or between the host and a VM, is not supported.
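If you go the SMB route, a minimal sketch of a Samba share on whatever guest owns the NVMe data (the share name, path and user below are made-up examples):

Code:
# /etc/samba/smb.conf -- minimal share for the Plex media folder
[media]
   path = /mnt/nvme/media
   browseable = yes
   read only = no
   valid users = mediauser

# create the samba user and reload:
#   adduser mediauser && smbpasswd -a mediauser && systemctl restart smbd

Plex would then index /mnt/nvme/media, while the clients on the network add and remove files over SMB.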
 
You can configure it via /etc/network/interfaces
Thank you, I configured it as follows:

Code:
auto lo
iface lo inet loopback

iface enp1s0 inet manual

auto vmbr0

iface vmbr0 inet dhcp
  bridge_ports enp1s0
  bridge_stp off
  bridge_fd 0

# iface vmbr0 inet static
#       address 192.168.10.188/24
#       gateway 192.168.10.1
#       bridge-ports enp1s0
#       bridge-stp off
#       bridge-fd 0
I commented out the old config, as you can see.
However, my Proxmox server still has its old IP (192.168.10.188), although I configured a static IP via MAC binding in my router.

Maybe it has to do with /etc/hosts? What do I have to change here?
Code:
127.0.0.1 localhost.localdomain localhost
192.168.10.188 proxmox-x300.local proxmox-x300

# The following lines are desirable for IPv6 capable hosts

::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts
/etc/hostname
Code:
proxmox-x300
 
Have you tried rebooting to apply the new network config?
 
As already said: if you set up a server, you want it to always have the same IP. As soon as you changed the IP you would need to edit the configs of all services on that server, and also on all clients that access that server. So if you use DHCP you need to set up a fixed IP reservation anyway, and there is really no point in using DHCP except maybe for auto-assigning a DNS server.

For Pi-hole you could use a Debian/Ubuntu VM/LXC. Or you could run Docker in an LXC/VM and reuse your existing Docker container.
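For the Debian LXC route, a minimal sketch (the VMID, template version, storage name "local-zfs" and resource sizes are all example values):

Code:
# fetch a container template, then create an unprivileged LXC for Pi-hole
pveam update
pveam available --section system          # list downloadable templates
pveam download local debian-11-standard_11.0-1_amd64.tar.gz
pct create 101 local:vztmpl/debian-11-standard_11.0-1_amd64.tar.gz \
    --hostname pihole --memory 512 --cores 1 --unprivileged 1 \
    --net0 name=eth0,bridge=vmbr0,ip=dhcp \
    --rootfs local-zfs:4
pct start 101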

Also keep in mind that ZFS RAID has really terrible write amplification, and you are using only small, cheap consumer SSDs that can't handle many writes and might fail within months or a few years depending on your workload. So you should monitor them and watch the wear.
 
Have you tried rebooting to apply the new network config?
Yes, of course.
and you are using only small, cheap consumer SSDs that can't handle many writes and might fail within months or a few years depending on your workload.
I forgot to mention that I swapped those Intenso SSDs for 2x CT480BX300SSD1 (for more storage for my VMs & CTs)

Also keep in mind that ZFS RAID has really terrible write amplification
Is there a way to use RAM as a cache to keep those writes off the SSDs? Atm I have a lot of unused RAM, so maybe I can configure a RAM disk.
 
I forgot to mention that I swapped those Intenso SSDs for 2x CT480BX300SSD1 (for more storage for my VMs & CTs)

Is there a way to use RAM as a cache to keep those writes off the SSDs? Atm I have a lot of unused RAM, so maybe I can configure a RAM disk.
No. RAM is only used as a read cache (ARC) and for async writes; ZFS will use up to 50% of your total RAM for read caching by default. Sync writes can't be cached in RAM because all RAM-cached writes would be lost on a power outage. Depending on your workload you will probably see a write amplification between factor 3 (for big async writes) and factor 40 (for small sync writes). That means your SSDs will die 3 to 40 times faster and you only get 1/3 to 1/40th of the drives' performance. Those BX300 are still only consumer SSDs that can't handle a lot of writes. Together they are rated to survive 320 TB of writes (2x 160 TB TBW, by the way), but because of the write amplification they will reach those 320 TB after only 8 to 107 TB written inside a guest.
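If you ever want to cap that ARC read cache (e.g. to leave more RAM for guests), it can be limited via a module parameter; the 4 GiB value below is just an example:

Code:
# /etc/modprobe.d/zfs.conf -- limit the ARC to 4 GiB (value is in bytes)
options zfs zfs_arc_max=4294967296

# apply with:  update-initramfs -u -k all && reboot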
 
Sync writes can't be cached in RAM because all RAM-cached writes would be lost on a power outage
Hmm, I'm not sure about that. I read that tweaking the parameter vm.swappiness may reduce SSD/HDD writes. But I'm new to Proxmox and don't know what would be suitable. Is there some information about that?
because all RAM-cached writes would be lost on a power outage
Well, that's true, but it isn't my main concern, so I could live with that. My priority is reducing drive wear.
Those BX300 are still only consumer SSDs
Yeah, my other hardware is too. I know that enterprise hardware would always be better, with ECC RAM, multiple NICs, a UPS, IPMI, etc. But my use case is only a simple home server.
 
Hmm, I'm not sure about that. I read that tweaking the parameter vm.swappiness may reduce SSD/HDD writes. But I'm new to Proxmox and don't know what would be suitable. Is there some information about that?
Swappiness only controls how aggressively pages get swapped out of RAM to disk. RAM is volatile storage, so you will always lose everything in it on a power outage or kernel crash. Async writes use RAM as a write cache, but you also lose everything that's cached there. So async writes are always treated as unsafe, and programs are written so that they only use them for less important data. For really important data, programs use sync writes, which aren't allowed to be cached in RAM, so these writes can't be lost on a power outage or kernel crash.
Because your SSDs don't have power-loss protection (basically an internal backup battery), they also can't cache sync writes and are forced to write them directly to the non-volatile NAND flash. And because your SSDs can't cache them, they can't optimize the incoming writes and you get horrible write amplification.
Yeah my other hardware is too. I know that enterprise hardware would always be better, with ECC RAM, multiple NICs, USV, IPMI, etc... But my usecase is only a simple home server.
ZFS isn't designed to be run on consumer hardware. It will work, but it isn't as secure or as stable, and all the enterprise integrity features of ZFS cause a lot of extra writes, so it's recommended to use enterprise SSDs because they are built to better handle those additional writes.
Or to quote the PVE ZFS Benchmark FAQ:
Can I use consumer or pro-sumer SSDs, as these are much cheaper than enterprise-class SSD?
No. Never. These SSDs won't provide the required performance, reliability or endurance. See the fio results from before and/or run your own fio tests.

If you don't care about data integrity and want your SSDs to survive several times longer, you might consider using HW RAID with LVM-thin on top, or installing Debian Bullseye with an mdadm software RAID plus LVM-thin on top and installing PVE on top of that.
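If you want to see the difference yourself, a minimal fio sync-write test in the spirit of the FAQ might look like this (the file path, size and runtime are arbitrary example values; delete the test file afterwards):

Code:
# 4k random sync writes for 60s -- roughly what databases do to your pool
fio --name=synctest --filename=/rpool/fio.test --size=1G \
    --rw=randwrite --bs=4k --fsync=1 \
    --runtime=60 --time_based --group_reporting
rm /rpool/fio.test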
 
ZFS isn't designed to be run on consumer hardware.
So maybe I should use btrfs instead? Since my TrueNAS server build (with ECC and whatnot) I have liked the idea of data integrity. But I don't want to start over multiple times because I chose the wrong filesystem, so maybe you can just tell me how you would configure my hardware for my use case :D
 
I think btrfs is somewhere between ZFS and simpler stuff like LVM. But keep in mind that btrfs support was only added a few weeks ago with the PVE 7.0 release. So that's a very new feature and I personally wouldn't rely on it yet.
You can try ZFS; maybe you mostly have async writes and the SSDs will survive for some years. But if you have an unlucky workload, your homeserver will write 900GB per day while idling, like mine here at home does. That's why I mentioned you should monitor the wear using SMART, so you spot too-fast wear early and can switch to another filesystem that doesn't cause such high write amplification.
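For the SMART part, something like this works (smartmontools may need to be installed first; the attribute names vary by vendor, on Crucial drives the interesting ones are usually Percent_Lifetime_Remain and Total_LBAs_Written):

Code:
apt install smartmontools
# wear and total host writes of the first SSD
smartctl -A /dev/sda | grep -iE 'wear|lifetime|written'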
 
That's why I mentioned you should monitor the wear using SMART, so you spot too-fast wear early and can switch to another filesystem that doesn't cause such high write amplification.
Ok, I will do that. Atm I cannot estimate my workload, because this is my first 24/7 homeserver.
I looked at Total Disk Read and Total Disk Write under Datacenter, but there is nothing there atm (0B).

Just in case: is there an easy way to swap my ZFS RAID for an ext4 RAID?
 
Ok, I will do that. Atm I cannot estimate my workload, because this is my first 24/7 homeserver.
I looked at Total Disk Read and Total Disk Write under Datacenter, but there is nothing there atm (0B).
You can run apt install sysstat on your host to install iostat. If you then run iostat you can see how much every single drive has written in total since the last reboot. But keep in mind that your SSDs also have internal write amplification, so what really gets written to the NAND flash might be a factor of 2 or 3 more than what iostat is showing.

Just in case: is there an easy way to swap my ZFS RAID for an ext4 RAID?
No, not really. PVE doesn't officially support mdadm software RAID, so you would need to set up a plain Debian install yourself and convert it into a PVE. And ext4 is just a filesystem, so you need something below it that does the RAID, like mdadm. Or, if your mainboard has onboard RAID, you could use that; with that onboard pseudo-HW RAID you could do a normal PVE ISO installation.
So if you want to switch from ZFS to something else, you would need to back up your VMs/LXCs, wipe your PVE, install it from scratch and restore the VMs/LXCs.
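The backup-and-restore part of that would roughly look like this (the VMIDs, the storage name "backup" and the paths are example values):

Code:
# back up the guests to some external storage defined in PVE
vzdump 100 101 --storage backup --mode stop --compress zstd

# ...reinstall PVE with the new disk layout, re-add the storage, then:
qmrestore /mnt/backup/dump/vzdump-qemu-100-*.vma.zst 100   # VM
pct restore 101 /mnt/backup/dump/vzdump-lxc-101-*.tar.zst  # LXC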
 
If you then run iostat you can see how much every single drive has written in total since the last reboot
ok, I did what you suggested:
Code:
root@proxmox-x300:~# iostat
Linux 5.11.22-4-pve (proxmox-x300)      09/16/2021      _x86_64_        (4 CPU)

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.41    0.00    0.42    0.02    0.00   99.16

Device             tps    kB_read/s    kB_wrtn/s    kB_dscd/s    kB_read    kB_wrtn    kB_dscd
sda              14.66        15.14       147.98         0.00     339959    3323632          0
sdb              14.64         7.84       147.98         0.00     176139    3323632          0
with an uptime of 6h20min now. So this would be around 12.6 GB/day, or 4.6 TB/year per drive
 
ok, I did what you suggested: [...] with an uptime of 6h20min now. So this would be around 12.6 GB/day, or 4.6 TB/year per drive
You are writing to your pool at 2x 148 kB/s averaged since the reboot, so that's about 25 GB per day combined. Then there is your SSDs' internal write amplification (here I see a factor of 2 to 3), so it actually might be more like 62.5 GB per day, or 22 TB per year. For now that looks fine.
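Spelled out, the estimate is (the factor 2.5 for the SSD-internal write amplification is just the middle of the 2-3 range):

Code:
2 drives x 148 kB/s x 86400 s/day ≈ 25 GB/day written to the pool
25 GB/day x ~2.5 internal WA      ≈ 62.5 GB/day hitting the NAND
62.5 GB/day x 365 days            ≈ 22 TB/year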

It should get more problematic as soon as you start to use databases like MySQL, which do a lot of small sync writes.
 
like 62.5 GB per day, or 22 TB per year
hmm, this still seems like a lot to me...

So if I'm not mistaken, ZFS would only make sense with either enterprise SSDs (I'm not sure about the cost for ~500GB) or a switch to (enterprise-grade?) HDDs.
I still have some old 2.5" HDDs lying around (320GB, 1TB), but I wanted to keep this server as quiet as possible and I don't want to ruin that by using HDDs again.
It's quite sad that there isn't a native option for a consumer-SSD RAID/mirror.
 
hmm, this still seems like a lot to me...

So if I'm not mistaken, ZFS would only make sense with either enterprise SSDs (I'm not sure about the cost for ~500GB) or a switch to (enterprise-grade?) HDDs.
I still have some old 2.5" HDDs lying around (320GB, 1TB), but I wanted to keep this server as quiet as possible and I don't want to ruin that by using HDDs again.
It's quite sad that there isn't a native option for a consumer-SSD RAID/mirror.
22 TB per year is fine. You have 320 TB TBW, so if your writes don't increase, your SSDs should survive for 14 years.
But my homeserver, for example, is writing 900GB per day, so I would probably kill those two SSDs within a year.
Because of that I removed my 6 consumer SSDs and replaced them with second-hand enterprise SSDs. You can get a second-hand 400GB enterprise SSD with 8200 to 8300 TB TBW still left for 50€. They are basically way cheaper if you take into account that they can handle over 30 times the writes of a typical consumer SSD.
 
You need to be patient and look for good offers. I bought 2x 100GB SSDs for 10€ each, 7x 200GB SSDs for 25-30€ each, and 3x 400GB SSDs for around 50€ each. I also saw 800GB SSDs for around 120€. All bought on eBay or eBay-Kleinanzeigen from various sellers. Just make sure you ask for a SMART report before buying, so you can see how much has been written to them and whether any errors were recorded. Because these SSDs have such incredible durability, all of my 12 SSDs still have 96 to 100% life expectancy left, even though they had already been running for 2 to 5 years.
It's also useful not to look for a specific model but for second-hand enterprise SSDs in general and buy what you can get (of course after reading the datasheets and SMART reports to verify that it's a good deal).

So you basically get 1TB of second-hand SSD storage with 20335 TB TBW left for around 100-150€. That's 136-203 TB TBW per Euro.
A new Samsung 870 EVO 1TB (106€; 600 TB TBW) only gives you 5.66 TB TBW per Euro.
A new Samsung 870 QVO 1TB (75€; 360 TB TBW) only 4.8 TB TBW per Euro.
2x Crucial BX300 480GB combined would be 160€ for 320 TB TBW, so 2 TB TBW per Euro.
A new Intel SSD D3-S4610 960GB (250€; 6000 TB TBW) is 24 TB TBW per Euro.
So you pay more for a (new) enterprise SSD, but it lasts way longer, so in the long term it's cheaper to buy enterprise SSDs... at least if you really need all that durability... Right now you would need 924 years to reach those 20335 TB TBW... I would guess the SSD's controller will fail before you actually kill the NAND flash. :D
If you compare the 20335 TB TBW of a good 1TB enterprise SSD with the 360 TB TBW of a 1TB consumer SSD, you can probably see why it's recommended to buy enterprise SSDs, even if the initial purchase costs more.
And a fun fact: each of my 200GB SSDs has 4GB of RAM built in for caching... I use 8 of them in my home server, so my SSDs alone have 32GB of RAM. That's another reason why they are more expensive. I believe the bigger models even have 8GB of RAM (at least there was an unpopulated pad on the PCB for a second RAM chip). A 1TB Samsung Evo only has 1GB of RAM, and your BX300 only 512MB. So you basically get what you pay for.

But like I said, if you only write 22 TB per year your SSDs should be absolutely fine. Just check them from time to time and extrapolate when the TBW will be exceeded. And reaching the TBW doesn't mean that your SSD will fail instantly; it's just only rated for that amount of writes, and after exceeding that TBW value you lose your warranty, even if the warranty would otherwise still last for years.
 
I don't know if it is a problem with my Deskmini X300, but if I add a Crucial NVMe 2TB I lose the network.
I can bring the interface up afterwards and the link light is on, but nobody is there.
Going to try it in the lower slot, but that is going to be a real pain, removing all the SATA drives and the tray.
 