Recommendations on Proxmox install: ZFS/mdadm/something else

Hi all! New to this forum and I've read a lot recently about Proxmox installation. Most of my questions I've been able to answer just by searching, but this topic still puzzles me a bit.

Just a little background first. This is going to be my first Proxmox install and my first time trying a type 1 hypervisor. I've been running a home server for a few years on Ubuntu. I went with ZFS storage for the HDDs from the start and have been happy with it. My OS install is currently on an mdadm mirror (2x Samsung 870 Evo 256GB). I'm quite familiar with Linux and know at least the basics of ZFS. I have not, however, tried ZFS for the OS install.

I've read that Proxmox on ZFS is available as an option from the installer, but that one should really consider the SSDs' endurance if ZFS is chosen. Those Samsungs are OK'ish consumer drives, but they don't have petabytes of write endurance. As far as I understand, Proxmox does quite a lot of logging, and combined with ZFS write amplification this can kill a "normal" SSD surprisingly fast. And this would happen even without the VMs running on the same disks. I have 2x 1TB FireCuda 530 drives just for those and separate HDD pools for storage.

Here are a few options that I'm currently considering:
1. Just go with ZFS and deal with the drives when the time comes. Not exactly ideal, but small SSDs are not that pricey anymore, so it is worth considering because of the benefits of ZFS.
2. Start with Debian, install on an mdadm RAID1 and install the Proxmox packages on top. Sounds good to me, but going even this far down the "custom" path makes me a bit uneasy. How common are these types of installs?
3. Install on just one drive, set up robust backups and just swap the drive if needed.
4. Something else?

Am I on the right track? None of the options I have found sound ideal. Most likely I would be going with number 3, since a home server can have some downtime for repairs. The SLA is very lenient around here.
 
Go with option 1. Running ZFS on consumer SSDs is not a problem for endurance. I don't know where you read that about the amplification or what the context was, but it sounds a bit like overpanicking.
Proxmox does quite a lot of logging
That's true, but logs have to be written regardless of the underlying FS.
Consider this example: if you run Windows natively on a single consumer SSD you could expect maybe nine years of lifetime. If you run three Windows VMs on that SSD you might get three years, and minus the logging maybe two. The same example on ZFS doesn't automatically mean that you burn through the SSD within a year or less. It depends on the workload, on what your VMs do.

And in general, hardware dies sooner or later, or you run out of space and need a bigger upgrade anyway.
So my tip is: try option 1, you will be surprised that it isn't bad at all. :)

Always back up and always keep an eye on smartctl.
 

Thanks for the reply!

Write amplification was just a fact I was aware of; how much of a problem it actually is was not clear to me. It seems the problem is far smaller than I thought. You are absolutely right that the hardware will fail or simply get replaced before then, and this mitigates the problem quite significantly.

My workloads are not going to be huge; after all, this is just a homelab with a couple of users. At the beginning I would be running something like 4-5 VMs and one LXC container for Nginx Proxy Manager.

One thing I'm still wondering about is my CPU. I currently run an E5-1620 v3, which has been fine, but now that I'm about to jump into virtualization and the opportunities it brings, I worry that my poor Haswell will be quite overworked. It is currently running Ubuntu Server 22.04 with all my services (media management, NAS duties and such) and it has been fine. Now that I would be splitting some of its duties between a TrueNAS VM, other Linux VMs and so on, would I be running out of threads?

I consider the Haswell-era platform too old to invest more in at this point. I would probably get something like a Supermicro H12SSL-i and an Epyc 7302P to replace the current motherboard and CPU. That might be overkill, but the prices are just too good to pass up. Here in the EU the options for reasonably priced parts are somewhat limited.
 
Go with option 1. Running ZFS on consumer SSDs is not a problem for endurance. I don't know where you read that about the amplification or what the context was, but it sounds a bit like overpanicking.
My PVE home server here with ZFS writes 900GB per day to the SSDs to run ~30 mostly idling guests. It really depends on the workload. Especially sync writes from DBs will shred the SSDs in no time with ZFS. And my two TrueNAS servers with ZFS killed 3 consumer SSDs this year... so I wouldn't call it overpanicking, it really depends on the use case. ZFS has quite a lot of overhead by design (I'm seeing write amplification factors of 3 to 62 here, with a factor of 20 on average). So it's normal that a virtualized Windows VM will wear the SSD several times faster than a bare-metal Windows install.
My OS install is currently on an mdadm mirror (2x Samsung 870 Evo 256GB).
That will work, but it isn't officially supported and not really recommended: https://forum.proxmox.com/threads/z...ty-and-reliability-of-zfs.116871/#post-505697

Just go with ZFS and deal with the drives when the time comes. Not exactly ideal, but small SSDs are not that pricey anymore, so it is worth considering because of the benefits of ZFS.
If you buy your SSDs not by price per capacity but by price per TBW (so by durability) you will see that enterprise SSDs are actually cheaper than consumer SSDs... at least in the long run, if you really write a lot: https://forum.proxmox.com/threads/c...-for-vms-nvme-sata-ssd-hdd.118254/post-512801
And enterprise SSDs will give more consistent performance and are way faster at sync writes, because they can cache them in RAM thanks to the built-in power-loss protection.
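To make the price-per-TBW comparison concrete, here is a rough back-of-the-envelope sketch (the prices and TBW ratings are made-up placeholders, not real product data):
Code:
# Illustrative only: prices and TBW ratings below are made-up placeholders,
# not real product data. The idea is to compare euros per TB written
# instead of euros per TB of capacity.
drives = [
    # (name, price in EUR, capacity in TB, rated endurance in TB written)
    ("consumer 1TB SSD",   100, 1.0,  600),
    ("enterprise 1TB SSD", 250, 1.0, 7000),
]
for name, price, capacity_tb, tbw_tb in drives:
    print(f"{name:20s} {price / capacity_tb:6.2f} EUR per TB capacity, "
          f"{price / tbw_tb:.3f} EUR per TB written")

With numbers like these, the enterprise drive costs 2.5x as much up front, but several times less per terabyte actually written.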

Start with Debian, install on an mdadm RAID1 and install the Proxmox packages on top. Sounds good to me, but going even this far down the "custom" path makes me a bit uneasy. How common are these types of installs?
It's not that uncommon. Sometimes you don't have another choice: for example if you want mdadm you need to install via the terminal, or if you wanted full-system encryption before PVE 6.3. But yes, if for example your disk dies, you can't just follow the PVE documentation or PVE wiki on how to replace the disk. You would need to find some Debian tutorials for that.

3. Install on just one drive, set up robust backups and just swap the drive if needed.
Also possible. But you will probably still lose some hours of data (in case you also want to store guests on it), which can be really bad. Let's say for example you store a password safe on it, then create some new accounts, and an hour later your disk dies. Then you would be locked out, because your passwords are lost without any backups. And then there is the downtime and annoying work that could have been prevented... you will have to decide whether that is OK for you or not. It could be really annoying when running a virtual router for your whole house, an email server or some smart home stuff.

One thing I'm still wondering about is my CPU. I currently run an E5-1620 v3, which has been fine, but now that I'm about to jump into virtualization and the opportunities it brings, I worry that my poor Haswell will be quite overworked. It is currently running Ubuntu Server 22.04 with all my services (media management, NAS duties and such) and it has been fine. Now that I would be splitting some of its duties between a TrueNAS VM, other Linux VMs and so on, would I be running out of threads?
Just test it. You could always swap that CPU for one with more, but slower, cores. Those are really cheap. I think an 8-core E5-2630 v3 is like 20€ or so. The higher-end ones are a bit more expensive; I spent 100€ on a 16-core E5-2683 v4.
 
It seems the problem is far smaller than I thought.
To sort out and guess the context a little bit: the first era of spinning disks with SMR is one problem with ZFS (they drop out of the pool); later ones were better, but still not optimal. Another problem is the low workload rating on some cheaper product lines *cough* https://www.servethehome.com/discussing-low-wd-red-pro-nas-hard-drive-endurance-ratings/ *cough*

Especially on ZFS you want, and should do, scrubbing. Rule of thumb is every month on consumer grade, every three months on enterprise. Scrubbing is pure reading (it only writes when correcting errors), but it still counts towards the workload rating. That's bad for spinning disks with a low workload rating, but no problem for SSDs at all.
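As a rough back-of-the-envelope example of how scrubs count against a workload rating (the pool size and rating below are made-up placeholder numbers, not from this thread):
Code:
# Made-up placeholder numbers: how scheduled scrubs add to a disk's yearly workload.
# A scrub reads all allocated data, so those reads count towards a "TB/year" rating.
data_per_disk_tb = 10      # allocated data a scrub has to read from this one disk
scrubs_per_year = 12       # e.g. monthly scrubs on consumer-grade disks
workload_rating_tb = 180   # hypothetical "TB/year" rating of a cheap NAS HDD
scrub_reads_tb = data_per_disk_tb * scrubs_per_year
share = scrub_reads_tb / workload_rating_tb
print(f"Scrubs alone read ~{scrub_reads_tb} TB/year, "
      f"about {share:.0%} of a {workload_rating_tb} TB/year rating")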

My workloads are not going to be huge; after all, this is just a homelab with a couple of users. At the beginning I would be running something like 4-5 VMs and one LXC container for Nginx Proxy Manager.
That should be absolutely OK. Most important is that you know about the problem, are prepared (backups) and know what to expect (an earlier death of the hardware, but not an instant one ;) ). I know and understand the recommendation of other users to use enterprise grade; it is nice to have and should be bought in the first place when server stuff is planned.
Many users here are taking their first steps with VMs/hosting/server things and use what they already have. That's just fine, I think. If the SSDs die within the first three months, you'll gain the experience anyway. That can also happen with enterprise grade; it only lowers the risk, no guarantee at all. :)

Now that I would be splitting some of its duties between a TrueNAS VM, other Linux VMs and so on, would I be running out of threads?
Depends. It has 8 threads. If you have 4 VMs running and expect full load all the time, you can give each VM 2 vCPUs and see if the speed satisfies you; then all is good. A fifth VM could be problematic.
If you have, say, 8 VMs and the full load moves between VMs at times you control, you can give every VM 8 vCPUs, but you have to expect the other 7 to be slower at that moment.
The other way round: you don't have to provide 16 physical cores when you have 16 VMs, knowing that they idle most of the time.

My PVE home server here with ZFS writes 900GB per day to the SSDs to run ~30 mostly idling guests. It really depends on the workload. Especially sync writes from DBs will shred the SSDs in no time with ZFS. And my two TrueNAS servers with ZFS killed 3 consumer SSDs this year... so I wouldn't call it overpanicking, it really depends on the use case. ZFS has quite a lot of overhead by design (I'm seeing write amplification factors of 3 to 62 here, with a factor of 20 on average). So it's normal that a virtualized Windows VM will wear the SSD several times faster than a bare-metal Windows install.
Yeah, I've read that in other posts of yours. I use ZFS mostly on FreeBSD and have also used bhyve with it on SSDs. I've had dead spinning disks, sure, but never a dead consumer SSD so far, so I was wondering. A mix of spinners and SSDs, 96 drives at the moment.
Regarding the DB writes... have you customized the recordsize and, even better, set up a separate dataset for the DB? Because that is the biggest source of write amplification when not configured properly. https://freebsdfoundation.org/wp-content/uploads/2016/08/Tuning-ZFS.pdf

The consumer SSDs here were used drives from another project, but in the end I put them in my pools, expecting them to die soon. That hasn't happened yet, so now I have no reason to buy bigger and better ones... we'll see when the time comes.
 
Yeah, I've read that in other posts of yours. I use ZFS mostly on FreeBSD and have also used bhyve with it on SSDs. I've had dead spinning disks, sure, but never a dead consumer SSD so far, so I was wondering. A mix of spinners and SSDs, 96 drives at the moment.
Regarding the DB writes... have you customized the recordsize and, even better, set up a separate dataset for the DB? Because that is the biggest source of write amplification when not configured properly. https://freebsdfoundation.org/wp-content/uploads/2016/08/Tuning-ZFS.pdf
Yup, I've already optimized the DBs and ZFS as well as I can. Ideally I would, for example, lower the volblocksize to 16K to match the 16K writes of MySQL, but then my raidz1 would write more and I would lose a lot of capacity because of the increased padding overhead. To decrease the volblocksize without adding more padding overhead I would need to switch to a striped mirror, but then I'd get 50% instead of only 20% parity overhead. So no matter what I do, it ends in overhead.
 
Mhm, I see.
It would be interesting to know how much of your daily 900GB are DB writes and whether it would be better to provide extra, optimized storage just for the DB.
30 VMs certainly do write, but 900GB in total per day is really a lot. And sure, this will also burn through enterprise disks very fast.
I don't know about your budget, but normally writes in this range should be spread over many, many vdevs in raidz so the individual disks don't get killed that fast. :confused:

Btw, for media files and other files that I know will be bigger than 1M, I use recordsize=1M and zstd-3 or higher. This also helps against the writes a little bit.
 
Mhm, I see.
It would be interesting to know how much of your daily 900GB are DB writes and whether it would be better to provide extra, optimized storage just for the DB.
I think half of that is just logs and metrics being written to DBs by those idling VMs/LXCs and a few routers/PCs (Elasticsearch/MongoDB/MySQL). I already moved both of them to a single-disk LVM-Thin, which reduced writes by 300GB per day, since I don't really care if I lose a day of logs/metrics. LVM-Thin on top of an mdadm raid1 is similarly bad, but a bit better than a ZFS mirror.
30 VMs certainly do write, but 900GB in total per day is really a lot. And sure, this will also burn through enterprise disks very fast.
Got the good ones with 21125 TBW per 1TB of capacity. They should survive some years, even with those writes. And the biggest problem is the write amplification: those 900GB written to the NAND flash are actually only 45GB per day of real data written inside the VMs. So that's just 1.5GB per guest per day.
I don't know about your budget, but normally writes in this range should be spread over many, many vdevs in raidz so the individual disks don't get killed that fast. :confused:
Yup, got 4x 2-disk mirrors and 2x 5-disk raidz1.
Btw, for media files and other files that I know will be bigger than 1M, I use recordsize=1M and zstd-3 or higher. This also helps against the writes a little bit.
Those are all stored on HDD pools for cold storage. It's really just the OS and services on those SSDs.
(Attached screenshot: graph of daily data written per disk.)
Those two green lines, for example, are just the two SSDs in the ZFS mirror of a thin client running 1x OPNsense VM, 1x Nextcloud VM, 1x Home Assistant VM, 1x reverse proxy VM, 1x Zabbix LXC, 1x DokuWiki LXC and 1x Pi-hole LXC. So those 7 guests alone write half a TB per day to the SSDs.
 
That's really bad.
Those two green lines, for example, are just the two SSDs in the ZFS mirror of a thin client running 1x OPNsense VM, 1x Nextcloud VM, 1x Home Assistant VM, 1x reverse proxy VM, 1x Zabbix LXC, 1x DokuWiki LXC and 1x Pi-hole LXC.
I also know and use OPNsense and Nextcloud (dunno about the other ones), but it seems something isn't right somewhere. It just feels like too much, from my experience. Either you have really a lot of client traffic to justify that much logging/DB writing, or something is borked. Besides, logs are just text and ZFS compresses that very well.
The small difference between sda and sdb is also strange; it should be even. Any chance of debugging further or measuring the writes of every single VM?
A daily backup of an Android phone into Nextcloud storage would seem realistic, but still... hm. What about the reverse proxy? Could it be that it writes/caches your daily surf traffic and also your downloads? I've seen some wild misconfigurations and the wrong tool for the job when it comes to proxy <-> reverse proxy... not implying anything, I'm sure you know what you're doing. ;) But sometimes a wrong setting does slip in when you don't look closely, and then it gets forgotten, etc.

Do you have autotrim=on on the pool?
 
That's really bad.

I also know and use OPNsense and Nextcloud (dunno about the other ones), but it seems something isn't right somewhere. It just feels like too much, from my experience. Either you have really a lot of client traffic to justify that much logging/DB writing, or something is borked. Besides, logs are just text and ZFS compresses that very well.
These home servers are used by me alone, and I don't write data to them often. Around 95% of it should be writes that the guests create themselves while idling. So there is not much traffic, just a lot of write amplification. Write amplification of course depends on the pool layout, the ZFS configuration and what is run in the guests, but write amplification in general is really high when using ZFS. I've got 4 home servers here running on ZFS, with different pool layouts, and I did hundreds of benchmarks comparing dozens of pool layouts and ZFS configs. I benchmarked writes with different filesystems in the guests as well as directly on the host, benchmarked datasets and zvols, and also tried 5 different SSD models.

I think one problem is how the write amplification stacks up. The ext4 inside my Debian VMs, for example, causes a write amplification of up to factor 4. Then my ZFS pool uses native encryption, which (I don't know why, but it does) always doubles the amount of data written, so again a write amplification of factor 2. Then sync writes will first be written to the ZIL area of the SSD and later a second time as the real record, so again a factor 2 write amplification caused by sync writes. Then I, for example, write a 4K sync write to an 8K volblocksize zvol, and the zvol can't write anything smaller than 8K, so that 4K becomes 8K; that's another factor 2 write amplification. Then I've got a mirror, so two copies of every record will be written to the disks: again a factor 2 write amplification. Then ZFS itself has some overhead on top of that, but I'm not sure what that was; let's call it factor X. And then the SSDs also have an internal write amplification: if I write 1GB to my SSD, the SSD will actually write 2 to 4GB to the NAND cells, so again a factor 2 write amplification.
And the problem with write amplification is that it multiplies, it doesn't just add up. So in theory it looks like this:
Total Write amplification factor = 4 (ext4) * 2 (encryption) * 2 (ZIL) * 2 (bad volblocksize) * 2 (mirror) * 2 (SSD) = 128

So in theory, every 4K sync write done inside the guest to ext4 should cause 128 times that amount of data to be written to the SSD, resulting in 512K written to the NAND. And I didn't even take that factor X of ZFS overhead into account. ;)
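The same multiplication as a small sketch (the per-layer factors are the rough estimates from above, not measured constants):
Code:
# Rough per-layer estimates from the post above, not measured constants.
layers = {
    "ext4 inside the guest": 4,
    "ZFS native encryption": 2,
    "ZIL (sync writes written twice)": 2,
    "4K write on an 8K volblocksize zvol": 2,
    "mirror (two copies of every record)": 2,
    "SSD-internal write amplification": 2,
}
total = 1
for factor in layers.values():
    total *= factor  # the factors multiply, they don't add up
guest_write_kib = 4
print(f"total write amplification: ~{total}x")
print(f"a {guest_write_kib}K sync write in the guest ends up as "
      f"~{guest_write_kib * total}K on the NAND")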

I just wanted to show how easily write amplification can grow and how that terrible SSD wear comes about.

I just gave up after a year of tinkering and replaced most of the consumer SSDs with durable enterprise SSDs, so I don't need to replace them every year.

The small difference between sda and sdb is also strange; it should be even. Any chance of debugging further or measuring the writes of every single VM?
The difference is normal. That value is what SMART reports as data written to the NAND. It also includes wear leveling, the SSD shuffling data around to optimize writes, moving data between the SLC/RAM cache and the eMLC cells, and so on. Depending on the wear of the SSD, the existing data and so on, those numbers will differ a little, even if both disks receive identical data.
If you look at "Host Writes per day" (which is what SMART reports as the amount of data the SSD received from the host to write), the disks match perfectly:
(Attached screenshot: graph of "Host Writes per day" for both SSDs.)
A daily backup of an Android phone into Nextcloud storage would seem realistic, but still... hm.
No, there is less than 1GB of data in total on my Nextcloud. I only use it to sync my password safe, bookmarks, contacts, calendar and notes.
What about the reverse proxy? Could it be that it writes/caches your daily surf traffic and also your downloads?

I've seen some wild misconfigurations and the wrong tool for the job when it comes to proxy <-> reverse proxy... not implying anything, I'm sure you know what you're doing. ;) But sometimes a wrong setting does slip in when you don't look closely, and then it gets forgotten, etc.
That is a Debian VM with nginx that shouldn't be caching. It shouldn't write anything special; it is just there so I can access my Nextcloud and Home Assistant from the internet. It's only forwarding and doesn't store any actual data.
Do you have autotrim=on on the pool?
No, just a daily fstrim -a in all guests and on the PVE hosts.
 
Seems like I opened a big can of worms with the ZFS discussion.

Just to be clear, I'm only talking about installing the Proxmox OS on the Samsungs. The VMs will run on separate NVMe drives; I just want to make sure we are all on the same page, since we are comparing data-written numbers. The VMs will be running on 2x 1TB FireCuda 530 drives, which have 1.2PB of write endurance. Those should get me started, and we'll see how long they last. @Dunuin, are your numbers from disks running both the VMs and Proxmox, or just the OS? Does the majority of the wear come from running the VMs, and how much does Proxmox itself contribute?

As a little side note, this sort of thing irks me a bit. I don't like the idea of burning through equipment just for the sake of playing around, so I think I will need to reconsider my position on virtualizing a little. Virtualizing my services is not a must; it would basically just be a fun project, and the fun side of things would suffer greatly if I felt I was just breaking stuff for the sake of it. It's not that I feel an overwhelming amount of sympathy for an SSD, but I just hate being wasteful.
 
@Dunuin, are your numbers from disks running both the VMs and Proxmox, or just the OS? Does the majority of the wear come from running the VMs, and how much does Proxmox itself contribute?
PVE system + VMs. The majority of the writes will come from your guests. With an mdadm raid1, PVE here writes about 2GB per disk per day. It should be more with ZFS.

As a little side note, this sort of thing irks me a bit. I don't like the idea of burning through equipment just for the sake of playing around, so I think I will need to reconsider my position on virtualizing a little. Virtualizing my services is not a must; it would basically just be a fun project, and the fun side of things would suffer greatly if I felt I was just breaking stuff for the sake of it. It's not that I feel an overwhelming amount of sympathy for an SSD, but I just hate being wasteful.
You can monitor the SSD wear with SMART, and if you see the wear increasing too much, you can always remove the drives while they still have plenty of health left.
There should be a wear counter counting down from 100 to 0 or up from 0 to 100. And most of the time you also get a value showing the total amount of data written to the SSD. Write that down, wait a month, write it down again and subtract the first from the second. Then you know how much that SSD wrote over a month, and you can interpolate when the TBW should be reached. Warranty is always the number of years the warranty covers (usually 2, 3 or 5 years) OR until you have written more to the drive than its rated TBW, whichever happens first.
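A small sketch of that interpolation (the TBW rating and the two readings below are made-up placeholders; take the real values from the drive's datasheet and from SMART):
Code:
# Made-up placeholder values: read the "total data written" value from SMART twice,
# about a month apart, and take the rated TBW from the drive's datasheet.
tbw_rating_tb = 600       # rated endurance from the datasheet, in TB written
written_first_tb = 37.2   # total TB written at the first reading
written_second_tb = 39.0  # total TB written one month later
months_between = 1
tb_per_month = (written_second_tb - written_first_tb) / months_between
months_left = (tbw_rating_tb - written_second_tb) / tb_per_month
print(f"~{tb_per_month:.1f} TB written per month; "
      f"rated TBW would be reached in roughly {months_left / 12:.1f} more years")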
 

OK, that's good to know. In that case it really does not matter that much if I go with ZFS for the host OS. Even with ZFS write amplification being what it is, the write amounts won't become problematic for many years. That was my original question, since my plan was always to run the actual VMs on a different set of drives.

This discussion did, however, bring to my attention that no consumer drive is really up to the task when running many guests. Guess I'll just have to see how many writes accumulate with my usage and choose the next drives accordingly.
 
I benchmarked writes with different filesystems in the guests as well as directly on the host, benchmarked datasets and zvols, and also tried 5 different SSD models.
Ok, I see you've diagnosed a lot and really know what you are doing. :)

I think one problem is how the write amplification stacks up.
Yep, definitely.
I just wanted to show how easily write amplification can grow and how that terrible SSD wear comes about.
Sure, and thanks for the detailed insight. The amplification you calculated sounds about right, although I'm not sure about every single point (again, not arguing against it ;) ), and I can't explain the *2 from the native encryption (on some pools I use geli and see no amplification). Overall I can't really compare, because you have a much bigger workload.

If you look at "Host Writes per day" (which is what SMART reports as the amount of data the SSD received from the host to write), the disks match perfectly:
Ok, that fits.

Seems like I opened a big can of worms with the ZFS discussion.
Nah, just comparisons and maybe fighting some bottlenecks. :) It should also help with your decision.

It's not that I feel an overwhelming amount of sympathy for an SSD, but I just hate being wasteful.
Even with ZFS write amplification being what it is, the write amounts won't become problematic for many years. That was my original question, since my plan was always to run the actual VMs on a different set of drives.
For peace of mind you could also consider installing (only!) Proxmox onto a regular HDD (CMR/PMR, not SMR!). Boot, shutdown and updates will take a little longer, but you don't really need the throughput and IOPS of an SSD. A ZFS mirror adds up IOPS on reads, btw: 2 disks with 200 IOPS each give 400, 3 disks give 600.
Once Proxmox is booted, it writes its logs and mostly idles.

For comparison (an old 128GB SATA SSD):
Code:
SMART Attributes Data Structure revision number: 1
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAGS    VALUE WORST THRESH FAIL RAW_VALUE
  9 Power_On_Hours          -O--CK   095   095   000    -    23790
 12 Power_Cycle_Count       -O--CK   098   098   000    -    1013
175 Program_Fail_Count_Chip -O--CK   100   100   010    -    0
176 Erase_Fail_Count_Chip   -O--CK   100   100   010    -    0
177 Wear_Leveling_Count     PO--C-   081   081   005    -    223
178 Used_Rsvd_Blk_Cnt_Chip  PO--C-   100   100   010    -    0
179 Used_Rsvd_Blk_Cnt_Tot   PO--C-   100   100   010    -    0
180 Unused_Rsvd_Blk_Cnt_Tot PO--C-   100   100   010    -    2206
181 Program_Fail_Cnt_Total  -O--CK   100   100   010    -    0
182 Erase_Fail_Count_Total  -O--CK   100   100   010    -    0
183 Runtime_Bad_Block       PO--C-   100   100   010    -    0
184 End-to-End_Error        PO--CK   100   100   097    -    0
187 Reported_Uncorrect      -O--CK   100   100   000    -    0
190 Airflow_Temperature_Cel -O--CK   072   046   000    -    28
195 Hardware_ECC_Recovered  -O-RC-   200   200   000    -    0
198 Offline_Uncorrectable   ----CK   100   100   000    -    0
199 UDMA_CRC_Error_Count    -OSRCK   100   100   000    -    2
233 Media_Wearout_Indicator -O-RCK   200   199   000    -    0
234 Unknown_Attribute       -O--C-   100   100   000    -    0
235 Unknown_Attribute       -O--C-   099   099   000    -    105
236 Unknown_Attribute       -O--C-   099   099   000    -    178
237 Unknown_Attribute       -O--C-   099   099   000    -    223
238 Unknown_Attribute       -O--C-   100   100   000    -    0

The first 10000 hours were on FreeBSD+ZFS (with many reboots in between, tests etc.), from then on Proxmox+ZFS. Single-disk pool.
 
It's considered best practice to separate storage into OS and data (VMs, etc.) filesystems. I don't use SSDs, but if I did I would make sure they were enterprise quality for the intensive writes. In my case the clusters are enterprise servers using SAS HDDs.


It's also considered best practice to mirror the OS drives. I have used Btrfs and ZFS for this purpose.
 
