Disk configuration recommendations please

louie1961

I am running Proxmox VE 7.3-3 on an old repurposed HP Z640 workstation (Xeon E5-2690 v3, 48 GB of RAM, and three drives, all connected via SATA III and formatted as ext4: a 256GB SSD currently used as the boot drive/Proxmox root, a 1TB SSD used for VM and CT storage, and a 1TB HDD currently used only for ISOs and CT templates). This is not a production box, and I have no issue wiping the drives and starting over. I am using this box for home lab and tinkering purposes only at the moment. I only run it a couple of hours a day and it is not doing anything super important. My NAS runs 24x7 on different hardware. I have 1 TB of storage on the NAS served up via SMB/CIFS, backed up locally every 15 minutes and offsite to AWS S3 every night. I may at some point migrate the NAS to this box and also spin up VMs or Docker images to run pfSense, OpenVPN, and a few other services on a more permanent basis.

So, with all that being said, what are your suggestions for how to configure the storage on the HP Z640? I am guessing some kind of LVM would be better than what I am doing. I am not sure I am ready to take the leap into ZFS yet, but maybe, who knows? I am still pretty new to Proxmox. This box has been running for only a couple of weeks now, and I have installed a few VMs (Windows 11, Linux Mint 11, Kali, and Ubuntu Server). I have also installed Docker a couple of different ways (in Ubuntu and in an LXC container), and WordPress in an LXC container. SAS is not currently an option, since there isn't a SAS adapter in the machine (as far as I know).
 
So, with all that being said, what are your suggestions for how to configure the storage on the HP Z640?
You are in the optimal position to try out every storage option there is, including ZFS, and see whether it fits your requirements. There is no one-size-fits-all solution, but generally ZFS is a VERY good option to play around with and get familiar with. It may not fit your requirements, but you have to check that for yourself.

Running Docker inside of LX(C) containers does not offer much better isolation than just running it directly on your PVE host (for security, always run it in a VM; this has been discussed a lot on the forums). So with that being said, try out Docker directly on your PVE host WITH ZFS, so that you can benefit from all the ZFS glory while running Docker. This is a game changer with respect to quotas, file isolation, snapshots, compression etc. Although I would not run this in production, you already said that this would be a test system.
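If you want to try that, here is a minimal sketch, assuming the default installer pool name rpool (adjust the dataset name to your pool):

# Put Docker's data root on its own dataset so image/container layers become ZFS datasets
zfs create -o mountpoint=/var/lib/docker rpool/docker

# Tell Docker to use its ZFS storage driver, then restart and verify
echo '{ "storage-driver": "zfs" }' > /etc/docker/daemon.json
systemctl restart docker
docker info | grep -i "storage driver"

With that in place, each image layer and container gets its own dataset, so snapshots, quotas and compression come more or less for free.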

Having PVE on its own disk has the advantage that you can play around with different storage types on your other disks and wipe/reinstall them easily. Again, this may not be a good production setup, but that depends on a lot of other factors and again ... it's just a test system, so this is totally fine.
 
Great info and recommendation, thanks. But one follow-up: would you install ZFS on only one of the disks (say, for example, the 1TB SSD), or would it get installed across all three disks, or something else? I am kind of wondering how to best take advantage of the mix of SSD vs. HDD disks. The HDD is a 7200 RPM Seagate Barracuda with a 64MB cache, so not as fast as the SSDs, but not terribly slow either.
 
Depends on what you call slow. For VM/LXC storage you mostly care about IOPS, and there HDDs are terribly slow; an SSD can easily be 100 or even 1000 times faster.

Mixing SSDs and HDDs is generally not a good idea. I would use those disks individually: the 1TB SSD for VMs/LXCs and the HDD for cold storage or backups.

I personally would replace the 256GB SSD with another 1TB one, then create a mirror with those 2x 1TB SSDs, use them for both the PVE system disk and VM/LXC storage, and use the HDD for a PBS datastore.
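If you stick with the disks you already have, a rough sketch of that individual-disk layout could look like this (device names are placeholders, use your own /dev/disk/by-id/ paths; the storage IDs are just examples):

# Single-disk ZFS pool on the 1TB SSD for VM/LXC storage
zpool create -o ashift=12 vmpool /dev/disk/by-id/ata-YOUR-1TB-SSD
zfs set compression=lz4 vmpool
pvesm add zfspool vmpool --pool vmpool --content images,rootdir

# HDD as a plain directory storage for ISOs, templates and backups
mkfs.ext4 /dev/disk/by-id/ata-YOUR-1TB-HDD-part1
mkdir -p /mnt/bulk
mount /dev/disk/by-id/ata-YOUR-1TB-HDD-part1 /mnt/bulk
pvesm add dir bulk --path /mnt/bulk --content iso,vztmpl,backup

(For a permanent mount you would also add the HDD to /etc/fstab or a systemd mount unit.)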
 
I personally would replace the 256GB SSD with another 1TB one,
That's just not in the cards right now. I am going to have to work with what I have. I am currently using the 1TB SSD for VMs/LXCs and the HDD for ISOs and backups, as suggested.

Is it worth trying to capture the open space on the 256GB SSD and combining it with the 1TB SSD into one LVM volume group?
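For context, the mechanics I am asking about would look roughly like this (hypothetical partition/device names; I realize a volume group spanning two disks is lost if either disk dies):

# Free space on the 256GB boot SSD turned into a new partition, plus the 1TB SSD
pvcreate /dev/sda4 /dev/sdb
vgcreate vgdata /dev/sda4 /dev/sdb
lvcreate --type thin-pool -l 90%FREE -n datapool vgdata
pvesm add lvmthin data-thin --vgname vgdata --thinpool datapool --content images,rootdir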
 
OK, just as a follow-up, here is where I landed. I picked up an Asus ROG Hyper M.2 to PCIe x16 adapter for $20 (the two-drive version that they ship with certain motherboards... I just couldn't resist for the price). My motherboard supports PCIe bifurcation, so it works fine even with my old E5-2690 CPU. I paired that up with a couple of cheap consumer grade M.2 NVMe drives (Teamgroup M34), and I also picked up a couple of 2TB consumer grade SATA III drives (Teamgroup AX2). I have the two SATA III drives set up in a ZFS mirror for the system drive (local and local-zfs). The two NVMe drives are set up in a separate ZFS mirror pool. I have all my VMs stored/running on the NVMe drives, and the ISOs, backups, etc. live on the SATA drives. I also have an NFS share mounted and keep a copy of the backups there too.
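In case anyone wants to copy this, the two extra pieces were roughly the following (the pool name matches what I use below; device paths and the NFS server/export are placeholders):

# Mirror the two NVMe drives into the VM pool and add it as a PVE storage
zpool create -o ashift=12 VMstorage mirror /dev/disk/by-id/nvme-DRIVE1 /dev/disk/by-id/nvme-DRIVE2
pvesm add zfspool VMstorage --pool VMstorage --content images,rootdir

# NFS share on the NAS for the second copy of the backups
pvesm add nfs nas-backup --server 192.168.1.50 --export /volume1/proxmox-backups --content backup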

I know in general consumer grade SSDs cause some people concern, but to be honest, I am having no issues. I did disable pve-ha-lrm, pve-ha-crm, and corosync since I am running a single node. This machine has been running pretty much 24x7 for the last 6 months, and all my drives still report zero wearout. I don't run a lot of intensive workloads: Tracks, Leantime, OpenMediaVault (using the old spinning disks in a passthrough arrangement, running Btrfs), Home Assistant, WordPress, Nextcloud (using an NFS share for storage), Portainer, PhotoPrism, Grocy, Mealie, Monica, a couple of Cloudflare connectors for tunneling, and a couple of Debian 12 instances as my Docker hosts.
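For reference, this is roughly how I did those two things (drive paths are placeholders):

# Stop and disable the HA/cluster services on a standalone node
systemctl disable --now pve-ha-crm pve-ha-lrm corosync

# Check SSD wear via SMART (NVMe reports "Percentage Used"; SATA vendors use their own attribute names)
smartctl -a /dev/nvme0n1 | grep -i "percentage used"
smartctl -a /dev/sda | grep -i -E "wear|percent"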

Considering the age of my server, I am very pleased with how fast these SSDs are and also with overall system performance.

 
I know in general consumer grade SSDs cause some people concern, but to be honest, I am having no issues.
You could run pveperf /path/to/your/NvmeMirrorsMountpoint to see how well those NVMe SSDs perform. For comparison, this is a SATA SSD ZFS mirror: FSYNCS/SECOND: 5048.26
 
You could run pveperf /path/to/your/NvmeMirrorsMountpoint to see how well those NVMe SSDs perform. For comparison, this is a SATA SSD ZFS mirror: FSYNCS/SECOND: 5048.26
This pveperf benchmark makes no sense to me.

1x (Super cheap) Seagate 120 500GB SATA SSD (ZFS, single disk):
CPU BOGOMIPS: 39936.00
REGEX/SECOND: 7173061
HD SIZE: 447.09 GB (SSD_ZFS_SINGLE)
FSYNCS/SECOND: 1343.70
DNS EXT: 10.92 ms
DNS INT: 1001.56 ms

4x Samsung 990 Pro 2TB (ZFS RAID 10):
CPU BOGOMIPS: 121372.32
REGEX/SECOND: 5607263
HD SIZE: 1568.59 GB (rpool/ROOT/pve-1)
FSYNCS/SECOND: 553.21
DNS EXT: 10.03 ms
DNS INT: 1001.55 ms

1x (Ultra Cheap and Ultra Crap) Gigabyte 2242 NVMe 256GB (LVM):
CPU BOGOMIPS: 23961.60
REGEX/SECOND: 4420879
HD SIZE: 58.02 GB (/dev/mapper/pve-root)
BUFFERED READS: 818.36 MB/sec
AVERAGE SEEK TIME: 0.04 ms
FSYNCS/SECOND: 266.76
DNS EXT: 55.11 ms
DNS INT: 22.01 ms

2x (Noname + The cheapest + Oldest SATA Crap on the Planet) Nytro XF1230 500GB SATA SSD (ZFS Mirror):
CPU BOGOMIPS: 57600.00
REGEX/SECOND: 4169360
HD SIZE: 403.48 GB (rpool/ROOT/pve-1)
FSYNCS/SECOND: 7261.64
DNS EXT: 25.45 ms
DNS INT: 15.20 ms

I can continue, but that's from my 5 Proxmox servers.
I don't think that pveperf has much meaning at all; I mean, I know how the systems perform in real life...
how the SSDs perform in VMs/backups/cloning/booting/Windows GUI performance etc...

And those Nytro SSDs even outperform my 4x 990 Pro in RAID 10 by a factor of 13 according to the benchmark, lol xD
Sorry, this is really funny, I'm laughing my ass off xD
However, the real performance is not even close, not even night and day; it's like those Nytro SSDs are slow-as-crap HDDs and my 4x 990 Pros are absolutely blazing. I can't even find a word for how much faster the 990s are xD

So much for pveperf xD
Sorry Dunuin xD

Edit: What I mean is, I absolutely love the idea behind pveperf, it's greatly appreciated.
Just the results aren't comparable in any way.
Seems like there is still some work needed.
 
This is the report for the NVME mirrored drives:

root@pve:/dev/zvol/VMstorage# pveperf /dev/zvol/VMstorage
CPU BOGOMIPS: 124510.80
REGEX/SECOND: 3619091
HD SIZE: 31.31 GB (udev)
FSYNCS/SECOND: 81384.87
DNS EXT: 72.30 ms
DNS INT: 24.17 ms (home.louie)
root@pve:/dev/zvol/VMstorage#

This is the report for the SATA SSD mirrored drives:

root@pve:/rpool/ROOT# pveperf /rpool/ROOT/pve-1
CPU BOGOMIPS: 124510.80
REGEX/SECOND: 3551993
HD SIZE: 1820.32 GB (rpool/ROOT)
FSYNCS/SECOND: 1999.19
DNS EXT: 75.66 ms
DNS INT: 21.50 ms (home.louie)
root@pve:/rpool/ROOT#

I have no idea what is good or bad.
 
81384.87 on the NVMes is so high that it's probably fast as hell.
I wouldn't worry; I mean, as much as I don't care about pveperf, that number is so high that it's indeed probably insane :)

About the SSD speed, I can't say anything. Maybe it's okay, maybe it's the same as my 8-year-old Seagate Nytros for 20€ xD
 
2x (Noname + The cheapest + Oldest SATA Crap on the Planet) Nytro XF1230 500GB SATA SSD (ZFS Mirror):
The Nytro XF1230 has proper power-loss protection (PLP) and eMLC NAND, so the better sync writes compared to, for example, the 990 Pros (no PLP, only cheaper TLC NAND) totally make sense. From that point of view, those Nytros are the much better SSDs.
81384.87 on the NVMes is so high that it's probably fast as hell.
I wouldn't worry; I mean, as much as I don't care about pveperf, that number is so high that it's indeed probably insane :)

About the SSD speed, I can't say anything. Maybe it's okay, maybe it's the same as my 8-year-old Seagate Nytros for 20€ xD
But that was also a benchmark of a block device, as you benchmarked a zvol and not a filesystem like you did with the rpool. Benchmarking a filesystem (dataset) on the NVMes would probably be a bit slower. Still incredibly fast for SSDs without PLP, in case you didn't set your VMstorage to something like "sync=disabled".
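Something like this would show it and also benchmark the mounted dataset instead of the zvol device nodes (pool name taken from your output; adjust if your layout differs):

# Check whether sync writes are honored and where the pool is mounted
zfs get sync,mountpoint VMstorage

# Benchmark the filesystem (dataset) rather than /dev/zvol/...
pveperf /VMstorage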
 
OK, this may make more sense.

root@pve:/VMstorage# pveperf /VMstorage
CPU BOGOMIPS: 124510.80
REGEX/SECOND: 3580802
HD SIZE: 676.07 GB (VMstorage)
FSYNCS/SECOND: 2332.83
DNS EXT: 69.78 ms
DNS INT: 36.86 ms (home.louie)
 
The fastest disks are then still Ramalama's 8-year-old 20€ enterprise SATA SSDs :D
 
About the SSD speed, I can't say anything. Maybe it's okay, maybe it's the same as my 8-year-old Seagate Nytros for 20€ xD
Considering the quality of these drives (they are super cheap consumer drives that cost me $65 each), I am OK if they are the same as old Nytros. I mean, $65 for a 2TB SSD? That's crazy cheap. Almost the same price as a 2TB spinning disk.
 
Yeah, the problem with SSDs is that each year they get bigger and cheaper, but the writes also get slower and they fail sooner...
Those ancient SSDs with SLC NAND cells could be written 100,000 times before failing. With modern QLC NAND we are down to 1,000 P/E cycles. I like paying less for the same amount of storage, but SSDs failing 100x faster is also terrible...
 
The Nytro XF1230 has proper power-loss protection (PLP) and eMLC NAND, so the better sync writes compared to, for example, the 990 Pros (no PLP, only cheaper TLC NAND) totally make sense. From that point of view, those Nytros are the much better SSDs.

But that was also a benchmark of a block device, as you benchmarked a zvol and not a filesystem like you did with the rpool. Benchmarking a filesystem (dataset) on the NVMes would probably be a bit slower.
I don't want to argue; on paper the Nytros are great, that's why I bought them in the first place.

But in real life, I mean running a Windows VM on that storage, how snappy the VM is when opening apps, booting, etc...

And even using a small amount of that drive as a metadata partition for HDD storage makes a difference.

On my RAID 10 990 Pro setup I have a 256GB partition (because that RAID 10 is an mdadm array) used as metadata for a 120TB+ ZFS RAID-Z2 pool. Using that for Samba...

I do a similar thing with the Nytros, just without mdadm and not as a physical partition, because it's already ZFS...
However, it's the same idea, just with 50GB for a 2TB HDD mirror. Same use case for Samba...
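For anyone wanting to copy that metadata trick, attaching SSD/NVMe partitions as a ZFS special vdev looks roughly like this (pool name and partition paths are placeholders; the special vdev must be redundant, because losing it loses the whole pool):

# Add a mirrored special (metadata) vdev to an existing HDD pool
zpool add tank special mirror /dev/disk/by-id/nvme-SSD1-part3 /dev/disk/by-id/nvme-SSD2-part3

# Optionally let small blocks land on the fast special vdev as well
zfs set special_small_blocks=64K tank/samba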

The search result speed is way faster with the 990s.

However, that's all reads, i don't have that much write situations to campare the drives.
But in any real life usage situation i was till now, the difference is not even close between both.

I mean if you want numbers, tell me what you want me todo and i do it. But you should pick an usecases that are normal usecases, not just benchmarks.

What im finding extremely strange tho:
On a Windows VM, if i run there inside something like "AS-SSD" i get like 10-14g read speed and 5g write speed...
And everything else down to qdepth 32 is extremely good either.
If i do the same on the Proxmox host itself, I can't reach those numbers at all, not even 40% of them :)

However, in short, i didn't found any usecase or any situation, where the nytros were in any way better or faster, they perform here like HDD's, except in benchmarks, lol

Tho i didn't tested iops performance and compared both, maybe that's what i should try.

And thanks for the hint with the block device testing!
 
Yeah, the problem with SSDs is that each year they get bigger and cheaper, but the writes also get slower and they fail sooner...
Those ancient SSDs with SLC NAND cells could be written 100,000 times before failing. With modern QLC NAND we are down to 1,000 P/E cycles. I like paying less for the same amount of storage, but SSDs failing 100x faster is also terrible...
Btw, one of my 990 Pro 2TB drives failed after 1 month of runtime, with 0% wearout.

That has never happened that fast to me with any SSD before.
 
You really see the performance benefit of the Nytros when doing:
A.) async writes, not in short bursts but when writing big amounts of data (like restoring a 100GB VM) which the SLC/DRAM cache can't absorb. Those Samsungs will then drop down to bad performance while the Nytros should still be fine. It's more about predictability: I personally don't want an SSD that is sometimes very fast for a short time but terribly slow in other situations; I want an SSD that delivers good performance in all situations under all circumstances. You can compare that to GPUs, where a PC game might look great at 300 FPS, but frame drops still hurt the whole experience if the game stutters whenever there is too much action on the screen. Or you can compare it to cars, where an F1 car might be great on a racing track but will really suck on an old road full of potholes, while a rally car might not be as fast on a long smooth track but will also work great offroad.
B.) sync writes. Running databases or Ceph, for example, benefits greatly from good sync write performance, and ZFS itself also does some sync writes (a quick fio check is sketched below).
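A quick way to see B.) in numbers, assuming a test directory on the pool in question (the path is a placeholder):

# Every 4K write is followed by an fsync, which is where PLP/eMLC drives pull ahead
fio --name=syncwrite --ioengine=libaio --direct=1 --rw=write --bs=4k --fsync=1 --size=1G --directory=/path/to/pool/mountpoint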
 
You really see the performance benefit of the Nytros when doing:
A.) async writes, not in short bursts but when writing big amounts of data (like restoring a 100GB VM) which the SLC/DRAM cache can't absorb. Those Samsungs will then drop down to bad performance while the Nytros should still be fine. It's more about predictability: I personally don't want an SSD that is sometimes very fast for a short time but terribly slow in other situations; I want an SSD that delivers good performance in all situations. You can compare that to GPUs, where a PC game might look great at 300 FPS, but frame drops still hurt the whole experience if the game stutters whenever there is too much action on the screen.
B.) sync writes. Running databases, for example, benefits greatly from good sync write performance.
I just did a random mixed read/write IOPS test with fio:
Nytros: 123k read / 41.2k write (2x ZFS mirror)
990 Pro: 85k read / 28.4k write (4x ZFS RAID 10)
990 Pro: 302k read / 167.3k write (4x mdadm RAID 10)

I can test both because each 990 Pro has a 1TB partition where I do ZFS RAID 10 (with the partitions),
and each has a 750GB partition where I do mdadm RAID 10,
and each has a ~250GB partition where I do a ZFS RAID 10 as a special device for my 120TB HDD pool xD

1. What I find very strange is that mdadm outperforms the ZFS RAID by a factor of 4-8.
2. And the fio command completes about 5 times faster on the 990 Pros (ZFS RAID) than on the Nytros, which makes no sense to me either, since the IOPS are worse.

testcmd:
fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=fiotest --filename=testfio --bs=4k --iodepth=64 --size=1G --readwrite=randrw --rwmixread=75
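If I want the runs to be directly comparable, I can also pin the test file to each pool's mountpoint instead of whatever directory I happen to be in (paths are placeholders):

# Same job, run once per pool; fio leaves the test file behind, so remove it afterwards
fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=fiotest --directory=/nytro-pool --bs=4k --iodepth=64 --size=1G --readwrite=randrw --rwmixread=75
fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=fiotest --directory=/990-pool --bs=4k --iodepth=64 --size=1G --readwrite=randrw --rwmixread=75
rm /nytro-pool/fiotest.0.0 /990-pool/fiotest.0.0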

Back to your reply:
I already restored a 32GB Windows VM on both a few days ago. Don't ask me for exact numbers because I don't remember them, but the 990s were way faster there as well.
However, that's an easy test; I can simply redo it, check the difference, and share.
 

About

The Proxmox community has been around for many years and offers help and support for Proxmox VE, Proxmox Backup Server, and Proxmox Mail Gateway.
We think our community is one of the best thanks to people like you!

Get your subscription!

The Proxmox team works very hard to make sure you are running the best software and getting stable updates and security enhancements, as well as quick enterprise support. Tens of thousands of happy customers have a Proxmox subscription. Get yours easily in our online shop.

Buy now!