ZFS Raid 0

229Mick
Trying to create a RAID 0 ZFS pool on my first Proxmox box, but it doesn't offer RAID 0. Am I missing something, or is this not an option? Is there another way to get a RAID 0? Should I do hardware RAID 0?

Thanks for any input!

2020-07-18 21_43_18-pve01 - Proxmox Virtual Environment.png
 
Thanks for the reply and info! I guess I should have mentioned what I want to do and asked for advice: I have 4x 6TB drives in the physical machine, and I want to use those 4 drives in an OpenMediaVault VM as one 24TB drive for backing up my media server (which is why RAID 0 is OK for it). Is there an easier way to do that? Can I just add the four drives to the VM directly? I could then create the R0 in OpenMediaVault and be set...
Thoughts or suggestions are appreciated!
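Adding the drives to the VM directly is possible with PVE's disk passthrough via `qm set`. A minimal sketch, assuming the VM has ID 100 and using placeholder disk IDs (look up your real ones under /dev/disk/by-id):

```shell
# Find the stable by-id names of the four drives
ls -la /dev/disk/by-id/

# Attach each physical disk to VM 100 as an additional SCSI disk
# (VM ID 100 and the disk names below are assumptions for this sketch)
qm set 100 -scsi1 /dev/disk/by-id/ata-WDC_WD60EFRX_DISK1
qm set 100 -scsi2 /dev/disk/by-id/ata-WDC_WD60EFRX_DISK2
qm set 100 -scsi3 /dev/disk/by-id/ata-WDC_WD60EFRX_DISK3
qm set 100 -scsi4 /dev/disk/by-id/ata-WDC_WD60EFRX_DISK4
```

OMV would then see four raw disks and could build its own RAID 0 on top of them, at the cost of PVE having no visibility into that storage.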
 
Why was the RAID0 option never added to that dropdown list? There are users out there who are interested in this option, despite all the limitations RAID0 comes with.
 
Why was the RAID0 option never added to that dropdown list? There are users out there who are interested in this option, despite all the limitations RAID0 comes with.
My guess would be to prevent inexperienced users from total data loss. And for experienced users, who know raid and backup strategies well enough to make a good decision about whether a raid0 is a good option for the use case or not, it shouldn't be a big problem to run the three one-liners to create the raid0 pool and add it as storage to PVE:
Code:
ls -la /dev/disk/by-id
zpool create ...
pvesm add zfspool ...
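Filled in for a hypothetical 4-disk stripe, the three one-liners could look like this (pool name, storage ID, and disk names are placeholders, not from the thread):

```shell
# List stable disk paths; pick your drives by their by-id names
ls -la /dev/disk/by-id/

# Create a striped (raid0) pool: disks listed without "mirror"/"raidz"
# are simply striped together, so any single disk failure loses the pool
zpool create tank \
  /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2 \
  /dev/disk/by-id/ata-DISK3 /dev/disk/by-id/ata-DISK4

# Register the pool as a PVE storage for VM disks and container volumes
pvesm add zfspool tank-storage -pool tank -content images,rootdir
```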

Can't keep track of all the people here who create a raid0 or single disks because they want the most usable capacity or want it cheap, and then are shocked that all data is lost when a disk fails... and of course they also didn't create any backups, because that would "waste" even more capacity/money than a raidz1 or striped mirror.

Or people want to create something like a raid5 in the PVE installer (which, by the way, allows a raid0 when choosing "single disk" but selecting multiple disks) but forget to change the dropdown from "single disk" to "raidz1" and then wonder why all data is lost as soon as the first disk fails, not realizing they were actually running a raid0 without any redundancy the whole time... this just happened 3 weeks ago:
https://forum.proxmox.com/threads/1-ssd-failed-but-raidz-pool-failed.133521/#post-588719

Or another example, 2 months ago, where a user created a 7-disk raid0 and then wondered why all data was lost after one of the disks failed. He thought a ZFS stripe is like a JBOD rather than a raid0, and that if 1 disk failed the data on the other 6 disks would still be there. Of course no backups, because that would be too big...: https://forum.proxmox.com/threads/zfs-defekte-platte-ohne-raid.132134/

But it would be interesting to know why the staff actually made that decision.
 
I was able to create the RAID0 configuration, but having the option available in this dropdown would have saved me some time searching the internet for the details of how to do it. If added to that dropdown, the option could show a short warning when chosen, to make it clear to whoever selects it that they are at risk.
I know a lot of folks here don't recommend RAID0, but I'll give you a real-life scenario where RAID0 is actually recommended, in case you don't want to wait weeks for the job to finish: loading the planet OpenStreetMap data https://wiki.openstreetmap.org/wiki/Osm2pgsql/benchmarks#What_affects_import_time?
 
I don't know of any problem for which RAID0 would be a valid solution, and I concur with everything @Dunuin says.

I know a lot of folks here don't recommend RAID0, but I'll give you a real-life scenario where RAID0 is actually recommended, in case you don't want to wait weeks for the job to finish: loading the planet OpenStreetMap data https://wiki.openstreetmap.org/wiki/Osm2pgsql/benchmarks#What_affects_import_time?
This is a very good example of how not to do it... if you optimize Postgres for ZFS for the task at hand, you will get much better times even with an enterprise SSD without raid0 in ZFS. Most people just don't know how to properly do hardware or database optimizations. Especially with databases on ZFS, you WILL get a huge speed improvement with a proper low-latency SLOG device. Most tests on that page were done with non-enterprise SSDs... just look at this forum to see how well those perform... spoiler: terribly compared to enterprise SSDs.
 
In addition to an Optane as SLOG + lots of RAM for ARC + some fast enterprise NVMe SSDs as mirrored and striped "special" vdevs, a raid10 would also be an option for the same performance, unless you run out of PCIe lanes or drive bays. If you for example need 32TB of fast storage, you could get 8x 4TB NVMe SSDs and create a raid0, or get 16x 4TB NVMe SSDs and create a raid10. Then it's again only a question of whether you are willing to pay for the additional disks or a more capable hardware platform.
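The raid10 mentioned above is just striped mirror vdevs in ZFS terms. A sketch with four placeholder disk names (sixteen disks would follow the same pattern, with more mirror pairs):

```shell
# Each "mirror A B" group is one mirrored vdev; listing several of them
# stripes across the mirrors, i.e. a raid10 layout
zpool create fastpool \
  mirror /dev/disk/by-id/nvme-DISK1 /dev/disk/by-id/nvme-DISK2 \
  mirror /dev/disk/by-id/nvme-DISK3 /dev/disk/by-id/nvme-DISK4
```

This keeps the read/write striping of a raid0 while tolerating one disk failure per mirror pair, at the cost of half the raw capacity.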
 
I agree with you folks! And thanks for the info. My only problem is that what I'm working on is a personal project with a limited budget. I only have 1 system with 2 x XEON E5-2690 v2 available and 2 x 1 TB SSD disks (home grade). So I'm trying to squeeze the maximum out of it. I'm pretty much limited to using RAID0 at this moment, in order to gain extra performance on the disk side. And losing the data in case of a disk failure is acceptable for me.
 
FYI: I just bought a couple of 960 GB Samsung enterprise SSDs for 39 euros apiece, so "no budget" is often not a real reason in my eyes.

I only have 1 system with 2 x XEON E5-2690 v2 available and 2 x 1 TB SSD disks (home grade).
Just (another) two cents for a general advice for all reading this later:
  • I/O was, still is, and most probably always will be THE bottleneck for databases, and that's the part where most people "cheap out" (mostly without even knowing it, like the OP). Do your research. Just being an "SSD" or "NVMe SSD" says nothing about its actual real-world performance
  • The same with low-frequency CPUs. Most databases, or rather most database loads, are single-threaded, so having the FASTEST possible frequency will always be better than having more cores (also true for per-core licensing; fewer, more powerful cores are also a lot cheaper)
  • The same with more CPUs ... due to NUMA, your overall performance will in most cases be worse with more populated CPU sockets
  • Optimize your storage AND database system with respect to your desired workload, especially for ZFS
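For the last point, a few commonly tuned ZFS properties for a PostgreSQL dataset. These values are illustrative starting points under assumed defaults (pool name "tank" and the SLOG device path are placeholders), not universal recommendations:

```shell
# Match recordsize to the 8K PostgreSQL page size to reduce read/write
# amplification; lz4 compression is cheap and often a net win
zfs create -o recordsize=8K -o compression=lz4 -o atime=off tank/pgdata

# Prefer low latency for the intent log on sync-heavy database writes
zfs set logbias=latency tank/pgdata

# Add a fast, power-loss-protected SSD as SLOG to accelerate sync writes
zpool add tank log /dev/disk/by-id/nvme-OPTANE_SLOG
```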
 