How to organise drives (passthrough, snapraid, etc)

SpinningRust

Active Member
Sep 25, 2019
Hello dear forum,

I am still quite new here and have only lurked before, but now I have run into a problem that I can't seem to solve.

I plan to build a server with Proxmox as the main OS and OMV (openmediavault) as a NAS VM. (Specs, in case they're relevant: 8 GB RAM, 250 GB M.2 PCIe SSD, ASRock X370M Pro4, Ryzen 5 2400G.)

Right now I have a 10 TB WD Red with around 400 GB of data.
I really wanted to use ZFS as the filesystem, but from what I've read online and been told on other forums, it's not that easy to just add drives to a ZFS pool, or to create a ZFS RAID later on (please correct me if I'm mistaken here).

I can't buy two or even three drives at once because $$$, so I plan to expand slowly.
This is where OMV comes into play.
My NAS right now is a Banana Pi M1 running OMV, and as wobbly as it sometimes feels, it seems easy enough to set up SnapRAID there and add more drives one after another.

To my actual question:

Can I set up Proxmox so that OMV sees the whole drive, sets up SnapRAID, and handles the file sharing, while I can still use the drive under Proxmox for ISO and VM/Docker/LXD storage?
I have zero problems with the terminal, but my experiences with drives, partitions, etc. on the CLI have almost always been bad, so I'd like a GUI for SnapRAID.

TL;DR: can I use my drive with Proxmox AND a VM, so that the VM can protect both its own data and the Proxmox data?

Any help would be greatly appreciated, since I am unable to test the whole setup beforehand...

Dear regards :D
 
I really wanted to use ZFS as the filesystem, but from what I've read online and been told on other forums, it's not that easy to just add drives to a ZFS pool, or to create a ZFS RAID later on (please correct me if I'm mistaken here).

I'm pleased to correct you: ZFS can
- grow a pool with additional drives (another vdev, optionally with a RAID configuration) to make it bigger
- turn a single-disk vdev into a mirrored one (i.e. RAID1)
- there is also support for removing drives upstream

In general it is very flexible, but converting e.g. a RAID5 to a RAID6 is not easy. It is not easy on other software RAID implementations or on hardware RAID either; some support it, some don't.

Can I set up Proxmox so that OMV sees the whole drive, sets up SnapRAID, and handles the file sharing, while I can still use the drive under Proxmox for ISO and VM/Docker/LXD storage?

You can of course, but you won't have ZFS then. Proxmox VE is a virtualisation environment (hence the name), so it normally virtualises everything, including storage. If you pass devices through directly, you lose features like snapshots, and you cannot share drives directly; you have to set up sharing over the network.

If you partition your drive, you can split it between PVE and OMV, but I recommend just using ZFS with PVE on your host and giving the space to OMV running in an LXC. I also recommend setting up RAID. In the beginning I'd start with two drives in a mirrored setup, so that you can lose one disk and still have all your data; later on you can add two more disks and extend your zpool with another mirrored vdev. The downside is that you only get half your total capacity. If you start with four drives right away, you can use RAIDZ1, keep 3/4 of your total capacity, and still survive losing one drive.
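The "ZFS on the host, space handed to OMV in an LXC" route usually boils down to a dataset plus a bind mount. A minimal sketch, assuming a pool named tank, two example disks, and container ID 101 (all names are placeholders, not a prescription):

```shell
# On the PVE host: create a mirrored pool from two whole disks
# (device names are examples; check yours with lsblk)
zpool create tank mirror /dev/sdb /dev/sdc

# A dedicated dataset for the NAS data
zfs create tank/share

# Bind-mount the dataset into the OMV container (ID 101 assumed);
# it appears inside the container at /mnt/share
pct set 101 -mp0 /tank/share,mp=/mnt/share
```

OMV (or plain Samba) then shares /mnt/share over the network, while the host keeps all ZFS features such as snapshots and scrubs.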

There are a lot of posts about OMV in this forum, so feel free to search, also in combination with ZFS.
 
Hi, thank you for your reply!

So, let's say I set up the OS drive with ZFS, plus my data drive (migrating the data is OK), and buy a second HDD later. Can I still make a ZFS RAID out of the two drives, without formatting?
If yes, can I "level it up" to a RAIDZ2/3 once I have more drives, without formatting again and again?

Or do I then have to maintain more and more mirrored pools and buy two drives at a time?

My choice originally fell on SnapRAID because it's so flexible about adding drives...
(I hope I'm making sense here...)

Dear regards :D
 
Can I still make a ZFS RAID out of the two drives, without formatting?
That's called "a mirror", and it is possible. Internally your vdev will be converted from a single disk to a mirrored setup, not to a RAIDZ1.

If yes, can I "level it up" to a RAIDZ2/3 once I have more drives, without formatting again and again?

No, you cannot convert between RAIDZ levels. As already mentioned, only very costly hardware controllers can do that, if at all.

Or do I then have to maintain more and more mirrored pools and buy two drives at a time?

ZFS works like this:
- the smallest entity is a vdev
- a pool contains one or more vdevs
- a vdev consists of at least one drive
- a vdev can have multiple drives in a RAID configuration such as mirror, raidz1, raidz2 or raidz3
- you can convert a single-drive vdev into a mirrored configuration by adding disks (2-way and higher)

So, for example, you start with one drive (and one vdev) and add another to get fault tolerance in a mirrored setup (still one vdev, but now two drives). Later, when you decide to grow your pool, you add another vdev (with at least one drive); to stay fault tolerant, you add two drives in a mirrored setup.

See this example:

Code:
root@zfs-howto /tmp > for i in 1 2 3 4; do dd if=/dev/zero bs=1M count=100 of=disk-$i; done
100+0 records in
100+0 records out
104857600 bytes (105 MB, 100 MiB) copied, 0.0851422 s, 1.2 GB/s
100+0 records in
100+0 records out
104857600 bytes (105 MB, 100 MiB) copied, 0.0827696 s, 1.3 GB/s
100+0 records in
100+0 records out
104857600 bytes (105 MB, 100 MiB) copied, 0.0904513 s, 1.2 GB/s
100+0 records in
100+0 records out
104857600 bytes (105 MB, 100 MiB) copied, 0.0899194 s, 1.2 GB/s

root@zfs-howto /tmp > zpool create test /tmp/disk-1

root@zfs-howto /tmp > zpool status -v test
  pool: test
 state: ONLINE
  scan: none requested
config:

        NAME           STATE     READ WRITE CKSUM
        test           ONLINE       0     0     0
          /tmp/disk-1  ONLINE       0     0     0

errors: No known data errors

root@zfs-howto /tmp > zpool attach test /tmp/disk-1 /tmp/disk-2

root@zfs-howto /tmp > zpool status -v test
  pool: test
 state: ONLINE
  scan: resilvered 82.5K in 0h0m with 0 errors on Thu Sep 26 13:46:46 2019
config:

        NAME             STATE     READ WRITE CKSUM
        test             ONLINE       0     0     0
          mirror-0       ONLINE       0     0     0
            /tmp/disk-1  ONLINE       0     0     0
            /tmp/disk-2  ONLINE       0     0     0

errors: No known data errors

root@zfs-howto /tmp > zpool add test mirror /tmp/disk-[34]

root@zfs-howto /tmp > zpool status -v test
  pool: test
 state: ONLINE
  scan: resilvered 82.5K in 0h0m with 0 errors on Thu Sep 26 13:46:46 2019
config:

        NAME             STATE     READ WRITE CKSUM
        test             ONLINE       0     0     0
          mirror-0       ONLINE       0     0     0
            /tmp/disk-1  ONLINE       0     0     0
            /tmp/disk-2  ONLINE       0     0     0
          mirror-1       ONLINE       0     0     0
            /tmp/disk-3  ONLINE       0     0     0
            /tmp/disk-4  ONLINE       0     0     0

errors: No known data errors
 

Hi, sorry for the late reply and all the noob questions xD


So, I can just add one drive after another as a vdev in a pool, and the next one is added as another vdev and becomes the parity, or the other drive becomes the parity and the new drive becomes the data drive?
So when I add a third drive, I still have to add a fourth, because they mirror each other?
And since they're all in one pool, I can pass the (growing) space through to VMs like OMV, right?
(Can the drive adding etc. be done in the web GUI?)

Then only two questions remain for me: right now I have a 10 TB drive and want to buy an 8 TB drive next. I'd be wasting 2 TB of space there, if I understand correctly; or do you have any recommendation other than buying a second 10 TB drive?
And would it make more sense to just let Proxmox handle the file sharing via SMB/NFS/etc. and dockerize the other stuff I use in OMV, rather than running it as a VM?

Sorry for asking so much, but I just can't seem to find a good solution based on what I find online...

Thanks in advance :D
 
So, I can just add one drive after another as a vdev in a pool, and the next one is added as another vdev and becomes the parity, or the other drive becomes the parity and the new drive becomes the data drive?

In my example with the mirroring, there is no parity. The data is just written twice.

So when I add a third drive, I still have to add a fourth, because they mirror each other?

If you want fault tolerance, yes. It will also work with only one drive.

And since they're all in one pool, I can pass the (growing) space through to VMs like OMV, right?

Yes, just resize the disk in the GUI.
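The GUI resize also has a CLI equivalent; a hedged one-liner, where the VM ID 100 and the disk name scsi0 are assumptions for illustration:

```shell
# Grow the virtual disk of VM 100 by 50 GiB; afterwards the
# filesystem inside the guest still has to be grown as well
# (e.g. with resize2fs on ext4)
qm resize 100 scsi0 +50G
```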

(Can the drive adding etc. be done in the web GUI?)

Good question, I've never tried.

Then only two questions remain for me: right now I have a 10 TB drive and want to buy an 8 TB drive next. I'd be wasting 2 TB of space there, if I understand correctly; or do you have any recommendation other than buying a second 10 TB drive?

If you currently have one 10 TB drive and want to buy another, you have to decide what you want:
- fault tolerance: buy another 10 TB drive, add it to the mirror (see my previous post) and you have a fault-tolerant 10 TB zpool
- increased capacity: then you can buy whatever you want, but preferably also a 10 TB drive, for better file distribution.

Personally, I'd go with fault tolerance.
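The 8 TB vs. 10 TB question can be checked with the same file-backed trick as in the earlier example: a mirror is only as large as its smallest member, so pairing a 10 TB with an 8 TB drive would give an 8 TB mirror. A small sketch with scaled-down sizes (the pool name demo is a placeholder):

```shell
cd /tmp
# Two "disks" of different sizes
dd if=/dev/zero bs=1M count=100 of=small-disk
dd if=/dev/zero bs=1M count=200 of=big-disk

# The mirror's capacity is dictated by the smaller member
zpool create demo mirror /tmp/small-disk /tmp/big-disk
zpool list demo      # SIZE reflects the ~100 MiB disk, not the 200 MiB one
zpool destroy demo
```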

And would it make more sense to just let Proxmox handle the file sharing via SMB/NFS/etc. and dockerize the other stuff I use in OMV, rather than running it as a VM?

If you do everything in OMV anyway, why not skip PVE entirely? Concerning Docker, I'd want at least two complete Docker setups, one for testing and one for production. Nowadays you can dockerize almost everything, so why not. I'd stick with ZFS; Docker with ZFS is also great!
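On the "Docker with ZFS" point: Docker ships a native zfs storage driver that is enabled in the daemon config. A minimal sketch, assuming /var/lib/docker already sits on a ZFS dataset:

```shell
# Tell Docker to use its ZFS storage driver
# (requires /var/lib/docker to live on a ZFS dataset)
cat > /etc/docker/daemon.json <<'EOF'
{
  "storage-driver": "zfs"
}
EOF
systemctl restart docker
```

With this driver, image layers and container filesystems become ZFS datasets, so they benefit from checksumming and snapshots like the rest of the pool.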
 
Hi!
Thanks, and sorry for the late reply.

To conclude the drive dilemma: I think I'll let Proxmox handle the ZFS pool, and I'll just have to mangle myself through mirrors...
And buy a second 10 TB :/ (poor wallet, but maybe Black Friday has something in stock this year ^^)


And as to why not skip Proxmox:

1. I don't want to, because:

- curiosity
- I want to learn things
- virtualisation is handled way better than in OMV (surprise, surprise xD)

pfSense is going to run in Proxmox for me, along with some other stuff next to Docker etc.

2. OMV feels kinda sluggish... Proxmox seems more stable.

Plus: I only use file sharing and the downloader plugin in OMV, so maybe I'll try to containerise the downloader stuff and omit OMV at some point. Sometimes it's just easier to have a web interface for quickly downloading a single vid than opening an SSH app on your phone...

thanks for your answers and patience!
If anyone else stumbles over this thread and thinks something is left out, feel free to join the discussion :D

Have a nice day/night ^^
 
Forget about OMV.
Run Proxmox with a TurnKey LXC as a file server, and an LXC with a downloader like Deluge or Transmission or whatever.
I run Proxmox with ZFS for the system on mirrored SSDs,
plus a mirrored 2 TB pool for VMs and containers,
and a data pool for all my shared data.
I run an Emby LXC with a bind mount to the data pool,
a Jellyfin LXC with a bind mount to the same pool,
and a JDownloader LXC with a bind mount to, you guessed it, the same pool.
Next on the list: a TurnKey LXC for the file server.
Works well.
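For reference, a bind mount like the ones described here shows up as an mpX entry in the container's config. A sketch, where the container ID 105, the pool path /datapool/media and the in-container path /mnt/media are all placeholders:

```shell
# Added from the host with:
pct set 105 -mp0 /datapool/media,mp=/mnt/media

# ...which ends up in /etc/pve/lxc/105.conf as a line like:
#   mp0: /datapool/media,mp=/mnt/media
```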
 
Hey, thanks for the input!

Actually, I think I'll do exactly this and let Proxmox handle only the ZFS pools. There really seem to be some decent LXC containers, and I'll dockerise the rest, like the downloader stuff.

Do you know if I can turn the boot SSD into a mirror later on, too? I got an M.2 SSD so as not to waste a SATA port, but I'm kinda scared of burning it out too quickly.
 
Hi JohnTanner, how has your overall experience been?
 

Hello,
Thanks for asking!

In the end I decided on a different motherboard, but other than that, my experience with Proxmox has been great so far (I even recommended it to three of my friends, and two of them now have their own instance running).

As for file sharing, I made an Ubuntu LXC container and installed Samba on it. Low-level, easy, reliable.
Right now my storage still has enough space, but once I have to expand, I'll use the example above with zpool add.

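For anyone following along, a minimal Samba share of that kind can be sketched like this (the share name, path and user are placeholders, not the actual setup used here):

```shell
# Inside the Ubuntu LXC: append a share definition and reload Samba
cat >> /etc/samba/smb.conf <<'EOF'
[share]
   path = /mnt/share
   browseable = yes
   read only = no
   valid users = sambauser
EOF
smbpasswd -a sambauser   # give the user a Samba password
systemctl restart smbd
```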
Thank you everyone for helping me out, and thanks again for asking me :D

Have a nice weekend
 
