One "logical volume" for 2 disks using ZFS without RAID

Jaspergie

New Member
Sep 9, 2023
Good afternoon all,

I am building my home server to host a homelab, NAS, Nextcloud and Plex/Jellyfin (amongst other things). I want to use Proxmox on the host to create VMs and containers for the mentioned services. For the Proxmox OS and the VMs I have installed a 2TB M.2 SSD, using ZFS in single-disk mode (I have another M.2 slot which I might use for a second SSD in order to create a mirror setup).

Several of these services need storage for my files, movies, series, etc. I want to separate that from the OS disk, so I am planning to install two 8TB (secondhand enterprise) SSDs via 2 of the 6 SATA ports. I want to use these 2 disks (16TB in total) as one large storage medium using ZFS without any RAID (so the full 16TB is usable). I would like to "combine" the 2 disks into one "logical volume", which I can then link to a Turnkey Fileserver VM. There I want to create several folders for specific services: a folder for Plex, a folder for the NAS, a folder for Nextcloud, without having to specify on which physical disk each one is stored (so it is visible as one disk, and the software arranges under the hood on which hardware disk the data lands). For example, if the Plex folder grows larger than 8TB it also doesn't matter, because I just see one disk of 16TB.

Is this possible, and how can I do it? I can only find information that assumes RAID1 or RAIDZ configurations. When I try to add a ZFS pool using "Single disk", I can only select one physical disk and cannot combine two disks into one pool.

Extra: preferably I want to set it up so that I can later add another 8TB disk to "expand" the logical volume/disk when I am approaching the 16TB limit.

Could someone please help me out?

Cheers.
Jasper
 
Hi,
I have a "similar setup' but I took it up for a notch. I created a 2 node PVE cluster with only 256G NVME's where I run all the services you mentioned + dns, dhcp (pihole). I added RPI as a corosync device to have a total of 3 cluster nodes which is needed if one of PVE nodes fails . I Also run zfs replication, so this then becomes almost a HA setup :). I am using Lenovo Thinkcentre m900, i5-6500 8gb microsized desktop machines. This works quite well to be honest and it never failed. For storage I decided to go abit safer way so I have a 4bay qnap server running Raid5 setup with 4x4 T disks.

You can create a pool with a single disk and then just add another disk to the pool; they are not treated as RAID0, but more or less as JBOD afaik.

I did a quick test:

Code:
zpool create tank /dev/sdb /dev/sdc
root@k3s-controller:~# zpool status
  pool: tank
 state: ONLINE
config:

    NAME        STATE     READ WRITE CKSUM
    tank        ONLINE       0     0     0
      sdb       ONLINE       0     0     0
      sdc       ONLINE       0     0     0

errors: No known data errors
root@k3s-controller:~# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/root        31G  1.1G   30G   4% /
tmpfs           493M     0  493M   0% /dev/shm
tmpfs           197M  532K  197M   1% /run
tmpfs           5.0M     0  5.0M   0% /run/lock
/dev/sda15      105M  6.1M   99M   6% /boot/efi
tmpfs            99M  4.0K   99M   1% /run/user/1000
tank             15G  128K   15G   1% /tank
root@k3s-controller:~# zpool add tank /dev/sdd
root@k3s-controller:~# zpool status
  pool: tank
 state: ONLINE
config:

    NAME        STATE     READ WRITE CKSUM
    tank        ONLINE       0     0     0
      sdb       ONLINE       0     0     0
      sdc       ONLINE       0     0     0
      sdd       ONLINE       0     0     0

errors: No known data errors
root@k3s-controller:~# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/root        31G  1.1G   30G   4% /
tmpfs           493M     0  493M   0% /dev/shm
tmpfs           197M  540K  197M   1% /run
tmpfs           5.0M     0  5.0M   0% /run/lock
/dev/sda15      105M  6.1M   99M   6% /boot/efi
tmpfs            99M  4.0K   99M   1% /run/user/1000
tank             22G  128K   22G   1% /tank

As you can see, I created a pool of 2 disks (8G each) at the beginning (/dev/sdb and /dev/sdc) and later added another disk (/dev/sdd), and the pool just grew. Although you can do that, it is not advised, because if one disk fails you lose all the data.

Hope this helps.
 
Hi!

Thank you for your answer! Ah, so it is possible via the command line (and quite easy), just not via the web interface. Awesome!

I was indeed able to create a pool with a single disk and then just add the other disk to it.

Regarding "They are not treated as Raid0, but more or less JBOD afaik" and "Although you can do that it is not advised as if one disk fails, you loose all the data." > I think the idea of JBOD is that one disk can fail and only the data of that disk is lost, instead of Raid0/Striping, where the everything is lost if one disk fails, right? Anyway, to protect against "losing" data I will configure back-ups using PBS and back-up all data, in case of a failing disk (low/not frequent chance) I will be able to restore back-ups. It doesn't need to be restored quickly, so I don't see the need to make my setup more complex (and losing some TB storage) by adding redundancy via a RAID configuration.

Thank you for the help. I'll now try to configure my fileserver with the pool I created and make it available to my services (via SMB/NFS etc.).
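Roughly what I'm planning for the layout (dataset and storage names are just examples):

Code:
# one dataset per service on the pool
zfs create tank/plex
zfs create tank/nas
zfs create tank/nextcloud

# optionally register the pool in Proxmox for VM/CT disks
pvesm add zfspool tank-vm --pool tank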

Cheers,
Jasper
 
You can use mount points so that CTs/VMs on the same host access the storage directly, so no Samba/NFS is necessary, but you can also add that if needed. I would not be so sure that if one of the disks in a JBOD-style volume fails you will still be able to access any of the data in that pool. I can try to emulate this, but I am quite sure all data will be lost if one of the disks fails. I would recommend putting the two disks in a mirror and then, when more space is needed, adding two more disks as another mirrored vdev, so the pool grows seamlessly. I know you are losing half of the capacity, but it is much safer. Maybe you could go with RAIDZ1 and 3 disks, which would give you 16TB and one disk can die, but if you want to properly expand it you would need 2 or 3 more disks at that time to keep redundancy.
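As a rough sketch (device names, CT ID and paths are only placeholders):

Code:
# start with a mirrored pair
zpool create tank mirror /dev/sdb /dev/sdc

# later: grow the pool by adding a second mirrored pair
zpool add tank mirror /dev/sdd /dev/sde

# bind-mount a dataset straight into a container instead of SMB/NFS
pct set 101 -mp0 /tank/media,mp=/mnt/media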

One of my pools, with two-disk redundancy (RAIDZ2) in each disk group:

Code:
zpool status
  pool: rpool
 state: ONLINE
  scan: scrub repaired 0B in 16:35:31 with 0 errors on Sun Oct  8 16:59:33 2023
config:

    NAME                                                STATE     READ WRITE CKSUM
    rpool                                               ONLINE       0     0     0
      raidz2-0                                          ONLINE       0     0     0
        scsi-2001b4d2088814c00-part3                    ONLINE       0     0     0
        scsi-2001b4d2003000000-part3                    ONLINE       0     0     0
        scsi-2001b4d2078e94c00-part3                    ONLINE       0     0     0
        scsi-2001b4d20788a7900-part3                    ONLINE       0     0     0
        scsi-2001b4d20c87a6400-part3                    ONLINE       0     0     0
        scsi-2001b4d2078687400-part3                    ONLINE       0     0     0
      raidz2-1                                          ONLINE       0     0     0
        scsi-2001b4d20786d7400                          ONLINE       0     0     0
        scsi-2001b4d2058584800                          ONLINE       0     0     0
        scsi-2001b4d20786a4c00                          ONLINE       0     0     0
        scsi-2001b4d2058377100                          ONLINE       0     0     0
        scsi-2001b4d20787a7500                          ONLINE       0     0     0
        scsi-2001b4d2008472500                          ONLINE       0     0     0
      raidz2-2                                          ONLINE       0     0     0
        scsi-2001b4d2048834600                          ONLINE       0     0     0
        scsi-2001b4d2018654700                          ONLINE       0     0     0
        scsi-2001b4d20785d7000                          ONLINE       0     0     0
        scsi-2001b4d20786c7600                          ONLINE       0     0     0
        scsi-2001b4d2058854600                          ONLINE       0     0     0
        scsi-2001b4d20d8236a00                          ONLINE       0     0     0
      raidz2-3                                          ONLINE       0     0     0
        scsi-2001b4d2058b54500                          ONLINE       0     0     0
        scsi-2001b4d2078443100                          ONLINE       0     0     0
        scsi-2001b4d2088704b00                          ONLINE       0     0     0
        scsi-2001b4d20a80b6400                          ONLINE       0     0     0
        scsi-2001b4d2058324500                          ONLINE       0     0     0
        scsi-2001b4d20787a7900                          ONLINE       0     0     0
    logs
      mirror-4                                          ONLINE       0     0     0
        ata-WD_Blue_SA510_2.5_500GB_222944801862-part1  ONLINE       0     0     0
        ata-WD_Blue_SA510_2.5_500GB_222944801832-part1  ONLINE       0     0     0
    cache
      sda3                                              ONLINE       0     0     0
      sda4                                              ONLINE       0     0     0

I also had a JBOD-style pool running forever and it never failed, but I knew the data on it was not important, so I did not care; nowadays I always go for redundancy.
 
You can create a pool with a single disk and then just add another disk to the pool; they are not treated as RAID0, but more or less as JBOD afaik.
Code:
zpool create tank /dev/sdb /dev/sdc

There is no JBOD in ZFS.
What you are doing there is creating a stripe (RAID0).
 
@Neobin, thanks for the clarification. I don't use this mode much as it is unsafe anyway, but I was wondering: how does the pool know how to stripe properly over 2 disks if I create it with one disk and then add a second when the first is, let's say, half full? I once created a pool with only one disk and added another later on; on top of that, the second disk was bigger, and the pool size grew by the size of that disk.
 
He explains it really well and even demonstrates it at around the middle of the video:
https://youtu.be/11bWnvCwTOU

In short:
ZFS distributes (newly) written data across all VDEVs in a pool based on the available capacity of each VDEV.
So a VDEV with more available capacity will receive more of the new writes than a VDEV with less available capacity.
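You can see this per VDEV with (pool name is just an example):

Code:
# shows allocated and free space per VDEV; the emptier VDEV receives more of the new writes
zpool list -v tank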
 
Thanks both for your answers

@jedo19 Yeah, I think you are right in my scenario: with a plain JBOD the disks are treated as single disks, and if one fails, the other disks and the data on them are just fine. However, if you use JBOD and make one single logical volume out of the physical disks, then one failed disk means failure of the whole logical volume. Regarding "For storage I decided to go a bit safer" and "I also had a JBOD pool running forever and it didn't fail, but I knew that data is not important so I did not care, but nowadays I always go for redundancy": for me a RAID config is not really about "safety", but about redundancy, and therefore only about "availability" or "uptime" of my data (which is not important to me). For data "safety" I have set up backups (which for me is the only proper way to secure your data). Your comment actually helped me a lot; I have now set up my ZFS pool as I wanted. Thanks!
 