ZFS "problem"

> When a raidz2 allows you to lose any 2 HDDs, it would be bad to have only two special device SSDs in a mirror, because the whole pool would be lost as soon as the second SSD starts failing

There's really not much chance of whole-pool failure if you have at least a mirror for the special device (and keep up with your pool health via the command line and/or alerts). With a single-disk special, yes, your pool is definitely at risk.

A known-good way to help prevent both disks in the mirror from failing in the same timeframe is to use SSDs from different manufacturers, or different models from the same product line (say a Samsung EVO and a PRO - the EVO will likely fail first). Just stay well away from the QVO.
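For reference, a minimal sketch of what keeping an eye on pool health from the command line can look like on the host (the pool name Plex matches the output below; the email address is a placeholder):

Code:
# print only pools with problems, or "all pools are healthy"
zpool status -x

# scrub regularly so latent errors on the special SSDs are caught early
zpool scrub Plex

# for mail alerts, enable the ZFS Event Daemon and set a recipient in /etc/zfs/zed.d/zed.rc:
# ZED_EMAIL_ADDR="admin@example.com"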
 
No need to delete any cache.
With this setting it has been running without any problems.

Code:
Plex                                        132T  13.1T   119T        -         -     0%     9%  1.00x    ONLINE  -
  raidz1-0                                 65.5T  5.86T  59.6T        -         -     0%  8.95%      -    ONLINE
    sde                                    16.4T      -      -        -         -      -      -      -    ONLINE
    sdf                                    16.4T      -      -        -         -      -      -      -    ONLINE
    sdg                                    16.4T      -      -        -         -      -      -      -    ONLINE
    sdh                                    16.4T      -      -        -         -      -      -      -    ONLINE
  raidz1-1                                 65.5T  7.23T  58.2T        -         -     0%  11.0%      -    ONLINE
    sdi                                    16.4T      -      -        -         -      -      -      -    ONLINE
    sdj                                    16.4T      -      -        -         -      -      -      -    ONLINE
    sdk                                    16.4T      -      -        -         -      -      -      -    ONLINE
    sdl                                    16.4T      -      -        -         -      -      -      -    ONLINE
special                                        -      -      -        -         -      -      -      -         -
  mirror-2                                  888G  14.4G   874G        -         -     1%  1.62%      -    ONLINE
    sdn                                     894G      -      -        -         -      -      -      -    ONLINE
    sdo                                     894G      -      -        -         -      -      -      -    ONLINE

With that much data I hope those special devices are reliable :P If not, I hope you have a good backup. I'm sure you do, though, with that much data lol
 
There's really not much chance of whole-pool failure if you have at least a mirror for the special device (and keep up with your pool health via the command line and/or alerts). With a single-disk special, yes, your pool is definitely at risk.
There is more to a 3-disk mirror or raidz2 than just being able to survive any 2 disks failing. With a 2-disk mirror or raidz1, once one disk fails the pool is still operable, but it is running in a degraded state without any redundancy, and ZFS won't be able to repair corrupted data. So even if the second disk doesn't fail before you have finished replacing the disk and resilvering the pool, your data is at risk.
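If you want the special vdev to tolerate two failures, just like a raidz2 data vdev, you can attach a third SSD to the existing mirror. A rough sketch, assuming the special mirror contains sdn/sdo as in the output above; the new device path is a placeholder and by-id paths should be used in practice:

Code:
# attach a third SSD to the existing special mirror (sdn is one of the current members)
zpool attach Plex sdn /dev/disk/by-id/ata-THIRD_SSD_SERIAL

# once the resilver finishes, the special vdev is a 3-way mirror
zpool status Plex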

Should be good. These are Intel S4610s. I will be adding 2 more to this pool in the next few days; just waiting for them to arrive.
Keep in mind that upgrading a raid1 to a raid10 won't make it more reliable; it only gives you more capacity and performance, and I don't think you need either of those yet.
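For reference, adding the two new SSDs as a second special mirror (the raid10 layout mentioned above) would look roughly like this; the device paths are placeholders:

Code:
# adds a second mirrored special vdev: more capacity and performance, same per-vdev redundancy
zpool add Plex special mirror /dev/disk/by-id/ata-NEW_SSD_1 /dev/disk/by-id/ata-NEW_SSD_2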
Why is it showing 3.83T of data when the folder is nearly empty?
My guess would be that you didn't check the "thin" checkbox when creating the ZFS storage, so everything is thick-provisioned?
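For reference, the "thin" checkbox corresponds to the sparse flag of the zfspool storage; a sketch of how this looks on the host, with the storage name Downloads taken from later in the thread:

Code:
# /etc/pve/storage.cfg - "sparse 1" is the "Thin provision" checkbox
zfspool: Downloads
        pool Plex/Downloads
        content images,rootdir
        sparse 1

# an already thick-provisioned zvol can be switched to thin by dropping its refreservation
zfs set refreservation=none Plex/Downloads/vm-103-disk-0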
 
I'm trying to understand the capacity that the Proxmox GUI and CLI are showing me.
Maybe someone can help me here.

I created a pool with 2 raidz1 vdevs.

Code:
Plex                                        132T  16.8T   115T        -         -     0%    12%  1.00x    ONLINE  -
  raidz1-0                                 65.5T  7.55T  57.9T        -         -     0%  11.5%      -    ONLINE
    sde                                    16.4T      -      -        -         -      -      -      -    ONLINE
    sdf                                    16.4T      -      -        -         -      -      -      -    ONLINE
    sdg                                    16.4T      -      -        -         -      -      -      -    ONLINE
    sdh                                    16.4T      -      -        -         -      -      -      -    ONLINE
  raidz1-1                                 65.5T  9.22T  56.3T        -         -     0%  14.1%      -    ONLINE
    sdi                                    16.4T      -      -        -         -      -      -      -    ONLINE
    sdj                                    16.4T      -      -        -         -      -      -      -    ONLINE
    sdk                                    16.4T      -      -        -         -      -      -      -    ONLINE
    sdl                                    16.4T      -      -        -         -      -      -      -    ONLINE
special                                        -      -      -        -         -      -      -      -         -
  mirror-2                                  888G  18.2G   870G        -         -     1%  2.04%      -    ONLINE
    sdn                                     894G      -      -        -         -      -      -      -    ONLINE
    sdo                                     894G      -      -        -         -      -      -      -    ONLINE



When I use zfs list, it shows me this:
Code:
Plex                          12.2T  82.8T   127K  /Plex
Plex/Downloads                4.34T  82.8T    96K  /Plex/Downloads
Plex/Downloads/vm-103-disk-0  4.34T  82.8T  4.34T  -
Plex/X1                       6.65G  82.8T    96K  /Plex/Filme
Plex/X1/vm-103-disk-0         6.65G  82.8T  6.65G  -
Plex/X2                       7.86T  82.8T    96K  /Plex/Serien
Plex/X2/vm-103-disk-0         7.86T  82.8T  7.86T  -
Plex/vm-103-disk-0              56K  82.8T    56K  -


Why is it showing me 4.34T of used space on Downloads? When I check with ncdu, the used space is 104G.
Is it normal that it shows vm-xxx-disk-0?


In the Proxmox GUI I get this:

Downloads
Code:
Usage 4.99% (4.78 TB of 95.81 TB)

X1
Code:
Usage 0.01% (7.14 GB of 91.04 TB)

X2
Code:
Usage 8.68% (8.65 TB of 99.68 TB)

the whole Pool
Code:
Usage 12.86% (13.43 TB of 104.46 TB)


If I use df, it shows me this:
Code:
/dev/sdd1                          102T  111G   97T   1% /home/ivans89/media/Downloads
/dev/sdb1                          102T   40K   97T   1% /home/ivans89/media/X1
/dev/sdc1                          102T  7.9T   89T   9% /home/ivans89/media/X2

Here it shows 111G under Downloads. Why is it showing 97T free for Downloads and X1 if all 3 share the same available storage pool?
Have I done something wrong here?
For the pool and all datasets I activated the thin checkbox.
 
Code:
Plex/Downloads               4.34T 82.8T   96K /Plex/Downloads
Plex/Downloads/vm-103-disk-0 4.34T 82.8T 4.34T -

The space is mostly in the VM disk under Downloads. Why is there a VM disk under there?

Can you post in CODE tags what the properties are?

Code:
zfs get all Plex/Downloads/vm-103-disk-0
 
Code:
Plex/Downloads               4.34T 82.8T   96K /Plex/Downloads
Plex/Downloads/vm-103-disk-0 4.34T 82.8T 4.34T -

The space is mostly in the VM disk under Downloads. Why is there a VM disk under there?

Can you post in CODE tags what the properties are?

Code:
zfs get all Plex/Downloads/vm-103-disk-0

I have no clue why a VM disk is under every dataset. I created the pool and vdevs, after that I created the datasets and passed them through to the VM, and mounted them in the media folders.

Here is the output:
Code:
NAME                          PROPERTY              VALUE                  SOURCE
Plex/Downloads/vm-103-disk-0  type                  volume                 -
Plex/Downloads/vm-103-disk-0  creation              Fri Feb 23 23:02 2024  -
Plex/Downloads/vm-103-disk-0  used                  4.45T                  -
Plex/Downloads/vm-103-disk-0  available             67.5T                  -
Plex/Downloads/vm-103-disk-0  referenced            4.45T                  -
Plex/Downloads/vm-103-disk-0  compressratio         1.00x                  -
Plex/Downloads/vm-103-disk-0  reservation           none                   default
Plex/Downloads/vm-103-disk-0  volsize               102T                   local
Plex/Downloads/vm-103-disk-0  volblocksize          64K                    -
Plex/Downloads/vm-103-disk-0  checksum              on                     default
Plex/Downloads/vm-103-disk-0  compression           on                     default
Plex/Downloads/vm-103-disk-0  readonly              off                    default
Plex/Downloads/vm-103-disk-0  createtxg             114                    -
Plex/Downloads/vm-103-disk-0  copies                1                      default
Plex/Downloads/vm-103-disk-0  refreservation        none                   default
Plex/Downloads/vm-103-disk-0  guid                  297000336366483664     -
Plex/Downloads/vm-103-disk-0  primarycache          all                    default
Plex/Downloads/vm-103-disk-0  secondarycache        all                    default
Plex/Downloads/vm-103-disk-0  usedbysnapshots       0B                     -
Plex/Downloads/vm-103-disk-0  usedbydataset         4.45T                  -
Plex/Downloads/vm-103-disk-0  usedbychildren        0B                     -
Plex/Downloads/vm-103-disk-0  usedbyrefreservation  0B                     -
Plex/Downloads/vm-103-disk-0  logbias               latency                default
Plex/Downloads/vm-103-disk-0  objsetid              296                    -
Plex/Downloads/vm-103-disk-0  dedup                 off                    default
Plex/Downloads/vm-103-disk-0  mlslabel              none                   default
Plex/Downloads/vm-103-disk-0  sync                  standard               default
Plex/Downloads/vm-103-disk-0  refcompressratio      1.00x                  -
Plex/Downloads/vm-103-disk-0  written               4.45T                  -
Plex/Downloads/vm-103-disk-0  logicalused           4.46T                  -
Plex/Downloads/vm-103-disk-0  logicalreferenced     4.46T                  -
Plex/Downloads/vm-103-disk-0  volmode               default                default
Plex/Downloads/vm-103-disk-0  snapshot_limit        none                   default
Plex/Downloads/vm-103-disk-0  snapshot_count        none                   default
Plex/Downloads/vm-103-disk-0  snapdev               hidden                 default
Plex/Downloads/vm-103-disk-0  context               none                   default
Plex/Downloads/vm-103-disk-0  fscontext             none                   default
Plex/Downloads/vm-103-disk-0  defcontext            none                   default
Plex/Downloads/vm-103-disk-0  rootcontext           none                   default
Plex/Downloads/vm-103-disk-0  redundant_metadata    all                    default
Plex/Downloads/vm-103-disk-0  encryption            off                    default
Plex/Downloads/vm-103-disk-0  keylocation           none                   default
Plex/Downloads/vm-103-disk-0  keyformat             none                   default
Plex/Downloads/vm-103-disk-0  pbkdf2iters           0                      default
 
I have no clue why a VM disk is under every dataset. I created the pool and vdevs, after that I created the datasets and passed them through to the VM, and mounted them in the media folders.
You cannot pass through a dataset; maybe you created a VM disk in that dataset, which may not be what you wanted to do. Mounting only works with LX(C) containers.

The disk image is thin-provisioned (reservation and refreservation are set to none), so you may want to go into your VM and check there how full the disk is. Please post a df -PHT from your Plex VM. fstrim -va will free up space if it hasn't already been run.

Please also post your VM configuration (a screenshot from the GUI or cat /etc/pve/qemu-server/<vmid>.conf).
 
You cannot pass through a dataset; maybe you created a VM disk in that dataset, which may not be what you wanted to do. Mounting only works with LX(C) containers.

The disk image is thin-provisioned (reservation and refreservation are set to none), so you may want to go into your VM and check there how full the disk is. Please post a df -PHT from your Plex VM. fstrim -va will free up space if it hasn't already been run.

Please also post your VM configuration (a screenshot from the GUI or cat /etc/pve/qemu-server/<vmid>.conf).


Sorry, I didn't mean passthrough. I added them as hard disks in the GUI.


df -PHT
Code:
Filesystem                        Type   Size  Used Avail Use% Mounted on
tmpfs                             tmpfs   37G  3.1M   37G   1% /run
/dev/mapper/ubuntu--vg-ubuntu--lv ext4   137G   27G  104G  21% /
tmpfs                             tmpfs  183G  386k  183G   1% /dev/shm
tmpfs                             tmpfs  5.3M     0  5.3M   0% /run/lock
/dev/sda2                         ext4   2.1G  264M  1.7G  14% /boot
/dev/sdd1                         ext4   112T   12G  107T   1% /home/ivans89/media/Downloads
/dev/sdb1                         ext4   112T   41k  107T   1% /home/ivans89/media/X1
/dev/sdc1                         ext4   112T  8.9T   98T   9% /home/ivans89/media/X2
tmpfs                             tmpfs   37G  4.1k   37G   1% /run/user/1000


After fstrim:
Code:
df -PHT
Filesystem                        Type   Size  Used Avail Use% Mounted on
tmpfs                             tmpfs   37G  3.1M   37G   1% /run
/dev/mapper/ubuntu--vg-ubuntu--lv ext4   137G   28G  104G  21% /
tmpfs                             tmpfs  183G  386k  183G   1% /dev/shm
tmpfs                             tmpfs  5.3M     0  5.3M   0% /run/lock
/dev/sda2                         ext4   2.1G  264M  1.7G  14% /boot
/dev/sdd1                         ext4   112T   10G  107T   1% /home/ivans89/media/Downloads
/dev/sdb1                         ext4   112T   41k  107T   1% /home/ivans89/media/X1
/dev/sdc1                         ext4   112T  8.9T   98T   9% /home/ivans89/media/X2
tmpfs                             tmpfs   37G  4.1k   37G   1% /run/user/1000


My config:
Code:
root@ivans89:~# cat /etc/pve/qemu-server/103.conf
agent: 1
balloon: 0
boot: order=scsi0;ide2;net0
cores: 32
cpu: EPYC-Milan
hostpci0: 0000:81:01.0,mdev=nvidia-664
ide2: none,media=cdrom
memory: 354800
meta: creation-qemu=8.1.2,ctime=1706380300
name: Swizzin
net0: virtio=BC:24:11:34:D9:6B,bridge=vmbr0,firewall=1
net1: virtio=BC:24:11:B5:1F:90,bridge=vmbr0,firewall=1
numa: 0
onboot: 1
ostype: l26
scsi0: local-zfs:vm-103-disk-0,iothread=1,size=140G,ssd=1
scsi1: X1:vm-103-disk-0,backup=0,iothread=1,size=104460G
scsi2: X2:vm-103-disk-0,backup=0,iothread=1,size=104460G
scsi3: Downloads:vm-103-disk-0,backup=0,iothread=1,size=104460G
scsi4: Plex:vm-103-disk-0,backup=0,iothread=1,size=104460G,ssd=1
scsihw: virtio-scsi-single
smbios1: uuid=0bf5b779-2892-4e12-9d1a-18192f92f38e
sockets: 1
startup: order=9
vmgenid: 0a604d19-99ae-4c20-bc15-bbb219382832

I think I had the same problem back then: when I put data on the VM disk and then moved that data to another folder (in a different pool), it did not seem to clear up or reclaim the space, even though I had thin provisioning and discard turned on... I had to move my data off and recreate the disk, and it was fine again. It looks like you put everything in the Downloads folder first and then move it to a different folder. Make sure you turn on discard on your VM disk; this reclaims the space from the disk.

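For reference, discard can also be enabled from the host CLI instead of the GUI; a sketch for the Downloads disk from the config above (the other scsi entries would be changed the same way, keeping their existing options):

Code:
# add discard=on to the existing disk options (qm set rewrites the whole option string)
qm set 103 --scsi3 Downloads:vm-103-disk-0,backup=0,discard=on,iothread=1

# after a guest restart, run fstrim inside the VM (or mount the filesystems with the discard option)
fstrim -va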
 
I think I had the same problem back then: when I put data on the VM disk and then moved that data to another folder (in a different pool), it did not seem to clear up or reclaim the space, even though I had thin provisioning and discard turned on... I had to move my data off and recreate the disk, and it was fine again. It looks like you put everything in the Downloads folder first and then move it to a different folder. Make sure you turn on discard on your VM disk; this reclaims the space from the disk.


Yes, I download everything to the Downloads folder and afterwards move it to the other folders.
I have turned discard on for all disks now. Do I need to move all the data again?
 
I'm trying to understand the capacity that the Proxmox GUI and CLI are showing me.
Maybe someone can help me here.

I created a pool with 2 raidz1 vdevs.

Code:
Plex                                        132T  16.8T   115T        -         -     0%    12%  1.00x    ONLINE  -
  raidz1-0                                 65.5T  7.55T  57.9T        -         -     0%  11.5%      -    ONLINE
    sde                                    16.4T      -      -        -         -      -      -      -    ONLINE
    sdf                                    16.4T      -      -        -         -      -      -      -    ONLINE
    sdg                                    16.4T      -      -        -         -      -      -      -    ONLINE
    sdh                                    16.4T      -      -        -         -      -      -      -    ONLINE
  raidz1-1                                 65.5T  9.22T  56.3T        -         -     0%  14.1%      -    ONLINE
    sdi                                    16.4T      -      -        -         -      -      -      -    ONLINE
    sdj                                    16.4T      -      -        -         -      -      -      -    ONLINE
    sdk                                    16.4T      -      -        -         -      -      -      -    ONLINE
    sdl                                    16.4T      -      -        -         -      -      -      -    ONLINE
special                                        -      -      -        -         -      -      -      -         -
  mirror-2                                  888G  18.2G   870G        -         -     1%  2.04%      -    ONLINE
    sdn                                     894G      -      -        -         -      -      -      -    ONLINE
    sdo                                     894G      -      -        -         -      -      -      -    ONLINE




Before you reboot that Proxmox host, make sure you export the pool and re-import it thusly:

' zpool import -a -f -d /dev/disk/by-id '

This will change your short drive letters to long form and avoid problems after a reboot if the drive names change order.
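A sketch of the full sequence on the host, with the pool name from this thread (stop the VMs that use the pool before exporting):

Code:
zpool export Plex
zpool import -a -f -d /dev/disk/by-id

# the vdevs should now show by-id names instead of sde/sdf/...
zpool status Plex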
 
Yes, I download everything to the Downloads folder and afterwards move it to the other folders.
I have turned discard on for all disks now. Do I need to move all the data again?

Not seeing your complete layout makes it hard to see the real problem. If you cannot find the real problem, I think the easiest and fastest thing you can do is recreate that Downloads dataset. It only has 111G, so moving the data off temporarily would only take a few minutes. I would say you are done recreating your Downloads dataset in less than an hour. Compare that to waiting for someone else to figure out what's wrong lol
 
I have turned discard on for all disks now. Do I need to move all the data again?
No, run fstrim -va in your guest (you need to restart the guest after enabling discard) and post the results. Afterwards, check whether the space usage decreases over time.
 
Here is the output from fstrim -va:
Code:
sudo fstrim -va
/home/ivans89/media/Downloads: 101.6 TiB (111693396992000 bytes) trimmed on /dev/sdd1
/home/ivans89/media/X1: 101.6 TiB (111715463028736 bytes) trimmed on /dev/sdb1
/home/ivans89/media/X2: 92.9 TiB (102181536239616 bytes) trimmed on /dev/sdc1
/boot: 1.6 GiB (1770582016 bytes) trimmed on /dev/sda2
/: 102.4 GiB (109955092480 bytes) trimmed on /dev/mapper/ubuntu--vg-ubuntu--lv

Seems this helped. Here is the zfs list:
Code:
Plex                          8.74T  71.3T   127K  /Plex
Plex/Downloads                62.6G  71.3T    96K  /Plex/Downloads
Plex/Downloads/vm-103-disk-0  62.6G  71.3T  62.6G  -
Plex/X1                       1.30G  71.3T    96K  /Plex/X1
Plex/X1/vm-103-disk-0         1.30G  71.3T  1.30G  -
Plex/X2                       8.67T  71.3T    96K  /Plex/X2
Plex/X2/vm-103-disk-0         8.67T  71.3T  8.67T  -


I activated a quota (80T) and this seems to show correctly too.
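For reference, such a quota can be set and checked from the host shell as well; a sketch with the 80T value from above:

Code:
zfs set quota=80T Plex
zfs get quota,used,available Plex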
 
I know this probably does not help you much, but for a 120TB ISO host I would use something like TrueNAS or Unraid instead.
Proxmox is a great hypervisor, but not a great NAS in my opinion.

Torrenting files into a VM, which stores them in a zvol and then even unrars them afterwards, can be painful with ZFS.
You would be better off downloading to some random 500GB SSD, unpacking there, and pushing the result to the NAS. Then the NAS gets one sequential write, without SYNC and SLOG.
 
I know this probably does not help you much, but for a 120TB ISO host I would use something like TrueNAS or Unraid instead.
Proxmox is a great hypervisor, but not a great NAS in my opinion.

Torrenting files into a VM, which stores them in a zvol and then even unrars them afterwards, can be painful with ZFS.
You would be better off downloading to some random 500GB SSD, unpacking there, and pushing the result to the NAS. Then the NAS gets one sequential write, without SYNC and SLOG.

I was thinking of this too: running TrueNAS in a VM on Proxmox. The problem is I can't pass through my HBA card to TrueNAS, because I have 24 HDDs/SSDs on this HBA over a SATA/SAS backplane.
 
