Giganic

Hi All,

Hoping to gain a better understanding of why I'm seeing a difference in performance running the same fio command in different contexts.

A little background: my pool originally consisted of 4 vdevs, each a mirror of 2x 3TB drives. However, due to a failing disk in mirror-3 and wanting to increase my storage capacity for the future, I ended up replacing both of its 3TB drives with 10TB drives instead.

Recently I've been trying to test whether my storage is performing as expected for spinning rust. I've come to learn that while I've increased my total capacity, the pool is currently imbalanced. I understand that ZFS distributes new writes across the vdevs in a ratio that favors the vdev with the most free space, so that the pool's allocation eventually evens out.
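For reference, the per-vdev allocation that shows this imbalance can be listed with something like the command below (the zpool iostat output further down paints the same picture); tank is my pool name.
Code:
# show allocated/free space and capacity per vdev
zpool list -v tank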

But this has taken me down a rabbit hole of trying to understand why I'm seeing the numbers I am. Each of the tests I've run is below. The numbers I get from my first lxc, which was created on the original vdev configuration, are nearly identical to those from the pool root; however, running the same command in a newly created lxc the performance is vastly different.

I've also run each fio test both inside the lxc and on its subvol folder on my zfs pool, with the results being within a margin of error of each other.

So in summary, the questions I'm hoping to have answered are as follows:
  1. Is the imbalance in my pool the cause of the differing fio tests?
  2. Why is only one of my original lxc containers able to match the performance of the root pool?
  3. What can be done to improve my spinning rust performance?
Thank you in advance for any help or suggestions. If there is any further information I can provide to assist, please let me know.


fio command:
Code:
fio --name=test --size=5g --rw=write --ioengine=posixaio --direct=1 --bs=1m

zfs pool:
Running 4 mirrored vdevs.
Code:
  pool: tank
 state: ONLINE
status: Some supported and requested features are not enabled on the pool.
        The pool can still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
        the pool may no longer be accessible by software that does not support
        the features. See zpool-features(7) for details.
  scan: scrub repaired 0B in 12:50:55 with 0 errors on Sun Feb  6 12:50:57 2022
config:

        NAME                                          STATE     READ WRITE CKSUM
        tank                                          ONLINE       0     0     0
          mirror-0                                    ONLINE       0     0     0
            ata-WDC_WD30EFRX-REDACTED                 ONLINE       0     0     0
            ata-WDC_WD30EFRX-REDACTED                 ONLINE       0     0     0
          mirror-1                                    ONLINE       0     0     0
            ata-WDC_WD30EFRX-REDACTED                 ONLINE       0     0     0
            ata-WDC_WD30EFRX-REDACTED                 ONLINE       0     0     0
          mirror-2                                    ONLINE       0     0     0
            ata-WDC_WD30EFRX-REDACTED                 ONLINE       0     0     0
            ata-WDC_WD30EFRX-REDACTED                 ONLINE       0     0     0
          mirror-3                                    ONLINE       0     0     0
            ata-WDC_WD100EFAX-REDACTED                ONLINE       0     0     0
            ata-WDC_WD100EFAX-REDACTED                ONLINE       0     0     0

errors: No known data errors

Running fio on the root of my pool:

Code:
test: (g=0): rw=write, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=posixaio, iodepth=1
fio-3.25
Starting 1 process
test: Laying out IO file (1 file / 5120MiB)
Jobs: 1 (f=1): [W(1)][100.0%][w=221MiB/s][w=221 IOPS][eta 00m:00s]
test: (groupid=0, jobs=1): err= 0: pid=974661: Tue Feb  8 17:10:41 2022
  write: IOPS=609, BW=609MiB/s (639MB/s)(5120MiB/8404msec); 0 zone resets
    slat (usec): min=7, max=160, avg=22.84, stdev= 9.90
    clat (usec): min=134, max=33390, avg=1616.04, stdev=1687.71
     lat (usec): min=152, max=33434, avg=1638.87, stdev=1691.40
    clat percentiles (usec):
     |  1.00th=[  147],  5.00th=[  163], 10.00th=[  239], 20.00th=[  251],
     | 30.00th=[  265], 40.00th=[  289], 50.00th=[  396], 60.00th=[ 2114],
     | 70.00th=[ 2933], 80.00th=[ 3261], 90.00th=[ 3884], 95.00th=[ 4146],
     | 99.00th=[ 4752], 99.50th=[ 5538], 99.90th=[11994], 99.95th=[15139],
     | 99.99th=[33424]
   bw (  KiB/s): min=231424, max=3084288, per=100.00%, avg=642176.00, stdev=905215.68, samples=16
   iops        : min=  226, max= 3012, avg=627.12, stdev=884.00, samples=16
  lat (usec)   : 250=19.02%, 500=33.24%, 750=1.19%, 1000=0.18%
  lat (msec)   : 2=5.86%, 4=33.05%, 10=7.34%, 20=0.08%, 50=0.04%
  cpu          : usr=1.81%, sys=0.25%, ctx=5377, majf=0, minf=48
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=0,5120,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
  WRITE: bw=609MiB/s (639MB/s), 609MiB/s-609MiB/s (639MB/s-639MB/s), io=5120MiB (5369MB), run=8404-8404msec
Code:
root@:/tank# zpool iostat -vy 15 1
                                                capacity     operations     bandwidth
pool                                          alloc   free   read  write   read  write
--------------------------------------------  -----  -----  -----  -----  -----  -----
tank                                          7.93T  9.29T      4  1.23K   155K   493M
  mirror                                      2.23T   499G      1    315  34.7K   100M
    ata-WDC_WD30EFRX-REDACTED                     -      -      0    170  20.0K  50.4M
    ata-WDC_WD30EFRX-REDACTED                     -      -      0    144  14.7K  49.8M
  mirror                                      2.26T   474G      1    334  50.9K   119M
    ata-WDC_WD30EFRX-REDACTED                     -      -      0    150  23.5K  59.4M
    ata-WDC_WD30EFRX-REDACTED                     -      -      0    183  27.5K  59.7M
  mirror                                      2.27T   459G      1    274  36.8K   119M
    ata-WDC_WD30EFRX-REDACTED                     -      -      0    135  24.3K  59.1M
    ata-WDC_WD30EFRX-REDACTED                     -      -      0    139  12.5K  59.8M
  mirror                                      1.17T  7.89T      0    333  33.1K   155M
    ata-WDC_WD100EFAX-REDACTED                    -      -      0    165  19.7K  77.8M
    ata-WDC_WD100EFAX-REDACTED                    -      -      0    167  13.3K  77.4M
--------------------------------------------  -----  -----  -----  -----  -----  -----



fio test #2:

Run on my first lxc from the original pool configuration.

Code:
test: (g=0): rw=write, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=posixaio, iodepth=1
fio-3.25
Starting 1 process
Jobs: 1 (f=1): [W(1)][100.0%][w=276MiB/s][w=276 IOPS][eta 00m:00s]
test: (groupid=0, jobs=1): err= 0: pid=3976819: Wed Feb  9 01:55:53 2022
  write: IOPS=651, BW=651MiB/s (683MB/s)(5120MiB/7859msec); 0 zone resets
    slat (usec): min=7, max=578, avg=22.18, stdev=12.62
    clat (usec): min=145, max=59141, avg=1510.28, stdev=2204.29
     lat (usec): min=155, max=59191, avg=1532.46, stdev=2207.01
    clat percentiles (usec):
     |  1.00th=[  149],  5.00th=[  151], 10.00th=[  153], 20.00th=[  198],
     | 30.00th=[  265], 40.00th=[  318], 50.00th=[  424], 60.00th=[ 1532],
     | 70.00th=[ 2606], 80.00th=[ 3097], 90.00th=[ 3458], 95.00th=[ 3621],
     | 99.00th=[ 4490], 99.50th=[ 6783], 99.90th=[27919], 99.95th=[45876],
     | 99.99th=[58983]
   bw (  KiB/s): min=206435, max=3033088, per=100.00%, avg=684380.53, stdev=888845.19, samples=15
   iops        : min=  201, max= 2962, avg=668.27, stdev=868.05, samples=15
  lat (usec)   : 250=25.96%, 500=25.18%, 750=1.17%, 1000=0.37%
  lat (msec)   : 2=13.52%, 4=32.15%, 10=1.27%, 20=0.20%, 50=0.16%
  lat (msec)   : 100=0.04%
  cpu          : usr=1.68%, sys=0.43%, ctx=5742, majf=0, minf=49
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=0,5120,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
  WRITE: bw=651MiB/s (683MB/s), 651MiB/s-651MiB/s (683MB/s-683MB/s), io=5120MiB (5369MB), run=7859-7859msec

Code:
root@:/tank# zpool iostat -vy 15 1
                                                capacity     operations     bandwidth
pool                                          alloc   free   read  write   read  write
--------------------------------------------  -----  -----  -----  -----  -----  -----
tank                                          7.93T  9.29T      1  2.15K  8.80K   258M
  mirror                                      2.23T   499G      0    526  3.20K  61.0M
    ata-WDC_WD30EFRX-REDACTED                     -      -      0    272  1.87K  30.7M
    ata-WDC_WD30EFRX-REDACTED                     -      -      0    253  1.33K  30.3M
  mirror                                      2.26T   474G      0    597    819  44.5M
    ata-WDC_WD30EFRX-REDACTED                     -      -      0    292    273  22.2M
    ata-WDC_WD30EFRX-REDACTED                     -      -      0    304    546  22.4M
  mirror                                      2.27T   459G      0    360  3.73K   100M
    ata-WDC_WD30EFRX-REDACTED                     -      -      0    184  2.93K  50.2M
    ata-WDC_WD30EFRX-REDACTED                     -      -      0    175    819  50.0M
  mirror                                      1.17T  7.89T      0    713  1.07K  52.1M
    ata-WDC_WD100EFAX-REDACTED                    -      -      0    349      0  25.9M
    ata-WDC_WD100EFAX-REDACTED                    -      -      0    363  1.07K  26.3M
--------------------------------------------  -----  -----  -----  -----  -----  -----



fio test #3:

Run on a newly created lxc.

Code:
test: (g=0): rw=write, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=posixaio, iodepth=1
fio-3.25
clock setaffinity failed: Invalid argument
Starting 1 process
Jobs: 1 (f=1): [W(1)][100.0%][eta 00m:00s]
test: (groupid=0, jobs=1): err= 0: pid=4104: Tue Feb  8 14:56:26 2022
  write: IOPS=246, BW=246MiB/s (258MB/s)(5120MiB/20793msec); 0 zone resets
    slat (usec): min=6, max=192, avg=12.78, stdev= 3.84
    clat (usec): min=144, max=2677.3k, avg=4047.24, stdev=97625.45
     lat (usec): min=155, max=2677.3k, avg=4060.02, stdev=97625.43
    clat percentiles (usec):
     |  1.00th=[    151],  5.00th=[    151], 10.00th=[    153],
     | 20.00th=[    153], 30.00th=[    153], 40.00th=[    155],
     | 50.00th=[    155], 60.00th=[    161], 70.00th=[    176],
     | 80.00th=[    198], 90.00th=[    210], 95.00th=[    219],
     | 99.00th=[    355], 99.50th=[    408], 99.90th=[2399142],
     | 99.95th=[2634023], 99.99th=[2667578]
   bw (  KiB/s): min=225280, max=1265664, per=100.00%, avg=837599.00, stdev=400357.30, samples=12
   iops        : min=  220, max= 1236, avg=817.92, stdev=390.92, samples=12
  lat (usec)   : 250=97.36%, 500=2.23%, 750=0.02%, 1000=0.02%
  lat (msec)   : 4=0.04%, 10=0.06%, 20=0.08%, 50=0.04%, >=2000=0.16%
  cpu          : usr=0.42%, sys=0.06%, ctx=10243, majf=0, minf=51
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=0,5120,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
  WRITE: bw=246MiB/s (258MB/s), 246MiB/s-246MiB/s (258MB/s-258MB/s), io=5120MiB (5369MB), run=20793-20793msec

Code:
root@:/tank# zpool iostat -vy 30 1
                                                capacity     operations     bandwidth
pool                                          alloc   free   read  write   read  write
--------------------------------------------  -----  -----  -----  -----  -----  -----
tank                                          7.93T  9.29T      1    921  14.3K   346M
  mirror                                      2.23T   499G      0    203  2.27K  75.2M
    ata-WDC_WD30EFRX-REDACTED                     -      -      0    101  1.20K  37.6M
    ata-WDC_WD30EFRX-REDACTED                     -      -      0    101  1.07K  37.6M
  mirror                                      2.26T   474G      0    266  2.80K  77.6M
    ata-WDC_WD30EFRX-REDACTED                     -      -      0    133    546  38.8M
    ata-WDC_WD30EFRX-REDACTED                     -      -      0    132  2.27K  38.8M
  mirror                                      2.27T   459G      0    192  2.93K  81.5M
    ata-WDC_WD30EFRX-REDACTED                     -      -      0     94  2.93K  40.7M
    ata-WDC_WD30EFRX-REDACTED                     -      -      0     97      0  40.7M
  mirror                                      1.17T  7.89T      0    259  6.27K   111M
    ata-WDC_WD100EFAX-REDACTED                    -      -      0    127  5.87K  55.6M
    ata-WDC_WD100EFAX-REDACTED                    -      -      0    132    409  55.6M
--------------------------------------------  -----  -----  -----  -----  -----  -----
 
The numbers I get from my first lxc, which was created on the original vdev configuration, are nearly identical to those from the pool root; however, running the same command in a newly created lxc the performance is vastly different.
Why is only one of my original lxc containers able to match the performance of the root pool?
Just a hunch: could it be that the old CT is still a privileged one and the newer one used the unprivileged default in the CT creation wizard we switched to a few releases ago? If that's the case, I could imagine that the mitigations for all those CPU bugs found over the last few years are showing their impact. To see if that's the case you could boot the system with the mitigations=off kernel command line parameter and repeat the test.
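For example, something like the following should show whether a given CT is unprivileged and whether mitigations are currently active (101 is just a placeholder CT ID):
Code:
# an unprivileged CT has "unprivileged: 1" in its config
pct config 101 | grep -i unprivileged
# current mitigation status per CPU vulnerability
grep . /sys/devices/system/cpu/vulnerabilities/*
# check whether mitigations=off made it onto the running kernel command line
cat /proc/cmdline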

Is the imbalance in my pool the cause of the differing fio tests?
It may have some impact, but it would be odd for it to be the sole reason. FWIW, if you have enough free space (> 55%) you can re-balance the pool using ZFS send/receive, i.e., create a new dataset, make a snapshot and send the whole original dataset to it; once that's done, stop all guests and everything else that can write to the original ZFS and re-send the delta since the first snapshot; then rename the old dataset to something else and the new one to the original name. If all is OK you can then drop the renamed old one.
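A rough sketch of that send/receive procedure, with purely illustrative dataset names (tank/data is a placeholder):
Code:
# full copy while everything keeps running (dataset names are placeholders)
zfs snapshot tank/data@move1
zfs send tank/data@move1 | zfs receive tank/data-new
# ... stop guests and anything else writing to tank/data, then send only the delta ...
zfs snapshot tank/data@move2
zfs send -i tank/data@move1 tank/data@move2 | zfs receive -F tank/data-new
# swap the names once the copy is verified
zfs rename tank/data tank/data-old
zfs rename tank/data-new tank/data
# later, once everything checks out:
# zfs destroy -r tank/data-old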
 
Hi @t.lamprecht,

Thank you for your reply.

Just a hunch: could it be that the old CT is still a privileged one and the newer one used the unprivileged default in the CT creation wizard we switched to a few releases ago? If that's the case, I could imagine that the mitigations for all those CPU bugs found over the last few years are showing their impact. To see if that's the case you could boot the system with the mitigations=off kernel command line parameter and repeat the test.
Correct, all of the originals were privileged.

In order to perform some of this testing I've fired up another Proxmox server running 7.1-7 with less storage. 2 mirrors, one with 2x 3TB the other with 2x 1TB.

The problem still remains though. I enabled mitigations=off in /etc/default/grub with the following:

Code:
GRUB_CMDLINE_LINUX="$GRUB_CMDLINE_LINUX mitigations=off"

After rebooting the system and performing the same tests on the pool root /tank compared to the subvol folder, the root pool always performs better.
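For anyone wanting to reproduce the comparison, the locally set properties of the pool root and the container's subvol dataset can be listed side by side with something like this (dataset names as used in my tests):
Code:
# show only locally-set properties so differences between the two datasets stand out
zfs get -s local all tank tank/subvol-100-disk-0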

The tests below were performed on the new Proxmox server:

fio test 1:

Running inside the root pool /tank/

Code:
root@:/tank# fio --name=test --size=5g --rw=write --ioengine=posixaio --direct=1 --bs=1m
test: (g=0): rw=write, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=posixaio, iodepth=1
fio-3.25
Starting 1 process
test: Laying out IO file (1 file / 5120MiB)
Jobs: 1 (f=1): [W(1)][100.0%][w=285MiB/s][w=285 IOPS][eta 00m:00s]
test: (groupid=0, jobs=1): err= 0: pid=62908: Thu Feb 10 01:58:09 2022
  write: IOPS=463, BW=464MiB/s (486MB/s)(5120MiB/11036msec); 0 zone resets
    slat (usec): min=5, max=275, avg=13.48, stdev= 6.07
    clat (usec): min=84, max=9330, avg=2140.60, stdev=1703.65
     lat (usec): min=91, max=9343, avg=2154.08, stdev=1704.21
    clat percentiles (usec):
     |  1.00th=[   96],  5.00th=[  139], 10.00th=[  145], 20.00th=[  165],
     | 30.00th=[  174], 40.00th=[ 1254], 50.00th=[ 2769], 60.00th=[ 3458],
     | 70.00th=[ 3523], 80.00th=[ 3621], 90.00th=[ 4015], 95.00th=[ 4359],
     | 99.00th=[ 5014], 99.50th=[ 5276], 99.90th=[ 5407], 99.95th=[ 5407],
     | 99.99th=[ 9372]
   bw (  KiB/s): min=210944, max=4278272, per=100.00%, avg=475694.55, stdev=852642.47, samples=22
   iops        : min=  206, max= 4178, avg=464.55, stdev=832.66, samples=22
  lat (usec)   : 100=1.33%, 250=35.88%, 500=1.25%, 750=0.04%, 1000=0.18%
  lat (msec)   : 2=6.39%, 4=44.69%, 10=10.25%
  cpu          : usr=0.82%, sys=0.34%, ctx=5127, majf=0, minf=50
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=0,5120,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
  WRITE: bw=464MiB/s (486MB/s), 464MiB/s-464MiB/s (486MB/s-486MB/s), io=5120MiB (5369MB), run=11036-11036msec
Code:
root@:~# zpool iostat -vy 15 1
                                                capacity     operations     bandwidth
pool                                          alloc   free   read  write   read  write
--------------------------------------------  -----  -----  -----  -----  -----  -----
tank                                          8.96G  3.62T      0    805      0   478M
  mirror                                      3.06G   925G      0    446      0   219M
    ata-WDC_WD10EFRX-REDACTED                     -      -      0    250      0   109M
    ata-WDC_WD10EFRX-REDACTED                     -      -      0    195      0   110M
  mirror                                      5.89G  2.71T      0    358      0   259M
    ata-WDC_WD30EFRX-REDACTED                     -      -      0    194      0   129M
    ata-WDC_WD30EFRX-REDACTED                     -      -      0    164      0   129M
--------------------------------------------  -----  -----  -----  -----  -----  -----



fio test #2:

Running inside the /tank/subvol-100-disk-0 folder.


Code:
root@:/tank/subvol-100-disk-0# fio --name=test --size=5g --rw=write --ioengine=posixaio --direct=1 --bs=1m
test: (g=0): rw=write, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=posixaio, iodepth=1
fio-3.25
Starting 1 process
test: Laying out IO file (1 file / 5120MiB)
Jobs: 1 (f=1): [W(1)][100.0%][eta 00m:00s]
test: (groupid=0, jobs=1): err= 0: pid=82312: Thu Feb 10 02:00:35 2022
  write: IOPS=170, BW=171MiB/s (179MB/s)(5120MiB/30000msec); 0 zone resets
    slat (usec): min=5, max=220, avg=12.95, stdev= 6.26
    clat (usec): min=86, max=3713.3k, avg=5845.45, stdev=143796.59
     lat (usec): min=92, max=3713.3k, avg=5858.39, stdev=143796.61
    clat percentiles (usec):
     |  1.00th=[     88],  5.00th=[     89], 10.00th=[     90],
     | 20.00th=[     91], 30.00th=[    101], 40.00th=[    143],
     | 50.00th=[    149], 60.00th=[    167], 70.00th=[    176],
     | 80.00th=[    186], 90.00th=[    210], 95.00th=[    310],
     | 99.00th=[    424], 99.50th=[    486], 99.90th=[3607102],
     | 99.95th=[3674211], 99.99th=[3707765]
   bw (  KiB/s): min=616448, max=1669120, per=100.00%, avg=1046118.40, stdev=381438.80, samples=10
   iops        : min=  602, max= 1630, avg=1021.60, stdev=372.50, samples=10
  lat (usec)   : 100=29.77%, 250=63.16%, 500=6.66%, 750=0.20%, 1000=0.04%
  lat (msec)   : 10=0.02%, >=2000=0.16%
  cpu          : usr=0.28%, sys=0.06%, ctx=5222, majf=0, minf=51
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=0,5120,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
  WRITE: bw=171MiB/s (179MB/s), 171MiB/s-171MiB/s (179MB/s-179MB/s), io=5120MiB (5369MB), run=30000-30000msec
Code:
root@:~# zpool iostat -vy 30 1
                                                capacity     operations     bandwidth
pool                                          alloc   free   read  write   read  write
--------------------------------------------  -----  -----  -----  -----  -----  -----
tank                                          10.2G  3.62T      0    826      0   338M
  mirror                                      3.75G   924G      0    526      0  76.6M
    ata-WDC_WD10EFRX-REDACTED                     -      -      0    258      0  38.3M
    ata-WDC_WD10EFRX-REDACTED                     -      -      0    268      0  38.3M
  mirror                                      6.44G  2.71T      0    300      0   262M
    ata-WDC_WD30EFRX-REDACTED                     -      -      0    160      0   131M
    ata-WDC_WD30EFRX-REDACTED                     -      -      0    139      0   131M
--------------------------------------------  -----  -----  -----  -----  -----  -----

It may have some impact, but it would be odd for it to be the sole reason. FWIW, if you have enough free space (> 55%) you can re-balance the pool using ZFS send/receive, i.e., create a new dataset, make a snapshot and send the whole original dataset to it; once that's done, stop all guests and everything else that can write to the original ZFS and re-send the delta since the first snapshot; then rename the old dataset to something else and the new one to the original name. If all is OK you can then drop the renamed old one.
I'm only using 46% of my pool at the moment, so I can certainly try what you've suggested. I was thinking of migrating everything off to this new temporary server and starting from scratch, so I appreciate the brief instructions you've provided; it's a pretty neat way to address the problem. However, based on my findings you may also be correct in that the imbalance isn't the sole problem, or perhaps not a problem at all.

I'm open to anything else you feel I could try; at this stage I'm confused as to why the pool performs well but the volume folder inside it doesn't.
 
The problem still remains though. I enabled mitigations=off in /etc/default/grub with the following:

Code:
GRUB_CMDLINE_LINUX="$GRUB_CMDLINE_LINUX mitigations=off"
FYI, the inner $GRUB_CMDLINE_LINUX part is not really required. And just to be sure: did you run update-grub to actually bring that config change into effect? IOW, what's the output of cat /proc/cmdline?
 
FYI, the inner $GRUB_CMDLINE_LINUX part is not really required. And just to be sure: did you run update-grub to actually bring that config change into effect? IOW, what's the output of cat /proc/cmdline?
Yep, forgot to mention I updated grub. Output of cat /proc/cmdline as requested below:

Code:
root@:/# cat /proc/cmdline
BOOT_IMAGE=/boot/vmlinuz-5.13.19-2-pve root=/dev/mapper/pve-root ro mitigations=off quiet
root@/#
 
hmm, ok. The fio version from inside the CT and PVE are the same though? Besides that I'm out of obvious ideas for now. I'd also test io_uring compared to posixaio; while the former should be a bit faster in general, I'd still not expect too much change in the difference between LXC and host there.
 
hmm, ok. The fio version from inside the CT and PVE are the same though?
Yep, fio-3.25. Updated tests using io_uring are below. If you think of anything else, please let me know; I'm not sure where I should be looking to troubleshoot things further. It's positive at least knowing that a fresh server still shows the same problem as my existing one, which means the problem is reproducible.

fio test 3 (using io_uring):

Running inside the root pool /tank/

Code:
root@:/tank# fio --name=test --size=5g --rw=write --ioengine=io_uring --direct=1 --bs=1m
test: (g=0): rw=write, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=io_uring, iodepth=1
fio-3.25
Starting 1 process
test: Laying out IO file (1 file / 5120MiB)
Jobs: 1 (f=1): [W(1)][100.0%][w=272MiB/s][w=272 IOPS][eta 00m:00s]
test: (groupid=0, jobs=1): err= 0: pid=194969: Thu Feb 10 02:52:35 2022
  write: IOPS=453, BW=453MiB/s (475MB/s)(5120MiB/11296msec); 0 zone resets
    slat (usec): min=85, max=5458, avg=2204.46, stdev=1755.69
    clat (nsec): min=81, max=2466, avg=322.51, stdev=195.33
     lat (usec): min=85, max=5462, avg=2205.16, stdev=1755.88
    clat percentiles (nsec):
     |  1.00th=[  102],  5.00th=[  129], 10.00th=[  171], 20.00th=[  193],
     | 30.00th=[  215], 40.00th=[  258], 50.00th=[  306], 60.00th=[  330],
     | 70.00th=[  358], 80.00th=[  382], 90.00th=[  478], 95.00th=[  620],
     | 99.00th=[ 1144], 99.50th=[ 1400], 99.90th=[ 2160], 99.95th=[ 2320],
     | 99.99th=[ 2480]
   bw (  KiB/s): min=212992, max=4331520, per=100.00%, avg=469085.09, stdev=865651.98, samples=22
   iops        : min=  208, max= 4230, avg=458.09, stdev=845.36, samples=22
  lat (nsec)   : 100=0.82%, 250=37.56%, 500=52.73%, 750=5.94%, 1000=1.54%
  lat (usec)   : 2=1.19%, 4=0.21%
  cpu          : usr=0.57%, sys=5.84%, ctx=27418, majf=0, minf=10
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=0,5120,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
  WRITE: bw=453MiB/s (475MB/s), 453MiB/s-453MiB/s (475MB/s-475MB/s), io=5120MiB (5369MB), run=11296-11296msec

Code:
root@:~# zpool iostat -vy 15 1
                                                capacity     operations     bandwidth
pool                                          alloc   free   read  write   read  write
--------------------------------------------  -----  -----  -----  -----  -----  -----
tank                                          14.0G  3.61T      0    726    273   433M
  mirror                                      4.05G   924G      0    398      0   191M
    ata-WDC_WD10EFRX-REDACTED                     -      -      0    226      0  95.4M
    ata-WDC_WD10EFRX-REDACTED                     -      -      0    171      0  95.4M
  mirror                                      9.95G  2.71T      0    327    273   243M
    ata-WDC_WD30EFRX-REDACTED                     -      -      0    177    273   121M
    ata-WDC_WD30EFRX-REDACTED                     -      -      0    150      0   121M
--------------------------------------------  -----  -----  -----  -----  -----  -----



fio test 4 (using io_uring):

Running inside the /tank/subvol-100-disk-0 folder.

Code:
root@:/tank/subvol-100-disk-0# fio --name=test --size=5g --rw=write --ioengine=io_uring --direct=1 --bs=1m
test: (g=0): rw=write, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=io_uring, iodepth=1
fio-3.25
Starting 1 process
Jobs: 1 (f=1): [W(1)][100.0%][eta 00m:00s]
test: (groupid=0, jobs=1): err= 0: pid=169713: Thu Feb 10 02:48:04 2022
  write: IOPS=168, BW=168MiB/s (176MB/s)(5120MiB/30455msec); 0 zone resets
    slat (usec): min=88, max=2347.1k, avg=5947.35, stdev=115338.32
    clat (nsec): min=90, max=1414, avg=136.77, stdev=62.76
     lat (usec): min=89, max=2347.1k, avg=5947.64, stdev=115338.42
    clat percentiles (nsec):
     |  1.00th=[   98],  5.00th=[  102], 10.00th=[  107], 20.00th=[  111],
     | 30.00th=[  116], 40.00th=[  120], 50.00th=[  125], 60.00th=[  131],
     | 70.00th=[  137], 80.00th=[  151], 90.00th=[  177], 95.00th=[  195],
     | 99.00th=[  294], 99.50th=[  378], 99.90th=[ 1176], 99.95th=[ 1320],
     | 99.99th=[ 1416]
   bw (  KiB/s): min=47104, max=796672, per=100.00%, avg=689660.60, stdev=256447.60, samples=15
   iops        : min=   46, max=  778, avg=673.47, stdev=250.42, samples=15
  lat (nsec)   : 100=2.34%, 250=96.33%, 500=1.02%, 750=0.06%, 1000=0.06%
  lat (usec)   : 2=0.20%
  cpu          : usr=0.22%, sys=1.44%, ctx=207, majf=0, minf=9
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=0,5120,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
  WRITE: bw=168MiB/s (176MB/s), 168MiB/s-168MiB/s (176MB/s-176MB/s), io=5120MiB (5369MB), run=30455-30455msec

Code:
root@:~# zpool iostat -vy 30 1
                                                capacity     operations     bandwidth
pool                                          alloc   free   read  write   read  write
--------------------------------------------  -----  -----  -----  -----  -----  -----
tank                                          16.6G  3.61T      1    806  60.0K   328M
  mirror                                      5.20G   923G      0    502  30.9K  74.5M
    ata-WDC_WD10EFRX-REDACTED                     -      -      0    246  25.1K  37.2M
    ata-WDC_WD10EFRX-REDACTED                     -      -      0    256  5.87K  37.2M
  mirror                                      11.4G  2.71T      0    304  29.1K   254M
    ata-WDC_WD30EFRX-REDACTED                     -      -      0    162  7.73K   127M
    ata-WDC_WD30EFRX-REDACTED                     -      -      0    141  21.3K   127M
--------------------------------------------  -----  -----  -----  -----  -----  -----
 
I've been doing some further testing on the new Proxmox server I've spun up. All tests still performed with mitigations=off.

I created a Debian 11 VM with the following settings:
Code:
balloon: 0
boot: order=scsi0;ide2;net0
cores: 4
ide2: local:iso/debian-11.2.0-amd64-netinst.iso,media=cdrom
memory: 2048
meta: creation-qemu=6.1.0,ctime=1644458249
name: fio-vm
net0: virtio=DE:37:A7:C4:13:EB,bridge=vmbr0,firewall=1
numa: 0
ostype: l26
scsi0: tank:vm-103-disk-0,size=32G
scsihw: virtio-scsi-pci
smbios1: uuid=270967c5-c108-4e21-9f41-3c3425bda4bf
sockets: 1
vmgenid: 159e7a14-8f29-4d53-a844-4e1c239b1c63

Initial tests were run with sync=standard resulting in the following:

fio vm test #1:

Running inside the root pool /tank/
Code:
root@:/tank# fio --name=test --size=5g --rw=write --ioengine=posixaio --direct=1 --bs=1m
test: (g=0): rw=write, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=posixaio, iodepth=1
fio-3.25
Starting 1 process
test: Laying out IO file (1 file / 5120MiB)
Jobs: 1 (f=1): [W(1)][100.0%][w=266MiB/s][w=266 IOPS][eta 00m:00s]
test: (groupid=0, jobs=1): err= 0: pid=2409842: Thu Feb 10 13:48:51 2022
  write: IOPS=440, BW=440MiB/s (462MB/s)(5120MiB/11632msec); 0 zone resets
    slat (usec): min=5, max=115, avg=13.63, stdev= 4.71
    clat (usec): min=85, max=9488, avg=2256.86, stdev=1817.02
     lat (usec): min=91, max=9501, avg=2270.49, stdev=1817.64
    clat percentiles (usec):
     |  1.00th=[   89],  5.00th=[  143], 10.00th=[  147], 20.00th=[  169],
     | 30.00th=[  180], 40.00th=[ 1254], 50.00th=[ 2769], 60.00th=[ 3720],
     | 70.00th=[ 3785], 80.00th=[ 3916], 90.00th=[ 4228], 95.00th=[ 4686],
     | 99.00th=[ 5342], 99.50th=[ 5604], 99.90th=[ 5669], 99.95th=[ 5735],
     | 99.99th=[ 9503]
   bw (  KiB/s): min=192512, max=4268032, per=100.00%, avg=452786.09, stdev=835435.89, samples=23
   iops        : min=  188, max= 4168, avg=442.17, stdev=815.86, samples=23
  lat (usec)   : 100=1.50%, 250=35.39%, 500=1.60%, 750=0.02%, 1000=0.08%
  lat (msec)   : 2=6.60%, 4=39.61%, 10=15.20%
  cpu          : usr=0.86%, sys=0.24%, ctx=5121, majf=0, minf=50
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=0,5120,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
  WRITE: bw=440MiB/s (462MB/s), 440MiB/s-440MiB/s (462MB/s-462MB/s), io=5120MiB (5369MB), run=11632-11632msec

Code:
root@:~# zpool iostat -vy 15 1
                                                capacity     operations     bandwidth
pool                                          alloc   free   read  write   read  write
--------------------------------------------  -----  -----  -----  -----  -----  -----
tank                                          51.6G  3.57T      0    748    819   402M
  mirror                                      21.2G   907G      0    418    819   174M
    ata-WDC_WD10EFRX-REDACTED                     -      -      0    250      0  87.0M
    ata-WDC_WD10EFRX-REDACTED                     -      -      0    167    819  86.6M
  mirror                                      30.4G  2.69T      0    330      0   229M
    ata-WDC_WD30EFRX-REDACTED                     -      -      0    193      0   114M
    ata-WDC_WD30EFRX-REDACTED                     -      -      0    137      0   114M
--------------------------------------------  -----  -----  -----  -----  -----  -----



fio vm test #2:

Running inside the Debian 11 VM.
Code:
root@debian-fio-vm:~/Documents$ fio --name=test --size=5g --rw=write --ioengine=posixaio --direct=1 --bs=1m
test: (g=0): rw=write, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=posixaio, iodepth=1
fio-3.25
Starting 1 process
test: Laying out IO file (1 file / 5120MiB)
Jobs: 1 (f=1): [W(1)][100.0%][w=22.0MiB/s][w=22 IOPS][eta 00m:00s]
test: (groupid=0, jobs=1): err= 0: pid=2236: Thu Feb 10 13:45:30 2022
  write: IOPS=163, BW=163MiB/s (171MB/s)(5120MiB/31383msec); 0 zone resets
    slat (usec): min=7, max=180, avg=19.71, stdev= 8.15
    clat (usec): min=423, max=836775, avg=6107.68, stdev=18797.83
     lat (usec): min=439, max=836805, avg=6127.39, stdev=18798.56
    clat percentiles (usec):
     |  1.00th=[   445],  5.00th=[   469], 10.00th=[   490], 20.00th=[   545],
     | 30.00th=[   570], 40.00th=[   668], 50.00th=[   873], 60.00th=[  1549],
     | 70.00th=[  3261], 80.00th=[  5932], 90.00th=[ 14484], 95.00th=[ 33162],
     | 99.00th=[ 64750], 99.50th=[ 89654], 99.90th=[179307], 99.95th=[278922],
     | 99.99th=[834667]
   bw (  KiB/s): min=10240, max=1705984, per=100.00%, avg=171461.25, stdev=339541.09, samples=61
   iops        : min=   10, max= 1666, avg=167.44, stdev=331.58, samples=61
  lat (usec)   : 500=11.70%, 750=32.89%, 1000=8.34%
  lat (msec)   : 2=11.07%, 4=7.58%, 10=14.86%, 20=5.25%, 50=6.50%
  lat (msec)   : 100=1.43%, 250=0.31%, 500=0.04%, 1000=0.02%
  cpu          : usr=0.47%, sys=0.16%, ctx=5201, majf=0, minf=52
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=0,5120,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
  WRITE: bw=163MiB/s (171MB/s), 163MiB/s-163MiB/s (171MB/s-171MB/s), io=5120MiB (5369MB), run=31383-31383msec

Code:
root@:~# zpool iostat -vy 30 1
                                                capacity     operations     bandwidth
pool                                          alloc   free   read  write   read  write
--------------------------------------------  -----  -----  -----  -----  -----  -----
tank                                          51.4G  3.57T     24  3.36K   991K   401M
  mirror                                      21.0G   907G     12  1.64K   506K   197M
    ata-WDC_WD10EFRX-REDACTED                     -      -      6    938   281K  98.7M
    ata-WDC_WD10EFRX-REDACTED                     -      -      5    741   226K  98.4M
  mirror                                      30.4G  2.69T     11  1.72K   485K   204M
    ata-WDC_WD30EFRX-REDACTED                     -      -      6    899   267K   102M
    ata-WDC_WD30EFRX-REDACTED                     -      -      5    859   217K   102M
--------------------------------------------  -----  -----  -----  -----  -----  -----



However, once sync=disabled was set, the performance from within the VM was within a margin of error of the root pool.
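For reference, the sync property on the VM's disk can be checked and toggled roughly like this (disk name taken from the VM config above; sync=disabled drops synchronous write guarantees, so it's really only suitable for testing):
Code:
# check the current setting on the VM disk zvol
zfs get sync tank/vm-103-disk-0
# disable sync writes for testing only (data in flight is lost on power failure)
zfs set sync=disabled tank/vm-103-disk-0
# revert afterwards
zfs set sync=standard tank/vm-103-disk-0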

fio vm test #3:

Running inside the root pool /tank/
Code:
root@:/tank# fio --name=test --size=5g --rw=write --ioengine=posixaio --direct=1 --bs=1m
test: (g=0): rw=write, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=posixaio, iodepth=1
fio-3.25
Starting 1 process
Jobs: 1 (f=1): [W(1)][100.0%][w=271MiB/s][w=271 IOPS][eta 00m:00s]
test: (groupid=0, jobs=1): err= 0: pid=2251331: Thu Feb 10 13:35:02 2022
  write: IOPS=437, BW=438MiB/s (459MB/s)(5120MiB/11694msec); 0 zone resets
    slat (usec): min=5, max=113, avg=13.16, stdev= 4.78
    clat (usec): min=85, max=6014, avg=2269.49, stdev=1832.69
     lat (usec): min=91, max=6034, avg=2282.65, stdev=1833.80
    clat percentiles (usec):
     |  1.00th=[   87],  5.00th=[   89], 10.00th=[   90], 20.00th=[   97],
     | 30.00th=[  159], 40.00th=[ 1369], 50.00th=[ 2769], 60.00th=[ 3654],
     | 70.00th=[ 3720], 80.00th=[ 3916], 90.00th=[ 4359], 95.00th=[ 4752],
     | 99.00th=[ 5473], 99.50th=[ 5735], 99.90th=[ 5866], 99.95th=[ 5932],
     | 99.99th=[ 5997]
   bw (  KiB/s): min=182272, max=4261888, per=100.00%, avg=451183.30, stdev=834439.53, samples=23
   iops        : min=  178, max= 4162, avg=440.61, stdev=814.88, samples=23
  lat (usec)   : 100=21.27%, 250=14.92%, 500=0.16%, 750=0.62%, 1000=0.18%
  lat (msec)   : 2=7.50%, 4=38.59%, 10=16.76%
  cpu          : usr=0.89%, sys=0.14%, ctx=5301, majf=0, minf=50
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=0,5120,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
  WRITE: bw=438MiB/s (459MB/s), 438MiB/s-438MiB/s (459MB/s-459MB/s), io=5120MiB (5369MB), run=11694-11694msec

Code:
root@:~# zpool iostat -vy 15 1
                                                capacity     operations     bandwidth
pool                                          alloc   free   read  write   read  write
--------------------------------------------  -----  -----  -----  -----  -----  -----
tank                                          54.0G  3.57T      0    854  3.47K   443M
  mirror                                      22.3G   906G      0    442    546   191M
    ata-WDC_WD10EFRX-REDACTED                     -      -      0    260    546  96.0M
    ata-WDC_WD10EFRX-REDACTED                     -      -      0    182      0  95.5M
  mirror                                      31.7G  2.69T      0    412  2.93K   251M
    ata-WDC_WD30EFRX-REDACTED                     -      -      0    195    546   126M
    ata-WDC_WD30EFRX-REDACTED                     -      -      0    216  2.40K   126M
--------------------------------------------  -----  -----  -----  -----  -----  -----



fio vm test #4:

Running inside the Debian 11 VM.
Code:
root@debian-fio-vm:~/Documents$ fio --name=test --size=5g --rw=write --ioengine=posixaio --direct=1 --bs=1m
test: (g=0): rw=write, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=posixaio, iodepth=1
fio-3.25
Starting 1 process
test: Laying out IO file (1 file / 5120MiB)
Jobs: 1 (f=1): [W(1)][100.0%][w=183MiB/s][w=183 IOPS][eta 00m:00s]
test: (groupid=0, jobs=1): err= 0: pid=2209: Thu Feb 10 13:32:27 2022
  write: IOPS=402, BW=402MiB/s (422MB/s)(5120MiB/12721msec); 0 zone resets
    slat (usec): min=6, max=547, avg=19.50, stdev=19.28
    clat (usec): min=372, max=146893, avg=2463.35, stdev=4768.81
     lat (usec): min=407, max=146912, avg=2482.85, stdev=4769.93
    clat percentiles (usec):
     |  1.00th=[   437],  5.00th=[   461], 10.00th=[   486], 20.00th=[   523],
     | 30.00th=[   570], 40.00th=[   644], 50.00th=[   881], 60.00th=[  1549],
     | 70.00th=[  3720], 80.00th=[  5080], 90.00th=[  5473], 95.00th=[  5735],
     | 99.00th=[  8586], 99.50th=[ 13698], 99.90th=[ 72877], 99.95th=[107480],
     | 99.99th=[147850]
   bw (  KiB/s): min=106496, max=1816576, per=100.00%, avg=416317.44, stdev=456120.27, samples=25
   iops        : min=  104, max= 1774, avg=406.56, stdev=445.43, samples=25
  lat (usec)   : 500=13.83%, 750=31.97%, 1000=7.13%
  lat (msec)   : 2=10.76%, 4=7.01%, 10=28.59%, 20=0.35%, 50=0.20%
  lat (msec)   : 100=0.08%, 250=0.08%
  cpu          : usr=1.12%, sys=0.39%, ctx=5302, majf=0, minf=50
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=0,5120,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
  WRITE: bw=402MiB/s (422MB/s), 402MiB/s-402MiB/s (422MB/s-422MB/s), io=5120MiB (5369MB), run=12721-12721msec

Disk stats (read/write):
  sda: ios=0/5126, merge=0/177, ticks=0/12569, in_queue=12569, util=99.07%

Code:
root@:~# zpool iostat -vy 15 1
                                                capacity     operations     bandwidth
pool                                          alloc   free   read  write   read  write
--------------------------------------------  -----  -----  -----  -----  -----  -----
tank                                          51.5G  3.57T     43  9.74K  1.83M   334M
  mirror                                      21.2G   907G     22  4.53K   967K   160M
    ata-WDC_WD10EFRX-REDACTED                     -      -     12  2.49K   555K  80.0M
    ata-WDC_WD10EFRX-REDACTED                     -      -      9  2.04K   413K  80.0M
  mirror                                      30.4G  2.69T     20  5.20K   904K   174M
    ata-WDC_WD30EFRX-REDACTED                     -      -     13  2.67K   590K  87.0M
    ata-WDC_WD30EFRX-REDACTED                     -      -      7  2.54K   314K  87.0M
--------------------------------------------  -----  -----  -----  -----  -----  -----
 
Further to my previous post, if I create a zfs filesystem on the root pool and mount it into a privileged container, the performance is within a margin of error of the host.

There seems to be some issue with the way the lxc container is treating any mounted storage created by the CT wizard.
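For anyone who wants to try the same comparison, one way to do it is roughly along these lines (the dataset name and CT ID below are just placeholders):
Code:
# create a plain dataset on the pool (name is a placeholder)
zfs create tank/fio-test
# attach it to an existing container as an extra mount point (100 is a placeholder CT ID)
pct set 100 -mp0 /tank/fio-test,mp=/mnt/fio-test
# then run the same fio command inside the CT against /mnt/fio-test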
 
