PVE 4.1 LXC mount zfs limitation and zfs performance issue

elurex

In PVE 4.1, for an LXC container to mount a host file system, you need to add a config entry like this:

Code:
 mp0: /target/test,mp=/target

However, I found it is limited to mp9 and does not process mp10 and onwards.
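
For illustration, this is what a container config with more than ten mount points could look like (the container ID 100 and the paths are just placeholders); on PVE 4.1, the entries from mp10 onwards seem to be ignored:

Code:
# /etc/pve/lxc/100.conf (example container ID; paths are placeholders)
mp0: /target/test0,mp=/target0
mp1: /target/test1,mp=/target1
# ... mp2 through mp8 omitted here ...
mp9: /target/test9,mp=/target9
mp10: /target/test10,mp=/target10   # not processed on PVE 4.1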

Another huge issue I found is that ZFS performance somehow seems to be capped!

I ran the following command on a dual Intel Xeon system with 64 GB of ECC RAM and an NVMe SSD as log and cache, running PVE 4.1, and got this result:
Code:
root@nas:~# dd if=/dev/zero of=/rpool/output bs=4k count=16000k; rm -f /rpool/output
16384000+0 records in
16384000+0 records out
67108864000 bytes (67 GB) copied, 216.258 s, 310 MB/s

The same test run on an i7 3770T CPU with 16 GB of non-ECC RAM, without any SSD as log or cache, running Ubuntu 15.10, gave this result:
Code:
root@tmedia:~# dd if=/dev/zero of=/rpool/output bs=4k count=16000k; rm -f /rpool/output
16384000+0 records in
16384000+0 records out
67108864000 bytes (67 GB) copied, 113.787 s, 590 MB/s

This does not make sense at all.
 
System 1: Proxmox VE4.1
Code:
root@nas:~# zfs get all rpool
NAME   PROPERTY              VALUE                  SOURCE
rpool  type                  filesystem             -
rpool  creation              Mon Dec 21  2:18 2015  -
rpool  used                  67.0G                  -
rpool  available             14.0T                  -
rpool  referenced            96K                    -
rpool  compressratio         1.67x                  -
rpool  mounted               yes                    -
rpool  quota                 none                   default
rpool  reservation           none                   default
rpool  recordsize            128K                   default
rpool  mountpoint            /rpool                 default
rpool  sharenfs              off                    default
rpool  checksum              on                     default
rpool  compression           lz4                    local
rpool  atime                 off                    local
rpool  devices               on                     default
rpool  exec                  on                     default
rpool  setuid                on                     default
rpool  readonly              off                    default
rpool  zoned                 off                    default
rpool  snapdir               hidden                 default
rpool  aclinherit            restricted             default
rpool  canmount              on                     default
rpool  xattr                 on                     default
rpool  copies                1                      default
rpool  version               5                      -
rpool  utf8only              off                    -
rpool  normalization         none                   -
rpool  casesensitivity       sensitive              -
rpool  vscan                 off                    default
rpool  nbmand                off                    default
rpool  sharesmb              off                    default
rpool  refquota              none                   default
rpool  refreservation        none                   default
rpool  primarycache          all                    default
rpool  secondarycache        all                    default
rpool  usedbysnapshots       0                      -
rpool  usedbydataset         96K                    -
rpool  usedbychildren        67.0G                  -
rpool  usedbyrefreservation  0                      -
rpool  logbias               latency                default
rpool  dedup                 off                    default
rpool  mlslabel              none                   default
rpool  sync                  standard               local
rpool  refcompressratio      1.00x                  -
rpool  written               96K                    -
rpool  logicalused           1.58G                  -
rpool  logicalreferenced     14.5K                  -
rpool  filesystem_limit      none                   default
rpool  snapshot_limit        none                   default
rpool  filesystem_count      none                   default
rpool  snapshot_count        none                   default
rpool  snapdev               hidden                 default
rpool  acltype               off                    default
rpool  context               none                   default
rpool  fscontext             none                   default
rpool  defcontext            none                   default
rpool  rootcontext           none                   default
rpool  relatime              off                    default
rpool  redundant_metadata    all                    default
rpool  overlay               off                    default
root@nas:~# zpool status
  pool: rpool
state: ONLINE
  scan: none requested
config:

        NAME                 STATE     READ WRITE CKSUM
        rpool                ONLINE       0     0     0
          mirror-0           ONLINE       0     0     0
            sda2             ONLINE       0     0     0
            sdb2             ONLINE       0     0     0
          mirror-1           ONLINE       0     0     0
            sdc              ONLINE       0     0     0
            sdd              ONLINE       0     0     0
          mirror-2           ONLINE       0     0     0
            sde              ONLINE       0     0     0
            sdf              ONLINE       0     0     0
          mirror-3           ONLINE       0     0     0
            sdg              ONLINE       0     0     0
            sdh              ONLINE       0     0     0
        logs
          oczpcie_10_0_ssd1  ONLINE       0     0     0
        cache
          oczpcie_10_0_ssd2  ONLINE       0     0     0

errors: No known data errors


System 2: i7 3770t Ubuntu 15.10
Code:
root@tmedia:~# zfs get all rpool
NAME   PROPERTY              VALUE                    SOURCE
rpool  type                  filesystem               -
rpool  creation              Sept 21 14:45 2015  -
rpool  used                  864G                     -
rpool  available             9.36T                    -
rpool  referenced            232K                     -
rpool  compressratio         1.01x                    -
rpool  mounted               yes                      -
rpool  quota                 none                     default
rpool  reservation           none                     default
rpool  recordsize            128K                     default
rpool  mountpoint            /rpool                   default
rpool  sharenfs              off                      default
rpool  checksum              on                       default
rpool  compression           lz4                      local
rpool  atime                 on                       default
rpool  devices               on                       default
rpool  exec                  on                       default
rpool  setuid                on                       default
rpool  readonly              off                      default
rpool  zoned                 off                      default
rpool  snapdir               hidden                   default
rpool  aclinherit            restricted               default
rpool  canmount              on                       default
rpool  xattr                 on                       default
rpool  copies                1                        default
rpool  version               5                        -
rpool  utf8only              off                      -
rpool  normalization         none                     -
rpool  casesensitivity       sensitive                -
rpool  vscan                 off                      default
rpool  nbmand                off                      default
rpool  sharesmb              off                      default
rpool  refquota              none                     default
rpool  refreservation        none                     default
rpool  primarycache          all                      default
rpool  secondarycache        all                      default
rpool  usedbysnapshots       0                        -
rpool  usedbydataset         232K                     -
rpool  usedbychildren        864G                     -
rpool  usedbyrefreservation  0                        -
rpool  logbias               latency                  default
rpool  dedup                 off                      default
rpool  mlslabel              none                     default
rpool  sync                  standard                 default
rpool  refcompressratio      1.00x                    -
rpool  written               232K                     -
rpool  logicalused           836G                     -
rpool  logicalreferenced     44K                      -
rpool  filesystem_limit      none                     default
rpool  snapshot_limit        none                     default
rpool  filesystem_count      none                     default
rpool  snapshot_count        none                     default
rpool  snapdev               hidden                   default
rpool  acltype               off                      default
rpool  context               none                     default
rpool  fscontext             none                     default
rpool  defcontext            none                     default
rpool  rootcontext           none                     default
rpool  relatime              on                       temporary
rpool  redundant_metadata    all                      default
rpool  overlay               off                      default

root@tmedia:~# zpool status
  pool: rpool
 state: ONLINE
status: Some supported features are not enabled on the pool. The pool can
  still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
  the pool may no longer be accessible by software that does not support
  the features. See zpool-features(5) for details.
  scan: none requested
config:

  NAME  STATE  READ WRITE CKSUM
  rpool  ONLINE  0  0  0
  raidz1-0  ONLINE  0  0  0
  ata-WDC_WD40EFRX-68WT0N0_WD-WCC4E0370941-part1  ONLINE  0  0  0
  ata-WDC_WD40EFRX-68WT0N0_WD-WCC4E0374655-part1  ONLINE  0  0  0
  ata-WDC_WD40EFRX-68WT0N0_WD-WCC4E0434058-part1  ONLINE  0  0  0
  ata-WDC_WD40EFRX-68WT0N0_WD-WCC4ECH8SRA7-part1  ONLINE  0  0  0

errors: No known data errors
 
You didn't specify what type of dual Intel Xeon you have. You've measured the CPU here, because you are dd-ing /dev/zero to an lz4-compressed dataset.
 
Did you understand what I've said?
You are benchmarking LZ4 performance on your CPU because you are writing /dev/zero to a compressed dataset.
If you want to do disk benchmarks, create a dataset with compression=off and try again.
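
Something along these lines would do it (rpool/nocomp is just an example dataset name):

Code:
# create a test dataset with compression disabled (example name)
zfs create -o compression=off rpool/nocomp
# repeat the write test against the uncompressed dataset
dd if=/dev/zero of=/rpool/nocomp/output bs=4k count=16000k
# clean up
zfs destroy rpool/nocomp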
 
sigxcpu.... the following test was done on the same system, using the same hardware, without any SSD as LOG (ZIL), but booted into Ubuntu 15.10.

Code:
root@nas:/vmdisk# dd if=/dev/zero of=zerofile.000 bs=4k count=16000k; sleep 30 ; dd if=zerofile.000 of=/dev/null bs=1M
16384000+0 records in
16384000+0 records out
104857600000 bytes (105 GB) copied, 274.539 s, 382 MB/s

The pool has the same configuration/settings. Like I said, lz4 is not the issue..... it is only compressing zeroes anyway.
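
For reference, the dataset properties would show how much of that data actually reached the disk (assuming /vmdisk is a dataset named rpool/vmdisk, which is only a guess; adjust the name as needed):

Code:
# compare logical (written) vs. physical (stored) size; dataset name is a guess
zfs get compression,compressratio,logicalused,used rpool/vmdisk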
 
Your SSD log is not involved here, because you are doing async writes. Again, if you want to benchmark the disks, do not write /dev/zero to a compressed pool.

Also, if you need help, please define your system configurations as A, B, C, and so on. This new test on "the same system" is not very helpful. You also didn't mention what "/vmdisk" is.

Please create a new dataset on both systems, set compression=off, do your dd, and check the output of iostat -x during the test to find the bottleneck (see the sketch after this list). If you keep doing ZFS tests with /dev/zero and compression, you are not benchmarking your disks, because:

- I think zeroes do not reach the disk in new ZFS versions
- ~400 MB/s with 4k blocks means ~100k IOPS, which is a dream for everybody
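
For the iostat part, running something like this in a second shell while the dd is going shows per-device throughput and utilization (the 2-second interval is arbitrary):

Code:
# extended per-device statistics, refreshed every 2 seconds;
# watch the write throughput and %util columns while dd runs
iostat -x 2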

Here are my tests to prove that:

Code:
root@gen8:/ssd/test# zpool status ssd
  pool: ssd
 state: ONLINE
  scan: none requested
config:

    NAME                                                          STATE     READ WRITE CKSUM
    ssd                                                           ONLINE       0     0     0
     ata-LITEONIT_LCS-512M6S_2.5_7mm_512GB_TW02XFM1550853A20857  ONLINE       0     0     0
     ata-M4-CT512M4SSD2_0000000013060929FBE9                     ONLINE       0     0     0

errors: No known data errors

root@gen8:/ssd/test# zfs list -o name,used,compression ssd/test
NAME       USED  COMPRESS
ssd/test   106K       lz4

root@gen8:/ssd/test# dd if=/dev/zero of=test.bin bs=4k count=1000000
1000000+0 records in
1000000+0 records out
4096000000 bytes (4.1 GB) copied, 6.85556 s, 597 MB/s

Fake disk usage:

root@gen8:/ssd/test# ls -lh test.bin
-rw-r--r-- 1 root root 3.9G Dec 22 09:44 test.bin

Actual disk usage:

# du -sh test.bin
512    test.bin

As you can see, the file is using only 512 bytes on disk.
 
