MySQL database dump / restore slow on NVMe over NFS

icepicknz

New Member
Feb 15, 2023
Tauranga, New Zealand
Hey guys

I've installed Ubuntu 20.04 with MySQL and am running a simple test
Code:
root@mysql:/root# ls -alh test.sql
-rw-r--r-- 1 root root 65M Mar  1 05:30 test.sql
root@mysql:/root# time mysql -u root -p database < test.sql
Enter password:

real    0m29.912s
user    0m1.527s
sys    0m0.087s

The test.sql file is 1.47M records and 65M in size, but it takes 30 seconds to import.
I have tried this in many different scenarios, all giving the same slow import speed of around 30 seconds (a single-transaction import sketch follows the share config below):
  • Vol1 (RAIDZ1)
  • Vol2 (mirrored/striped SSDs)
  • Vol3 (mirrored NVMes)
  • even when running on local mirrored SSDs on Proxmox; they all vary within a second of each other.
I've gone as far as creating a dataset with a 16K record size, mounting it from Proxmox as 16K, and having MySQL use that share.

Code:
nfs: Mysql
        export /mnt/Vol2/mysql
        path /mnt/pve/Mysql
        server 10.10.10.1
        content rootdir
        options rw,rsize=16384,wsize=16384
        prune-backups keep-all=1
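
Given the numbers above, one thing worth ruling out (an assumption, not something confirmed in this thread) is per-statement commit overhead: if the dump autocommits every INSERT, each statement forces a redo-log flush, and sync-write latency will dominate no matter how fast the underlying disks are. A minimal sketch of an import that commits once, using the standard bulk-load session variables:
Code:
# Sketch: wrap the whole import in one transaction so InnoDB flushes the
# redo log once at COMMIT instead of once per autocommitted INSERT
( echo "SET autocommit=0; SET unique_checks=0; SET foreign_key_checks=0;"
  cat test.sql
  echo "COMMIT;" ) | mysql -u root -p database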

When the database was MyISAM it was super, super slow, taking 7 seconds to do a simple SELECT from the table; I changed it to InnoDB and that reduced it to 0.007s.
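
For reference, a quick way to confirm which engine each table ended up on (a generic query; "database" is the schema name from the test above):
Code:
# List the storage engine per table for the schema named "database"
mysql -u root -p -e "SELECT table_name, engine FROM information_schema.tables WHERE table_schema = 'database';"
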
I have a MySQL function that takes 0.907 seconds, where it takes close to 0.009s on the most basic AWS RDS instance, or on an old server of mine that has been running MySQL in VirtualBox for years.
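
To see where that 0.9 seconds goes, the session profiler can break the call down by stage (deprecated in MySQL 8.0 but still available; my_function(42) is a hypothetical placeholder for the real call):
Code:
# Profiling is session-scoped, so keep everything in one -e invocation
mysql -u root -p database -e "SET profiling = 1; SELECT my_function(42); SHOW PROFILES; SHOW PROFILE FOR QUERY 1;"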

I've looked at the following tuning options for MySQL, many of which are deprecated now
Code:
[mysqld]

# ZFS Tuning - https://www.percona.com/blog/mysql-zfs-performance-update/
#skip-log-bin
#innodb_buffer_pool_size = 2G
#innodb_flush_log_at_trx_commit = 1 # TPCC reqs.
#innodb_log_file_size = 2G
#innodb_log_group_home_dir = /var/lib/mysql/log
#innodb_flush_neighbors = 0
#innodb_fast_shutdown = 2
#innodb_flush_method = fsync
innodb_doublewrite = 0 # ZFS is transactional
#innodb_use_native_aio = 0
#innodb_read_io_threads = 10
#innodb_write_io_threads = 10
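
To confirm which of these actually took effect on the running server (generic commands, nothing specific to this config):
Code:
# Show the InnoDB settings the server is actually running with
mysql -u root -p -e "SHOW VARIABLES LIKE 'innodb_flush_log_at_trx_commit';"
mysql -u root -p -e "SHOW VARIABLES LIKE 'innodb_doublewrite';"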

On Proxmox, cache=writethrough or cache=none makes no difference either.

Any help would be appreciated; I've been researching and running tests for 2 days now and can't get to the bottom of it.

TrueNAS on a DL380
  • 512GB RAM
  • 22 x 1.8TB 10K SAS disks in RAIDZ1, 4 wide (Vol1)
  • 4 x 1TB SSDs in a ZFS mirror, 2 disks wide (Vol2)
  • 2 x 128GB NVMe disks in a ZFS mirror, 2 wide (Vol3)
  • 10Gb direct connected to Proxmox
Proxmox on a DL360
  • 128GB RAM
  • 2 x 800GB SAS disks in a ZFS mirror
  • 10Gb direct connected to TrueNAS

iperf tests show 10Gbps between the servers in both directions, and sub-0.2ms latency

Many thanks in advance
 
The test.sql file is 1.47M records and 65M in size, but it takes 30 seconds to import.
I have tried this in many different scenarios, all giving the same slow import speed of around 30 seconds:
  • Vol1 (RAIDZ1)
  • Vol2 (mirrored/striped SSDs)
  • Vol3 (mirrored NVMes)
  • even when running on local mirrored SSDs on Proxmox; they all vary within a second of each other.
I've gone as far as creating a dataset with a 16K record size, mounting it from Proxmox as 16K, and having MySQL use that share.
Hi,
did you always use an NFS share for your tests? I could be wrong, but exposing the DB storage via an NFS share does not seem like a performant setup to me. Have you considered using iSCSI, if you rely on network storage?
I've looked at the following tuning options for MySQL, many of which are deprecated now
You still have the NFS layer on top, so I'm not sure any storage optimization at the ZFS level will be that useful for performance.

You could check your NFS storage speed with fio using random sync writes.
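
For example (a sketch; the test file path is a placeholder on the NFS mount, and --fsync=1 is what makes it a sync-write test, which is roughly the write pattern of a database commit):
Code:
# Random 16K sync writes: fsync after every write, mimicking DB commits
fio --name=syncwrite --filename=/mnt/pve/Mysql/testfile --rw=randwrite \
    --bs=16k --size=1G --fsync=1 --direct=1 --runtime=60
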
I have a MySQL function that takes 0.907 seconds, where it takes close to 0.009s on the most basic AWS RDS instance, or on an old server of mine that has been running MySQL in VirtualBox for years.
Do those setups also use the NFS share as storage?
 
Thanks @Chris

did you always use an NFS share for your tests? I could be wrong, but exposing the DB storage via an NFS share does not seem like a performant setup to me. Have you considered using iSCSI, if you rely on network storage?

As mentioned above, I did move the VM to direct local storage (ZFS mirrored SSDs) and got the same results, with the MySQL server local to the VM.

You could check your NFS storage speed with fio using random sync writes.

I've run a few; this is from Proxmox to the NFS shares:

Sync enabled:
Vol2 - WRITE: bw=10.5MiB/s (11.1MB/s), 10.5MiB/s-10.5MiB/s (11.1MB/s-11.1MB/s), io=833MiB (874MB), run=78998-78998msec - 690 IOPS

Sync disabled:
Vol2 - WRITE: bw=70.5MiB/s (73.9MB/s), 70.5MiB/s-70.5MiB/s (73.9MB/s-73.9MB/s), io=1049MiB (1100MB), run=14878-14878msec - 4713 IOPS
Vol3 - WRITE: bw=73.0MiB/s (77.6MB/s), 73.0MiB/s-73.0MiB/s (77.6MB/s-77.6MB/s), io=2371MiB (2486MB), run=32046-32046msec - 5295 IOPS

And lastly, to the MySQL share I created with a 16K record size (sync enabled).
These look great below:

Code:
root@proxmox1:~# sync; fio --randrepeat=1 --direct=1 --gtod_reduce=1 --name=test --filename=/mnt/pve/mysql/testFile --bs=16k --iodepth=256 -size=4G --readwrite=randwrite --ramp_time=4
test: (g=0): rw=randwrite, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=psync, iodepth=256
fio-3.25
Starting 1 process
Jobs: 1 (f=1): [w(1)][72.7%][w=246MiB/s][w=15.7k IOPS][eta 00m:06s]
test: (groupid=0, jobs=1): err= 0: pid=2910462: Wed Mar  1 22:06:45 2023
  write: IOPS=15.0k, BW=249MiB/s (262MB/s)(3000MiB/12025msec); 0 zone resets
   bw (  KiB/s): min=106848, max=584640, per=99.75%, avg=254781.25, stdev=94314.14, samples=24
   iops        : min= 6678, max=36540, avg=15923.75, stdev=5894.68, samples=24
  cpu          : usr=4.49%, sys=63.06%, ctx=33733, majf=0, minf=79
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=0,191971,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=256

Run status group 0 (all jobs):
  WRITE: bw=249MiB/s (262MB/s), 249MiB/s-249MiB/s (262MB/s-262MB/s), io=3000MiB (3145MB), run=12025-12025msec
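
One caveat with this run (an observation, not something established in the thread): ioengine=psync is synchronous, so iodepth=256 has no effect, and without --fsync the test never forces each write to stable storage the way a COMMIT does. A variant closer to a database's write pattern:
Code:
# Same target, but fsync after every write to force stable storage
fio --name=test --filename=/mnt/pve/mysql/testFile --bs=16k --size=1G \
    --readwrite=randwrite --fsync=1 --direct=1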
 
I did a few additional fio runs from within the VM; still getting very slow imports etc., so this is just for further troubleshooting...
So this is a container on Proxmox with its hard disk over NFS and a mount point for /var/lib/mysql on the NFS share with the 16K record size.

bs=4k
Code:
root@mysql:~# sync; fio --randrepeat=1 --direct=1 --gtod_reduce=1 --name=test --filename=/var/lib/mysql/testFile --bs=4k --iodepth=256 -size=4G --readwrite=randwrite --ramp_time=4
Run status group 0 (all jobs):
  WRITE: bw=175MiB/s (183MB/s), 175MiB/s-175MiB/s (183MB/s-183MB/s), io=3386MiB (3550MB), run=19377-19377msec

Disk stats (read/write):
  loop5: ios=0/1041893, merge=0/0, ticks=0/30322, in_queue=46652, util=99.62%

bs=16k
Code:
root@mysql:~# sync; fio --randrepeat=1 --direct=1 --gtod_reduce=1 --name=test --filename=/var/lib/mysql/testFile --bs=16k --iodepth=256 -size=4G --readwrite=randwrite --ramp_time=4
Run status group 0 (all jobs):
  WRITE: bw=397MiB/s (416MB/s), 397MiB/s-397MiB/s (416MB/s-416MB/s), io=2102MiB (2204MB), run=5297-5297msec

Disk stats (read/write):
  loop5: ios=0/261060, merge=0/0, ticks=0/6568, in_queue=6568, util=98.90%


bs=128k
Code:
root@mysql:~# sync; fio --randrepeat=1 --direct=1 --gtod_reduce=1 --name=test --filename=/var/lib/mysql/testFile --bs=128k --iodepth=256 -size=4G --readwrite=randwrite --ramp_time=4
Run status group 0 (all jobs):
  WRITE: bw=764MiB/s (801MB/s), 764MiB/s-764MiB/s (801MB/s-801MB/s), io=833MiB (874MB), run=1091-1091msec

Disk stats (read/write):
  loop5: ios=0/32474, merge=0/0, ticks=0/4350, in_queue=4350, util=97.15%

 
Since the storage seems up to the task, you could check the resource usage of the container and host during the SQL import. Do you see high iowait values in iostat? Further, please post the container config (pct config <VMID>).
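
One detail that matters for this check: iostat with no interval prints averages since boot, so a short import will barely move the numbers. Sampling at an interval shows the live picture, for example:
Code:
# Extended device stats every 2 seconds while the import runs;
# watch %iowait and the per-device await/%util columns
iostat -x 2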
 
PVE Config

Code:
root@proxmox1:~# pct config 305
arch: amd64
cores: 4
features: nesting=1
hostname: mysql
memory: 16384
mp0: Mysql:305/vm-305-disk-0.raw,mp=/var/lib/mysql,backup=1,size=100G
net0: name=eth0,bridge=vmbr0,firewall=1,gw=192.168.33.1,hwaddr=42:39:CF:63:CB:BA,ip=192.168.33.8/28,type=veth
ostype: ubuntu
rootfs: Proxmox-SSD:305/vm-305-disk-0.raw,size=50G
swap: 10240
unprivileged: 1
root@proxmox1:~#

Before the SQL insert:
Code:
root@mysql:~# iostat
Linux 5.15.85-1-pve (mysql)     03/01/23     _x86_64_    (64 CPU)
avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.89    0.00    0.41    0.19    0.00   98.51

Device             tps    kB_read/s    kB_wrtn/s    kB_dscd/s    kB_read    kB_wrtn    kB_dscd
loop0            17.77       257.39       227.48         0.00   14328432   12663464          0
loop1             8.29        75.75        39.60         0.00    4216865    2204576          0
loop2            33.22       105.11       137.31         0.00    5851465    7643700          0
loop3            18.07       212.36        80.82         0.00   11821774    4499008          0
loop4           104.78        17.38       580.20         0.00     967279   32298144          0
loop5           291.52         1.84      1485.02         0.00     102618   82667600          0
sda             256.91       559.54      5357.48         0.00   31148078  298238460          0
sdb             256.62       557.76      5357.48         0.00   31049334  298238460          0
sr0               0.00         0.00         0.00         0.00          1          0          0

During the SQL insert (nothing noticeable to me):
Code:
root@mysql:~# iostat
Linux 5.15.85-1-pve (mysql)     03/01/23     _x86_64_    (64 CPU)
avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.89    0.00    0.41    0.19    0.00   98.51

Device             tps    kB_read/s    kB_wrtn/s    kB_dscd/s    kB_read    kB_wrtn    kB_dscd
loop0            17.74       257.00       227.14         0.00   14328432   12663528          0
loop1             8.28        75.64        39.55         0.00    4216865    2205036          0
loop2            33.18       104.95       137.15         0.00    5851465    7646676          0
loop3            18.04       212.04        80.70         0.00   11821774    4499116          0
loop4           104.63        17.37       580.22         0.00     968667   32348644          0
loop5           291.40         1.84      1487.22         0.00     102618   82916120          0
sda             256.54       558.69      5349.61         0.00   31148078  298253168          0
sdb             256.25       556.92      5349.61         0.00   31049334  298253168          0
sr0               0.00         0.00         0.00         0.00          1          0          0

From TrueNAS, zpool iostat of the SSD volume:
Code:
capacity     operations     bandwidth
pool                                            alloc   free   read  write   read  write
----------------------------------------------  -----  -----  -----  -----  -----  -----
Vol2                                             101G  1.71T      0  2.17K      0  56.5M
  mirror-0                                      50.7G   877G      0  1.12K      0  29.4M
    gptid/a8dd0e3a-b559-11ed-93b2-6cc217350350      -      -      0    583      0  14.7M
    gptid/a8f2ad76-b559-11ed-93b2-6cc217350350      -      -      0    566      0  14.7M
  mirror-1                                      50.6G   877G      0  1.04K      0  27.1M
    gptid/a8f3ce97-b559-11ed-93b2-6cc217350350      -      -      0    539      0  13.6M
    gptid/a8f1886e-b559-11ed-93b2-6cc217350350      -      -      0    529      0  13.6M
----------------------------------------------  -----  -----  -----  -----  -----  -----
 
Sorry, iostat is rather useless here, as NFS stats are not included; you will have to use nfsiostat and nfsstat, see https://www.redhat.com/sysadmin/using-nfsstat-nfsiostat
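
For example (a sketch; point the tools at wherever the share is mounted):
Code:
# Per-mount NFS op rates and round-trip times, sampled every 2 seconds
nfsiostat 2 /var/lib/mysql
# Client-side NFS call counters
nfsstat -c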

Nevertheless, the intention was to see whether you are CPU limited or IO limited. If storage performance is not the issue, then the best approach for DB optimization is probably described here: https://dev.mysql.com/doc/refman/8.0/en/optimizing-innodb.html
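
A quick way to see which side you are limited on (generic, not from the linked doc) is to run vmstat alongside the import:
Code:
# High us/sy with low wa -> CPU bound; high wa -> waiting on IO
vmstat 2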

Please also be aware of the potential issues of running MySQL on NFS, see https://dev.mysql.com/doc/refman/8.0/en/disk-issues.html