Hi,
First, my system:
Opteron 3280
16 GB ECC RAM
2x 250 GB SSD in a ZFS RAID1 mirror (for the host and guest OS)
3x 1 TB HDD in ZFS RAID-Z1 (for the data vDisks)
I have some problems with the performance of the disks inside my VMs (both Windows and Linux).
With dd or CrystalDiskMark ("CDM") I get very good values (around 400-1000 MB/s for both writing and reading). If I mount a Samba share and test it with CDM over LAN, I get 118 MB/s reading and 60 MB/s writing, so far so good. But when I try to copy something with "cp" or over Samba, I only get about 20 MB/s reading and 50 MB/s writing. Changing the vDisk cache mode to "writethrough" increases the read performance over Samba to 112 MB/s, but the write performance collapses to under 1 MB/s. Changing it to "unsafe" gives me 20/60 MB/s for read/write.
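For reference, I switch the cache mode of the vDisk on the host roughly like this (the VM ID, disk slot and volume name below are only placeholders, not my real config):
Code:
# placeholder example: adjust VM ID, bus/slot and volume name to the actual VM and storage
qm set 100 --scsi1 local-zfs:vm-100-disk-1,cache=writethrough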
Here are some tests from the host:
Code:
root@pve:~# dd if=/dev/zero of=tempfile bs=1G count=1 conv=fdatasync,notrunc
1+0 records in
1+0 records out
1073741824 bytes (1.1 GB) copied, 1.28362 s, 836 MB/s

Code:
root@pve:~# dd if=tempfile of=/dev/null bs=1G count=1
1+0 records in
1+0 records out
1073741824 bytes (1.1 GB) copied, 0.773135 s, 1.4 GB/s
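A side note on these host numbers: compression is enabled on both pools (see the zfs output below), so the zeros from /dev/zero compress away almost completely and the dd results are probably optimistic. A rough re-test with incompressible data would look something like this (file names are just examples, and it assumes enough free RAM to stage a 1 GiB file in /dev/shm):
Code:
# stage ~1 GiB of incompressible data in tmpfs first, so the CPU cost of /dev/urandom
# does not end up in the measurement, then time the actual write with a final fdatasync
dd if=/dev/urandom of=/dev/shm/randfile bs=1M count=1024
dd if=/dev/shm/randfile of=tempfile-rand bs=1M count=1024 conv=fdatasync,notrunc
rm /dev/shm/randfile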
	
Code:
root@pve:~# pveperf
CPU BOGOMIPS:      38400.40
REGEX/SECOND:      1012884
HD SIZE:           150.40 GB (rpool/ROOT/pve-1)
FSYNCS/SECOND:     684.98
DNS EXT:           144.05 ms
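pveperf without an argument only benchmarks the root pool shown above; to get fsync numbers for the HDD pool as well, it can be pointed at the tank mountpoint (output not pasted here):
Code:
root@pve:~# pveperf /tank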
	
Code:
root@pve:~# zfs get all rpool
NAME   PROPERTY              VALUE                  SOURCE
rpool  type                  filesystem             -
rpool  creation              Mon Feb 20 21:38 2017  -
rpool  used                  69.4G                  -
rpool  available             146G                   -
rpool  referenced            96K                    -
rpool  compressratio         1.25x                  -
rpool  mounted               yes                    -
rpool  quota                 none                   default
rpool  reservation           none                   default
rpool  recordsize            128K                   default
rpool  mountpoint            /rpool                 default
rpool  sharenfs              off                    default
rpool  checksum              on                     default
rpool  compression           on                     local
rpool  atime                 off                    local
rpool  devices               on                     default
rpool  exec                  on                     default
rpool  setuid                on                     default
rpool  readonly              off                    default
rpool  zoned                 off                    default
rpool  snapdir               hidden                 default
rpool  aclinherit            restricted             default
rpool  canmount              on                     default
rpool  xattr                 on                     default
rpool  copies                1                      default
rpool  version               5                      -
rpool  utf8only              off                    -
rpool  normalization         none                   -
rpool  casesensitivity       sensitive              -
rpool  vscan                 off                    default
rpool  nbmand                off                    default
rpool  sharesmb              off                    default
rpool  refquota              none                   default
rpool  refreservation        none                   default
rpool  primarycache          all                    default
rpool  secondarycache        all                    default
rpool  usedbysnapshots       0                      -
rpool  usedbydataset         96K                    -
rpool  usedbychildren        69.4G                  -
rpool  usedbyrefreservation  0                      -
rpool  logbias               latency                default
rpool  dedup                 off                    default
rpool  mlslabel              none                   default
rpool  sync                  standard               local
rpool  refcompressratio      1.00x                  -
rpool  written               96K                    -
rpool  logicalused           77.1G                  -
rpool  logicalreferenced     40K                    -
rpool  filesystem_limit      none                   default
rpool  snapshot_limit        none                   default
rpool  filesystem_count      none                   default
rpool  snapshot_count        none                   default
rpool  snapdev               hidden                 default
rpool  acltype               off                    default
rpool  context               none                   default
rpool  fscontext             none                   default
rpool  defcontext            none                   default
rpool  rootcontext           none                   default
rpool  relatime              off                    default
rpool  redundant_metadata    all                    default
rpool  overlay               off                    default

Code:
root@pve:~# zfs get all tank
NAME  PROPERTY              VALUE                  SOURCE
tank  type                  filesystem             -
tank  creation              Thu Apr 20 21:21 2017  -
tank  used                  380G                   -
tank  available             1.38T                  -
tank  referenced            128K                   -
tank  compressratio         1.00x                  -
tank  mounted               yes                    -
tank  quota                 none                   default
tank  reservation           none                   default
tank  recordsize            128K                   default
tank  mountpoint            /tank                  default
tank  sharenfs              off                    default
tank  checksum              on                     default
tank  compression           lz4                    local
tank  atime                 on                     default
tank  devices               on                     default
tank  exec                  on                     default
tank  setuid                on                     default
tank  readonly              off                    default
tank  zoned                 off                    default
tank  snapdir               hidden                 default
tank  aclinherit            restricted             default
tank  canmount              on                     default
tank  xattr                 on                     default
tank  copies                1                      default
tank  version               5                      -
tank  utf8only              off                    -
tank  normalization         none                   -
tank  casesensitivity       sensitive              -
tank  vscan                 off                    default
tank  nbmand                off                    default
tank  sharesmb              off                    default
tank  refquota              none                   default
tank  refreservation        none                   default
tank  primarycache          all                    default
tank  secondarycache        all                    default
tank  usedbysnapshots       0                      -
tank  usedbydataset         128K                   -
tank  usedbychildren        380G                   -
tank  usedbyrefreservation  0                      -
tank  logbias               latency                default
tank  dedup                 off                    default
tank  mlslabel              none                   default
tank  sync                  standard               default
tank  refcompressratio      1.00x                  -
tank  written               128K                   -
tank  logicalused           285G                   -
tank  logicalreferenced     40K                    -
tank  filesystem_limit      none                   default
tank  snapshot_limit        none                   default
tank  filesystem_count      none                   default
tank  snapshot_count        none                   default
tank  snapdev               hidden                 default
tank  acltype               off                    default
tank  context               none                   default
tank  fscontext             none                   default
tank  defcontext            none                   default
tank  rootcontext           none                   default
tank  relatime              off                    default
tank  redundant_metadata    all                    default
tank  overlay               off                    default
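One thing I noticed while pasting these properties: atime is still on for tank, although I already disabled it on rpool. Turning it off on the data pool as well avoids an extra metadata update on every read; a one-liner in case it is relevant:
Code:
# disable access-time updates on the data pool (child datasets inherit this)
zfs set atime=off tank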
Guest (SSD-ZFS Pool):

Code:
root@ubuntuVM:~$ dd if=/dev/zero of=tempfile bs=1G count=1 conv=fdatasync,notrunc
1+0 records in
1+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 17.0659 s, 62.9 MB/s
root@ubuntuVM:~$ echo 3 | sudo tee /proc/sys/vm/drop_caches
3
root@ubuntuVM:~$ dd if=tempfile of=/dev/null bs=1G count=1
1+0 records in
1+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 1.693 s, 634 MB/s
root@ubuntuVM:~$ dd if=tempfile of=/dev/null bs=1G count=1
1+0 records in
1+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 2.48739 s, 432 MB/s
Guest (HDD-ZFS Pool; "no cache"):

Code:
root@ubuntuVM:~$ dd if=/dev/zero of=tempfile bs=1G count=1 conv=fdatasync,notrunc
1+0 records in
1+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 17.0659 s, 72.1 MB/s
root@ubuntuVM:~$ dd if=tempfile of=/dev/null bs=1G count=1
1+0 records in
1+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 0.91693 s, 1.2 GB/s
Guest (HDD-ZFS Pool; "unsafe"):

Code:
root@ubuntuVM:~$ dd if=/dev/zero of=tempfile bs=1G count=1 conv=fdatasync,notrunc
1+0 records in
1+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 2.0659 s, 371.1 MB/s
root@ubuntuVM:~$ dd if=tempfile of=/dev/null bs=1G count=1
1+0 records in
1+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 0.91693 s, 985 MB/s
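For completeness: the "real world" copy numbers from the first paragraph come from simply timing a plain copy of a large existing file inside the guest, roughly like this (path and file name are only placeholders):
Code:
# copy a multi-GB file that already exists on the data vDisk to another location and time it
time cp /mnt/data/bigfile.iso /tmp/bigfile.iso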
Do you have any hints on how I can improve the HDD performance of the vDisks for file transfers?