Unstable Transfer Speed/Network (Windows Server 2016)

drno

May 16, 2017
Whenever I try to copy a file from a client PC (tried from different PCs) to the server, I get very erratic transfer speeds, and the copy even freezes (drops to a few KB/s) in the middle of the transfer.

The VM uses a VirtIO Ethernet adapter, and the host has 4 Broadcom Limited NetXtreme II BCM5709 Gigabit Ethernet adapters bonded with LACP.

filetranfer.PNG

Disk Benchmark

raid10-performance.PNG

Iperf result
From Client PC to Windows Server
Code:
iperf3.exe -c 192.168.1.4
Connecting to host 192.168.1.4, port 5201 
[ 4] local 192.168.1.38 port 60459 connected to 192.168.1.4 port 5201 
[ ID] Interval Transfer Bandwidth
[ 4] 0.00-1.00 sec 98.2 MBytes 824 Mbits/sec 
[ 4] 1.00-2.00 sec 99.2 MBytes 833 Mbits/sec 
[ 4] 2.00-3.00 sec 99.1 MBytes 832 Mbits/sec
[ 4] 3.00-4.00 sec 99.2 MBytes 832 Mbits/sec
[ 4] 4.00-5.00 sec 99.6 MBytes 836 Mbits/sec 
[ 4] 5.00-6.00 sec 102 MBytes 851 Mbits/sec
[ 4] 6.00-7.00 sec 98.4 MBytes 826 Mbits/sec 
[ 4] 7.00-8.00 sec 98.6 MBytes 827 Mbits/sec 
[ 4] 8.00-9.00 sec 98.6 MBytes 828 Mbits/sec 
[ 4] 9.00-10.00 sec 96.9 MBytes 813 Mbits/sec
[ ID] Interval Transfer Bandwidth 
[ 4] 0.00-10.00 sec 990 MBytes 830 Mbits/sec sender 
[ 4] 0.00-10.00 sec 989 MBytes 830 Mbits/sec receiver

iperf Done.
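Worth noting: a single iperf3 TCP stream normally stays on one physical link of an LACP bond (layer3+4 hashing is per flow), so this result reflects a single gigabit link at best. To see whether the bond aggregates across links, one could rerun the test with parallel streams (the -P flag); a hedged example using the same host as above:

Code:
iperf3.exe -c 192.168.1.4 -P 4 -t 10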

Package versions
Code:
proxmox-ve: 5.0-9 (running kernel: 4.10.11-1-pve)
pve-manager: 5.0-10 (running version: 5.0-10/0d270679) 
pve-kernel-4.10.8-1-pve: 4.10.8-7 
pve-kernel-4.10.11-1-pve: 4.10.11-9 
libpve-http-server-perl: 2.0-4 
lvm2: 2.02.168-pve2 
corosync: 2.4.2-pve2 
libqb0: 1.0.1-1 
pve-cluster: 5.0-7 
qemu-server: 5.0-4 
pve-firmware: 2.0-2 
libpve-common-perl: 5.0-12 
libpve-guest-common-perl: 2.0-1 
libpve-access-control: 5.0-4 
libpve-storage-perl: 5.0-3 
pve-libspice-server1: 0.12.8-3 
vncterm: 1.4-1 
pve-docs: 5.0-1 
pve-qemu-kvm: 2.9.0-1 
pve-container: 2.0-6
pve-firewall: 3.0-1
pve-ha-manager: 2.0-1
ksm-control-daemon: 1.2-2 
glusterfs-client: 3.8.8-1 
lxc-pve: 2.0.8-1 
lxcfs: 2.0.7-pve1 
criu: 2.11.1-1~bpo90 
novnc-pve: 0.5-9 
smartmontools: 6.5+svn4324-1 
zfsutils-linux: 0.6.5.9-pve16~bpo90

Network config
Code:
auto lo
iface lo inet loopback

iface eth0 inet manual

iface eth1 inet manual

iface eth2 inet manual

iface eth3 inet manual

auto bond0
iface bond0 inet manual
        slaves eth0 eth1 eth2 eth3
        bond_miimon 100
        bond_mode 802.3ad
        bond_xmit_hash_policy layer3+4

auto vmbr0
iface vmbr0 inet static
        address  192.168.1.2
        netmask  255.255.255.0
        gateway  192.168.1.1
        bridge_ports bond0
        bridge_stp off
        bridge_fd 0
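For reference, the negotiated LACP state of such a bond can be checked on the Proxmox host via the kernel's bonding status file (a generic check, not specific to this setup):

Code:
cat /proc/net/bonding/bond0
# look for "Bonding Mode: IEEE 802.3ad Dynamic link aggregation"
# and one "Slave Interface:" section per NIC with "MII Status: up"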

Windows Server 2016 config
Code:
boot: cdn
bootdisk: virtio0
cores: 2
ide1: local:iso/virtio-win-0.1.137.iso,media=cdrom,size=166330K
ide2: local:iso/de_windows_server_2016_essentials_x64_dvd_9719470.iso,media=cdrom
memory: 12288
name: dc1
net0: virtio=AE:04:B3:4B:89:F9,bridge=vmbr0
numa: 0
onboot: 1
ostype: win10
scsihw: virtio-scsi-pci
smbios1: uuid=811ea455-7e3b-4d45-8a99-f90ca10289e8
sockets: 2
startup: order=1,up=300
virtio0: vmdata:vm-114-disk-1,size=1000G
 
What kind of storage is 'vmdata'?
 
How much memory does the host have, and how much of it is used for the ARC? SSDs? HDDs?
I guess the beginning of the transfer goes into your cache (guest or host or both), and then your disks slow you down.
 
About 48 GB; I didn't set any limit for the ARC.
I also use an SSD for L2ARC and ZIL.
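For reference, if one wanted to cap the ARC so it does not compete with VM memory, a commonly used approach is setting zfs_arc_max via a modprobe option; a sketch assuming a hypothetical 16 GiB cap (value in bytes, adjust to taste):

Code:
# /etc/modprobe.d/zfs.conf -- example 16 GiB ARC cap
options zfs zfs_arc_max=17179869184
# apply with: update-initramfs -u   (then reboot)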
 
Would it be possible to retest using a single Ethernet card, with the 3 others temporarily disabled? And if you get the same speed results, can you try a different switch/Cat6 cable?
 
Just installed Samba in a Linux container; everything looks normal there.
Unbenannt.PNG
But when I do it from a VM (Debian), I get the same issue as shown above.
Unbenannt2.PNG
 
Are the 4 drives connected directly via SATA ports, or to a RAID controller card? If they are on a RAID controller card, is it set to HBA or JBOD mode? Can you also post lsblk and fdisk -l output for more info?

Finally, it looks like you are using the 5.0 beta. Do you get the same results with 4.4 stable?
 

They are connected to the LSI 9211-8i, which is in HBA mode.
I got the same issue with 4.4; that's why I upgraded to 5.0, hoping for better results.
There is also another drive connected, but only for backups (if it matters).

fdisk -l
Code:
Disk /dev/sda: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 30F7A11B-5599-4ECA-A917-A062E3F3390E

Device          Start        End    Sectors   Size Type
/dev/sda1          34       2047       2014  1007K BIOS boot
/dev/sda2        2048 1953508749 1953506702 931.5G Solaris /usr & Apple ZFS
/dev/sda9  1953508750 1953525134      16385     8M Solaris reserved 1

Partition 1 does not start on physical sector boundary.
Partition 9 does not start on physical sector boundary.


Disk /dev/sdd: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 87C2917A-069F-4248-A029-C6FB630D86DD

Device          Start        End    Sectors   Size Type
/dev/sdd1          34       2047       2014  1007K BIOS boot
/dev/sdd2        2048 1953508749 1953506702 931.5G Solaris /usr & Apple ZFS
/dev/sdd9  1953508750 1953525134      16385     8M Solaris reserved 1

Partition 1 does not start on physical sector boundary.
Partition 9 does not start on physical sector boundary.


Disk /dev/sde: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 976C4074-9F9B-AE4A-BF02-07377B4FAC6F

Device          Start        End    Sectors   Size Type
/dev/sde1        2048 1953507327 1953505280 931.5G Solaris /usr & Apple ZFS
/dev/sde9  1953507328 1953523711      16384     8M Solaris reserved 1


Disk /dev/sdf: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: CEA0A2B0-A147-AD43-9D39-360AD1AD13C4

Device          Start        End    Sectors   Size Type
/dev/sdf1        2048 1953507327 1953505280 931.5G Solaris /usr & Apple ZFS
/dev/sdf9  1953507328 1953523711      16384     8M Solaris reserved 1


Disk /dev/sdc: 85.9 GiB, 92185686528 bytes, 180050169 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: AAE40A55-F90F-4361-B435-DB7D1EF33D45

Device        Start       End   Sectors Size Type
/dev/sdc1      2048  41945087  41943040  20G Linux filesystem
/dev/sdc2  41945088 176162815 134217728  64G Linux filesystem


Disk /dev/sdb: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 0CB4C635-7346-43F9-B711-D5DA8908CBC9

Device     Start        End    Sectors   Size Type
/dev/sdb1   2048 1953525134 1953523087 931.5G Linux filesystem


Disk /dev/zd0: 128 GiB, 137438953472 bytes, 268435456 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 8192 bytes
I/O size (minimum/optimal): 8192 bytes / 8192 bytes
Disklabel type: dos
Disk identifier: 0x31172455

Device     Boot     Start       End   Sectors  Size Id Type
/dev/zd0p1 *         2048 260050943 260048896  124G 83 Linux
/dev/zd0p2      260052990 268433407   8380418    4G  5 Extended
/dev/zd0p5      260052992 268433407   8380416    4G 82 Linux swap / Solaris

Partition 2 does not start on physical sector boundary.


Disk /dev/zd16: 250 GiB, 268435456000 bytes, 524288000 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 8192 bytes
I/O size (minimum/optimal): 8192 bytes / 8192 bytes
Disklabel type: dos
Disk identifier: 0x205dfc39

Device      Boot     Start       End   Sectors   Size Id Type
/dev/zd16p1 *         2048 502994943 502992896 239.9G 83 Linux
/dev/zd16p2      502996990 524285951  21288962  10.2G  5 Extended
/dev/zd16p5      502996992 524285951  21288960  10.2G 82 Linux swap / Solaris

Partition 2 does not start on physical sector boundary.


Disk /dev/zd32: 1000 GiB, 1073741824000 bytes, 2097152000 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 8192 bytes
I/O size (minimum/optimal): 8192 bytes / 8192 bytes
Disklabel type: dos
Disk identifier: 0xbdae006f

Device      Boot   Start        End    Sectors   Size Id Type
/dev/zd32p1 *       2048    1026047    1024000   500M  7 HPFS/NTFS/exFAT
/dev/zd32p2      1026048 2097149951 2096123904 999.5G  7 HPFS/NTFS/exFAT


Disk /dev/zd48: 32 GiB, 34359738368 bytes, 67108864 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 8192 bytes
I/O size (minimum/optimal): 8192 bytes / 8192 bytes
Disklabel type: dos
Disk identifier: 0x52aef2a1

Device      Boot    Start      End  Sectors  Size Id Type
/dev/zd48p1 *        2048 64286719 64284672 30.7G 83 Linux
/dev/zd48p2      64288766 67106815  2818050  1.4G  5 Extended
/dev/zd48p5      64288768 67106815  2818048  1.4G 82 Linux swap / Solaris

Partition 2 does not start on physical sector boundary.


Disk /dev/zd64: 32 GiB, 34359738368 bytes, 67108864 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 8192 bytes
I/O size (minimum/optimal): 8192 bytes / 8192 bytes
Disklabel type: dos
Disk identifier: 0xd6760c14

Device      Boot    Start      End  Sectors  Size Id Type
/dev/zd64p1 *        2048 64286719 64284672 30.7G 83 Linux
/dev/zd64p2      64288766 67106815  2818050  1.4G  5 Extended
/dev/zd64p5      64288768 67106815  2818048  1.4G 82 Linux swap / Solaris

Partition 2 does not start on physical sector boundary.


Disk /dev/zd80: 32 GiB, 34359738368 bytes, 67108864 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 8192 bytes
I/O size (minimum/optimal): 8192 bytes / 8192 bytes
Disklabel type: dos
Disk identifier: 0x46fd7f83

Device      Boot    Start      End  Sectors  Size Id Type
/dev/zd80p1 *        2048 64286719 64284672 30.7G 83 Linux
/dev/zd80p2      64288766 67106815  2818050  1.4G  5 Extended
/dev/zd80p5      64288768 67106815  2818048  1.4G 82 Linux swap / Solaris

Partition 2 does not start on physical sector boundary.


Disk /dev/zd96: 128 GiB, 137438953472 bytes, 268435456 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 8192 bytes
I/O size (minimum/optimal): 8192 bytes / 8192 bytes
Disklabel type: dos
Disk identifier: 0xb7d6f67c

Device      Boot     Start       End   Sectors  Size Id Type
/dev/zd96p1 *         2048 266338303 266336256  127G 83 Linux
/dev/zd96p2      266340350 268433407   2093058 1022M  5 Extended
/dev/zd96p5      266340352 268433407   2093056 1022M 82 Linux swap / Solaris

Partition 2 does not start on physical sector boundary.


Disk /dev/zd112: 8 GiB, 8589934592 bytes, 16777216 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes

lsblk
Code:
NAME     MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda        8:0    0 931.5G  0 disk
├─sda1     8:1    0  1007K  0 part
├─sda2     8:2    0 931.5G  0 part
└─sda9     8:9    0     8M  0 part
sdb        8:16   0 931.5G  0 disk
└─sdb1     8:17   0 931.5G  0 part /mnt/bck
sdc        8:32   0  85.9G  0 disk
├─sdc1     8:33   0    20G  0 part
└─sdc2     8:34   0    64G  0 part
sdd        8:48   0 931.5G  0 disk
├─sdd1     8:49   0  1007K  0 part
├─sdd2     8:50   0 931.5G  0 part
└─sdd9     8:57   0     8M  0 part
sde        8:64   0 931.5G  0 disk
├─sde1     8:65   0 931.5G  0 part
└─sde9     8:73   0     8M  0 part
sdf        8:80   0 931.5G  0 disk
├─sdf1     8:81   0 931.5G  0 part
└─sdf9     8:89   0     8M  0 part
sr0       11:0    1  1024M  0 rom
zd0      230:0    0   128G  0 disk
├─zd0p1  230:1    0   124G  0 part
├─zd0p2  230:2    0     1K  0 part
└─zd0p5  230:5    0     4G  0 part
zd16     230:16   0   250G  0 disk
├─zd16p1 230:17   0 239.9G  0 part
├─zd16p2 230:18   0     1K  0 part
└─zd16p5 230:21   0  10.2G  0 part
zd32     230:32   0  1000G  0 disk
├─zd32p1 230:33   0   500M  0 part
└─zd32p2 230:34   0 999.5G  0 part
zd48     230:48   0    32G  0 disk
├─zd48p1 230:49   0  30.7G  0 part
├─zd48p2 230:50   0     1K  0 part
└─zd48p5 230:53   0   1.4G  0 part
zd64     230:64   0    32G  0 disk
├─zd64p1 230:65   0  30.7G  0 part
├─zd64p2 230:66   0     1K  0 part
└─zd64p5 230:69   0   1.4G  0 part
zd80     230:80   0    32G  0 disk
├─zd80p1 230:81   0  30.7G  0 part
├─zd80p2 230:82   0     1K  0 part
└─zd80p5 230:85   0   1.4G  0 part
zd96     230:96   0   128G  0 disk
├─zd96p1 230:97   0   127G  0 part
├─zd96p2 230:98   0     1K  0 part
└─zd96p5 230:101  0  1022M  0 part
zd112    230:112  0     8G  0 disk [SWAP]
 
I don't know if this might help, but on the FreeNAS forums they recommended using firmware "P16" or "P19"; there's always the risk of bricking the controller, however. You can also check your ZFS pool's ashift setting with this CLI command to see whether it's set to 12 (generally preferred) or 9.

zdb -C vmdata | grep ashift
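For reference, if LSI's sas2flash utility is available on the host, the currently flashed firmware version can be checked before deciding whether a P16/P19 crossflash is worth the risk:

Code:
sas2flash -listall   # lists controller(s) and firmware version
sas2flash -list      # details for the selected controller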
 
ashift is set to 12. I think there is a problem with the VMs and VirtIO.
I also get a very constant speed when I do it in a container, so the controller should be fine.
 
Did you try setting the network adapter to 'e1000' instead of VirtIO? Which version of the VirtIO drivers did you install?
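For reference, switching the model for a test can be done from the GUI or with qm; a hedged sketch reusing VM ID 114 and the MAC from the config posted above:

Code:
qm set 114 --net0 e1000=AE:04:B3:4B:89:F9,bridge=vmbr0
# switch back to VirtIO afterwards:
qm set 114 --net0 virtio=AE:04:B3:4B:89:F9,bridge=vmbr0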
 
I tried versions 0.1.126 and 0.1.137.
The e1000 adapter is even worse.
 
You mentioned there was an additional hard drive used for backups. I assume this is also connected to the LSI 9211-8i, but not part of the 'vmdata' pool. If so, could you test a VM (Debian) on that backup drive and check whether the speeds are normal?
 
The drive is a bit old, but at least I get a good, constant speed.
So it has something to do with ZFS or the disk image itself?
 
Awesome! OK, we're getting closer (I think). There are 2 things to try.

1.) (quick test)

Modify your VM 114 hard drive config to use cache=writeback; I noticed you didn't have writeback enabled. Then retest the speeds.

2.) (this requires more work, I hope this is just a test server! :) )

Back up all your VMs on the vmdata pool to that backup drive. Then export your ZFS pool and remove those 4 vmdata disks. Add 2 new blank hard drives to the LSI controller and create a new RAID1 ZFS pool. Restore VM 114 on that pool and retest with cache=writeback. Let us know how it works! Thanks!
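For step 1, a hedged sketch of the change via the CLI (VM ID 114 assumed from the disk name vm-114-disk-1; the GUI's disk edit dialog works too):

Code:
qm set 114 --virtio0 vmdata:vm-114-disk-1,size=1000G,cache=writeback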
 
1.) I now get a constant 90 Mb/s, but it makes the VM and even the client unresponsive (Windows Explorer freezes/crashes).
I also don't understand why the bandwidth is only 830 Mbits/s (iperf result in the first post); on the other VMs I get the full speed.

2.) Can't really do that, Proxmox is also on that pool.
 
You could try a couple of things: try again with hard drive cache=none, and also try cache=writethrough. If you have at least 8 cores available in Proxmox, try increasing VM 114 from 2 to 4 CPU cores. And always use the VirtIO 'stable' Windows drivers instead of the latest; I've had issues in the past with the latest VirtIO drivers (Windows VM won't boot with the 4.x pvetest repository/latest VirtIO combo).

The only other thing I can think of is a full rebuild. Recreate your setup by separating the boot/system drives from the data drives. Use a single ext4 disk or ZFS RAID1 on the onboard SATA controller as the primary boot/system drive, and use the 4 drives on the LSI controller as the data ZFS pool.
Mount/add your previous 'backup' drive to Proxmox and restore the VM backups to the new ZFS pool. If it works fine in this setup, then add the SSD as ZFS cache.
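If it comes to a rebuild, a rough sketch of recreating the data pool as ZFS RAID10 with ashift=12 might look like the following; the device names are placeholders, not the actual disks from this thread, so double-check before destroying anything:

Code:
# sdw/sdx/sdy/sdz and sdSSD are hypothetical placeholders
zpool create -o ashift=12 vmdata mirror /dev/sdw /dev/sdx mirror /dev/sdy /dev/sdz
# optionally re-add the SSD afterwards
zpool add vmdata log /dev/sdSSD-part1
zpool add vmdata cache /dev/sdSSD-part2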

best wishes
 
I will probably rebuild it, but is it important that I separate the system drive from the LSI controller? The server only has an onboard RAID controller (P410i) that I can't set to HBA mode.
 
Hi, it's totally up to you. If you use 2 ports on the LSI for the system drive, you still have 6 ports left (4 for RAID10 + 1 SSD cache + 1 backup drive).
If the server can fit more than 8 drives, then using the built-in P410i as hardware RAID1 would free up all 8 ports on the LSI for anything you want. Anyway, let us know how the new configuration works out. Best wishes!
 
