I/O problems on Windows 2016 VM

yena

Renowned Member
Nov 18, 2011
Hello,
I have tried every option to get decent performance on a Windows 2016 VM, but with big writes (over 30-40 GB) the VM hangs and I see this in dmesg:

--------------------------------------------------------------------------------------------------------------------------------------

[ 1330.170678] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 1330.171227] kvm D 0 5940 1 0x00000000
[ 1330.171231] Call Trace:
[ 1330.171241] __schedule+0x3e3/0x880
[ 1330.171244] ? bit_wait+0x60/0x60
[ 1330.171245] schedule+0x36/0x80
[ 1330.171248] io_schedule+0x16/0x40
[ 1330.171249] bit_wait_io+0x11/0x60
[ 1330.171251] __wait_on_bit+0x5a/0x90
[ 1330.171252] out_of_line_wait_on_bit+0x8e/0xb0
[ 1330.171255] ? bit_waitqueue+0x40/0x40
[ 1330.171259] __block_write_begin_int+0x262/0x5b0
[ 1330.171261] ? I_BDEV+0x20/0x20
[ 1330.171263] ? I_BDEV+0x20/0x20
[ 1330.171265] block_write_begin+0x4d/0xe0
[ 1330.171267] blkdev_write_begin+0x23/0x30
[ 1330.171270] generic_perform_write+0xb9/0x1b0
[ 1330.171272] __generic_file_write_iter+0x185/0x1c0
[ 1330.171276] ? hrtimer_cancel+0x19/0x20
[ 1330.171278] blkdev_write_iter+0xa8/0x130
[ 1330.171281] do_iter_readv_writev+0x116/0x180
[ 1330.171283] ? __blkdev_get+0x4d0/0x4d0
[ 1330.171285] ? do_iter_readv_writev+0x116/0x180
[ 1330.171286] do_iter_write+0x87/0x1a0
[ 1330.171288] vfs_writev+0x98/0x110
[ 1330.171291] do_pwritev+0xb2/0xd0
[ 1330.171292] ? do_pwritev+0xb2/0xd0
[ 1330.171295] ? fire_user_return_notifiers+0x33/0x50
[ 1330.171298] SyS_pwritev+0x11/0x20
[ 1330.171302] do_syscall_64+0x73/0x130
[ 1330.171305] entry_SYSCALL_64_after_hwframe+0x3d/0xa2
[ 1330.171308] RIP: 0033:0x7fb0ec9c1f53
[ 1330.171309] RSP: 002b:00007faad7ffc5a0 EFLAGS: 00000293 ORIG_RAX: 0000000000000128
[ 1330.171311] RAX: ffffffffffffffda RBX: 00007faaeaf70d00 RCX: 00007fb0ec9c1f53
[ 1330.171312] RDX: 0000000000000007 RSI: 00007faae8f00e10 RDI: 0000000000000015
[ 1330.171313] RBP: 00007faaeaf70d00 R08: 0000000000000000 R09: 00000000ffffffff
[ 1330.171314] R10: 000000008ae30000 R11: 0000000000000293 R12: 00007faae8f00b70
[ 1330.171314] R13: 00007fb0e11a3258 R14: 0000000000000000 R15: 00007fb104edd040
---------------------------------------------------------------------------------------------------------------------------------------------------------------
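While one of these big writes is in flight I keep an eye on the pool and on the host page cache, to see whether the zvol or the writeback path is stalling. Roughly what I run on the host (just a sketch; the pool name is from my setup and 5 is only a sample interval):

----------------------------------------------------------------------------------------------------
zpool iostat -v STORAGE 5                  # per-vdev throughput, including log and cache devices
grep -E 'Dirty|Writeback' /proc/meminfo    # how much dirty data the host is still holding
dmesg -T | grep -i 'hung task'             # timestamps of further hung-task warnings
----------------------------------------------------------------------------------------------------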

There is no such problem with Linux VMs.
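For comparison, in a Linux guest I can push a similar amount of data without any hang, with something like this (just an example; path and size are arbitrary):

----------------------------------------------------------------------------------------------------
dd if=/dev/zero of=/root/bigfile bs=1M count=40960 oflag=direct status=progress   # ~40 GB sequential write
----------------------------------------------------------------------------------------------------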

pveversion -v
proxmox-ve: 5.1-41 (running kernel: 4.15.10-1-pve)
pve-manager: 5.1-46 (running version: 5.1-46/ae8241d4)
pve-kernel-4.15.10-1-pve: 4.15.10-4
pve-kernel-4.13.13-6-pve: 4.13.13-42
pve-kernel-4.13.13-2-pve: 4.13.13-33
corosync: 2.4.2-pve3
criu: 2.11.1-1~bpo90
glusterfs-client: 3.8.8-1
ksm-control-daemon: 1.2-2
libjs-extjs: 6.0.1-2
libpve-access-control: 5.0-8
libpve-common-perl: 5.0-30
libpve-guest-common-perl: 2.0-14
libpve-http-server-perl: 2.0-8
libpve-storage-perl: 5.0-18
libqb0: 1.0.1-1
lvm2: 2.02.168-pve6
lxc-pve: 3.0.0-2
lxcfs: 3.0.0-1
novnc-pve: 0.6-4
proxmox-widget-toolkit: 1.0-14
pve-cluster: 5.0-20
pve-container: 2.0-21
pve-docs: 5.1-17
pve-firewall: 3.0-7
pve-firmware: 2.0-4
pve-ha-manager: 2.0-5
pve-i18n: 1.0-4
pve-libspice-server1: 0.12.8-3
pve-qemu-kvm: 2.11.1-5
pve-xtermjs: 1.0-2
qemu-server: 5.0-22
smartmontools: 6.5+svn4324-1
spiceterm: 3.0-5
vncterm: 1.5-3
zfsutils-linux: 0.7.7-pve1~bpo9
----------------------------------------------------------------------------------------------------
zpool status
  pool: STORAGE
 state: ONLINE
  scan: scrub repaired 0B in 2h1m with 0 errors on Sun Apr  8 02:25:52 2018
config:

        NAME          STATE     READ WRITE CKSUM
        STORAGE       ONLINE       0     0     0
          mirror-0    ONLINE       0     0     0
            sdc       ONLINE       0     0     0
            sdd       ONLINE       0     0     0
        logs
          sde2        ONLINE       0     0     0
        cache
          sde1        ONLINE       0     0     0

errors: No known data errors

  pool: rpool
 state: ONLINE
  scan: scrub repaired 0B in 0h0m with 0 errors on Sun Apr  8 00:24:07 2018
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            sda2    ONLINE       0     0     0
            sdb2    ONLINE       0     0     0

errors: No known data errors
----------------------------------------------------------------------------------------------------------------------------

It is somewhat better with the ZIL and L2ARC on a dedicated SSD...
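For reference, this is roughly how log and cache devices like mine get attached to the pool, and what I look at to see whether the L2ARC is actually used (sketch only; the partition names match my zpool status above):

----------------------------------------------------------------------------------------------------
zpool add STORAGE log sde2                                        # SLOG on a dedicated SSD partition
zpool add STORAGE cache sde1                                      # L2ARC on the same SSD
grep -E '^l2_(hits|misses)' /proc/spl/kstat/zfs/arcstats          # L2ARC hit/miss counters
----------------------------------------------------------------------------------------------------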

Thanks
 
