Sometimes very slow Windows 2008 guest startup

palaroda

Renowned Member
Oct 24, 2013
Freshly installed Proxmox, with 2 HDDs for the system and 2 HDDs for storage:

root@proxmox0:~# zpool status -v
  pool: rpool
 state: ONLINE
  scan: none requested
config:

        NAME          STATE     READ WRITE CKSUM
        rpool         ONLINE       0     0     0
          mirror-0    ONLINE       0     0     0
            sda2      ONLINE       0     0     0
            sdb2      ONLINE       0     0     0

errors: No known data errors

  pool: zfs-pve-data
 state: ONLINE
  scan: none requested
config:

        NAME              STATE     READ WRITE CKSUM
        zfs-pve-data      ONLINE       0     0     0
          mirror-0        ONLINE       0     0     0
            sdc1          ONLINE       0     0     0
            sdd1          ONLINE       0     0     0

errors: No known data errors

root@proxmox0:/# zfs get sync
NAME                        PROPERTY  VALUE     SOURCE
rpool                       sync      standard  local
rpool/ROOT                  sync      standard  inherited from rpool
rpool/ROOT/pve-1            sync      standard  inherited from rpool
rpool/data                  sync      standard  inherited from rpool
rpool/swap                  sync      always    local
zfs-pve-data                sync      standard  local
zfs-pve-data/vm-360-disk-1  sync      standard  inherited from zfs-pve-data
zfs-pve-data/vm-360-disk-2  sync      standard  inherited from zfs-pve-data

root@proxmox0:/# zfs get compression
NAME                        PROPERTY     VALUE  SOURCE
rpool                       compression  off    local
rpool/ROOT                  compression  off    inherited from rpool
rpool/ROOT/pve-1            compression  off    inherited from rpool
rpool/data                  compression  off    inherited from rpool
rpool/data/vm-110-disk-1    compression  off    inherited from rpool
rpool/swap                  compression  zle    local
zfs-pve-data                compression  off    default
zfs-pve-data/vm-360-disk-1  compression  off    default
zfs-pve-data/vm-360-disk-2  compression  off    default

The only guest installed is Windows 2008 R2, with the latest VirtIO drivers from virtio-win-0.1.126.iso.

root@proxmox0:/etc/pve/qemu-server# cat 360.conf
agent: 1
balloon: 0
boot: cd
bootdisk: virtio0
cores: 2
ide2: local:iso/virtio-win-0.1.126.iso,media=cdrom,size=152204K
memory: 16384
name: windows2008r2
net0: virtio=BA:DB:B0:2C:8A:42,bridge=vmbr0
numa: 0
onboot: 1
ostype: win7
scsihw: virtio-scsi-single
smbios1: uuid=ed5b3b44-c66e-4d5a-b092-252da516e07d
sockets: 2
tablet: 0
virtio0: zfs-pve-data:vm-360-disk-1,size=100G
virtio1: zfs-pve-data:vm-360-disk-2,size=50G

Normally the Windows guest starts in 10-20 seconds, but sometimes startup can take up to an hour.
During such a slow start, atop shows that the system drives sda and sdb are busy, not the storage drives sdc and sdd:

PRC | sys 0.79s | user 29.92s | #proc 1098 | #tslpi 1120 | #tslpu 1 | #zombie 0 | #exit 11 |
CPU | sys 6% | user 298% | irq 12% | idle 1181% | wait 104% | avgf 2.40GHz | avgscal 100% |
CPL | avg1 3.86 | avg5 1.98 | avg15 1.40 | csw 30981 | intr 23961 | | numcpu 16 |
MEM | tot 31.4G | free 17.8G | cache 157.9M | buff 1.3M | slab 205.5M | vmbal 0.0M | hptot 0.0M |
SWP | tot 8.0G | free 7.9G | | | | vmcom 19.7G | vmlim 23.7G |
PAG | scan 6 | steal 1028 | stall 0 | | | swin 0 | swout 1025 |
DSK | sdb | busy 89% | read 1030 | write 1104 | MBr/s 0.0 | MBw/s 0.6 | avio 4.20 ms |
DSK | sda | busy 85% | read 1030 | write 1110 | MBr/s 0.0 | MBw/s 0.6 | avio 3.98 ms |
DSK | sdd | busy 0% | read 5 | write 0 | MBr/s 0.0 | MBw/s 0.0 | avio 0.80 ms |

At that moment, hdparm -tT /dev/sda and hdparm -tT /dev/sdb show moderately (but not critically) reduced throughput of about 50-100 MB/s, instead of the usual 180-190 MB/s.

After rebooting Proxmox the guest starts fast again, but I cannot figure out the cause of this behavior.
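
As a sanity check, the atop disk lines above are internally consistent: the reported busy percentage roughly equals (reads + writes) × avio over the sampling interval. This sketch assumes atop's default 10-second sample interval, which is not shown in the paste:

```python
SAMPLE_INTERVAL_S = 10.0  # assumption: atop's default refresh interval

def busy_pct(reads, writes, avio_ms, interval_s=SAMPLE_INTERVAL_S):
    """Fraction of the interval the disk spent servicing I/O, in percent."""
    return (reads + writes) * (avio_ms / 1000.0) / interval_s * 100.0

# Numbers taken from the atop output above:
sda = busy_pct(1030, 1110, 3.98)  # ~85%, matches atop's "busy 85%"
sdb = busy_pct(1030, 1104, 4.20)  # ~90%, close to atop's "busy 89%"
print(round(sda), round(sdb))     # prints: 85 90
```

So the system drives really are saturated by many small I/Os (note MBw/s is only 0.6), not by bulk throughput.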
 
Hi,

can you send me the output of

Code:
cat /proc/spl/kstat/zfs/arcstats
 
6 1 0x01 91 4368 2397999098 261026861509354
name type data
hits 4 54180958
misses 4 11281164
demand_data_hits 4 42057269
demand_data_misses 4 383728
demand_metadata_hits 4 11926138
demand_metadata_misses 4 258213
prefetch_data_hits 4 190200
prefetch_data_misses 4 10637668
prefetch_metadata_hits 4 7351
prefetch_metadata_misses 4 1555
mru_hits 4 25650481
mru_ghost_hits 4 14344
mfu_hits 4 28424936
mfu_ghost_hits 4 18369
deleted 4 11524684
mutex_miss 4 400
evict_skip 4 35878
evict_not_enough 4 2265
evict_l2_cached 4 0
evict_l2_eligible 4 138727391744
evict_l2_ineligible 4 708405248
evict_l2_skip 4 0
hash_elements 4 8920
hash_elements_max 4 1963259
hash_collisions 4 4212859
hash_chains 4 14
hash_chain_max 4 7
p 4 34705995
c 4 71522166
c_min 4 33554432
c_max 4 16862185472
size 4 67385224
hdr_size 4 3387184
data_size 4 30885888
metadata_size 4 21211136
other_size 4 11901016
anon_size 4 2174464
anon_evictable_data 4 0
anon_evictable_metadata 4 0
mru_size 4 20297216
mru_evictable_data 4 3909120
mru_evictable_metadata 4 681984
mru_ghost_size 4 44602368
mru_ghost_evictable_data 4 1825280
mru_ghost_evictable_metadata 4 42777088
mfu_size 4 29625344
mfu_evictable_data 4 24838144
mfu_evictable_metadata 4 664576
mfu_ghost_size 4 16597504
mfu_ghost_evictable_data 4 9878528
mfu_ghost_evictable_metadata 4 6718976
l2_hits 4 0
l2_misses 4 0
l2_feeds 4 0
l2_rw_clash 4 0
l2_read_bytes 4 0
l2_write_bytes 4 0
l2_writes_sent 4 0
l2_writes_done 4 0
l2_writes_error 4 0
l2_writes_lock_retry 4 0
l2_evict_lock_retry 4 0
l2_evict_reading 4 0
l2_evict_l1cached 4 0
l2_free_on_write 4 0
l2_cdata_free_on_write 4 0
l2_abort_lowmem 4 0
l2_cksum_bad 4 0
l2_io_error 4 0
l2_size 4 0
l2_asize 4 0
l2_hdr_size 4 0
l2_compress_successes 4 0
l2_compress_zeros 4 0
l2_compress_failures 4 0
memory_throttle_count 4 0
duplicate_buffers 4 0
duplicate_buffers_size 4 0
duplicate_reads 4 0
memory_direct_count 4 1450
memory_indirect_count 4 0
arc_no_grow 4 0
arc_tempreserve 4 0
arc_loaned_bytes 4 0
arc_prune 4 0
arc_meta_used 4 36499336
arc_meta_limit 4 12646639104
arc_meta_max 4 1632345112
arc_meta_min 4 16777216
arc_need_free 4 0
arc_sys_free 4 526942208
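
For reference, the hit ratios implied by these counters can be computed directly; a minimal sketch using the numbers pasted above:

```python
# Counters from the arcstats snapshot above:
hits = 54_180_958
misses = 11_281_164
demand_data_hits = 42_057_269
demand_data_misses = 383_728

overall = hits / (hits + misses)
demand = demand_data_hits / (demand_data_hits + demand_data_misses)

print(f"overall hit ratio:     {overall:.1%}")  # 82.8%
print(f"demand-data hit ratio: {demand:.1%}")   # 99.1%

# Note: "size" (~64 MiB) is tiny compared to "c_max" (~15.7 GiB), so the
# ARC was nearly empty when this snapshot was taken.
```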
 