Really? Over 30 minutes to start a just-installed (so clean) Windows VM? That is insane...
Making ZFS work better with the same resources is tricky.
1. What is your disks' sector size?
2. What is your ZFS pool's ashift setting?
3. What are your ZFS volumes' volblocksize and compression settings?
4. ARC size? # arc_summary
------------------------------------------------------------------------
ZFS Subsystem Report Wed Jan 17 21:47:39 2018
ARC Summary: (HEALTHY)
Memory Throttle Count: 0
ARC Misc:
Deleted: 4.22m
Mutex Misses: 578
Evict Skips: 578
ARC Size: 96.20% 1.92 GiB
Target Size: (Adaptive) 100.00% 2.00 GiB
Min Size (Hard Limit): 50.00% 1.00 GiB
Max Size (High Water): 2:1 2.00 GiB
ARC Size Breakdown:
Recently Used Cache Size: 83.81% 1.68 GiB
Frequently Used Cache Size: 16.19% 331.60 MiB
ARC Hash Breakdown:
Elements Max: 442.00k
Elements Current: 86.66% 383.05k
Collisions: 1.33m
Chain Max: 5
Chains: 53.63k
ARC Total accesses: 5.52m
Cache Hit Ratio: 37.29% 2.06m
Cache Miss Ratio: 62.71% 3.46m
Actual Hit Ratio: 23.96% 1.32m
Data Demand Efficiency: 33.49% 3.62m
Data Prefetch Efficiency: 45.99% 1.65m
CACHE HITS BY CACHE LIST:
Anonymously Used: 29.81% 613.58k
Most Recently Used: 52.26% 1.08m
Most Frequently Used: 12.00% 246.99k
Most Recently Used Ghost: 3.19% 65.58k
Most Frequently Used Ghost: 2.75% 56.57k
CACHE HITS BY DATA TYPE:
Demand Data: 58.91% 1.21m
Prefetch Data: 36.97% 760.95k
Demand Metadata: 3.18% 65.48k
Prefetch Metadata: 0.94% 19.33k
CACHE MISSES BY DATA TYPE:
Demand Data: 69.54% 2.41m
Prefetch Data: 25.81% 893.67k
Demand Metadata: 4.43% 153.45k
Prefetch Metadata: 0.21% 7.27k
DMU Prefetch Efficiency: 14.28m
Hit Ratio: 10.71% 1.53m
Miss Ratio: 89.29% 12.75m
ZFS Tunable:
zvol_volmode 1
l2arc_headroom 2
dbuf_cache_max_shift 5
zfs_free_leak_on_eio 0
zfs_free_max_blocks 100000
zfs_read_chunk_size 1048576
metaslab_preload_enabled 1
zfs_dedup_prefetch 0
zfs_txg_history 0
zfs_scrub_delay 4
zfs_vdev_async_read_max_active 3
zfs_read_history 0
zfs_arc_sys_free 0
l2arc_write_max 8388608
zil_slog_bulk 786432
zfs_dbuf_state_index 0
zfs_sync_taskq_batch_pct 75
metaslab_debug_unload 0
zvol_inhibit_dev 0
zfs_abd_scatter_enabled 1
zfs_arc_pc_percent 0
zfetch_max_streams 8
zfs_recover 0
metaslab_fragmentation_factor_enabled 1
zfs_deadman_checktime_ms 5000
zfs_sync_pass_rewrite 2
zfs_object_mutex_size 64
zfs_arc_min_prefetch_lifespan 0
zfs_arc_meta_prune 10000
zfs_read_history_hits 0
zfetch_max_distance 8388608
l2arc_norw 0
zfs_dirty_data_max_percent 10
zfs_per_txg_dirty_frees_percent 30
zfs_arc_meta_min 0
metaslabs_per_vdev 200
zfs_arc_meta_adjust_restarts 4096
spa_load_verify_maxinflight 10000
spa_load_verify_metadata 1
zfs_multihost_history 0
zfs_send_corrupt_data 0
zfs_delay_min_dirty_percent 60
zfs_vdev_sync_read_max_active 10
zfs_dbgmsg_enable 0
zfs_metaslab_segment_weight_enabled 1
zio_requeue_io_start_cut_in_line 1
l2arc_headroom_boost 200
zfs_zevent_cols 80
zfs_dmu_offset_next_sync 0
spa_config_path /etc/zfs/zpool.cache
zfs_vdev_cache_size 0
dbuf_cache_hiwater_pct 10
zfs_multihost_interval 1000
zfs_multihost_fail_intervals 5
zio_dva_throttle_enabled 1
zfs_vdev_sync_write_min_active 10
zfs_vdev_scrub_max_active 2
ignore_hole_birth 1
zvol_major 230
zil_replay_disable 0
zfs_dirty_data_max_max_percent 25
zfs_expire_snapshot 300
zfs_sync_pass_deferred_free 2
spa_asize_inflation 24
dmu_object_alloc_chunk_shift 7
zfs_vdev_mirror_rotating_seek_offset 1048576
l2arc_feed_secs 1
zfs_autoimport_disable 1
zfs_arc_p_aggressive_disable 1
zfs_zevent_len_max 64
zfs_arc_meta_limit_percent 75
l2arc_noprefetch 1
zfs_vdev_raidz_impl [fastest] original scalar sse2 ssse3 avx2
zfs_arc_meta_limit 0
zfs_flags 0
zfs_dirty_data_max_max 2065346560
zfs_arc_average_blocksize 8192
zfs_vdev_cache_bshift 16
zfs_vdev_async_read_min_active 1
zfs_arc_dnode_reduce_percent 10
zfs_free_bpobj_enabled 1
zfs_arc_grow_retry 0
zfs_vdev_mirror_rotating_inc 0
l2arc_feed_again 1
zfs_vdev_mirror_non_rotating_inc 0
zfs_arc_lotsfree_percent 10
zfs_zevent_console 0
zvol_prefetch_bytes 131072
zfs_free_min_time_ms 1000
zfs_arc_dnode_limit_percent 10
zio_taskq_batch_pct 75
dbuf_cache_max_bytes 104857600
spa_load_verify_data 1
zfs_delete_blocks 20480
zfs_vdev_mirror_non_rotating_seek_inc 1
zfs_multihost_import_intervals 10
zfs_dirty_data_max 826138624
zfs_vdev_async_write_max_active 10
zfs_dbgmsg_maxsize 4194304
zfs_nocacheflush 0
zfetch_array_rd_sz 1048576
zfs_arc_meta_strategy 1
zfs_dirty_data_sync 67108864
zvol_max_discard_blocks 16384
zvol_threads 32
zfs_vdev_async_write_active_max_dirty_percent 60
zfs_arc_p_dampener_disable 1
zfs_txg_timeout 5
metaslab_aliquot 524288
zfs_mdcomp_disable 0
zfs_vdev_sync_read_min_active 10
zfs_arc_dnode_limit 0
dbuf_cache_lowater_pct 10
zfs_abd_scatter_max_order 10
metaslab_debug_load 0
zfs_vdev_aggregation_limit 131072
metaslab_lba_weighting_enabled 1
zfs_vdev_scheduler noop
zfs_vdev_scrub_min_active 1
zfs_no_scrub_io 0
zfs_vdev_cache_max 16384
zfs_scan_idle 50
zfs_arc_shrink_shift 0
spa_slop_shift 5
zfs_vdev_mirror_rotating_seek_inc 5
zfs_deadman_synctime_ms 1000000
send_holes_without_birth_time 1
metaslab_bias_enabled 1
zvol_request_sync 0
zfs_admin_snapshot 1
zfs_no_scrub_prefetch 0
zfs_metaslab_fragmentation_threshold 70
zfs_max_recordsize 1048576
zfs_arc_min 1073741824
zfs_nopwrite_enabled 1
zfs_arc_p_min_shift 0
zfs_multilist_num_sublists 0
zfs_vdev_queue_depth_pct 1000
zfs_mg_fragmentation_threshold 85
l2arc_write_boost 8388608
zfs_prefetch_disable 0
l2arc_feed_min_ms 200
zio_delay_max 30000
zfs_vdev_write_gap_limit 4096
zfs_pd_bytes_max 52428800
zfs_scan_min_time_ms 1000
zfs_resilver_min_time_ms 3000
zfs_delay_scale 500000
zfs_vdev_async_write_active_min_dirty_percent 30
zfs_vdev_sync_write_max_active 10
zfs_mg_noalloc_threshold 0
zfs_deadman_enabled 1
zfs_resilver_delay 2
zfs_metaslab_switch_threshold 2
zfs_arc_max 2147483648
zfs_top_maxinflight 32
zfetch_min_sec_reap 2
zfs_immediate_write_sz 32768
zfs_vdev_async_write_min_active 2
zfs_sync_pass_dont_compress 5
zfs_vdev_read_gap_limit 32768
zfs_compressed_arc_enabled 1
zfs_vdev_max_active 1000
1. They are normal SATA HP-branded 1 TB disks, so 4K I think.
2. Default setting from the installer, so 12.
3. Compression is lz4; I don't know how to see the volblocksize, but it is the default setting anyway.
4. ARC limited to 1 GB min and 2 GB max (set via the zfs module options; see the sketch just below).
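For reference (my sketch, not something posted in this thread): on Proxmox the ARC limits are normally set as ZFS module options, e.g. in /etc/modprobe.d/zfs.conf:

options zfs zfs_arc_min=1073741824 zfs_arc_max=2147483648

followed by update-initramfs -u and a reboot, since with root on ZFS the module is loaded from the initramfs. The byte values here (1 GiB / 2 GiB) just mirror the limits mentioned above.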
arc summary
---
ARC Size Breakdown:
Recently Used Cache Size: 83.81% 1.68 GiB
Frequently Used Cache Size: 16.19% 331.60 MiB
---
ARC Total accesses: 5.52m
Cache Hit Ratio: 37.29% 2.06m
Cache Miss Ratio: 62.71% 3.46m
Actual Hit Ratio: 23.96% 1.32m
---
For SATA disks you're almost at the maximum IOPS for 7.2k rpm, so the disks are the bottleneck. Do you have 7.2k rpm or just 5.4k?
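Rough numbers, my own estimate rather than anything measured in this thread: at 7,200 rpm the average rotational latency is (60 s / 7200) / 2 ≈ 4.2 ms; add roughly 8-9 ms of average seek time and each random I/O costs ~12-13 ms, i.e. about 75-100 IOPS per spindle. If the two disks are mirrored, that is maybe 150-200 random read IOPS in total, which the small random reads of a Windows boot can easily saturate.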
Sorry, missed your post above!
You are very distracted. Can you reply to the post above?
for i in /dev/sd{a,b}; do smartctl -i $i | grep Sector; done
Sector Sizes: 512 bytes logical, 4096 bytes physical
Sector Sizes: 512 bytes logical, 4096 bytes physical
# zpool get ashift
NAME PROPERTY VALUE SOURCE
rpool ashift 12 local
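(For context, not from the original posts: ashift=12 means 2^12 = 4096-byte allocation units, which matches the 4K physical sectors reported by smartctl above, so the pool alignment itself looks correct.)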
# zfs get volblocksize,compression
NAME PROPERTY VALUE SOURCE
rpool volblocksize - -
rpool compression on local
rpool/ROOT volblocksize - -
rpool/ROOT compression on inherited from rpool
rpool/ROOT/pve-1 volblocksize - -
rpool/ROOT/pve-1 compression on inherited from rpool
rpool/data volblocksize - -
rpool/data compression on inherited from rpool
rpool/data/subvol-101-disk-1 volblocksize - -
rpool/data/subvol-101-disk-1 compression on inherited from rpool
rpool/data/vm-100-disk-2 volblocksize 8K default
rpool/data/vm-100-disk-2 compression on inherited from rpool
rpool/data/vm-102-disk-1 volblocksize 8K default
rpool/data/vm-102-disk-1 compression on inherited from rpool
rpool/data/vm-102-disk-1@installazione_aggiornamenti volblocksize - -
rpool/data/vm-102-disk-1@installazione_aggiornamenti compression - -
rpool/data/vm-102-state-installazione_aggiornamenti volblocksize 8K default
rpool/data/vm-102-state-installazione_aggiornamenti compression on inherited from rpool
rpool/swap volblocksize 4K -
rpool/swap compression zle local
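Side note (mine, not from the thread): volblocksize can only be chosen when a zvol is created, e.g. something like

zfs create -V 32G -o volblocksize=16K rpool/data/vm-100-disk-3

where the name, size and 16K value are just placeholders for illustration. For an existing VM disk the usual route is to create a new zvol with the desired block size and migrate the data onto it.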
Thank you, it did!
See if it helps:
echo never > /sys/kernel/mm/transparent_hugepage/enabled
echo never > /sys/kernel/mm/transparent_hugepage/defrag
echo defer > /sys/kernel/mm/transparent_hugepage/defrag
A deferred defrag is all it actually needs:
echo defer > /sys/kernel/mm/transparent_hugepage/defrag
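These echoes do not survive a reboot. One common way to make them stick (my assumption about the setup, not something stated here) is to re-run them from a boot script, e.g. /etc/rc.local on hosts where that is still executed:

#!/bin/sh -e
# re-apply the transparent hugepage settings suggested above at boot
echo never > /sys/kernel/mm/transparent_hugepage/enabled
echo defer > /sys/kernel/mm/transparent_hugepage/defrag
exit 0

(make it executable with chmod +x /etc/rc.local), or use the equivalent in a small systemd unit.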
Do you have more than one Windows VM using QXL video drivers?
Hello,
I have the same problem: Windows is very slow to boot; it takes up to 30 minutes to boot a Windows machine.
I have also already deactivated ballooning.