Very slow Windows startup on ZFS?

Really? Over 30 minutes to start a just-installed (so, clean) Windows VM? It is insane...
 
I do not see this problem on my boxes; Windows boots up fast.
 
ZFS has a tool, "#zpool iostat -v 1", to watch disk activity. ZFS speed depends on the slowest disk in the pool. If you see a big difference in activity between the disks, it may be because:
1. You are using different disk models.
2. Your ZFS pool is made of identical disks, but one of them has a problem and may fault in the near future.

I like to watch the current disk activity with #atop.
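If you want to make the comparison easier, here is a small sketch (the device names /dev/sda and /dev/sdb are only examples, use the members of your pool; smartctl needs the smartmontools package) that samples the pool activity and greps the SMART attributes that usually announce a dying disk:

Code:
# sample per-disk activity of all pools for 10 seconds
zpool iostat -v 1 10

# look for early failure signs on each pool member
for d in /dev/sda /dev/sdb; do
    echo "== $d =="
    smartctl -A "$d" | grep -Ei 'Reallocated_Sector|Current_Pending|Offline_Uncorrectable'
done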
 
Here is what it looks like while it is booting:


#zpool iostat -v 5

[screenshot: pmx02.JPG]

#atop
[screenshot: pmx01.JPG]

CPU is at 100% in atop, and about 25% overall in the Proxmox interface, so it seems only one core is being used. I think this could aggravate the problem of having only 8 GB of RAM even further.


EDIT: it finally started!! 56 minutes to boot. :(
 

Making ZFS work better with the same resources is tricky.

1. What is your disks' sector size?
2. What is your ZFS pool's ashift setting?
3. What are your ZFS volume's volblocksize and compression settings?
4. ARC size? #arc_summary
 
1. They are normal SATA HP-branded 1 TB disks, so 4k I think.
2. Default setting from the installer, so 12.
3. Compression lz4; volblocksize I don't know how to check, anyway it is at the default.
4. ARC limited to 1 GB min and 2 GB max.
arc_summary:
Code:
------------------------------------------------------------------------
ZFS Subsystem Report                            Wed Jan 17 21:47:39 2018
ARC Summary: (HEALTHY)
        Memory Throttle Count:                  0

ARC Misc:
        Deleted:                                4.22m
        Mutex Misses:                           578
        Evict Skips:                            578

ARC Size:                               96.20%  1.92    GiB
        Target Size: (Adaptive)         100.00% 2.00    GiB
        Min Size (Hard Limit):          50.00%  1.00    GiB
        Max Size (High Water):          2:1     2.00    GiB

ARC Size Breakdown:
        Recently Used Cache Size:       83.81%  1.68    GiB
        Frequently Used Cache Size:     16.19%  331.60  MiB

ARC Hash Breakdown:
        Elements Max:                           442.00k
        Elements Current:               86.66%  383.05k
        Collisions:                             1.33m
        Chain Max:                              5
        Chains:                                 53.63k

ARC Total accesses:                                     5.52m
        Cache Hit Ratio:                37.29%  2.06m
        Cache Miss Ratio:               62.71%  3.46m
        Actual Hit Ratio:               23.96%  1.32m

        Data Demand Efficiency:         33.49%  3.62m
        Data Prefetch Efficiency:       45.99%  1.65m

        CACHE HITS BY CACHE LIST:
          Anonymously Used:             29.81%  613.58k
          Most Recently Used:           52.26%  1.08m
          Most Frequently Used:         12.00%  246.99k
          Most Recently Used Ghost:     3.19%   65.58k
          Most Frequently Used Ghost:   2.75%   56.57k

        CACHE HITS BY DATA TYPE:
          Demand Data:                  58.91%  1.21m
          Prefetch Data:                36.97%  760.95k
          Demand Metadata:              3.18%   65.48k
          Prefetch Metadata:            0.94%   19.33k

        CACHE MISSES BY DATA TYPE:
          Demand Data:                  69.54%  2.41m
          Prefetch Data:                25.81%  893.67k
          Demand Metadata:              4.43%   153.45k
          Prefetch Metadata:            0.21%   7.27k


DMU Prefetch Efficiency:                                        14.28m
        Hit Ratio:                      10.71%  1.53m
        Miss Ratio:                     89.29%  12.75m



ZFS Tunable:
        zvol_volmode                                      1
        l2arc_headroom                                    2
        dbuf_cache_max_shift                              5
        zfs_free_leak_on_eio                              0
        zfs_free_max_blocks                               100000
        zfs_read_chunk_size                               1048576
        metaslab_preload_enabled                          1
        zfs_dedup_prefetch                                0
        zfs_txg_history                                   0
        zfs_scrub_delay                                   4
        zfs_vdev_async_read_max_active                    3
        zfs_read_history                                  0
        zfs_arc_sys_free                                  0
        l2arc_write_max                                   8388608
        zil_slog_bulk                                     786432
        zfs_dbuf_state_index                              0
        zfs_sync_taskq_batch_pct                          75
        metaslab_debug_unload                             0
        zvol_inhibit_dev                                  0
        zfs_abd_scatter_enabled                           1
        zfs_arc_pc_percent                                0
        zfetch_max_streams                                8
        zfs_recover                                       0
        metaslab_fragmentation_factor_enabled             1
        zfs_deadman_checktime_ms                          5000
        zfs_sync_pass_rewrite                             2
        zfs_object_mutex_size                             64
        zfs_arc_min_prefetch_lifespan                     0
        zfs_arc_meta_prune                                10000
        zfs_read_history_hits                             0
        zfetch_max_distance                               8388608
        l2arc_norw                                        0
        zfs_dirty_data_max_percent                        10
        zfs_per_txg_dirty_frees_percent                   30
        zfs_arc_meta_min                                  0
        metaslabs_per_vdev                                200
        zfs_arc_meta_adjust_restarts                      4096
        spa_load_verify_maxinflight                       10000
        spa_load_verify_metadata                          1
        zfs_multihost_history                             0
        zfs_send_corrupt_data                             0
        zfs_delay_min_dirty_percent                       60
        zfs_vdev_sync_read_max_active                     10
        zfs_dbgmsg_enable                                 0
        zfs_metaslab_segment_weight_enabled               1
        zio_requeue_io_start_cut_in_line                  1
        l2arc_headroom_boost                              200
        zfs_zevent_cols                                   80
        zfs_dmu_offset_next_sync                          0
        spa_config_path                                   /etc/zfs/zpool.cache
        zfs_vdev_cache_size                               0
        dbuf_cache_hiwater_pct                            10
        zfs_multihost_interval                            1000
        zfs_multihost_fail_intervals                      5
        zio_dva_throttle_enabled                          1
        zfs_vdev_sync_write_min_active                    10
        zfs_vdev_scrub_max_active                         2
        ignore_hole_birth                                 1
        zvol_major                                        230
        zil_replay_disable                                0
        zfs_dirty_data_max_max_percent                    25
        zfs_expire_snapshot                               300
        zfs_sync_pass_deferred_free                       2
        spa_asize_inflation                               24
        dmu_object_alloc_chunk_shift                      7
        zfs_vdev_mirror_rotating_seek_offset              1048576
        l2arc_feed_secs                                   1
        zfs_autoimport_disable                            1
        zfs_arc_p_aggressive_disable                      1
        zfs_zevent_len_max                                64
        zfs_arc_meta_limit_percent                        75
        l2arc_noprefetch                                  1
        zfs_vdev_raidz_impl                               [fastest] original scalar sse2 ssse3 avx2
        zfs_arc_meta_limit                                0
        zfs_flags                                         0
        zfs_dirty_data_max_max                            2065346560
        zfs_arc_average_blocksize                         8192
        zfs_vdev_cache_bshift                             16
        zfs_vdev_async_read_min_active                    1
        zfs_arc_dnode_reduce_percent                      10
        zfs_free_bpobj_enabled                            1
        zfs_arc_grow_retry                                0
        zfs_vdev_mirror_rotating_inc                      0
        l2arc_feed_again                                  1
        zfs_vdev_mirror_non_rotating_inc                  0
        zfs_arc_lotsfree_percent                          10
        zfs_zevent_console                                0
        zvol_prefetch_bytes                               131072
        zfs_free_min_time_ms                              1000
        zfs_arc_dnode_limit_percent                       10
        zio_taskq_batch_pct                               75
        dbuf_cache_max_bytes                              104857600
        spa_load_verify_data                              1
        zfs_delete_blocks                                 20480
        zfs_vdev_mirror_non_rotating_seek_inc             1
        zfs_multihost_import_intervals                    10
        zfs_dirty_data_max                                826138624
        zfs_vdev_async_write_max_active                   10
        zfs_dbgmsg_maxsize                                4194304
        zfs_nocacheflush                                  0
        zfetch_array_rd_sz                                1048576
        zfs_arc_meta_strategy                             1
        zfs_dirty_data_sync                               67108864
        zvol_max_discard_blocks                           16384
        zvol_threads                                      32
        zfs_vdev_async_write_active_max_dirty_percent     60
        zfs_arc_p_dampener_disable                        1
        zfs_txg_timeout                                   5
        metaslab_aliquot                                  524288
        zfs_mdcomp_disable                                0
        zfs_vdev_sync_read_min_active                     10
        zfs_arc_dnode_limit                               0
        dbuf_cache_lowater_pct                            10
        zfs_abd_scatter_max_order                         10
        metaslab_debug_load                               0
        zfs_vdev_aggregation_limit                        131072
        metaslab_lba_weighting_enabled                    1
        zfs_vdev_scheduler                                noop
        zfs_vdev_scrub_min_active                         1
        zfs_no_scrub_io                                   0
        zfs_vdev_cache_max                                16384
        zfs_scan_idle                                     50
        zfs_arc_shrink_shift                              0
        spa_slop_shift                                    5
        zfs_vdev_mirror_rotating_seek_inc                 5
        zfs_deadman_synctime_ms                           1000000
        send_holes_without_birth_time                     1
        metaslab_bias_enabled                             1
        zvol_request_sync                                 0
        zfs_admin_snapshot                                1
        zfs_no_scrub_prefetch                             0
        zfs_metaslab_fragmentation_threshold              70
        zfs_max_recordsize                                1048576
        zfs_arc_min                                       1073741824
        zfs_nopwrite_enabled                              1
        zfs_arc_p_min_shift                               0
        zfs_multilist_num_sublists                        0
        zfs_vdev_queue_depth_pct                          1000
        zfs_mg_fragmentation_threshold                    85
        l2arc_write_boost                                 8388608
        zfs_prefetch_disable                              0
        l2arc_feed_min_ms                                 200
        zio_delay_max                                     30000
        zfs_vdev_write_gap_limit                          4096
        zfs_pd_bytes_max                                  52428800
        zfs_scan_min_time_ms                              1000
        zfs_resilver_min_time_ms                          3000
        zfs_delay_scale                                   500000
        zfs_vdev_async_write_active_min_dirty_percent     30
        zfs_vdev_sync_write_max_active                    10
        zfs_mg_noalloc_threshold                          0
        zfs_deadman_enabled                               1
        zfs_resilver_delay                                2
        zfs_metaslab_switch_threshold                     2
        zfs_arc_max                                       2147483648
        zfs_top_maxinflight                               32
        zfetch_min_sec_reap                               2
        zfs_immediate_write_sz                            32768
        zfs_vdev_async_write_min_active                   2
        zfs_sync_pass_dont_compress                       5
        zfs_vdev_read_gap_limit                           32768
        zfs_compressed_arc_enabled                        1
        zfs_vdev_max_active                               1000
 
1. They are normal SATA HP-branded 1 TB disks, so 4k I think.

for i in /dev/sd{a,b}; do smartctl -i $i | grep Sector; done

2. Default setting from the installer, so 12.

# zpool get ashift

3. Compression lz4; volblocksize I don't know how to check, anyway it is at the default.

zfs get volblocksize,compression

4. ARC limited to 1 GB min and 2 GB max.
arc_summary

---
ARC Size Breakdown:
Recently Used Cache Size: 83.81% 1.68 GiB
Frequently Used Cache Size: 16.19% 331.60 MiB
---
ARC Total accesses: 5.52m
Cache Hit Ratio: 37.29% 2.06m
Cache Miss Ratio: 62.71% 3.46m
Actual Hit Ratio: 23.96% 1.32m
---

As you can see, only 37% of the data is served from the ARC cache.

On my server with a 12 GB ARC the hit rate is 87%.

You can try disabling prefetch:
#echo 1 > /sys/module/zfs/parameters/zfs_prefetch_disable
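And if you can give the box more RAM later, the ARC limits are just ZFS module options; on Proxmox they normally go in /etc/modprobe.d/zfs.conf. A minimal sketch, assuming you settle on 4 GiB min / 8 GiB max (example values only, size them to what the host can spare):

Code:
# /etc/modprobe.d/zfs.conf -- example limits: 4 GiB min, 8 GiB max
options zfs zfs_arc_min=4294967296
options zfs zfs_arc_max=8589934592

With root on ZFS, run update-initramfs -u afterwards and reboot so the limits are applied at module load.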
 
For SATA disks, you're almost at the maximum IOPS for 7.2k rpm, so the disks are the bottleneck. Do you have 7.2k rpm or just 5.4k?

7.2K rpm.

I found the "problem". With my only VM set to 2 GB of RAM, after rebooting Proxmox and shutting the VM down and restarting it several times (something like 30), it is blazing fast (5 seconds to shut down and 10-15 seconds to reach the Windows password login prompt).

With the VM set to 4 GB of RAM and Proxmox rebooted, the machine starts fast the first time, but the next shutdown and restart of the VM takes 50 minutes.

So I came to the conclusion that the problem is not the SATA disks (a bare-metal install takes 2 minutes to boot) but too little RAM for ZFS, which turns into something weird: it probably keeps filling and flushing the ARC cache, slowing everything down.
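To check that theory, the ARC hit rate can be watched live while the VM boots. A quick sketch, assuming the tool is available (it may be named arcstat or arcstat.py depending on the ZFS on Linux version); the raw counters are always in /proc/spl/kstat/zfs/arcstats:

Code:
# live ARC size / hit / miss figures, one line per second
arcstat 1

# or read the raw counters directly
grep -E '^(size|c_max|hits|misses) ' /proc/spl/kstat/zfs/arcstats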
 
You are very distracted. Can you reply to the post above?
Sorry, missed your post above!

for i in /dev/sd{a,b}; do smartctl -i $i | grep Sector; done


Code:
Sector Sizes:     512 bytes logical, 4096 bytes physical
Sector Sizes:     512 bytes logical, 4096 bytes physical



# zpool get ashift

Code:
NAME   PROPERTY  VALUE   SOURCE
rpool  ashift    12      local

zfs get volblocksize,compression

Code:
NAME                                                  PROPERTY      VALUE     SOURCE
rpool                                                 volblocksize  -         -
rpool                                                 compression   on        local
rpool/ROOT                                            volblocksize  -         -
rpool/ROOT                                            compression   on        inherited from rpool
rpool/ROOT/pve-1                                      volblocksize  -         -
rpool/ROOT/pve-1                                      compression   on        inherited from rpool
rpool/data                                            volblocksize  -         -
rpool/data                                            compression   on        inherited from rpool
rpool/data/subvol-101-disk-1                          volblocksize  -         -
rpool/data/subvol-101-disk-1                          compression   on        inherited from rpool
rpool/data/vm-100-disk-2                              volblocksize  8K        default
rpool/data/vm-100-disk-2                              compression   on        inherited from rpool
rpool/data/vm-102-disk-1                              volblocksize  8K        default
rpool/data/vm-102-disk-1                              compression   on        inherited from rpool
rpool/data/vm-102-disk-1@installazione_aggiornamenti  volblocksize  -         -
rpool/data/vm-102-disk-1@installazione_aggiornamenti  compression   -         -
rpool/data/vm-102-state-installazione_aggiornamenti   volblocksize  8K        default
rpool/data/vm-102-state-installazione_aggiornamenti   compression   on        inherited from rpool
rpool/swap                                            volblocksize  4K        -
rpool/swap                                            compression   zle       local

 
Everything looks normal. I am not sure about the compression setting; I set mine to lz4 manually.

ZFS pool speed depends on the slowest disk's performance. Speeding it up requires a bigger ARC. When the ARC is cold, the disks get very busy. Some say it takes 2 days of warm-up to make the ARC 'hot'. ZFS's advantages have their own cost.
 
I have the same problem, but I'm using a single NVMe. It takes half an hour to boot a Win 10 VM; a similar Win 8 VM boots a lot faster...

Strange thing: if I run "pveperf" on the host while the Win 10 guest is booting, it starts immediately?!
After deactivating the qemu-agent it starts immediately as well.

edit: after changing from SPICE to virtio-gpu it boots faster even with the qemu-agent activated!
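For anyone wanting to try the same change: assuming a Proxmox VE version that supports the VirtIO-GPU display, it can be selected in the GUI under Hardware > Display, or from the CLI (the VM ID 100 below is only an example):

Code:
# switch the VM's display from SPICE/qxl to VirtIO-GPU (example VM ID)
qm set 100 --vga virtio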
 
See if it helps:
echo never > /sys/kernel/mm/transparent_hugepage/enabled
echo never > /sys/kernel/mm/transparent_hugepage/defrag
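Those echo commands only last until the next reboot. One way to make the setting persistent, assuming GRUB manages your kernel command line, is to add the parameter there and run update-grub:

Code:
# /etc/default/grub -- append to the existing default line, then: update-grub && reboot
GRUB_CMDLINE_LINUX_DEFAULT="quiet transparent_hugepage=never"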
Thank you, it did!

When one Windows VM with QXL drivers is running, any other Windows+QXL VM will have a very slow cold boot.
Pausing the running Windows+QXL VMs also helps.

Cheers
 
Hello,

I have the same problem: Windows is very slow to boot; it takes up to 30 minutes to boot a Windows machine.

I have also deactivated ballooning already.
 
