[SOLVED] Taking 40min to create a VM...

Nodemansland

New Member
Nov 1, 2024
So, as the title says, it's taking 40 minutes to create a VM and I can't figure out for the life of me why this is happening... I've attached an image with all the settings I'm using to create the VM... Any help would be greatly appreciated!
  • There is no hardware RAID
  • VM OS: Ubuntu 24.04

System Specs:
PowerEdge R730xd:
2 Intel(R) Xeon(R) CPU E5-2683 v4 @ 2.10GHz
256GB DDR4 2400MHz PC4-2400T ECC
12x 16TB Seagate IronWolf Pro NAS hard drives -- yes, I know they are not enterprise...
Not sure if this has to do with anything?
[Attached image: Screenshot 2025-04-05 161550.png]

System when creating the VM:


[Attached image: Screenshot 2025-04-05 160215.png]
VM Settings:

[Attached image: newvm.png]
 
I don't know how much data or how many files you need to write for the whole setup, but what do you expect with a single HDD?

Maybe try formatting your disk with LVM-thin instead of using qcow2 files for your VM.
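
If you want to go that route, a rough sketch of the steps - the disk path /dev/sdX and the names vg_hdd07 / data / hdd07-thin are placeholders, and this wipes the disk:

Code:
# Wipe the disk and put an LVM volume group on it.
sgdisk --zap-all /dev/sdX
pvcreate /dev/sdX
vgcreate vg_hdd07 /dev/sdX

# Create a thin pool from most of the free space in the volume group.
lvcreate --type thin-pool -l 90%FREE -n data vg_hdd07

# Register it in Proxmox as LVM-thin storage for VM disks.
pvesm add lvmthin hdd07-thin --vgname vg_hdd07 --thinpool data --content images,rootdir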
 
What is the Proxmox backend storage configuration for HDD07? Please show both an image from the GUI and the output of cat /etc/pve/storage.cfg
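
For reference, a directory storage entry in that file typically looks something like this (the storage ID and path here are only placeholders):

Code:
dir: HDD07
        path /mnt/pve/HDD07
        content images,iso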
 
Hi @Nodemansland ,

Based on your description, this is most likely due to IO delay. A healthy system shows an IO delay close to 0% most of the time. If you are seeing more than that, you might want to check the latency between your node and the storage; if it is more than 1 ms, you should upgrade your hardware or update your firmware.
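
One way to watch this is iostat from the sysstat package (not installed by default on every node); high %util and long await times on the HDD while the VM is being created would point at the disk as the bottleneck:

Code:
apt install sysstat
# Extended per-device statistics, refreshed every 2 seconds.
iostat -x 2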

Thank you
 
So, as the title says, it's taking 40 minutes to create a VM and I can't figure out for the life of me why this is happening

The basis of everything is "storage". A hypervisor needs many IOPS, and (only) using rotating rust is too slow for nearly everything nowadays. And you are using just one single spindle, as you mentioned.

Rebuild your system and use ZFS mirrors only = six mirror vdevs with two drives each. These six vdevs are (automatically) striped, giving you six times higher IOPS (and six times higher read bandwidth, while write bandwidth stays at "one").

If you have any chance, add a "special device" using another two small (100 GB is enough), high-quality SSDs/NVMe drives (with PLP), mirrored of course. Try hard to do this: if you have no additional slots, replace two of those large drives - that's what I would do. (Do not try to utilize an SLOG or a cache.) This will lift the perceived performance by another factor of three to ten - for most (but not all) use cases.

PS: make sure to run the controller in HBA mode; do not configure "single-drive" virtual disks inside the RAID controller. ZFS needs direct access to each physical drive.
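
For illustration only, such a pool could be created roughly like this - the pool name "tank" and all device paths are placeholders, use your own /dev/disk/by-id paths:

Code:
# Six striped mirror vdevs; ashift=12 for 4K-sector drives.
zpool create -o ashift=12 tank \
  mirror /dev/disk/by-id/ata-HDD01 /dev/disk/by-id/ata-HDD02 \
  mirror /dev/disk/by-id/ata-HDD03 /dev/disk/by-id/ata-HDD04 \
  mirror /dev/disk/by-id/ata-HDD05 /dev/disk/by-id/ata-HDD06 \
  mirror /dev/disk/by-id/ata-HDD07 /dev/disk/by-id/ata-HDD08 \
  mirror /dev/disk/by-id/ata-HDD09 /dev/disk/by-id/ata-HDD10 \
  mirror /dev/disk/by-id/ata-HDD11 /dev/disk/by-id/ata-HDD12

# Optional "special device": a mirrored pair of small SSDs/NVMe with PLP for metadata.
zpool add tank special mirror /dev/disk/by-id/nvme-SSD01 /dev/disk/by-id/nvme-SSD02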
 
Hi @Nodemansland ,

Based on your description, this is most likely due to IO delay. A healthy system shows an IO delay close to 0% most of the time. If you are seeing more than that, you might want to check the latency between your node and the storage; if it is more than 1 ms, you should upgrade your hardware or update your firmware.

Thank you
Thanks, I'll try @UdoB's suggestion below and see if that helps.
 
The basis of everything is "storage". A hypervisor needs many IOPS, and (only) using rotating rust is too slow for nearly everything nowadays. And you are using just one single spindle, as you mentioned.

Rebuild your system and use ZFS mirrors only = six mirror vdevs with two drives each. These six vdevs are (automatically) striped, giving you six times higher IOPS (and six times higher read bandwidth, while write bandwidth stays at "one").

If you have any chance, add a "special device" using another two small (100 GB is enough), high-quality SSDs/NVMe drives (with PLP), mirrored of course. Try hard to do this: if you have no additional slots, replace two of those large drives - that's what I would do. (Do not try to utilize an SLOG or a cache.) This will lift the perceived performance by another factor of three to ten - for most (but not all) use cases.

PS: make sure to run the controller in HBA mode; do not configure "single-drive" virtual disks inside the RAID controller. ZFS needs direct access to each physical drive.
Thanks @UdoB, I'll give this a shot. Unfortunately, I still have client data on here and won't be able to reconfigure everything for a week or two.
 
So based on your reply, it's a plain ~16 TB qcow2 image being created on ext4 directory storage located on the drive. The creation of the VM itself should not take that long, since this is just a sparse qcow2 image. I would check the cabling/seating of this drive.
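
A couple of quick, non-destructive checks (the device name /dev/sdX is a placeholder; smartmontools may need to be installed):

Code:
# Reallocated/pending sectors and interface error counters.
smartctl -a /dev/sdX

# Kernel messages about link resets, timeouts or I/O errors.
dmesg | grep -iE 'ata|sd[a-z]|error|reset'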
 
So based on your reply, it's a plain ~16 TB qcow2 image being created on ext4 directory storage located on the drive. The creation of the VM itself should not take that long, since this is just a sparse qcow2 image. I would check the cabling/seating of this drive.
It's weird because once the VM is up and running, the drive performance seems to be OK:

Code:
$ sudo hdparm -Tt /dev/sda

/dev/sda:
 Timing cached reads:   10038 MB in  1.99 seconds = 5043.41 MB/sec
 Timing buffered disk reads: 766 MB in  3.00 seconds = 255.12 MB/sec

$ dd if=/dev/zero of=/tmp/output bs=8k count=10k; rm -f /tmp/output
10240+0 records in
10240+0 records out
83886080 bytes (84 MB, 80 MiB) copied, 0.357659 s, 235 MB/s
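
For a more representative picture than a small dd from /dev/zero (which largely measures the page cache), a direct-I/O random-write test with fio could be run on the directory storage - the path and parameters below are only an example, and fio may need to be installed:

Code:
fio --name=randwrite --filename=/mnt/pve/HDD07/fio-test --size=1G \
    --rw=randwrite --bs=4k --iodepth=16 --ioengine=libaio --direct=1 \
    --runtime=30 --time_based --group_reporting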
 
Assuming that the disk has enough usable space and was wiped/clean before being set up as a Proxmox directory storage, it is indeed strange behavior.
 
Hi,
by default, qcow2 images allocate all of their metadata up front, so for a huge image on consumer-grade disks this can take a while. If you don't want that, you can change the preallocation setting in the storage configuration. It's a trade-off: the preallocation only has to be done once, during creation, and will help performance later. If you don't plan to re-create the disks very often, it might be better to keep the default setting.
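
For example, assuming the storage in question is the directory storage "HDD07" (the ID is a placeholder), the setting can be changed on the CLI - if your Proxmox version exposes the option there - or directly in /etc/pve/storage.cfg:

Code:
# Disable up-front allocation for newly created images on this storage.
pvesm set HDD07 --preallocation off

# Resulting entry in /etc/pve/storage.cfg (roughly):
# dir: HDD07
#         path /mnt/pve/HDD07
#         content images
#         preallocation off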
 
Hi,
by default, qcow2 images allocate all of their metadata up front, so for a huge image on consumer-grade disks this can take a while. If you don't want that, you can change the preallocation setting in the storage configuration. It's a trade-off: the preallocation only has to be done once, during creation, and will help performance later. If you don't plan to re-create the disks very often, it might be better to keep the default setting.
Thank you @fiona! This solved the issue! Can you elaborate on what the drawback of disabling preallocation is?

[Attached image: Screenshot 2025-04-07 095955.png]
 
Can you elaborate on what the drawback of disabling preallocation is?
From the official QEMU documentation:
preallocation
Preallocation mode (allowed values: off, metadata, falloc, full). An image with preallocated metadata is initially larger but can improve performance when the image needs to grow. falloc and full preallocations are like the same options of raw format, but sets up metadata also.
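
To illustrate the difference (the file names and the 16T size are arbitrary examples):

Code:
# No preallocation: the file is created almost instantly, metadata is allocated lazily.
qemu-img create -f qcow2 -o preallocation=off test-off.qcow2 16T

# Metadata preallocation (the default discussed above): slower to create on HDDs,
# but avoids that allocation work later while the guest writes data.
qemu-img create -f qcow2 -o preallocation=metadata test-metadata.qcow2 16T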