Proxmox is slow to create and install VM

johnnyb

New Member
Sep 30, 2014
Hello,

I'm running Proxmox installed on top of Debian Wheezy.

The server is an Intel Core i5 at 3.1 GHz with 16 GB RAM and RAID 1.

I have 3 instances like this.

Sometimes, when I create a simple VM (30 GB HDD), PVE takes a long time to create it, and the Debian installation inside the VM is then very slow, even though CPU and memory are not overloaded.

I checked the I/O performance, but I don't see any problem with I/O (I'm using the VIRTIO drivers).
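For anyone repeating this check, a quick way to see whether the disks are really the bottleneck while the install runs is something like this (a sketch; `iotop` may need to be installed separately):

```shell
# Extended per-device stats every 2 seconds; high await/%util with low
# throughput points at the disk rather than CPU or memory
iostat -x 2

# CPU vs I/O-wait overview (watch the "wa" column)
vmstat 2

# Show only processes currently doing I/O (requires the iotop package)
iotop -o
```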

Do you have any idea?

thank you

Code:
proxmox-ve-2.6.32: 3.1-114 (running kernel: 2.6.32-26-pve)
pve-manager: 3.1-21 (running version: 3.1-21/93bf03d4)
pve-kernel-2.6.32-26-pve: 2.6.32-114
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.5-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.0-2
pve-cluster: 3.0-8
qemu-server: 3.1-8
pve-firmware: 1.0-23
libpve-common-perl: 3.0-8
libpve-access-control: 3.0-7
libpve-storage-perl: 3.0-17
pve-libspice-server1: 0.12.4-2
vncterm: 1.1-4
vzctl: 4.0-1pve4
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 1.4-17
ksm-control-daemon: 1.1-1
glusterfs-client: 3.4.1-1
 
Hello,

Thank you for your answers.

@Tom:

1 - I upgraded from 3.1 to 3.3.

2 - This is my /proc/mounts:
rootfs / rootfs rw 0 0
/dev/root / xfs rw,relatime,attr2,inode64,noquota 0 0
devtmpfs /dev devtmpfs rw,relatime,size=32968872k,nr_inodes=8242218,mode=755 0 0
tmpfs /run tmpfs rw,nosuid,noexec,relatime,size=6600464k,mode=755 0 0
tmpfs /run/lock tmpfs rw,nosuid,nodev,noexec,relatime,size=5120k 0 0
proc /proc proc rw,nosuid,nodev,noexec,relatime 0 0
sysfs /sys sysfs rw,nosuid,nodev,noexec,relatime 0 0
tmpfs /dev/shm tmpfs rw,nosuid,nodev,noexec,relatime,size=13410220k 0 0
devpts /dev/pts devpts rw,nosuid,noexec,relatime,gid=5,mode=620 0 0
/dev/mapper/vg-home /home xfs rw,relatime,attr2,inode64,noquota 0 0
/dev/mapper/vg-var /var xfs rw,relatime,attr2,inode64,noquota 0 0
rpc_pipefs /var/lib/nfs/rpc_pipefs rpc_pipefs rw,relatime 0 0
/dev/fuse /etc/pve fuse rw,nosuid,nodev,relatime,user_id=0,group_id=0,default_permissions,allow_other 0 0

- My pveperf output:
pveperf /var/lib/vz
CPU BOGOMIPS: 59202.56
REGEX/SECOND: 1316498
HD SIZE: 1646.86 GB (/dev/mapper/vg-var)
BUFFERED READS: 140.87 MB/sec
AVERAGE SEEK TIME: 11.96 ms
FSYNCS/SECOND: 88.89
DNS EXT: 24.78 ms
DNS INT: 3.69 ms (ovh.net)

- My VM config (qm config):
boot: cdn
bootdisk: virtio0
cores: 2
ide2: local:iso/debian-7.6.0-amd64-netinst.iso,media=cdrom
keyboard: fr
machine: pc-i440fx-1.4
memory: 4096
name: bluemind
net0: virtio=8A:3B:93:89:69:B7,bridge=vmbr1
ostype: l26
parent: blumeindsnap
sockets: 2
virtio0: local:102/vm-102-disk-1.qcow2,format=qcow2,size=32G


@mmenaz

Yes, virtualization is enabled. I have 40 VMs on 3 Proxmox instances running on Core i7 dedicated servers, so I think virtualization is OK :)

I had to change the cache mode: with Writethrough enabled, I/O performance is very slow; with No cache it is much better (40 MB/s vs 170 MB/s in writes).
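For reference, the cache mode can be switched on the existing disk with `qm set`, re-specifying the volume together with the option (VM 102 and the volume spec are taken from the config above):

```shell
# Set the virtio disk of VM 102 to cache=none (takes effect on next VM start)
qm set 102 --virtio0 local:102/vm-102-disk-1.qcow2,cache=none
```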

The server is not overloaded and disk space is fine (Intel Core i7 / 64 GB RAM / 1 TB HDD in RAID 1).

Sometimes the Debian installation is very slow during the "Unpacking the base system" stage, which is very strange; I can see it on all 3 of my independent dedicated servers.

Thank you for your help
 
Hi,
Are you using xfs instead of ext3/4?
Could it be that you are using mdraid?

Udo
 
Hello,

Thank you for your reply Udo

- Yes, I always use the XFS filesystem on my Debian servers. Is it not recommended with Proxmox?
- Yes, I have software RAID with mdadm.
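For anyone checking the same setup, the state of an mdadm array can be inspected like this (the /dev/md0 name is just an example):

```shell
# Summary of all md arrays, their members, and any ongoing resync
cat /proc/mdstat

# Detailed state of a single array (device name is an example)
mdadm --detail /dev/md0
```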

thank you
 
Your FSYNCS/SECOND: 88.89 is *really low*. On a simple SATA disk with ext3 and no barriers (the default) you should get about 800-900, and a WD VelociRaptor about 1300. With ext4 and barriers enabled I also get a very low result; setting nobarrier increases performance a lot, at the risk of data loss in case of power failure (read up on this subject, since ext3 in Proxmox has no barriers by default, and it is not clear whether ext4 is really riskier or whether it was just a matter of an old ext4 bug).
So fine-tune your XFS, or buy a real hardware RAID controller with a BBU and enable writeback on it.
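As a sketch, disabling barriers on the XFS /var mount from the output above would look like this (assumption: only safe with a BBU-backed controller or reliable power, since a power loss can corrupt the filesystem):

```shell
# /etc/fstab entry for /var with XFS write barriers disabled
# WARNING: nobarrier risks data loss on power failure without a BBU
/dev/mapper/vg-var  /var  xfs  rw,relatime,attr2,inode64,nobarrier  0 0
```

It can be applied without a reboot with `mount -o remount,nobarrier /var`.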
 
Thank you mmenaz for your help.

My dedicated servers use software RAID, so I have to find a way to migrate to new servers with a hardware RAID controller.

So in the end, Proxmox should always be deployed on an infrastructure with a hardware RAID controller.

Thanks to all of you for your help.

Regards
 
