PVE 5.0 Problems during virtual disk movement

andy77

Hello @all,

I am having problems when moving a virtual disk from an NFS storage to a thin LVM storage (SSD).

After starting the move (which takes around 25min for 25GB, quite long for a 1Gb network), I get problems with all VMs on that node. It seems the VMs lose connection for a short time. For example, the RDP connection to a VM will be disconnected without reason.

In the syslog of the node I see the following error:
Code:
 VM 120 qmp command failed - VM 120 qmp command 'balloon' failed - got timeout

This line appears multiple times, with all VM IDs on the node.
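
To see how often each VM is hit, I count the messages in the journal like this (the time window is just an example):
Code:
# count balloon timeouts per VM ID in the last hour
journalctl --since "1 hour ago" | grep -o "VM [0-9]* qmp command 'balloon' failed" | sort | uniq -c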

Any idea why this happens? Sure, the VMs get slower because of the IO delay, but copying 25GB in 25min should not be a problem on SSDs.

I have similar clusters with PVE 4.4 that do not show these errors, so I am wondering if this is a problem with version 5.

Regards
Andy
 
Hi,

where are the storages of these VMs located?
I mean the VMs where you get the error.
 
To explain my environment, I have a PVE 5.0 cluster with two nodes.

Node1 has SATA HDDs and additionally runs an NFS server; this storage is added to the cluster as NFS, so it is available on every node.

Node2 has SATA SSDs configured as thin LVM, plus the NFS storage from node1.

My VM template is on node1 on the NFS storage.

Now I clone the VM template from node1 to node2, and then move the virtual disks on node2 from the NFS storage to the local thin LVM.
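
For reference, the commands I run look roughly like this (the template ID 9000 is just an example; 120 and the storage name vmdata are from my setup):
Code:
# full clone of the template over to node2 (NFS is shared, so --target works)
qm clone 9000 120 --full --target node2
# move the new VM's disk from the NFS storage to the local thin LVM
qm move_disk 120 scsi0 vmdata --delete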

The error is shown in the syslog of node2.
 
andy77 said:
To explain my environment, I have a PVE 5.0 cluster with two nodes. Node1 has SATA HDDs and additionally runs an NFS server; this storage is added to the cluster as NFS, so it is available on every node.

Hi,
I think you know that this is a bad idea from a redundancy point of view. If you have trouble with node1, you also have trouble with all other nodes.

But back to the issue: plain SATA disks?
It sounds to me like the disks are busy with IO, so that a) not much IO bandwidth is left for the NFS copy (25GB in 25min is approx. 17MB/s), and b) the IO left for the clients is not high enough; perhaps you can see a higher load in other VMs running on the same storage due to iowait.
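
You can watch this on the node with iostat from the sysstat package, for example:
Code:
# extended per-device stats every 2 seconds; look at %util and await
iostat -x 2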

Udo
 
This NFS storage is only for backups and cloning VM templates, so it is not a problem when it goes down, because the live VMs run on nodes with local thin-LVM-configured SSDs. I do not use the NFS storage for running live VMs; as I said, it is only for transferring VMs or doing backups.

On node2, where the problems occur, the drives are SATA III SSDs with 500MB/s read/write, so I think it should not be a problem to copy a VM over 1Gb Ethernet, which maxes out at around 100MB/s anyway.
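
To rule out the SSDs, I could benchmark the thin pool directly with a throwaway volume, something like this (the VG name vmdata and pool name data are guesses; lvs shows the real names):
Code:
# create a 1G thin volume just for testing (check VG/pool names with 'lvs')
lvcreate -V 1G -T vmdata/data -n iotest
# sequential write test with direct IO
fio --name=seqwrite --filename=/dev/vmdata/iotest --rw=write --bs=1M --size=1G --direct=1 --ioengine=libaio
# clean up the test volume
lvremove -y vmdata/iotest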

Or do I miss something?
 
I have the same problem when doing a restore, migration or clone (5-node cluster).
I never had these issues before and changed no hardware; I have been using Proxmox since early v3.
I backed everything up, installed v5 on the existing hardware, then restored the VMs.
 
I have now tried using ionice to lower the IO priority, but I still have the same problems: running KVM guests on the node get stuck, drop connections, etc.

Code:
ionice -c3 qm move_disk 120 scsi0 vmdata
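
From what I read (this is just my understanding, I may be wrong), ionice only has an effect with the CFQ scheduler, and for a running VM the copy is done by the kvm process itself via a QEMU mirror job, not by the qm command, which would explain why it changes nothing. The active scheduler can be checked like this:
Code:
# the scheduler in brackets is the active one, e.g. [cfq] or [deadline]; sda is an example device
cat /sys/block/sda/queue/scheduler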
 
After having these troubles I tried ZFS instead of thin LVM, and the problems do not occur with ZFS.
With ZFS, all the move, restore, etc. tasks from ZFS to NFS and vice versa work incredibly fast (10 times faster than with thin-lvm volumes).

So my way to go now with Proxmox 5.1 is ZFS.
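
For anyone going the same route, this is roughly how I set up the ZFS storage (pool name, layout and device names are just examples, not my exact config):
Code:
# mirrored pool on the two SSDs (device names are examples)
zpool create -o ashift=12 tank mirror /dev/sdb /dev/sdc
# register it with Proxmox as a zfspool storage
pvesm add zfspool vmdata-zfs --pool tank --sparse 1 --content images,rootdir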
 
