Hi,
First of all, apologies if this question has already been answered, but I haven't found the answer in the archives.
Also I'm rather a noob :-/
Here is my situation:
I'm running PVE on Debian 8 on 2 hosts:
I have HOST1 with 3 VMs (.raw) running, and HOST2 where I back up those 3 VMs every night (via NFS), then I restore them (but keep them stopped).
So if HOST1 fails, I'll just have to start the VMs on HOST2.
So I back up the 3 VMs using a PVE task (snapshot). Everything is working as expected.
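For reference, the backup task is just the standard PVE vzdump job; the equivalent command line would be roughly this (the VM IDs and the storage name are only examples, not my exact setup):

vzdump 101 102 103 --mode snapshot --storage nfs-backup --compress lzo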
Then, on the backup host, I'm using a cron script (sketched after the list below) which:
- erases the old stopped VMs if found (qm destroy)
- restores some new VMs (qmrestore, no fancy options)
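In case it matters, here is roughly what that script does (a simplified sketch, not my exact script; the VM IDs and the NFS dump path are examples):

#!/bin/bash
for VMID in 101 102 103; do
    # newest vzdump archive for this VM on the NFS share (path is an example)
    ARCHIVE=$(ls -t /mnt/pve/nfs-backup/dump/vzdump-qemu-${VMID}-*.vma.lzo | head -n 1)
    # erase the old stopped VM if found
    qm status ${VMID} >/dev/null 2>&1 && qm destroy ${VMID}
    # restore the new one, no fancy options
    qmrestore ${ARCHIVE} ${VMID}
done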
This has been working like a charm for months.
Then, 2 days ago, I made a Debian update, starting with HOST2.
I went from:
- pve-manager 4.1-4/ccba54b0 to 4.1-13/cfb599fb
- kernel 4.2.6-1-pve to 4.2.8-1-pve
And now my qmrestore is waaaaay slower, so I checked the logs and I found:
map 'drive-virtio0' to '/var/lib/vz/images/101/vm-101-disk-1.raw' (write zeros = 1)
whereas before it was (write zeros = 0).
Then I checked the restored VMs' sizes, and yes, they now have the full size I gave to their HDs, whereas before the restored VMs kept the dynamic (sparse) size of the original VMs.
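For anyone who wants to check the same thing on their side: comparing the apparent size with what is actually allocated shows whether the raw image is still sparse (the path here is the one from my log):

ls -lh /var/lib/vz/images/101/vm-101-disk-1.raw         # apparent size, as configured
du -h /var/lib/vz/images/101/vm-101-disk-1.raw          # space actually allocated on disk
qemu-img info /var/lib/vz/images/101/vm-101-disk-1.raw  # shows "virtual size" vs "disk size"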
So my guess is that some default setting has changed in qmrestore, but I can't find where (and I don't see why).
Could anybody give me some help, please?