Didn't mean that there wasn't any, just that the relevant one was gone... I have another stalled backup running right now. It started at 6:46 am but hasn't written to the backup file since 8:53 am:
atom3:/var/log/vzdump# cat qemu-201.log
Sep 21 06:45:01 INFO: Starting Backup of VM 201...
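For what it's worth, this is roughly how I check whether a run is actually hung rather than just slow (the backup path and archive name here are examples from my setup):

# is the vzdump process still alive and accumulating CPU time?
ps aux | grep vzdump
# has the target archive grown since the last write?
ls -l /mnt/backup/vzdump-qemu-201-*.tgz
# is the LVM snapshot vzdump created still active?
lvs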
Long story short:
We had 3 Proxmox servers in a customer environment, all running v1.9.
We had an issue with backups to an NFS filer taking a long time, which we both posted about here and opened a support ticket for. The 'fix' was to move to the following kernel:
atom1:/# pveversion -v
pve-manager...
Re: proxmox 1.9 vzbackup failing with snapshot error: INACTIVE destination for /dev..
So interestingly enough... I added the line:
atom2:/etc# cat vzdump.conf
size: 30720
And since then the last 2 backups have completed successfully...
The actual TGZ file being created is about 100G... as...
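To be clear on what that line does: 'size' is the LVM snapshot size in MB, so 30720 gives vzdump a 30 GB copy-on-write area to absorb writes that happen during the backup. As far as I can tell it ends up as something like this under the hood (VG and snapshot names are illustrative):

lvcreate --size 30720M --snapshot --name vzsnap-atom2-0 /dev/pve/data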
Re: unable to start server: unable to create socket - PVE::APIDaemon: Address already
I have 3 different networks in use on the server. eth0 is used for 'infrastructure' and is the NIC/interface used to connect to the server for ssh, the web interface, etc. It's one of the onboard NICs (Intel)...
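For context, the relevant part of my /etc/network/interfaces looks roughly like this (addresses changed, the other two NICs omitted):

auto vmbr0
iface vmbr0 inet static
        address 10.9.0.5
        netmask 255.255.255.0
        gateway 10.9.0.1
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0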
Re: proxmox 1.9 vzbackup failing with snapshot error: INACTIVE destination for /dev..
Looking at this more closely, though... the 'failed' snapshot was the same size as the actual LV in the first place:
LV Size 150.00 GB
It's 150 GB for both the original and the snapshot LV.
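A quick way to see the snapshot state and how full its copy-on-write space is (the usage column is named Snap% or Data% depending on LVM version):

lvs
# Attr starting with lowercase 's' = valid snapshot,
# uppercase 'S' = snapshot that has been invalidated (e.g. it overflowed)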
Re: proxmox 1.9 vzbackup failing with snapshot error: INACTIVE destination for /dev..
Ok, I'll give that a try. It was working before, and occasionally it still works, but mostly it doesn't.
Re: proxmox 1.9 vzbackup failing with snapshot error: INACTIVE destination for /dev..
Sorry, I guess I should have said vzdump... the built-in backup facility in Proxmox.
The snapshot keeps failing and essentially hanging the vzdump process:
atom2:/var/log/vzdump# ps aux | grep vzb
root...
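To clear a hung run I've been doing roughly this (the PID and snapshot name are examples from my boxes):

# stop the stuck backup job
kill <vzdump pid>
# remove the leftover snapshot so the next run can create a fresh one
lvremove /dev/pve/vzsnap-atom2-0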
Re: unable to start server: unable to create socket - PVE::APIDaemon: Address already
I'm currently having a similar issue with V1.9 where I can't log in to the web interface. After looking through logs and threads I've also tried to restart pvedaemon, and it won't stop:
Stopping PVE daemon...
It times out with the same credentials I'm using for ssh... and those used to work fine for the web interface.
hyper4:/etc/pve# pveversion -v
pve-manager: 1.9-26 (pve-manager/1.9/6567)
running kernel: 2.6.32-6-pve
proxmox-ve-2.6.32: 1.9-55
pve-kernel-2.6.32-4-pve: 2.6.32-33...
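What I'm planning to try next, for what it's worth (command names from memory, so treat this as a sketch):

# see whether a stale pvedaemon is still holding its socket
ps aux | grep pvedaemon
netstat -tlnp
# if the init script can't stop it, kill the stale process and start fresh
kill <stale pid>
/etc/init.d/pvedaemon start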
I have looked around and have not found any instructions on how to move Hyper-V VMs to Proxmox. I have looked at the docs for Migration of servers to Proxmox VE, but there is nothing there about moving a VM to Proxmox, only a live server.
So my question is: can you move or migrate the VM files over, or...
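In case it helps anyone else searching: my understanding is that you can convert the Hyper-V disk offline with qemu-img and attach it to a new VM (paths and VMID here are examples):

qemu-img convert -f vpc -O raw /path/to/disk.vhd /var/lib/vz/images/101/vm-101-disk-1.raw

I'd attach it as an IDE disk first, since a Windows guest coming from Hyper-V won't have virtio drivers installed yet.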
Thanks Dietmar,
I did that and it worked for 1 and 2... it did unmount on 3 but didn't clear up the issue. I ended up having to manually migrate the servers off 3 and then reboot it. The only thing I noticed is that the load average was still quite high on 3 after unmounting the failed NFS...
So all 3 of these servers had an NFS mount from a filer that has failed and been removed. Doing a umount -f <mount point> cleared up hyper1 and hyper2 but not 3... I didn't mention that we were unable to connect to the web interface on any of these either.
We can now get to the web...
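For the record, what I ran on the stubborn node (mount point illustrative):

# force the unmount first, then lazy-detach it if processes are still blocked on it
umount -f /mnt/pve/filer1
umount -l /mnt/pve/filer1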
Since 4:15 PM yesterday the logs on all 3 nodes in my cluster are filling up with:
MASTER:
Dec 20 06:24:06 hyper1 pvemirror[3373]: starting cluster syncronization
Dec 20 06:24:16 hyper1 pvemirror[3373]: syncing vzlist from '10.9.0.8' failed: 500 read timeout
Dec 20 06:24:26 hyper1...
and VMs are unusable.
Strangely, with only one VM on the hypervisor it backs up fine. As soon as we added a second VM it no longer backs up correctly.
Using sar, it looks to me like the bottleneck is WRC (wrc = percentage of RPC calls of type Write Cache).
It's pegged at 100% while doing the...
Thanks for all the info. I did an lvdisplay to determine what LVs were there, then did an lvremove of the snapshot, and was then able to remove the VM.
Thanks!
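For anyone hitting the same thing, the sequence was roughly this (the snapshot name is an example; yours will match your hostname):

lvdisplay
# find the leftover vzdump snapshot in the output, then:
lvremove /dev/pve/vzsnap-atom2-0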