Strange problem with Proxmox - Please help

sohaib

Well-Known Member
When I try to download something from inside a VM, or transfer data from my NAS drive to a VM, the network of that particular VM stops working, so I have to stop the VM and start it again; then it works fine. I am not sure why this is happening - it started after I updated my Proxmox yesterday.

Code:
pve-manager: 2.2-32 (pve-manager/2.2/3089a616)
running kernel: 2.6.32-17-pve
proxmox-ve-2.6.32: 2.2-83
pve-kernel-2.6.32-10-pve: 2.6.32-63
pve-kernel-2.6.32-17-pve: 2.6.32-83
lvm2: 2.02.95-1pve2
clvm: 2.02.95-1pve2
corosync-pve: 1.4.4-1
openais-pve: 1.1.4-2
libqb: 0.10.1-2
redhat-cluster-pve: 3.1.93-2
resource-agents-pve: 3.9.2-3
fence-agents-pve: 3.1.9-1
pve-cluster: 1.0-34
qemu-server: 2.0-71
pve-firmware: 1.0-21
libpve-common-perl: 1.0-40
libpve-access-control: 1.0-25
libpve-storage-perl: 2.0-36
vncterm: 1.0-3
vzctl: 4.0-1pve2
vzprocps: 2.0.11-2
vzquota: 3.1-1
pve-qemu-kvm: 1.3-10
ksm-control-daemon: 1.1-1
 
Code:
Dec 21 12:26:46 NOD1 rrdcached[1620]: rotating journals
Dec 21 12:26:46 NOD1 rrdcached[1620]: started new journal /var/lib/rrdcached/journal//rrd.journal.1356110806.831595
Dec 21 12:26:46 NOD1 rrdcached[1620]: removing old journal /var/lib/rrdcached/journal//rrd.journal.1356103606.831539
Dec 21 12:33:52 NOD1 pvedaemon[12633]: <root@pam> successful auth for user 'root@pam'
Dec 21 12:36:38 NOD1 pvedaemon[2100]: worker 12648 finished
Dec 21 12:36:38 NOD1 pvedaemon[2100]: starting 1 worker(s)
Dec 21 12:36:38 NOD1 pvedaemon[2100]: worker 13256 started
Dec 21 12:36:46 NOD1 kernel: device-mapper: snapshots: Invalidating snapshot: Unable to allocate exception.
Dec 21 12:37:15 NOD1 kernel: Buffer I/O error on device dm-2, logical block 0
Dec 21 12:37:15 NOD1 kernel: lost page write due to I/O error on dm-2
Dec 21 12:37:15 NOD1 kernel: EXT3-fs (dm-2): I/O error while writing superblock
Dec 21 12:37:17 NOD1 pvedaemon[12321]: ERROR: Backup of VM 100 failed - command '/usr/lib/qemu-server/vmtar  '/mnt/pve/WDigital/dump/vzdump-qemu-100-2012_12_21-12_13_21.tmp/qemu-server.conf' 'qemu-server.conf' '/mnt/vzsnap0/images/100/vm-100-disk-1.raw' 'vm-disk-ide0.raw'|lzop >/mnt/pve/WDigital/dump/vzdump-qemu-100-2012_12_21-12_13_21.tar.dat' failed: exit code 255
Dec 21 12:37:17 NOD1 pvedaemon[12321]: INFO: Backup job finished with errors
Dec 21 12:37:17 NOD1 pvedaemon[12321]: job errors
Dec 21 12:37:22 NOD1 pvedaemon[2100]: worker 12633 finished
Dec 21 12:37:22 NOD1 pvedaemon[2100]: starting 1 worker(s)
Dec 21 12:37:22 NOD1 pvedaemon[2100]: worker 13316 started
Dec 21 12:40:57 NOD1 pvedaemon[2100]: worker 12792 finished
Dec 21 12:40:57 NOD1 pvedaemon[2100]: starting 1 worker(s)
Dec 21 12:40:57 NOD1 pvedaemon[2100]: worker 13451 started
Dec 21 12:48:52 NOD1 pvedaemon[13451]: <root@pam> successful auth for user 'root@pam'
Dec 21 12:53:57 NOD1 pvedaemon[2100]: worker 13316 finished
Dec 21 12:53:57 NOD1 pvedaemon[2100]: starting 1 worker(s)
Dec 21 12:53:57 NOD1 pvedaemon[2100]: worker 13897 started
Dec 21 12:54:18 NOD1 pvedaemon[2100]: worker 13256 finished
Dec 21 12:54:18 NOD1 pvedaemon[2100]: starting 1 worker(s)
Dec 21 12:54:18 NOD1 pvedaemon[2100]: worker 13908 started
Dec 21 12:58:07 NOD1 pvedaemon[2100]: worker 13451 finished
Dec 21 12:58:07 NOD1 pvedaemon[2100]: starting 1 worker(s)
Dec 21 12:58:07 NOD1 pvedaemon[2100]: worker 14048 started
Dec 21 13:03:53 NOD1 pvedaemon[14048]: <root@pam> successful auth for user 'root@pam'
Dec 21 13:11:11 NOD1 pvedaemon[2100]: worker 13897 finished
Dec 21 13:11:11 NOD1 pvedaemon[2100]: starting 1 worker(s)
Dec 21 13:11:11 NOD1 pvedaemon[2100]: worker 14501 started
Dec 21 13:12:52 NOD1 pvedaemon[2100]: worker 13908 finished
Dec 21 13:12:52 NOD1 pvedaemon[2100]: starting 1 worker(s)
Dec 21 13:12:52 NOD1 pvedaemon[2100]: worker 14556 started
Dec 21 13:14:47 NOD1 pvedaemon[2100]: worker 14048 finished
Dec 21 13:14:47 NOD1 pvedaemon[2100]: starting 1 worker(s)
Dec 21 13:14:47 NOD1 pvedaemon[2100]: worker 14627 started
Dec 21 13:17:01 NOD1 /USR/SBIN/CRON[14709]: (root) CMD (   cd / && run-parts --report /etc/cron.hourly)
Dec 21 13:18:53 NOD1 pvedaemon[14501]: <root@pam> successful auth for user 'root@pam'
Dec 21 13:26:46 NOD1 pmxcfs[1635]: [dcdb] notice: data verification successful
Dec 21 13:26:46 NOD1 rrdcached[1620]: flushing old values
Dec 21 13:26:46 NOD1 rrdcached[1620]: rotating journals
Dec 21 13:26:46 NOD1 rrdcached[1620]: started new journal /var/lib/rrdcached/journal//rrd.journal.1356114406.831556
Dec 21 13:26:46 NOD1 rrdcached[1620]: removing old journal /var/lib/rrdcached/journal//rrd.journal.1356107206.831540
Dec 21 13:27:49 NOD1 pvedaemon[2100]: worker 14501 finished
Dec 21 13:27:49 NOD1 pvedaemon[2100]: starting 1 worker(s)
Dec 21 13:27:49 NOD1 pvedaemon[2100]: worker 15091 started
Dec 21 13:30:20 NOD1 pvedaemon[2100]: worker 14556 finished
Dec 21 13:30:20 NOD1 pvedaemon[2100]: starting 1 worker(s)
Dec 21 13:30:20 NOD1 pvedaemon[2100]: worker 15240 started
Dec 21 13:33:54 NOD1 pvedaemon[15091]: <root@pam> successful auth for user 'root@pam'
Dec 21 13:39:18 NOD1 pvedaemon[15566]: starting vnc proxy UPID:NOD1:00003CCE:002220F9:50D4ACD6:vncproxy:100:root@pam:
Dec 21 13:39:18 NOD1 pvedaemon[14627]: <root@pam> starting task UPID:NOD1:00003CCE:002220F9:50D4ACD6:vncproxy:100:root@pam:
Dec 21 13:39:18 NOD1 pvedaemon[15240]: <root@pam> successful auth for user 'root@pam'
Dec 21 13:39:20 NOD1 pvedaemon[14627]: <root@pam> end task UPID:NOD1:00003CCE:002220F9:50D4ACD6:vncproxy:100:root@pam: OK
Dec 21 13:39:26 NOD1 pvedaemon[2100]: worker 14627 finished
Dec 21 13:39:26 NOD1 pvedaemon[2100]: starting 1 worker(s)
Dec 21 13:39:26 NOD1 pvedaemon[2100]: worker 15586 started
Dec 21 13:39:26 NOD1 pvedaemon[15240]: <root@pam> starting task UPID:NOD1:00003CE8:00222474:50D4ACDE:qmstart:106:root@pam:
Dec 21 13:39:26 NOD1 pvedaemon[15592]: start VM 106: UPID:NOD1:00003CE8:00222474:50D4ACDE:qmstart:106:root@pam:
Dec 21 13:39:27 NOD1 kernel: device tap106i0 entered promiscuous mode
Dec 21 13:39:27 NOD1 kernel: vmbr0: port 6(tap106i0) entering forwarding state
Dec 21 13:39:27 NOD1 kernel: device tap106i1 entered promiscuous mode
Dec 21 13:39:27 NOD1 kernel: vmbr0: port 8(tap106i1) entering forwarding state
Dec 21 13:39:28 NOD1 pvedaemon[15240]: <root@pam> end task UPID:NOD1:00003CE8:00222474:50D4ACDE:qmstart:106:root@pam: OK
Dec 21 13:39:37 NOD1 kernel: tap106i0: no IPv6 routers present
Dec 21 13:39:37 NOD1 kernel: tap106i1: no IPv6 routers present
Dec 21 13:40:42 NOD1 pvedaemon[15671]: starting vnc proxy UPID:NOD1:00003D37:002241F3:50D4AD2A:vncproxy:106:root@pam:
Dec 21 13:40:42 NOD1 pvedaemon[15586]: <root@pam> starting task UPID:NOD1:00003D37:002241F3:50D4AD2A:vncproxy:106:root@pam:
Dec 21 13:40:42 NOD1 pvedaemon[15240]: <root@pam> successful auth for user 'root@pam'
Dec 21 13:40:45 NOD1 pvedaemon[15240]: <root@pam> successful auth for user 'root@pam'
Dec 21 13:41:47 NOD1 ntpd[1584]: Listen normally on 33 tap106i0 fe80::54e3:bcff:febc:9ede UDP 123
Dec 21 13:41:47 NOD1 ntpd[1584]: Listen normally on 34 tap106i1 fe80::1c7d:54ff:fe3c:1664 UDP 123

The operating system is Windows 2008 R2 - it's a fresh install with updates, and I was going to set up Exchange Server.
 
Code:
Dec 21 12:36:46 NOD1 kernel: device-mapper: snapshots: Invalidating snapshot: Unable to allocate exception.
Dec 21 12:37:15 NOD1 kernel: Buffer I/O error on device dm-2, logical block 0
Dec 21 12:37:15 NOD1 kernel: lost page write due to I/O error on dm-2
Dec 21 12:37:15 NOD1 kernel: EXT3-fs (dm-2): I/O error while writing superblock
Dec 21 12:37:17 NOD1 pvedaemon[12321]: ERROR: Backup of VM 100 failed - command '/usr/lib/qemu-server/vmtar  '/mnt/pve/WDigital/dump/vzdump-qemu-100-2012_12_21-12_13_21.tmp/qemu-server.conf' 'qemu-server.conf' '/mnt/vzsnap0/images/100/vm-100-disk-1.raw' 'vm-disk-ide0.raw'|lzop >/mnt/pve/WDigital/dump/vzdump-qemu-100-2012_12_21-12_13_21.tar.dat' failed: exit code 255
Dec 21 12:37:17 NOD1 pvedaemon[12321]: INFO: Backup job finished with errors
Dec 21 12:37:17 NOD1 pvedaemon[12321]: job errors

You seem to have a data disk which is broken (/var/lib/vz).
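To see which volume dm-2 actually maps to, and whether the underlying disk reports errors, something like this should work (the device name /dev/sdb below is only a placeholder; smartctl comes from the smartmontools package):

Code:
# list device-mapper names with their (major, minor) numbers - dm-2 is minor 2
dmsetup ls
# show which physical disk each logical volume sits on
lvs -o +devices
# then check the SMART health of the disk you identified
smartctl -a /dev/sdb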
 
It's happening on all VMs, not just this one - any clue how I can fix this?
 
The same way you always fix a bad disk: rescue as much valuable data as possible and replace the disk with a new one.
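If the disk still responds, a tool like GNU ddrescue can copy off what is still readable before you swap it out (a sketch - /dev/sdb as the failing disk and /dev/sdc as the replacement are just example names):

Code:
# GNU ddrescue is packaged as 'gddrescue' on Debian-based hosts
apt-get install gddrescue
# first pass: copy all readable areas, keeping a log so the run can be resumed
ddrescue -f -n /dev/sdb /dev/sdc /root/rescue.log
# second pass: retry the bad areas up to 3 times
ddrescue -f -r3 /dev/sdb /dev/sdc /root/rescue.log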

So you are saying that one particular VM whose drive is messed up is causing network problems on all the VMs?
 
Exactly, thank you - that is because of the backup. I am sure this network issue is not related to that.
 

It looks like your backup disk is having I/O problems. I find that occasionally, when a backup fails (disk issue or network disruption), the VM gets stuck until it is either shut down or you use the qm unlock <vmid> command.

Try running diagnostics on your backup disk. Going by your mount point, it's a Western Digital device. Their diagnostics software is called Data Lifeguard Diagnostics. There is a Windows application as well as a bootable DOS disk.
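For example, something like this should clear a stuck VM without rebooting the node (VM ID 100 is taken from your log; /dev/sdb is a placeholder for your backup disk):

Code:
# remove the stale backup lock left behind by the failed vzdump run
qm unlock 100
# quick health check of the backup disk from the Proxmox host (smartmontools)
smartctl -a /dev/sdb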
 
Code:
kernel: device-mapper: snapshots: Invalidating snapshot: Unable to allocate exception.

You ran out of snapshot space!
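If that is the case, you can give vzdump a bigger LVM snapshot; something like this should work (4096 MB is just an example value, and 'WDigital' is the storage name from your log):

Code:
# one-off backup of VM 100 with a larger LVM snapshot (size is in MB)
vzdump 100 --size 4096 --storage WDigital
# or make it the default for all backup jobs by adding to /etc/vzdump.conf:
# size: 4096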
 
I have just tested this with my new machine - I downloaded the Proxmox installer fresh, and after upgrading Proxmox I ran into the same problem: while transferring data from the NAS drive, the network connection of my VM gets disconnected.

This is now happening on all my VMs and I am not in a position to reinstall Proxmox again. This is 100% a bug, otherwise I would not be able to reproduce this error.
 
Same problem here after upgrading from PVE 2.1 to PVE 2.2.
So far it happens only with Windows machines (Seven and 2000 Server) when there is considerable network load.
 

Change your network adapter from Realtek to E1000 and it should be fixed.
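You can do that in the GUI under the VM's Hardware tab, or on the CLI; for example (VM 106 and vmbr0 are taken from the log above, the MAC address is a placeholder - keep your existing one):

Code:
# show the current NIC line of the VM config
grep ^net /etc/pve/qemu-server/106.conf
# e.g.: net0: rtl8139=DE:AD:BE:EF:12:34,bridge=vmbr0
# switch the model to e1000, keeping the same MAC and bridge
qm set 106 -net0 e1000=DE:AD:BE:EF:12:34,bridge=vmbr0

As far as I know, the VM has to be powered off and started again (not just rebooted from inside the guest) for the new NIC model to take effect, and Windows will then detect the Intel E1000 adapter.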
 
