vzdump using lvm snapshot - kills the box

please note: we recently changed the default IO scheduler from cfq to deadline. switch back and see if this helps.
 
hi tom,

martin said in the bugtracker that it is not a bug, but since you said you changed the default in a newer version, we think it is a bug.

we would like to help solve this. can you please tell us what we must do to change the scheduler back / downgrade it? we would test it for you.

thanks and regards
 
There are 2 ways to change the IO scheduler:

1)
You can change it on the fly; the change is temporary and only lasts until the next reboot.
1.1) Find out the hard drive device name in /dev/ (the first hard drive is usually sda, the second sdb, ... run ls /dev/sd* to find out).
1.2) Check that you found the correct device name by looking up its current IO scheduler: cat /sys/block/DEVICENAME/queue/scheduler (e.g. cat /sys/block/sda/queue/scheduler). The output should be: noop anticipatory [deadline] cfq
The scheduler in the [] brackets is the active one.
1.3) To change it for this session, do echo 'cfq' > /sys/block/DEVICENAME/queue/scheduler (e.g. echo 'cfq' > /sys/block/sda/queue/scheduler)
1.4) Check the change with cat /sys/block/DEVICENAME/queue/scheduler (e.g. cat /sys/block/sda/queue/scheduler). The output should now be: noop anticipatory deadline [cfq] (both methods are summarized in the sketch after step 2.4)

2) To change the IO scheduler permanently, you have to edit /etc/default/grub
2.1) Add elevator=cfq to the GRUB_CMDLINE_LINUX_DEFAULT line, so it looks like:
GRUB_CMDLINE_LINUX_DEFAULT="quiet elevator=cfq"
2.2) Run update-grub
2.3) You have to reboot the server for it to take effect, or apply 1) now for the current session
2.4) To undo the permanent change, just remove elevator=cfq from /etc/default/grub and run update-grub again
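
For reference, here is the whole procedure condensed into a few commands. This is only a sketch: it assumes the system disk is /dev/sda and an otherwise default /etc/default/grub, so adjust the device name and the GRUB line to your setup.

# temporary, until the next reboot
cat /sys/block/sda/queue/scheduler          # shows e.g. "noop anticipatory [deadline] cfq"
echo cfq > /sys/block/sda/queue/scheduler   # activate cfq on this device
cat /sys/block/sda/queue/scheduler          # should now show "noop anticipatory deadline [cfq]"

# permanent, via the kernel command line -- in /etc/default/grub set:
#   GRUB_CMDLINE_LINUX_DEFAULT="quiet elevator=cfq"
update-grub                                 # regenerate the grub configuration
reboot                                      # or apply the temporary change above for this session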

Please let me know if it has any effect on the error.

Sven
 
hi,

thanks for this tutorial.

we have tested it on 2 hp servers with this issue. we changed the io scheduler permanently, and in test backups it works fine for us, without kills.
can other users please test it too and report back on the status? many thanks.

@tom, we have the problem on all hp servers (g5, g6, g8 series). if other users have the same success, can you please include a fix for this issue in the next update?

regards
 
I switched the scheduler as indicated in the post above; however, I didn't restart the server. I am not aware of how to restart ProxMox entirely without a reboot. Anyway, last night at midnight my backups started running. Proxmox backed up 3 openvz containers without issue, and then, for whatever reason, one of our qemu VMs started shooting up in load. As this is what has happened in the past, I had htop / top / atop running trying to figure out what was going on. Other VMs on the server stopped being responsive. I manually killed the runaway VM via kill -9 (trying to stop it via the Proxmox GUI or the command line resulted in a connection timeout). The load on the box continued to escalate above 100, 200, etc., and there was nothing listed in the process table in any of my utils or in a ps aux that could explain the increasing system load. I eventually had to do what I always do: log in to the remote power strips and force a hard reboot on the affected server.

This has happened several times with Prox 3.0.x when trying to back up to local storage (ext4), an iSCSI mount (ext4), and an NFS shared mount. It seems to possibly be dying when attempting to start a backup of a live qemu VM using "snapshot" mode (I would prefer not to suspend the VM if possible).
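
For anyone who wants to reproduce this outside the nightly schedule, a single snapshot-mode backup can be kicked off manually from the command line. This is just a sketch; the VMID 100 and the storage name "local" are placeholders for your own values:

vzdump 100 --mode snapshot --storage local --compress lzo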

I have other Proxmox 3.0.23 servers that do not exhibit this behavior using the default install, backing up qemu / openVZ VMs to local storage formatted ext3.

Now that the server has rebooted and everything is using the CFQ scheduler, I will see how it behaves and report back what I find. It may be related to ext4 or LVM snapshots, but I am not certain at this point.

Thanks,
Joe

 
hi @subversion,

we have only ext3 filesystems with nfs backup storage. we are currently testing whether this change solves the issue on all servers or only on the test server. can you please tell us whether your backup ran normally tonight, without high io load? many thanks.

we will keep you informed about our tests. thanks

does anybody else have success with this change, or do you still have the same issue? please post an answer, @other users.

regards
 
Update -

I tried running backups again last night on ProxMox 3.0.23, only on openVZ containers, which caused the exact same problem. System load skyrocketed and I was forced to reboot the machine to get it running properly again. Dumping the backup logs and dmesg / kernel logs shows extensive issues in quota / ext4 / journaling.

Logs / dmesg and strace output from the affected server:
http://pastebin.com/NMXUc4Ru
http://pastebin.com/R6ZTvQj5
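
For anyone collecting similar evidence: the relevant kernel messages can be filtered out of dmesg while the backup runs, and vzdump also writes a per-VM log next to each dump archive. A sketch, assuming the default local dump directory:

dmesg | grep -iE 'ext4|jbd2|quota|blocked for more than'   # journaling / quota / hung-task messages
ls /var/lib/vz/dump/*.log                                  # per-VM vzdump logs stored next to the archives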

The first two VMs (both openVZ), 1001 and 1002, seem to back up fine; it's when it hits VM 1003 that things go haywire. VM 1003 is an OpenVZ container, 60GB, running the Ubuntu 12.10 template from the OpenVZ repository. We have other VMs running this same template that back up just fine on other ProxMox 3.0.23 boxes.

The biggest difference between the working servers and this failing one is that this server uses an EXT4 filesystem:

Failed server EXT4 / iSCSI mounts:
root@xx5:~# mount | grep ext
/dev/mapper/pve-root on / type ext3 (rw,relatime,errors=remount-ro,user_xattr,acl,barrier=0,data=ordered)
/dev/sda1 on /boot type ext3 (rw,relatime,errors=continue,user_xattr,acl,barrier=0,data=ordered)
/dev/mapper/xx5-vz on /var/lib/vz type ext4 (rw,noatime,barrier=1,nodelalloc,data=ordered,_netdev)

All the running VMs are on an iSCSI SAN mounted EXT4 filesystem. If you look at the logs I linked above there are tons of kernel errors / issues apparently related to EXT4 in this configuration.

This Proxmox server was cleanly installed just a few weeks ago, and the iSCSI mount was set up to provide the filesystem for /var/lib/vz (ext4), a setup we are using with success on several ProxMox 1.9 servers (these are being upgraded to 3.0.x as time permits; XX5 was the very first go at it).
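
For completeness, the fstab entry behind that /var/lib/vz mount presumably looks something like the line below (a sketch reconstructed from the mount output above; adjust the device name and options to your own setup):

/dev/mapper/xx5-vz  /var/lib/vz  ext4  noatime,nodelalloc,_netdev  0  2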

Example Proxmox 1.9 working mounts (ext3 and ext4):
xx1:~# mount | grep ext
/dev/mapper/pve-root on / type ext3 (rw,errors=remount-ro)
/dev/sda1 on /boot type ext3 (rw)
/dev/mapper/xx1-vz on /var/lib/vz type ext4 (rw,_netdev,noatime,nodelalloc)
/dev/mapper/xx1-vzsnap--xx1--0 on /mnt/vzsnap0 type ext4 (rw)

Example Proxmox 3.0.23 working mounts (all ext3):
root@prox1:~# mount | grep ext
/dev/mapper/pve-root on / type ext3 (rw,relatime,errors=remount-ro,user_xattr,acl,barrier=0,data=ordered)
/dev/mapper/pve-data on /var/lib/vz type ext3 (rw,relatime,errors=continue,user_xattr,acl,barrier=0,data=ordered)
/dev/sda1 on /boot type ext3 (rw,relatime,errors=continue,user_xattr,acl,barrier=0,data=ordered)



I am not sure what can be done to correct this outside of moving all the VMs and reformatting the iSCSI volume to ext3, which is painful, will take some time, and may not even fix the issue. Per the prior recommendation we are still running the CFQ scheduler.

Any help would be appreciated, hope this helps others.

Cheers,
Joe Jenkins

 
we both hope to get a solution from the proxmox team. the problem is, we opened a bug and the team lead closed it, stating it is not a bug. but if many people have this same issue, we think it is a bug or a configuration problem in the new version 3.*.


regards
 
Clearly the problem is with vzdump (and possibly with certain types of RAID5).
Until that is fixed we could see if we can change the way vzdump does the snapshots.
Is there a way to manually define what device vzdump uses for backup?
 
Clearly the problem is with vzdump (and possibly with certain types of RAID5).

no, one user had faulty firmware on their adaptec raid controller, and others also reported a fix using the latest drivers for their HP controllers (they now run 2.6.32-22-107; see the update sketch at the end of this post).

Until that is fixed we could see if we can change the way vzdump does the snapshots.
Is there a way to manually define what device vzdump uses for backup?

no.
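
For reference, picking up the newer kernel mentioned above is a normal package update. A sketch, assuming the standard Proxmox package repositories are already configured:

apt-get update
apt-get install pve-kernel-2.6.32-22-pve   # the 2.6.32-22-107 kernel referenced above
reboot                                     # needed to actually boot into the new kernel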
 
This is likely not a vzdump problem (although it looks related), since vzdump is a userland process which is probably unable to cause a system-wide IO freeze by itself.

The problem most likely lies in the kernel, and is influenced by several factors:
- raid controller (several raid controllers exhibited the problem, to varying degrees)
- logical volume manager (so far, lvm only)
- filesystem (ext3 less likely, ext4 more likely)
- io scheduler (deadline and cfq, so probably scheduler is not related)
- vzdump (problem shows up during vzdump backups, surely related)
- specific OpenVZ VE's (simfs ?)

For us it looks like this: vzdump snapshot backups start at 11 pm, by 3am they reach VE 215, and the entire system freezes (all disk IO stops, load keeps climbing to the sky). If VE 215 is moved off the server then the problem does not appear.

We run Proxmox VE 3.0 on Core i7 servers, Adaptec 6805E RAID10, ext4 filesystem, deadline scheduler.
 
We are using Dell servers - the servers that are failing are using Dell PERC H700 RAID controllers with write / read caching enabled. We have a cluster of 6 new Dell 620 servers that use the Dell PERC H310 controller - these servers are working fine and backups are happening without any issue. Below I will post the details of the working and non-working hardware.

We have tried both schedulers with no luck. I did install the latest kernel on the failing server and will reboot it this weekend, attempt backups again, and post my results. Thanks everyone for trying to sort this problem out; it's a painful one. Hopefully this information will be useful to someone else for comparison. I have also posted other kernel logs / info in this and other threads. I agree with the ProxMox team that it now appears to be kernel / RAID or filesystem driver related. I am digging more into that as well.

Cheers,
Joe Jenkins

All Servers are DELL -
FAILING SERVER - please note it is now running the new kernel / pve released a day or two ago - I have NOT yet rebooted the box to retest VMs, but will soon!
megaraid_sas 0000:05:00.0: irq 100 for MSI/MSI-X
ahci 0000:00:11.0: version 3.0
alloc irq_desc for 22 on node 0
alloc kstat_irqs on node 0
ahci 0000:00:11.0: PCI INT A -> GSI 22 (level, low) -> IRQ 22
ahci 0000:00:11.0: AHCI 0001.0100 32 slots 4 ports 3 Gbps 0xf impl SATA mode
ahci 0000:00:11.0: flags: 64bit ncq sntf ilck pm led clo pmp pio slum part ccc
scsi1 : ahci
scsi2 : ahci
scsi3 : ahci
scsi4 : ahci
ata1: SATA max UDMA/133 abar m1024@0xef2ff800 port 0xef2ff900 irq 22
ata2: SATA max UDMA/133 abar m1024@0xef2ff800 port 0xef2ff980 irq 22
ata3: SATA max UDMA/133 abar m1024@0xef2ff800 port 0xef2ffa00 irq 22
ata4: SATA max UDMA/133 abar m1024@0xef2ff800 port 0xef2ffa80 irq 22
megasas_init_mfi: fw_support_ieee=67108864
megasas: INIT adapter done
scsi0 : LSI SAS based MegaRAID driver
scsi 0:0:0:0: Direct-Access SEAGATE ST9146803SS FS64 PQ: 0 ANSI: 5
Refined TSC clocksource calibration: 1900.022 MHz.
Switching to clocksource tsc
scsi 0:0:1:0: Direct-Access SEAGATE ST9146803SS FS64 PQ: 0 ANSI: 5
scsi 0:0:2:0: Direct-Access SEAGATE ST9146803SS FS64 PQ: 0 ANSI: 5
scsi 0:0:32:0: Enclosure DP BACKPLANE 1.07 PQ: 0 ANSI: 5
scsi 0:2:0:0: Direct-Access DELL PERC H700 2.10 PQ: 0 ANSI: 5
sd 0:2:0:0: [sda] 570949632 512-byte logical blocks: (292 GB/272 GiB)
sd 0:2:0:0: [sda] Write Protect is off
sd 0:2:0:0: [sda] Mode Sense: 1f 00 00 08
sd 0:2:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
sda: sda1 sda2
sd 0:2:0:0: [sda] Attached SCSI disk

pve-manager: 3.0-23 (pve-manager/3.0/957f0862)
running kernel: 2.6.32-20-pve
proxmox-ve-2.6.32: 3.0-107
pve-kernel-2.6.32-22-pve: 2.6.32-107
pve-kernel-2.6.32-20-pve: 2.6.32-100
lvm2: 2.02.95-pve3
clvm: 2.02.95-pve3
corosync-pve: 1.4.5-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.0-1
pve-cluster: 3.0-4
qemu-server: 3.0-20
pve-firmware: 1.0-23
libpve-common-perl: 3.0-4
libpve-access-control: 3.0-4
libpve-storage-perl: 3.0-8
vncterm: 1.1-4
vzctl: 4.0-1pve3
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 1.4-13
ksm-control-daemon: 1.1-1

primary mounts:
/dev/sda1 on /boot type ext3 (rw,relatime,errors=continue,user_xattr,acl,barrier=0,data=ordered)
/dev/mapper/pve-root on / type ext3 (rw,relatime,errors=remount-ro,user_xattr,acl,barrier=0,data=ordered)
/dev/fuse on /etc/pve type fuse (rw,nosuid,nodev,relatime,user_id=0,group_id=0,default_permissions,allow_other)
/dev/mapper/xx5-vz on /var/lib/vz type ext4 (rw,noatime,barrier=1,nodelalloc,data=ordered,_netdev)


All VMs run on an iSCSI mount / ext4 (see last line above)

----------------------------------------------------------------------------------------------------

WORKING SERVERS - Dell 620, brand new, no issues with backups as of yet.
scsi1 : ahci
scsi2 : ahci
scsi3 : ahci
scsi4 : ahci
scsi5 : ahci
scsi6 : ahci
ata1: SATA max UDMA/133 abar m2048@0xdf8ff000 port 0xdf8ff100 irq 105
ata2: SATA max UDMA/133 abar m2048@0xdf8ff000 port 0xdf8ff180 irq 105
ata3: SATA max UDMA/133 abar m2048@0xdf8ff000 port 0xdf8ff200 irq 105
ata4: SATA max UDMA/133 abar m2048@0xdf8ff000 port 0xdf8ff280 irq 105
ata5: SATA max UDMA/133 abar m2048@0xdf8ff000 port 0xdf8ff300 irq 105
ata6: SATA max UDMA/133 abar m2048@0xdf8ff000 port 0xdf8ff380 irq 105
megasas_init_mfi: fw_support_ieee=67108864
megasas: INIT adapter done
scsi0 : LSI SAS based MegaRAID driver
scsi 0:0:0:0: Direct-Access ATA ST9250610NS AA09 PQ: 0 ANSI: 5
scsi 0:0:1:0: Direct-Access ATA ST9250610NS AA09 PQ: 0 ANSI: 5
scsi 0:0:2:0: Direct-Access ATA ST9250610NS AA09 PQ: 0 ANSI: 5
scsi 0:0:3:0: Direct-Access ATA ST9250610NS AA09 PQ: 0 ANSI: 5
scsi 0:0:32:0: Enclosure DP BP12G+ 1.00 PQ: 0 ANSI: 5
scsi 0:2:0:0: Direct-Access DELL PERC H310 2.12 PQ: 0 ANSI: 5
sd 0:2:0:0: [sda] 974651392 512-byte logical blocks: (499 GB/464 GiB)
sd 0:2:0:0: [sda] Write Protect is off
sd 0:2:0:0: [sda] Mode Sense: 1f 00 10 08
sd 0:2:0:0: [sda] Write cache: disabled, read cache: disabled, supports DPO and FUA
sda: sda1 sda2
sd 0:2:0:0: [sda] Attached SCSI disk

pve-manager: 3.0-23 (pve-manager/3.0/957f0862)
running kernel: 2.6.32-20-pve
proxmox-ve-2.6.32: 3.0-100
pve-kernel-2.6.32-20-pve: 2.6.32-100
lvm2: 2.02.95-pve3
clvm: 2.02.95-pve3
corosync-pve: 1.4.5-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.0-1
pve-cluster: 3.0-4
qemu-server: 3.0-20
pve-firmware: 1.0-22
libpve-common-perl: 3.0-4
libpve-access-control: 3.0-4
libpve-storage-perl: 3.0-8
vncterm: 1.1-4
vzctl: 4.0-1pve3
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 1.4-13
ksm-control-daemon: 1.1-1

primary mounts:
/dev/sda1 on /boot type ext3 (rw,relatime,errors=continue,user_xattr,acl,barrier=0,data=ordered)
/dev/mapper/pve-root on / type ext3 (rw,relatime,errors=remount-ro,user_xattr,acl,barrier=0,data=ordered)
/dev/mapper/pve-data on /var/lib/vz type ext3 (rw,relatime,errors=continue,user_xattr,acl,barrier=0,data=ordered)
/dev/fuse on /etc/pve type fuse (rw,nosuid,nodev,relatime,user_id=0,group_id=0,default_permissions,allow_other)
All VMs on this server run on a local EXT3 mount on the RAID controller (3rd line of mounts above)

---------------------------------------
 
Hello!

Here, with a Dell R420 + PERC H710 Mini and the latest -107 kernel uploaded yesterday, the server crashed again last night. The server contains 3 openvz CTs.
 
Dell R620 + PERC H710 with kernel 2.6.32-107: no problem here.
We use KVM only.
 
Same here: Dell T420 + PERC H710 on the latest kernel 2.6.32-107, and an openVZ snapshot backup killed the server again last night.
 
Hello, more on this. It may be a direction worth looking into.

I have 2 Dell R420 servers with the same config. The first one contains 3 large openvz CTs and was crashing. I removed the nightly backup script, and there are no more crashes. The 3 CTs contain tens of thousands of files (small jpg and text cache files). The second Dell R420 server has 7 small openvz CTs. This one is not crashing.
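
In case it helps others isolate the trigger: if the nightly backup was configured through the PVE GUI, it can be disabled without deleting the job by commenting out its line in the generated cron file; a custom script would instead live in your own crontab. A sketch with placeholder schedule and options:

# /etc/pve/vzdump.cron -- comment out the job line to skip the nightly run
#0 23 * * *           root vzdump --quiet 1 --mode snapshot --all 1 --storage backup-nfs --compress lzo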
 