new kernel oops

gray_graff

Guest
hi all,
I get a reproducible kernel oops on the latest kernel:
2.6.32-10-pve - works fine
2.6.32-11-pve - oops
Server: IBM eServer xSeries 346 (the same migration works fine on two HP servers)
The oops happens every time I do a live migration of an OpenVZ container from an HP server to the IBM one.
Storage: local
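Not part of the report itself, but since -10 works and -11 oopses, one hypothetical workaround while this is investigated is to keep booting the known-good kernel via GRUB. The menu-entry title and GRUB 2 setup below are assumptions for a Debian Squeeze / PVE 2.0 install:

```shell
# Workaround sketch (assumptions: GRUB 2, entry title as shown):
# keep booting the known-good 2.6.32-10-pve kernel.

# List the installed PVE kernels:
dpkg -l 'pve-kernel-*' | grep '^ii'

# Pin the default entry in /etc/default/grub, e.g.:
#   GRUB_DEFAULT="Proxmox VE, with Linux 2.6.32-10-pve"
# then regenerate the boot config:
update-grub
```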

# pveversion --verbose
pve-manager: 2.0-59 (pve-manager/2.0/18400f07)
running kernel: 2.6.32-10-pve
proxmox-ve-2.6.32: 2.0-66
pve-kernel-2.6.32-10-pve: 2.6.32-63
pve-kernel-2.6.32-11-pve: 2.6.32-66
lvm2: 2.02.88-2pve2
clvm: 2.02.88-2pve2
corosync-pve: 1.4.3-1
openais-pve: 1.1.4-2
libqb: 0.10.1-2
redhat-cluster-pve: 3.1.8-3
resource-agents-pve: 3.9.2-3
fence-agents-pve: 3.1.7-2
pve-cluster: 1.0-26
qemu-server: 2.0-38
pve-firmware: 1.0-15
libpve-common-perl: 1.0-26
libpve-access-control: 1.0-18
libpve-storage-perl: 2.0-17
vncterm: 1.0-2
vzctl: 3.0.30-2pve2
vzprocps: 2.0.11-2
vzquota: 3.0.12-3
pve-qemu-kvm: 1.0-9
ksm-control-daemon: 1.1-1

IMG_20120413_171605.jpg
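The attachment above is a photo of the oops on screen. A sketch of capturing the full oops as plain text instead, using the kernel's netconsole module (the interface name and the receiver's IP 192.168.2.50 are assumptions, only the sender's IP is from the thread):

```shell
# Sketch, not from the thread: stream kernel messages (including the oops)
# over UDP to a second machine, so the full trace can be pasted as text.

# On the crashing IBM node (source IP from the thread, target IP assumed):
modprobe netconsole netconsole=6666@192.168.2.99/eth0,6666@192.168.2.50/

# On the receiving machine, listen for the UDP log stream:
nc -l -u -p 6666 | tee oops.txt
```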
 
What OS template do you use? Is it reproducible when you create a new container using the same template?
 

centos-5-standard_5.6-1_i386.tar.gz and debian-6.0-standard_6.0-4_i386.tar.gz
It is always reproducible.
Note that it only happens when local storage is selected.
If I use a Directory storage on GlusterFS, everything works fine.

Log from the web UI; the oops occurs at "restore container state":
Apr 16 13:14:29 starting migration of CT 107 to node 'proxmox-ibm1' (192.168.2.99)
Apr 16 13:14:29 container is running - using online migration
Apr 16 13:14:29 starting rsync phase 1
Apr 16 13:14:29 # /usr/bin/rsync -aH --delete --numeric-ids --sparse /var/lib/vz/private/107 root@192.168.2.99:/var/lib/vz/private
Apr 16 13:14:50 start live migration - suspending container
Apr 16 13:14:50 dump container state
Apr 16 13:14:50 copy dump file to target node
Apr 16 13:14:50 starting rsync (2nd pass)
Apr 16 13:14:50 # /usr/bin/rsync -aH --delete --numeric-ids /var/lib/vz/private/107 root@192.168.2.99:/var/lib/vz/private
Apr 16 13:14:51 dump 2nd level quota
Apr 16 13:14:51 copy 2nd level quota to target node
Apr 16 13:14:52 initialize container on remote node 'proxmox-ibm1'
Apr 16 13:14:52 initializing remote quota
Apr 16 13:14:53 turn on remote quota
Apr 16 13:14:53 load 2nd level quota
Apr 16 13:14:53 starting container on remote node 'proxmox-ibm1'
Apr 16 13:14:53 restore container state
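The steps in the log (rsync, suspend, dump, second rsync, restore) match the standard OpenVZ online-migration sequence, so a sketch of reproducing it from the shell, outside the web UI, to rule out the GUI layer (CT ID and target IP taken from the log; vzmigrate usage is an assumption, as PVE normally drives the migration itself):

```shell
# Sketch: run the same online migration with the stock OpenVZ tool.
# 107 is the container ID and 192.168.2.99 the target node from the log.
vzmigrate --online -v 192.168.2.99 107
```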

--
I used Google Translate.
 
