New 3.10.0 Kernel

Hi,

Same error here with a Dell PE R620 server, with an H710P RAID controller and eight 500 GB NearLine SAS drives in RAID 10.
At boot it says 'no controller found', then 'LVM volume not found' and 'volume pve-root not found'.

In the rescue shell (busybox), running 'lvm vgchange -a -y' and then pressing Ctrl-D lets the system boot normally.
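
For anyone hitting the same prompt, a minimal sketch of those rescue-shell steps (vgchange -ay is the more common spelling of the same activation command):
Code:
# at the initramfs (busybox) prompt, activate all LVM volume groups by hand
lvm vgchange -ay
# then press Ctrl-D (exit) to let the boot continue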

Editing /etc/default/grub and adding "scsi_mod.scan=sync" to GRUB_CMDLINE_LINUX_DEFAULT (after quiet), then running update-grub, did the trick; it rebooted without problems.
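
For reference, a minimal sketch of that change, assuming the stock default line (your existing options may differ):
Code:
# /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet scsi_mod.scan=sync"
# regenerate grub.cfg and reboot
update-grub
reboot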

Thanks for the hint!

Note: this is a fresh 3.2 install, using the stock kernel from the repository, 3.10.0-1.

Hi,
just installed 3.10.0-4 on a test system and the volume group was also not found.
scsi_mod.scan=sync alone doesn't change anything, but with rootdelay=10 (both parameters together) it works.

The system is installed on an SSD connected directly to the motherboard.
Code:
lspci
...
00:11.0 SATA controller: Advanced Micro Devices, Inc. [AMD/ATI] SB7x0/SB8x0/SB9x0 SATA Controller [IDE mode]
...
Udo
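
For reference, a sketch of the same /etc/default/grub change with both parameters added (existing options may differ per system; run update-grub afterwards):
Code:
GRUB_CMDLINE_LINUX_DEFAULT="quiet scsi_mod.scan=sync rootdelay=10"
update-grub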
 
A quick note on the 3.10 kernel: when I try to install iscsitarget-dkms I get errors; it looks like a good match for https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=696383

I think this is all we need: http://sourceforge.net/p/iscsitarget/patches/22/

Code:
root@neon:~# apt-get install iscsitarget iscsitarget-dkms
Reading package lists... Done
Building dependency tree
Reading state information... Done
Recommended packages:
  iscsitarget-module
The following NEW packages will be installed:
  iscsitarget iscsitarget-dkms
0 upgraded, 2 newly installed, 0 to remove and 6 not upgraded.
Need to get 0 B/154 kB of archives.
After this operation, 512 kB of additional disk space will be used.
Selecting previously unselected package iscsitarget.
(Reading database ... 68741 files and directories currently installed.)
Unpacking iscsitarget (from .../iscsitarget_1.4.20.2-10.1_amd64.deb) ...
Selecting previously unselected package iscsitarget-dkms.
Unpacking iscsitarget-dkms (from .../iscsitarget-dkms_1.4.20.2-10.1_all.deb) ...
Processing triggers for man-db ...
Setting up iscsitarget (1.4.20.2-10.1) ...
iscsitarget not enabled in "/etc/default/iscsitarget", not starting... ... (warning).
Setting up iscsitarget-dkms (1.4.20.2-10.1) ...


Creating symlink /var/lib/dkms/iscsitarget/1.4.20.2/source ->
                 /usr/src/iscsitarget-1.4.20.2


DKMS: add completed.


Kernel preparation unnecessary for this kernel.  Skipping...


Building module:
cleaning build area....
make KERNELRELEASE=3.10.0-4-pve -C /lib/modules/3.10.0-4-pve/build M=/var/lib/dkms/iscsitarget/1.4.20.2/build....(bad exit status: 2)
Error! Bad return status for module build on kernel: 3.10.0-4-pve (x86_64)
Consult /var/lib/dkms/iscsitarget/1.4.20.2/build/make.log for more information.

Code:
root@neon:~# cat /var/lib/dkms/iscsitarget/1.4.20.2/build/make.log
DKMS make.log for iscsitarget-1.4.20.2 for kernel 3.10.0-4-pve (x86_64)
Thu Aug 21 15:56:19 EDT 2014
make: Entering directory `/usr/src/linux-headers-3.10.0-4-pve'
  LD      /var/lib/dkms/iscsitarget/1.4.20.2/build/built-in.o
  LD      /var/lib/dkms/iscsitarget/1.4.20.2/build/kernel/built-in.o
  CC [M]  /var/lib/dkms/iscsitarget/1.4.20.2/build/kernel/tio.o
  CC [M]  /var/lib/dkms/iscsitarget/1.4.20.2/build/kernel/iscsi.o
  CC [M]  /var/lib/dkms/iscsitarget/1.4.20.2/build/kernel/nthread.o
  CC [M]  /var/lib/dkms/iscsitarget/1.4.20.2/build/kernel/wthread.o
/var/lib/dkms/iscsitarget/1.4.20.2/build/kernel/wthread.c: In function ‘worker_thread’:
/var/lib/dkms/iscsitarget/1.4.20.2/build/kernel/wthread.c:76:3: error: implicit declaration of function ‘get_io_context’ [-Werror=implicit-function-declaration]
/var/lib/dkms/iscsitarget/1.4.20.2/build/kernel/wthread.c:76:21: warning: assignment makes pointer from integer without a cast [enabled by default]
cc1: some warnings being treated as errors
make[2]: *** [/var/lib/dkms/iscsitarget/1.4.20.2/build/kernel/wthread.o] Error 1
make[1]: *** [/var/lib/dkms/iscsitarget/1.4.20.2/build/kernel] Error 2
make: *** [_module_/var/lib/dkms/iscsitarget/1.4.20.2/build] Error 2
make: Leaving directory `/usr/src/linux-headers-3.10.0-4-pve'


Versions:
Code:
root@neon:~# pveversion -v
proxmox-ve-2.6.32: 3.1-114 (running kernel: 3.10.0-4-pve)
pve-manager: 3.1-21 (running version: 3.1-21/93bf03d4)
pve-kernel-3.10.0-4-pve: 3.10.0-15
pve-kernel-2.6.32-26-pve: 2.6.32-114
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.7-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.0-2
pve-cluster: 3.0-14
qemu-server: 3.1-8
pve-firmware: 1.1-3
libpve-common-perl: 3.0-19
libpve-access-control: 3.0-7
libpve-storage-perl: 3.0-21
pve-libspice-server1: 0.12.4-3
vncterm: 1.1-8
vzctl: 4.0-1pve6
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 1.4-17
ksm-control-daemon: 1.1-1
glusterfs-client: 3.4.2-1
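
If the SourceForge patch linked above is indeed the right fix, the usual DKMS workflow would be to patch the module source under /usr/src and rebuild; a rough sketch, with the patch file name as a placeholder and no guarantee it applies cleanly (or at this -p level) to 1.4.20.2:
Code:
cd /usr/src/iscsitarget-1.4.20.2
# apply the get_io_context patch downloaded from the SourceForge tracker (placeholder file name)
patch -p1 < /root/iscsitarget-get_io_context.patch
# rebuild and install the module for the running kernel
dkms build iscsitarget/1.4.20.2 -k 3.10.0-4-pve
dkms install iscsitarget/1.4.20.2 -k 3.10.0-4-pve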
 
For me, this was essential on the kernel command line; with it I was able to boot the new kernel on an older HP server (cciss):
Code:
hpsa.hpsa_simple_mode=1 hpsa.hpsa_allow_any=1
 
Is it normal behaviour that after live-migrating a KVM instance from a host running kernel 3.10 to a host running 2.6.32, the VM locks up and goes into the "internal-error" state? The hardware is identical, the CPU type is set to kvm64, and the guest is a Linux system running a 3.13 kernel.

Here's the pveversion output:
Code:
proxmox-ve-2.6.32: 3.2-136 (running kernel: 3.10.0-4-pve)
pve-manager: 3.3-1 (running version: 3.3-1/a06c9f73)
pve-kernel-3.10.0-4-pve: 3.10.0-17
pve-kernel-2.6.32-32-pve: 2.6.32-136
pve-kernel-2.6.32-28-pve: 2.6.32-124
pve-kernel-3.10.0-2-pve: 3.10.0-10
pve-kernel-2.6.32-30-pve: 2.6.32-130
pve-kernel-2.6.32-29-pve: 2.6.32-126
pve-kernel-3.10.0-3-pve: 3.10.0-11
pve-kernel-2.6.32-31-pve: 2.6.32-132
pve-kernel-2.6.32-26-pve: 2.6.32-114
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.7-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.10-1
pve-cluster: 3.0-15
qemu-server: 3.1-34
pve-firmware: 1.1-3
libpve-common-perl: 3.0-19
libpve-access-control: 3.0-15
libpve-storage-perl: 3.0-23
pve-libspice-server1: 0.12.4-3
vncterm: 1.1-8
vzctl: 4.0-1pve6
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 2.1-5
ksm-control-daemon: 1.1-1
glusterfs-client: 3.5.2-1
 
Hi,
just installed 3.10.0-4 on a test system and the volume group was also not found.
scsi_mod.scan=sync alone doesn't change anything, but with rootdelay=10 (both parameters together) it works.

The system is installed on an SSD connected directly to the motherboard.
Code:
lspci
...
00:11.0 SATA controller: Advanced Micro Devices, Inc. [AMD/ATI] SB7x0/SB8x0/SB9x0 SATA Controller [IDE mode]
...
Udo

This is the same for me, except that "scsi_mod.scan=sync" alone didn't work; I had to add "rootdelay=10" as well.
My system info:
Server: HP ML350e Gen8 v2
Boot SATA disk on the internal B120i controller (it should be a PCH SATA port)
PVE-kernel: 3.10-5
 
Can you suggest how to upgrade drbd-utils to 8.4.3?

Yes, e100 gave a general way in an old thread (http://forum.proxmox.com/threads/9376-Bug-in-DRBD-causes-split-brain-already-patched-by-DRBD-devs).

You only need to build the drbd8-utils package from source, then install it.
You don't need the kernel module; it is bundled with kernel 3.10.0 now.
Here is what did the job for me:

Code:
mkdir drbd
cd drbd
apt-get install git-core git-buildpackage fakeroot debconf-utils docbook-xml docbook-xsl dpatch xsltproc autoconf flex
git clone http://git.drbd.org/drbd-8.4.git
cd drbd-8.4
git checkout drbd-8.4.3   
dpkg-buildpackage -rfakeroot -b -uc
cd ..
dpkg -i drbd8-utils_8.4.3-0_amd64.deb
Keep your config files when asked.
Reboot into your 3.10.0 kernel.

Check your resources: cat /proc/drbd.
You will probably have to disconnect/reconnect your DRBD resources, but there is no problem after that.
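
A minimal sketch of that disconnect/reconnect cycle (r0 is a placeholder for your own resource name):
Code:
cat /proc/drbd
# cycle the connection for one resource (replace r0 with your resource name)
drbdadm disconnect r0
drbdadm connect r0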


That's all!

Christophe.
 
Hello,

I'm testing the latest PVE with kernel 3.10.0-5-pve on two nodes with DRBD, without HA.
It was working fine until yesterday, when I ran apt-get update/upgrade/dist-upgrade.
The update list showed a new version of kernel 3.10.0-5-pve and a new version of PVE (3.3-140), so I decided to move on.
After upgrading both nodes, I have issues when live-migrating VMs between the nodes.
In particular, the error seems to occur at the last step, just after the VM migrates to the other node.
Sometimes the VM migrates successfully regardless of the errors, and sometimes it fails and the VM is stopped.
Some info follows below:

uname -a
Linux proxmox2 3.10.0-5-pve #1 SMP Tue Dec 16 14:47:36 CET 2014 x86_64 GNU/Linux

pveversion -v
proxmox-ve-2.6.32: 3.3-140 (running kernel: 3.10.0-5-pve)
pve-manager: 3.3-7 (running version: 3.3-7/bc628e80)
pve-kernel-3.10.0-4-pve: 3.10.0-17
pve-kernel-3.10.0-5-pve: 3.10.0-21
pve-kernel-2.6.32-34-pve: 2.6.32-140
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.7-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.10-2
pve-cluster: 3.0-15
qemu-server: 3.3-6
pve-firmware: 1.1-3
libpve-common-perl: 3.0-21
libpve-access-control: 3.0-15
libpve-storage-perl: 3.0-26
pve-libspice-server1: 0.12.4-3
vncterm: 1.1-8
vzctl: 4.0-1pve6
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 2.2-5
ksm-control-daemon: 1.1-1
glusterfs-client: 3.5.2-1

drbdadm --version
DRBDADM_BUILDTAG=GIT-hash:\ 599f286440bd633d15d5ff985204aff4bccffadd\ build\ by\ phil@fat-tyre\,\ 2013-10-11\ 16:42:48
DRBDADM_API_VERSION=1
DRBD_KERNEL_VERSION_CODE=0x080403
DRBDADM_VERSION_CODE=0x080404
DRBDADM_VERSION=8.4.4

qm migrate 114 proxmox --online
Dec 22 16:36:43 starting migration of VM 114 to node 'proxmox' (172.21.3.3)
Dec 22 16:36:43 copying disk images
Dec 22 16:36:43 starting VM 114 on remote node 'proxmox'
Dec 22 16:36:45 starting ssh migration tunnel
Dec 22 16:36:45 starting online/live migration on localhost:60000
Dec 22 16:36:45 migrate_set_speed: 8589934592
Dec 22 16:36:45 migrate_set_downtime: 0.1
Dec 22 16:36:47 migration speed: 512.00 MB/s - downtime 2 ms
Dec 22 16:36:47 migration status: completed
Dec 22 16:36:48 ERROR: stopping vm failed - no such VM ('114')
device-mapper: remove ioctl on failed: Device or resource busy
device-mapper: remove ioctl on failed: Device or resource busy
device-mapper: remove ioctl on failed: Device or resource busy
device-mapper: remove ioctl on failed: Device or resource busy
device-mapper: remove ioctl on failed: Device or resource busy
device-mapper: remove ioctl on failed: Device or resource busy
device-mapper: remove ioctl on failed: Device or resource busy
device-mapper: remove ioctl on failed: Device or resource busy
device-mapper: remove ioctl on failed: Device or resource busy
device-mapper: remove ioctl on failed: Device or resource busy
device-mapper: remove ioctl on failed: Device or resource busy
device-mapper: remove ioctl on failed: Device or resource busy
device-mapper: remove ioctl on failed: Device or resource busy
device-mapper: remove ioctl on failed: Device or resource busy
device-mapper: remove ioctl on failed: Device or resource busy
device-mapper: remove ioctl on failed: Device or resource busy
device-mapper: remove ioctl on failed: Device or resource busy
device-mapper: remove ioctl on failed: Device or resource busy
device-mapper: remove ioctl on failed: Device or resource busy
device-mapper: remove ioctl on failed: Device or resource busy
device-mapper: remove ioctl on failed: Device or resource busy
device-mapper: remove ioctl on failed: Device or resource busy
device-mapper: remove ioctl on failed: Device or resource busy
device-mapper: remove ioctl on failed: Device or resource busy
device-mapper: remove ioctl on failed: Device or resource busy
can't deactivate LV '/dev/drbd7vg/vm-114-disk-1': Unable to deactivate drbd7vg-vm--114--disk--1 (253:3)
Dec 22 16:36:54 ERROR: volume deativation failed: drbd7-fog:vm-114-disk-1 at /usr/share/perl5/PVE/Storage.pm line 820.
Dec 22 16:36:54 ERROR: migration finished with problems (duration 00:00:11)
migration problems

I tried reverting to kernel 3.10.0-4-pve, but the error occurs there too.

P.S.
The error occurs on every VM I try to live-migrate.
I'm on the pvetest repo.
Note the typo in this line: ERROR: volume deativation failed:
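
If the LV stays mapped on the source node after a failed deactivation like this, it can be inspected and deactivated by hand once nothing holds it open; a rough sketch using the paths from the log above:
Code:
# check whether the device-mapper node is still present
dmsetup info -c | grep vm--114
# deactivate the logical volume manually (only succeeds once no process holds it open)
lvchange -an /dev/drbd7vg/vm-114-disk-1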
 
It seems you run pvetest (as you have pve-qemu-kvm: 2.2-5).

Live migration is currently not working in the pvetest repository (qemu 2.2).

But this is not related to the thread topic, so please open a new thread instead.
 
Yep, just found the same. I'm OK with this; I won't cry at night if OpenVZ is replaced with LXC. As long as containers work, I'm happy. Of course, thanks for all the fish to the OpenVZ project for kicking off all the prerequisites (namespaces etc.) for LXC.
 
Ouch... I hope they can keep OpenVZ if it makes it into a newer kernel version soon. A main reason to hesitate about migrating to LXC is that it still lacks some OpenVZ features like vSwap, ploop and live migration. At least that was the case last time I checked.
 
Before Proxmox 4.0 is released, live migration will be available in LXC. Regarding backing stores, LXC is much more advanced than OpenVZ. Currently supported backing stores for LXC: 'dir', 'lvm', 'loop', 'btrfs', 'zfs'.
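
For illustration, the backing store is chosen at container creation time via lxc-create's -B option; a rough sketch with the upstream tool (container name, volume group and size are placeholders, and Proxmox 4.0's own tooling may wrap this differently):
Code:
# create a Debian container backed by an LVM logical volume
lxc-create -n ct-test -t download -B lvm --vgname pve --fssize 8G -- -d debian -r jessie -a amd64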
 
