New Proxmox VE Kernels (2.6.18 - 2.6.24 - 2.6.32) [UPDATE]

Re: New Proxmox VE Kernels (2.6.18 - 2.6.24 - 2.6.32)

Hi Garrett,
is the NIC renamed in /etc/udev/rules.d/70-persistent-net.rules?

Udo

I checked the file; the NIC that was not working was originally called eth0 and now appears to be called eth1. This happened after the NIC was disabled from the web interface. I will try to re-enable it to see if it starts working again.
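For reference, the entries in that file look like the sketch below (the MAC address is made up); udev appends a new ethN entry whenever it sees a NIC whose MAC it hasn't recorded yet, which is how a card can silently move from eth0 to eth1:

```
# /etc/udev/rules.d/70-persistent-net.rules (example entry)
# PCI device 0x10ec:0x8168 (r8169)
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:11:22:33:44:55", ATTR{dev_id}=="0x0", ATTR{type}=="1", KERNEL=="eth*", NAME="eth0"
```

Deleting the stale entries (or editing the NAME= values) and rebooting restores the expected naming.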

I also noticed that nfs-kernel-server is not working for 2.6.32. It states:

"Not starting NFS kernel daemon: no support in current kernel."

I would assume that NFS was not compiled into the kernel?
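A quick way to check whether the running kernel actually knows about the NFS server is to look in /proc/filesystems (a sketch; nfsd only shows up there once the module is loaded or built in):

```shell
# Report whether the running kernel has the NFS server filesystem registered.
if grep -qw nfsd /proc/filesystems 2>/dev/null; then
  nfsd_status="available"
else
  nfsd_status="not available (module not loaded or not built in)"
fi
echo "nfsd: $nfsd_status"
```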

- Garrett
 
Re: New Proxmox VE Kernels (2.6.18 - 2.6.24 - 2.6.32)

All,

First, I must confess I am not a Linux guru so forgive me if these questions sound simple!

I tried upgrading to the 2.6.32 release on my home server and everything is working great except for one issue. It seems the .32 kernel does not have NFS enabled. When I try to start the NFS service to export a shared drive I get the following error:

Code:
vm1:~# /etc/init.d/nfs-kernel-server restart
Stopping NFS kernel daemon: mountd nfsd.
Unexporting directories for NFS kernel daemon....
Not starting NFS kernel daemon: no support in current kernel. (warning).

Any idea on how I can get this to work? Or can someone tell me the proper way to downgrade back to the stable 2.6.24-9 release?

Thanks!

Daniel
 
Re: New Proxmox VE Kernels (2.6.18 - 2.6.24 - 2.6.32)

All,

First, I must confess I am not a Linux guru so forgive me if these questions sound simple!

I tried upgrading to the 2.6.32 release on my home server and everything is working great except for one issue. It seems the .32 kernel does not have NFS enabled. When I try to start the NFS service to export a shared drive I get the following error:

Code:
vm1:~# /etc/init.d/nfs-kernel-server restart
Stopping NFS kernel daemon: mountd nfsd.
Unexporting directories for NFS kernel daemon....
Not starting NFS kernel daemon: no support in current kernel. (warning).
Any idea on how I can get this to work? Or can someone tell me the proper way to downgrade back to the stable 2.6.24-9 release?

Thanks!

Daniel

I had the same issue. Until they compile 2.6.32 with nfs support, you can remove nfs-kernel-server and install unfs3 for the time being.
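On a Debian-based system the workaround is just a package swap; a sketch that first shows which of the two servers is currently installed (the apt-get commands are commented out because the actual swap needs root):

```shell
# Show which NFS server package is currently installed (dpkg-based systems only).
n_checked=0
for pkg in nfs-kernel-server unfs3; do
  if command -v dpkg-query >/dev/null 2>&1 \
     && dpkg-query -W -f='${Status}' "$pkg" 2>/dev/null | grep -q "ok installed"; then
    echo "$pkg: installed"
  else
    echo "$pkg: not installed"
  fi
  n_checked=$((n_checked+1))
done
# As root, the swap itself:
#   apt-get remove nfs-kernel-server
#   apt-get install unfs3
```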

- Garrett
 
Re: New Proxmox VE Kernels (2.6.18 - 2.6.24 - 2.6.32)

Garrett,

Thanks for the tip. I missed the last part of your previous post where you basically said the same thing, sorry for the double post!

I uninstalled nfs-kernel-server and installed unfs3 without much trouble. Didn't even have to change the exports file, so it is a relatively painless workaround for now.
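For reference, a minimal /etc/exports entry of the kind both servers read (the path and subnet here are made-up examples; unfs3 implements only a subset of the knfsd export options, so it may warn about ones it ignores):

```
# /etc/exports
/srv/share  192.168.1.0/24(rw)
```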

With that said, can Dietmar (or any of the other Proxmox folks) confirm that NFS support will be added to the 2.6.32 release builds?

Thanks!

Daniel
 
Re: New Proxmox VE Kernels (2.6.18 - 2.6.24 - 2.6.32)

NFS actually runs fine with the 2.6.32 kernel; it is the start-up script's check routine that fails to detect its existence. Edit the file /etc/init.d/nfs-kernel-server, find the line
# See if our running kernel supports the NFS kernel server
change the following line
if [ -f /proc/kallsyms ] && ! grep -qE 'init_nf(sd| ' /proc/kallsyms; then
to
if [ -f /proc/kallsyms ] && ! grep -qE '(init_nf|init_nfsd | nfsd_serv )' /proc/kallsyms; then

and nfs-kernel-server should then run fine. The symbols init_nf/init_nfsd were removed in kernel 2.6.32, while the symbol nfsd_serv has been in the kernel since 2005.
 
Re: New Proxmox VE Kernels (2.6.18 - 2.6.24 - 2.6.32)

2.6.32 is working great. I don't know if it's memory ballooning or KSM, but my environment went from 8 GB of memory used on 2.6.24 to about 3.5 GB on the new kernel...

2 Win2k8, 3 Ubuntu 9.10, 3 Ubuntu 8.04.3 - all server OSs.
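If you want to see how much of that saving comes from KSM, the 2.6.32 kernel exports its counters under /sys/kernel/mm/ksm (a sketch; the directory only exists on KSM-capable kernels):

```shell
# Print KSM activity counters; pages_sharing * page size is roughly the memory saved.
ksm_dir=/sys/kernel/mm/ksm
if [ -d "$ksm_dir" ]; then
  for f in run pages_shared pages_sharing; do
    printf '%s: %s\n' "$f" "$(cat "$ksm_dir/$f")"
  done
  ksm_checked=yes
else
  echo "KSM not available on this kernel"
  ksm_checked=no
fi
```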
 
Re: New Proxmox VE Kernels (2.6.18 - 2.6.24 - 2.6.32)

I would assume that NFS was not compiled into the kernel?

Code:
CONFIG_NFSD=m
CONFIG_NFSD_V2_ACL=y
CONFIG_NFSD_V3=y
CONFIG_NFSD_V3_ACL=y
CONFIG_NFSD_V4=y
CONFIG_NFS_ACL_SUPPORT=m
CONFIG_NFS_COMMON=y
CONFIG_NFS_FS=m
# CONFIG_NFS_FSCACHE is not set
CONFIG_NFS_V3=y
CONFIG_NFS_V3_ACL=y
CONFIG_NFS_V4=y
 
Re: New Proxmox VE Kernels (2.6.18 - 2.6.24 - 2.6.32)

NFS actually runs fine with the 2.6.32 kernel; it is the start-up script's check routine that fails to detect its existence. Edit the file /etc/init.d/nfs-kernel-server, find the line
# See if our running kernel supports the NFS kernel server
change the following line
if [ -f /proc/kallsyms ] && ! grep -qE 'init_nf(sd| ' /proc/kallsyms; then
to
if [ -f /proc/kallsyms ] && ! grep -qE '(init_nf|init_nfsd | nfsd_serv )' /proc/kallsyms; then

and nfs-kernel-server should then run fine. The symbols init_nf/init_nfsd were removed in kernel 2.6.32, while the symbol nfsd_serv has been in the kernel since 2005.

I tried your suggestion and I still get the error stating that the NFS kernel daemon could not start due to no support in the current kernel.

Should I remove vzctl, as I am using 2.6.32? apt-get complains when installing vzctl because there is no vz support. I assume it is safe to remove it?

- Garrett

Update: The fix for getting nfs working under 2.6.32 is to use the following:

Code:
if [ -f /proc/kallsyms ] && ! grep -qE '(init_nf|init_nfsd|nfsd_serv)' /proc/kallsyms; then
There are spaces inside the pattern in the earlier post that need to be removed for this to work correctly. NFS via the kernel is now working correctly.
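To see why the corrected pattern works, you can run the same grep against a sample of what /proc/kallsyms contains on 2.6.32: init_nfsd is gone, but nfsd_serv is still exported, so the pattern must match on that symbol (the symbol addresses below are made up):

```shell
# Build a fake kallsyms excerpt as it looks on 2.6.32: no init_nfsd, but nfsd_serv present.
sample=$(mktemp)
printf '%s\n' \
  'ffffffffa0123450 T nfsd_serv [nfsd]' \
  'ffffffffa0123460 t nfsd_init_socks [nfsd]' > "$sample"

# The init script refuses to start when the pattern does NOT match; the fixed
# pattern matches nfsd_serv, so the daemon is started.
if grep -qE '(init_nf|init_nfsd|nfsd_serv)' "$sample"; then
  result="supported"
else
  result="unsupported"
fi
echo "kernel NFS server: $result"
rm -f "$sample"
```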

- Garrett
 
Re: New Proxmox VE Kernels (2.6.18 - 2.6.24 - 2.6.32)

2.6.32 is working great, but open-iscsi does not connect to the iSCSI server. Error message: "invalid ioctl cmd c070690d". On kernel 2.6.24 it works OK.
 
Re: New Proxmox VE Kernels (2.6.18 - 2.6.24 - 2.6.32)

2.6.32 is working great, but open-iscsi does not connect to the iSCSI server. Error message: "invalid ioctl cmd c070690d". On kernel 2.6.24 it works OK.

Works here, so please provide all details on how to reproduce the issue.
 
Re: New Proxmox VE Kernels (2.6.18 - 2.6.24 - 2.6.32)

I'm facing a strange issue with the 2.6.32 kernel. I have two network interfaces: one bridged to the public net (eth3) and the other to the private net (eth2).
Code:
pve:~# mii-tool
eth2: negotiated 1000baseT-FD flow-control, link ok
eth3: negotiated 100baseTx-FD, link ok
I've deployed several VMs (Windows, RHEL/Fedora, Vyatta and so on). When the network traffic is generated by a single VM, everything works well. The problem appears when the traffic involves two or three VMs concurrently. After a while, randomly, traffic stops flowing and a transceiver seems to "disappear":
Code:
pve:~# mii-tool
eth2: negotiated 1000baseT-FD flow-control, link ok
  No MII transceiver present!.
and I can find this in dmesg
Code:
------------[ cut here ]------------                                                  
WARNING: at net/sched/sch_generic.c:261 dev_watchdog+0x297/0x2b0()                    
Hardware name: Unknow                                                                 
NETDEV WATCHDOG: eth3 (r8169): transmit queue 0 timed out                             
Modules linked in: tun kvm_amd kvm ib_iser rdma_cm ib_cm iw_cm ib_sa ib_mad ib_core ib_addr iscsi_tcp libiscsi_tcp libiscsi scsi_transport_iscsi bridge stp snd_pcm snd_timer snd soundcore snd_page_alloc serio_raw amd64_edac_mod edac_core edac_mce_amd pcspkr k8temp i2c_piix4 shpchp raid10 raid456 async_raid6_recov async_pq raid6_pq async_xor xor async_memcpy async_tx raid1 raid0 multipath linear usbhid r8169 mii [last unloaded: scsi_wait_scan]                                                                                                                                                                               
Pid: 2547, comm: kvm Not tainted 2.6.32-1-pve #1                                                                                                                                                               
Call Trace:                                                                                                                                                                                                    
 <IRQ>  [<ffffffff814848e7>] ? dev_watchdog+0x297/0x2b0                                                                                                                                                        
 [<ffffffff81060fa8>] warn_slowpath_common+0x78/0xd0                                                                                                                                                           
 [<ffffffff81061084>] warn_slowpath_fmt+0x64/0x70                                                                                                                                                              
 [<ffffffff8107efa1>] ? autoremove_wake_function+0x11/0x40                                                                                                                                                     
 [<ffffffff81046e4a>] ? __wake_up_common+0x5a/0x90                                                                                                                                                             
 [<ffffffff812a197a>] ? strlcpy+0x4a/0x60                                                                                                                                                                      
 [<ffffffff81469a83>] ? netdev_drivername+0x43/0x50                                                                                                                                                            
 [<ffffffff814848e7>] dev_watchdog+0x297/0x2b0                                                                                                                                                                 
 [<ffffffff8107aa18>] ? insert_work+0x98/0xb0                                                                                                                                                                  
 [<ffffffff81484650>] ? dev_watchdog+0x0/0x2b0                                                                                                                                                                 
 [<ffffffff810715f1>] run_timer_softirq+0x191/0x310                                                                                                                                                            
 [<ffffffff810685fa>] __do_softirq+0xfa/0x1d0                                                                                                                                                                  
 [<ffffffff810131ac>] call_softirq+0x1c/0x30                                                                                                                                                                   
 [<ffffffff81014be5>] do_softirq+0x65/0xa0                                                                                                                                                                     
 [<ffffffff81068395>] irq_exit+0x75/0xa0                                                                                                                                                                       
 [<ffffffff81014153>] do_IRQ+0x73/0xf0                                                                                                                                                                         
 [<ffffffff810129d3>] ret_from_intr+0x0/0x11                                                                                                                                                                   
 <EOI>                                                                                                                                                                                                         
---[ end trace 304a9c063950565b ]---                                                                                                                                                                           
r8169: eth3: link up                                                                                                  
r8169: eth3: link up
r8169: eth3: link up

The only way to make eth3 work again is to reboot the system.

My setup:
Code:
pve:~# pveversion -v
pve-manager: 1.5-1 (pve-manager/1.5/4561)
running kernel: 2.6.32-1-pve             
proxmox-ve-2.6.32: 1.5-2                 
pve-kernel-2.6.32-1-pve: 2.6.32-2        
pve-kernel-2.6.24-8-pve: 2.6.24-16       
qemu-server: 1.1-10                      
pve-firmware: 1.0-3                      
libpve-storage-perl: 1.0-6               
vncterm: 0.9-2                           
vzctl: 3.0.23-1pve4                      
vzdump: 1.2-5                            
vzprocps: 2.0.11-1dso2                   
vzquota: 3.0.11-1                        
pve-qemu-kvm: 0.11.1-1                   
ksm-control-daemon: 1.0-2
Code:
AMD Athlon(tm) Dual Core Processor 5050e
...                                         
02:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168B PCI Express Gigabit Ethernet controller (rev 02)
03:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168B PCI Express Gigabit Ethernet controller (rev 02)
With 2.6.24 the situation is similar, but I don't get any trace in dmesg.

I've tried several BIOS settings, but nothing changes, except when enabling Cool&Quiet together with powernowd: that combination makes the machine crash very soon after startup, so I've disabled C&Q. I don't think this is a hardware-related problem, because with Windows installed the system is rock solid...

I know these Ethernet controllers are not excellent, but this is a test environment for evaluation; everything on the PCB is integrated and cannot be changed.

Maybe a kernel bug? Should I open a bug upstream, or not?

Any advice is appreciated...
Thanks in advance
 
Re: New Proxmox VE Kernels (2.6.18 - 2.6.24 - 2.6.32)

I have had similar problems with r8168B + r8169 module. The trick was to download/build the r8168 driver from realtek, install it and blacklist the r8169 module. After you do this, do an update-initramfs -u -k `uname -r` and that should fix the issue (it did for me).
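The blacklist step can be sketched like this (written against a staging directory so it can be tried without touching the live system; on the real box you would write to /etc/modprobe.d directly as root):

```shell
# Stage the modprobe blacklist entry that keeps r8169 from claiming the NIC.
staging=$(mktemp -d)
mkdir -p "$staging/etc/modprobe.d"
echo "blacklist r8169" > "$staging/etc/modprobe.d/blacklist-r8169.conf"
cat "$staging/etc/modprobe.d/blacklist-r8169.conf"

# On the real system (as root), after installing the vendor r8168 driver:
#   echo "blacklist r8169" > /etc/modprobe.d/blacklist-r8169.conf
#   update-initramfs -u -k "$(uname -r)"
```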

Rob
 
Re: New Proxmox VE Kernels (2.6.18 - 2.6.24 - 2.6.32)

I have had similar problems with r8168B + r8169 module. The trick was to download/build the r8168 driver from realtek, install it and blacklist the r8169 module. After you do this, do an update-initramfs -u -k `uname -r` and that should fix the issue (it did for me).

Rob

Rob, thank you very much for the suggestion. I've built the latest version (r8168-8.015.00) using pve-kernel-2.6.24-10-pve and its matching headers, because on 2.6.32 the compilation fails.

I blacklisted r8169 and rebooted, and it's working. Let's see how stable it is...

Thanks again
 
Re: New Proxmox VE Kernels (2.6.18 - 2.6.24 - 2.6.32)

Hi Dietmar,

We're in the middle of our migration from 2.6.24 to 2.6.18 due to the socket problems described in http://www.proxmox.com/forum/showthread.php?p=15976#post15976.

While the 2.6.18-1-pve kernel resolves this issue, the latest update (Debian revision -4 from yesterday) is creating a bit of mayhem in our cluster. After a mass reboot, all the nodes have died due to a kernel panic, apparently related to NFS (we are using NFSv4). This hadn't happened before with the previous -3 revision. Has anything changed in the kernel code besides the ocfs2 modules addition, as advertised in changelog.Debian?

More importantly, can we get the -3 binaries posted back somewhere in the download page so I can manually downgrade while we deal with this?

Many thanks in advance.
 
Re: New Proxmox VE Kernels (2.6.18 - 2.6.24 - 2.6.32)

This hadn't happened before with the previous -3 revision. Has anything changed in the kernel code besides the ocfs2 modules addition, as advertised in changelog.Debian?

No, it is 100% the same code.

More importantly, can we get the -3 binaries posted back somewhere in the download page so I can manually downgrade while we deal with this?

Sorry, we do not keep old test-only versions - especially when nothing changed.

But we really changed nothing, so it is quite likely that the bug is also in the previous version. Can you please try to debug the problem instead? Maybe you can post the error and what happened exactly - is it possible to reproduce?
 
Re: New Proxmox VE Kernels (2.6.18 - 2.6.24 - 2.6.32)

No, it is 100% the same code.

Weird, it failed in cascade on all nodes after upgrading the kernel to -4, after days with no problems on -3.

Sorry, we do not keep old test-only versions - especially when nothing changed.

But we really changed nothing, so it is quite likely that the bug is also in the previous version. Can you please try to debug the problem instead? Maybe you can post the error and what happened exactly - is it possible to reproduce?

Ok. For now, all I can do is post a screenshot of the ILOM from one of the crashed nodes. The panic error isn't shown entirely, and I can't scroll up, as the node is totally dead.
 

Attachments

  • Captura-Sun ILOM Remote Console.png (185.1 KB)
Re: New Proxmox VE Kernels (2.6.18 - 2.6.24 - 2.6.32)

Ok, the good news is that I can very easily reproduce this just a few seconds after starting the container in that PVE install. The bad news is that I can't get any more kernel panic output than what I already posted.

If you need more than that, I can try to help, but I need to know how to capture full panic output. I'm afraid I don't know how and I'm not finding a way that doesn't involve a serial connection. :(
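One way to capture the full panic without a serial cable is the kernel's netconsole module, which forwards console output over UDP to another machine (a sketch; the IPs, interface name and receiver MAC below are placeholders you must replace with your own):

```
# on the crashing node, as root:
pve:~# modprobe netconsole netconsole=6665@192.168.0.10/eth0,6666@192.168.0.20/00:11:22:33:44:55
# on the receiving machine:
otherbox:~# nc -u -l -p 6666
```

This only works for messages the kernel can still push onto the wire; a hard lockup may still swallow the tail of the panic.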
 
Re: New Proxmox VE Kernels (2.6.18 - 2.6.24 - 2.6.32)

Ok, the good news is that I can very easily reproduce this just a few seconds after starting the container in that PVE install. The bad news is that I can't get any more kernel panic output than what I already posted.

First, can you please start a new thread on that topic? Do you run NFS inside the container? Please describe your setup in more detail so that I can try to reproduce the bug here.
 
Re: New Proxmox VE Kernels (2.6.18 - 2.6.24 - 2.6.32)

As kernel 2.6.32 has no OpenVZ support, does this mean that other features (like vzdump for backup, or others...) are disabled? What are the limitations of using the latest kernel? I only need KVM (I basically virtualize Windows machines...), but still want to use backups, restore, and the rest of your great virtualization implementation.
 
Re: New Proxmox VE Kernels (2.6.18 - 2.6.24 - 2.6.32)

As kernel 2.6.32 has no OpenVZ support, does this mean that other features (like vzdump for backup, or others...) are disabled? What are the limitations of using the latest kernel? I only need KVM (I basically virtualize Windows machines...), but still want to use backups, restore, and the rest of your great virtualization implementation.

No OpenVZ support means no OpenVZ support.

All other functionality (including vzdump backup and restore for KVM) is there, plus the quite interesting KSM.
 
