Kernel crashes shortly after attempting IPv6 setup

mbt (New Member), Nov 9, 2009
Hello,

I've started using Proxmox, as it was recommended to me. I was using LXC, but the containers were completely unreliable, and the server I'm working with hosts a variety of things that absolutely must work.

However, whereas before I had a functioning IPv6 setup (a tunnel to HE), now, once I load the sit module to set up the tunnel, the kernel eventually crashes. It doesn't save any log of the crash, so I have to find some creative way to get a dump here (it won't fit entirely on the screen, either, so maybe what I need to do is get some sort of framebuffer setup going so that I can take a picture of the crash).
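Alternatively, perhaps netconsole could send the oops to another machine over UDP before the box dies; something like this, if I understand the parameters right (all IPs, the MAC, and the NIC name are placeholders for my network, not real values):

```shell
# On the crashing box: forward kernel messages (including the oops) over UDP.
# Format: netconsole=<src-port>@<src-ip>/<dev>,<dst-port>@<dst-ip>/<dst-mac>
# 10.0.0.1/10.0.0.2 and the MAC below are placeholders; adjust for the real LAN.
modprobe netconsole netconsole=6665@10.0.0.1/eth0,6666@10.0.0.2/00:11:22:33:44:55

# On the receiving machine, capture the messages (BSD netcat shown;
# traditional netcat wants "nc -u -l -p 6666" instead):
nc -u -l 6666
```

That would at least get the full trace off the machine even if the disk never sees it.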

What I would like to know is whether a recent kernel can be used, or if there are directions online somewhere for forward-porting the customizations to a newer kernel. Previously, I was using a vanilla 2.6.31 and everything worked (well, except for LXC, but that's why I'm using this). The problem is that if I cannot set up the IPv6 tunnel and get it running, I'm not going to be able to do the things I wanted the server to do, so that's of course a rather large problem for me.

Thanks for any help in advance, and I will try to get more information as I can; if anyone needs information that I haven't provided, please feel free to ask.

This is Proxmox VE 1.4.
 
Due to OpenVZ limitations we need to run a 2.6.24 kernel. We do not know about an IPv6 issue, but we did not use/test this extensively here. So if you can describe an easy way to reproduce the issue we will take a look, or even better, if you find the kernel patch for 2.6.24 solving the issue we can integrate it.
 
For the moment, reproduction is as easy as "modprobe sit" and waiting. The server has not lasted more than an hour or two after the modprobe, even if nothing is done with the module after it is loaded.
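For context, the tunnel configuration I would normally apply after loading the module looks roughly like this, a typical HE-style sit tunnel (the addresses here are documentation placeholders, not my real endpoints):

```shell
# Load the IPv6-in-IPv4 tunneling module (this alone triggers the crash here).
modprobe sit

# Typical HE tunnel setup; 203.0.113.1 (HE server), 198.51.100.2 (local WAN IP),
# and 2001:db8:1234::2/64 (tunnel client address) are placeholders.
ip tunnel add he-ipv6 mode sit remote 203.0.113.1 local 198.51.100.2 ttl 255
ip link set he-ipv6 up
ip addr add 2001:db8:1234::2/64 dev he-ipv6
ip route add ::/0 dev he-ipv6
```

But again, the crash happens even without any of the `ip` commands, so the modprobe by itself seems to be enough.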

I have no way to really bisect the problem either, since it's specific to this setup and not, say, something I could reproduce on any of the other machines I have here. The kernel in use on Proxmox is too old for all of the other systems I have running here.

If OpenVZ is the reason the kernel is so old, is it known whether OpenVZ is going to be forward-ported to current kernels, or is that unknown? I take your response to mean that forward-porting OpenVZ would be utterly non-trivial, since it appears that OpenVZ isn't maintained in lock-step with kernel releases?
 
Our 2.6.24 kernel is not that "old"; we have a lot of backports. Just think of the latest Red Hat: they still ship 2.6.18 but also backport the newest features.

OpenVZ: AFAIK the next OpenVZ development branch will be 2.6.32-based (the kernel of Red Hat 6), but nothing is fixed yet.
 
Ahh. Alright, well, I think that makes it even harder for me to figure out what is wrong. (I'm used to running upstream vanilla kernels and, if/when I find trouble, bisecting them to find where the "oops" comes from...)

Given my limited options there, it seems the only choice I really have is to not run IPv6 for the time being. I don't yet have the slightest clue how to figure it out. I haven't even got a framebuffer running for testing, since I'm only permitted downtime on weekends.
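If the framebuffer idea doesn't pan out, maybe a serial console would let me capture the whole trace instead; roughly something like this, assuming GRUB legacy (which I believe this version uses) and a first serial port at ttyS0:

```shell
# Append to the kernel line in /boot/grub/menu.lst (GRUB legacy assumed),
# so panics go to both the local screen and the serial port:
#   console=tty0 console=ttyS0,115200n8

# Then, on a second machine connected via null-modem cable,
# attach to its serial port and log everything:
screen /dev/ttyS0 115200
```

That would only work during a weekend maintenance window, of course, since it needs a reboot.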
 
'modprobe sit' does not have any effect on my machines.
 
You do not encounter a crash in Proxmox 1.4 after loading that module? I'll see if I can find something more deterministic than inserting the module and waiting, but I'm not at all sure how to do that with this problem at the moment. I guess there must be some interaction between that module and something else here that isn't being triggered on your side? Not sure.
 
Yes, my machines just continue to run after loading the module. So it looks like there must be some interaction needed to trigger the issue.