Search results

  1. Kernel panic with 2.6.32-6 and multi-cpu OpenVZ

    we did a run with both OpenVZ and qemu-server stopped - no containers were running (freeze after approx. 15 seconds of bench run). Not sure if this helps, because we did not find any "abnormality", but here it is: 00:00.0 Host bridge: Intel Corporation 5520 I/O Hub to ESI Port (rev 13)...
  2. Kernel panic with 2.6.32-6 and multi-cpu OpenVZ

    yes me, too ;-) - no issues with 2.6.32-5 here. and to be precise: we also have another host running 2.6.32-6-47, which does not have any issues so far. we just have not been able to find out what's wrong with this pilot server and 2.6.32-6...
  3. Kernel panic with 2.6.32-6 and multi-cpu OpenVZ

    our issue with kernel 2.6.32-6-47 looks similar - meanwhile we are able to reproduce the freeze by running phoronix-test-suite benchmark build-linux-kernel (see the command sketch after this list) - it never finishes, but freezes. Back on 2.6.32-5 it runs ok - and so did all other tests we tried so far, like memory etc.; it's always...
  4. Phoronix Benchmark Results (build-linux-kernel)

    thx for these interesting numbers; here are some from our test pilot: Node 1: host: 2 x L5520 @ 2.27GHz (2 sockets, 4 cores each, 16 threads with HT), X8DTN, Adaptec 5805 (SAS 15k/RAID-1); kernel: 2.6.32-6-47, Proxmox VE 1.9; Host: 219 seconds. Node 2: same hardware; kernel: 2.6.32-5-36, Proxmox...
  5. Live migration question/problem

    if a live migration fails and you find a message like "HZ mismatch: 250 != 1000" in dmesg, it points to the two kernels involved being incompatible (a dmesg check is sketched after this list).
  6. Suggestion: integrate block level SSD caching into the PVE kernel

    that was true for an older (first?) version; the current version neither includes an SSD nor has any restrictions on which SSD to use; we're running several of those with "cheap" OCZ SSDs... Though caching effects heavily depend on the usage profile. But what is quite nice - you can add more...
  7. proxmox 1.9 and kernel 2.6.32-6-pve + iptables = kernel panic

    Ok, then - "we don't deal with RHEL6 yet"... oO Thanks for pointing me to this one! That's something we have to check here RSN...
  8. proxmox 1.9 and kernel 2.6.32-6-pve + iptables = kernel panic

    i agree - the *reason* for failing is the Adaptec itself failing with a kernel panic (yes, they run their own "distro" on these cards :), resulting in the Linux kernel panicking. But exactly because this *might* happen (there is no "law" that lets mainboard manufacturers force Adaptec & Co. to make them...
  9. proxmox 1.9 and kernel 2.6.32-6-pve + iptables = kernel panic

    you'll find the complete story here: http://forum.proxmox.com/threads/6980-New-2.6.32-Kernel-with-stable-OpenVZ-%28pvetest%29?p=39679 - in short: ASPM means "Active State Power Management", a kind of power throttling for PCIe, and this is something our Adaptecs proved to "strongly dislike"...
  10. proxmox 1.9 and kernel 2.6.32-6-pve + iptables = kernel panic

    Hello, unfortunately, we also have an issue with the current 2.6.32-6 kernel. A little bit of history: we installed the first pvetest kernel 2.6.32-6 on a machine; the result was a freeze (Adaptec problem) because the kernel activated ASPM. The kernel issued after this with deactivated ASPM (resp...
  11. Update DRBD userland to 8.3.10 to match kernel in 1.9

    ...instead of dealing with "git", one might also simply get the tar package directly from http://oss.linbit.com/drbd/...
  12. New 2.6.32 Kernel with stable OpenVZ (pvetest)

    great - now 2.6.32-6 works without the boot parameter "pcie_aspm=off" (setting that parameter is sketched after this list), thanks!
  13. New 2.6.32 Kernel with stable OpenVZ (pvetest)

    in short: yes :-) i was just about to write the "progress made" message; i also found this obviously unresolved/uncommented bug message above, which pointed to ASPM. And i knew that we had problems with Adaptec and ASPM in the past - we always get our servers pre-configured, means...
  14. New 2.6.32 Kernel with stable OpenVZ (pvetest)

    Ok, i upgraded drbd8-tools to 8.3.10 and also did the BIOS update. Reboot into 2.6.32-6 - total freeze when the first CT is started. I noticed that the igb driver included is relatively old (and DRBD activity after reboot involves the network quite a bit) - and we once had some issues with that and...
  15. New 2.6.32 Kernel with stable OpenVZ (pvetest)

    yes, i also had the feeling (but no evidence at all) that it was related to DRBD, also because of having a (resolvable) split-brain afterwards (though in my experience this is not *that* surprising with DRBD ;-). Regarding the hardware - as already indicated, it's an X8DTN (Supermicro), using...
  16. New 2.6.32 Kernel with stable OpenVZ (pvetest)

    Hello, we just tried the new kernel on our "proxmox cluster pilot", two identical servers running proxmox (pvetest) with DRBD (8.3.7). Unfortunately, this was the very first time that a proxmox kernel seemed not to work at all - after reboot, we experienced total system freezes; after the 1st...
  17. FS over LVM over DRBD?

    uh, yes; no 'live' then...
  18. FS over LVM over DRBD?

    obviously there are two possible sources for your problem: either DRBD is not capable, or your setup has some issues. Since there are quite a few out there who are really confident with DRBD, i'd start with the 2nd choice... ;-) Perhaps you should start with some basic checks like pveperf, sketched after this list (basic...
  19. FS over LVM over DRBD?

    ...i'd say 'it's also safe in case it is only mounted on one node at a time'... ;-) in fact, we're running something like that, but not for KVM (we like the LV devices) - each node gets an XFS fs on an LV (on DRBD) to also have OpenVZ containers with an easy fail-over mechanism, roughly sketched after this list. Runs quite...
  20. Suggestion: Add a Cloning tab in Proxmox VE beside the migration tab. Please?

    thanks for the pointer, looks really interesting; but it does not seem to be a feasible option for proxmox environments ('higher' kernel requirements). There's also 'virt-clone' - but this is based on libvirt, and therefore probably also not practical on a proxmox system.
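
For reference, the reproduction step from result 3 as a command sketch; it assumes only that phoronix-test-suite is already installed on the host, and the benchmark invocation itself is the one quoted in the post:

    # run the kernel-compile benchmark that triggered the freeze on 2.6.32-6
    phoronix-test-suite benchmark build-linux-kernel
    # on an affected host the run reportedly never finishes; on 2.6.32-5 it
    # completes normally (result 4 quotes 219 seconds on that hardware)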
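The check behind result 5, as a minimal sketch; the grep pattern is an assumption based on the wording quoted in the post and the exact message may differ:

    # after a failed live migration, look for the kernel-incompatibility hint
    dmesg | grep -i "HZ mismatch"
    # a hit like "HZ mismatch: 250 != 1000" means the source and target kernels
    # were built with different timer frequencies and are not compatible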
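The workaround discussed in results 9-13 is the pcie_aspm=off kernel boot parameter; a hedged sketch for a GRUB 2 host follows - on a GRUB legacy install (which a PVE 1.x host may still be) the parameter goes directly on the kernel line in /boot/grub/menu.lst instead:

    # verify whether the running kernel was booted with ASPM disabled
    grep -o pcie_aspm=off /proc/cmdline

    # GRUB 2: add the parameter to the default kernel command line, e.g. in
    # /etc/default/grub:  GRUB_CMDLINE_LINUX_DEFAULT="quiet pcie_aspm=off"
    update-grub
    reboot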
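Result 18 suggests starting with pveperf; a minimal sketch, assuming the default /var/lib/vz storage path:

    # quick host benchmark (CPU, memory bandwidth, fsync rate) on the root fs
    pveperf
    # and on the storage holding the containers/VMs (path is an assumption)
    pveperf /var/lib/vz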
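The layout from result 19 (an XFS filesystem on an LV on top of DRBD, mounted on only one node at a time for OpenVZ containers) could look roughly like this; the volume group, LV name, size and mount point are assumptions:

    # create an LV on the DRBD-backed volume group and put XFS on it
    lvcreate -L 100G -n vz-ct vg-drbd
    mkfs.xfs /dev/vg-drbd/vz-ct

    # mount it on exactly one node at a time; fail-over means
    # umount here, then mount on the peer node
    mount /dev/vg-drbd/vz-ct /var/lib/vz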