Search results

  1. md raid 1 + lvm2 + snapshot => volumes hang

    Drives are on:

        00:1f.2 SATA controller: Intel Corporation Ibex Peak 6 port SATA AHCI Controller (rev 05)
        01:00.0 SCSI storage controller: Marvell Technology Group Ltd. 88SX7042 PCI-e 4-port SATA-II (rev 02)

    I have never seen the problem outside of a rebuild / check, but the vast majority of...
  2. md raid 1 + lvm2 + snapshot => volumes hang

    Hrm... after doing further testing, this seems to occur only when the mdadm array is rebuilding or checking for consistency (the monthly check schedule was how I spotted the pattern)... And it occurs randomly, not only when snapshots exist.
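
    For anyone trying to reproduce this, the rebuild / check state can be watched and triggered by hand; a minimal sketch, assuming the array is /dev/md0 (device name hypothetical):

        # show current resync/check progress for all md arrays
        cat /proc/mdstat

        # manually start the same consistency check that Debian's
        # monthly checkarray cron job kicks off (array name assumed)
        echo check > /sys/block/md0/md/sync_action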
  3. md raid 1 + lvm2 + snapshot => volumes hang

    I've seen this same issue on many servers w/ Proxmox: "random" lvm-on-mdadm hangs. The 2.6.32 kernel does not solve it.
  4. vzdump, different compression schemes

    I would like to be able to specify the number of cores, ideally (as pigz allows) - but yes, using all available cores would be great. It would be unlikely to use 100% of all cores, of course. Also, WRT your last question in the previous response, I'm asking how to modify the command line where vzdump calls gzip, so I...
  5. vzdump, different compression schemes

    OK, tried this the simple way by installing pigz from the Debian repositories (aptitude did not work; I had to download the sid 64-bit .deb - probably don't have the right repository enabled or something). After install, I renamed /bin/gzip to /bin/gzip.old, and made a symbolic link from /bin/gzip to the...
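
    For reference, the rename-and-symlink substitution described above, as a minimal sketch (assumes the Debian pigz package put its binary at /usr/bin/pigz):

        mv /bin/gzip /bin/gzip.old      # keep the original gzip around
        ln -s /usr/bin/pigz /bin/gzip   # vzdump's gzip calls now hit pigz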
  6. vzdump, different compression schemes

    Specifically, pigz support would be awesome - it's even in the Debian Sid repositories.
  7. vzdump speed

    Ah, I see. Thanks very much!
  8. vzdump, different compression schemes

    Could vzdump be modified to allow for a different compression binary, other than gzip? Ideally, allow for any binary (as long as stdin is supported) and options in the vzdump config file. I'd really like to make use of some of the more modern / advanced compression schemes now available...
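
    Any compressor that reads stdin and writes stdout could slot into such a pipeline; a sketch of the general shape (paths hypothetical, pigz used only as an example):

        # archive a container directory and compress on 4 cores
        tar cf - /var/lib/vz/private/101 | pigz -p 4 > vzdump-101.tar.gz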
  9. vzdump speed

    Got it, worked great - I see that the new bw limit is mentioned in the vzdump log e-mailed to me. However, the limit is still not being honored (i.e. the MiB/s speed mentioned after "Total bytes written: xxxxx" still often exceeds the bw limit by an unreasonable amount). Thoughts?
  10. vzdump speed

    A couple of questions about Proxmox 1.5 (2.6.32) kvm backups via vzdump: 1) While the log shows that vzdump is limited to 10240 KB/s, my lvm snapshot backups finish with speeds ranging from 10.19 MiB/s to 22.39 MiB/s. Given some flexibility to convert from KB to MiB, 10.19 seems about right...
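
    For point 1, treating vzdump's KB as KiB, the configured limit works out as follows (a quick sanity check, not an official vzdump calculation):

        # 10240 KiB/s divided by 1024 KiB per MiB = 10 MiB/s,
        # so a reported 10.19 MiB/s is roughly at the limit
        echo $((10240 / 1024))    # prints 10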
  11. New Proxmox VE Kernels (2.6.18 - 2.6.24 - 2.6.32) [UPDATE]

    2.6.32 working great - Don't know if it's memory ballooning or KSM, but my environment went from 8GB memory used on 2.6.24 to about 3.5GB on the new kernel... 2 Win2k8, 3 Ubuntu 9.10, 3 Ubuntu 8.04.3 - all server OSs.
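
    If it is KSM, the 2.6.32 kernel exposes its counters under sysfs; a quick check, assuming KSM was built into this kernel:

        # 1 means KSM is running
        cat /sys/kernel/mm/ksm/run
        # non-zero means KSM is actively merging identical pages
        cat /sys/kernel/mm/ksm/pages_sharing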
  12. New Proxmox VE Kernels (2.6.18 - 2.6.24 - 2.6.32) [UPDATE]

    Fantastic! Looking forward to testing 2.6.32 on our staging whitebox... KSM would be huge for us (brings KVM memory footprint closer to OpenVZ territory).
  13. Survey: Proxmox VE Kernel with or without OpenVZ?

    Great, really looking forward to testing!
  14. Memory always fully allocated

    In our test lab virtual host (running 1.4, using KVM only), it seems that all VM memory is always fully allocated. When using kvm on updated ubuntu, or vmware esxi, I was able to have this same VM load (14gb total subscribed memory, but machines are very sparsely used) run WELL within the 8gb of...
  15. New kernel release (pvetest repository) with kvm-kmod-2.6.31.6

    Ugh - worked just fine, and I am an idiot :) Apparently I need sleep.
  16. New kernel release (pvetest repository) with kvm-kmod-2.6.31.6

    Hrm - I've changed the sources.list line, and run update/upgrade/dist-upgrade - but apt-get doesn't see any new packages. This is on a fully updated 1.4 install...

        vmserver:~# pveversion
        pve-manager/1.4/4403
        vmserver:~# uname -r
        2.6.24-8-pve
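
    For anyone following along, the repository switch being described looks roughly like this (repository line assumed from the Proxmox VE 1.x / Debian Lenny era; verify against the official docs):

        # in /etc/apt/sources.list, the pvetest line:
        #   deb http://download.proxmox.com/debian lenny pvetest

        apt-get update
        apt-get dist-upgrade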
  17. I/O stalls with one lvm machine, only restart solves

    It's very easy to reproduce - start the VM in question after moving its hard disk back to the lvm volume (from a qcow2 or raw file), then just wait... It's not consistent, but usually in less than an hour any and all i/o on that pv/vg will completely stall, with 100% usage on the VM in question for...
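
    A way to watch for the stall from the host side; a sketch (requires the sysstat package):

        # extended device stats every 5 seconds; a stalled pv shows
        # %util pinned at 100 with essentially no throughput
        iostat -dx 5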
  18. I/O stalls with one lvm machine, only restart solves

    I have a single virtual machine in proxmox, which was originally a qcow2 file but was subsequently converted to an LVM volume (to a raw file first, then dd'd to the lvm volume). Ever since the conversion to lvm, the machine will stop responding. CPU stays super-low, but iostat shows 100% usage...
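
    The conversion path described above, as a sketch (image names and LV path hypothetical):

        # qcow2 -> raw file, then raw file -> LVM logical volume
        qemu-img convert -f qcow2 -O raw vm-101.qcow2 vm-101.raw
        dd if=vm-101.raw of=/dev/vg0/vm-101-disk-1 bs=1M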
  19. Kernel Oops!

    Oct 24 02:24:35 vmserver kernel: [<ffffffff804cba66>] do_page_fault+0x176/0x890
    Oct 24 02:24:35 vmserver kernel: [<ffffffff802af62d>] handle_mm_fault+0x7bd/0xa50
    Oct 24 02:24:35 vmserver kernel: [<ffffffff802def35>] do_path_lookup+0xe5/0x380
    Oct 24 02:24:35 vmserver kernel...
  20. Kernel Oops!

    After installing 1.4 on a new proxmox server (Athlon II X4 620, 790GX chipset, 8GB RAM, 120GB sata boot drive, lvm vg for vm images - a very similar setup to what we've used historically), I keep getting freezes. Here's the full "error" from the last time - any ideas? Oct 24 02:24:35 vmserver...