Search results

  1. Proxmox backup with iothreads

    I've been reading great things about improvements in SSD performance in guests when iothreads is enabled, and after a bit of testing I can confirm that it helps quite a bit. As I understand it, Proxmox backup currently fails on iothreads-enabled virtual disks - is there a plan to resolve this soon?
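
    For context (not something stated in the thread): in current Proxmox VE syntax, iothread is a per-disk option and needs the virtio-scsi-single controller; a minimal sketch, with the VM ID and disk name as placeholders:
      qm set 100 --scsihw virtio-scsi-single
      qm set 100 --scsi0 local-lvm:vm-100-disk-0,iothread=1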
  2. Testing Kernel 3.10: Marvell SATA controller drops all drives on load

    While testing Kernel 3.10 on a new host spec, any time I run heavy I/O against disks on a Marvell 88SE9230 SATA adapter, all drives on that adapter drop (mine is not using mdraid, of course; these are individual backup volumes, but the issue is very similar)...
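
    As an illustration of the kind of load that triggers this (not the exact commands from the thread), a read-only random-read run with fio against one of the affected drives; /dev/sdX is a placeholder:
      fio --name=marvell-load --filename=/dev/sdX --readonly --rw=randread \
          --bs=4k --iodepth=32 --ioengine=libaio --time_based --runtime=300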
  3. VZDump backup failed

    Interesting... Just got past VM 100 to 102, but now the backup of 102 is hung (the dat file is 38 bytes, no I/O or CPU to speak of, but the vzdump process is alive). In syslog:
      Feb 8 00:10:50 vmserver vzdump[849709]: VM 102 qmp command failed - VM 102 qmp command 'backup_cancel' failed - The command...
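
    If a vzdump job hangs like this, one common cleanup (a sketch, not advice from the thread) is to kill the stuck vzdump process and then clear the backup lock it leaves on the VM:
      kill 849709      # the vzdump PID shown in the syslog line above
      qm unlock 102    # removes the 'backup' lock so the VM can be managed again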
  4. VZDump backup failed

    mount:
      tmpfs on /lib/init/rw type tmpfs (rw,nosuid,mode=0755)
      proc on /proc type proc (rw,noexec,nosuid,nodev)
      sysfs on /sys type sysfs (rw,noexec,nosuid,nodev)
      udev on /dev type tmpfs (rw,mode=0755)
      tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
      devpts on /dev/pts type devpts...
  5. VZDump backup failed

    For the last few weeks, my VZDump backups have been failing:
      Feb 07 22:56:44 INFO: Starting Backup of VM 100 (qemu)
      Feb 07 22:56:44 INFO: status = running
      Feb 07 22:56:45 INFO: backup mode: snapshot
      Feb 07 22:56:45 INFO: ionice priority: 7
      Feb 07 22:56:45 INFO: creating archive...
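
    For reference, the log above is what a snapshot-mode run produces; an equivalent manual invocation (the storage name is a placeholder) would look roughly like:
      vzdump 100 --mode snapshot --ionice 7 --storage backup-store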
  6. Areca 1882ix hang after kernel upgrade from 2.6.32-11-pve to 2.6.32-14-pve

    After 5 days of running -11, this issue has still not happened.
  7. Areca 1882ix hang after kernel upgrade from 2.6.32-11-pve to 2.6.32-14-pve

    After upgrading from the -11 to the -14 kernel, I am getting 100% I/O wait and zero throughput (an I/O 'hang') on my Areca 1882ix SAS/SATA RAID controller card, and any VMs touching that card's arrays also get I/O hangs. Immediately prior to the hang, I get these log messages:
      Aug 30 01:10:05 vmserver...
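
    A quick way to confirm the condition described (a sketch, not taken from the thread) is to watch extended device statistics during the hang; %util pinned at 100 with no completed reads or writes matches "100% io wait and 0 throughput":
      iostat -x 1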
  8. Backup to .bin / .cfg files in a folder

    I don't think I'm explaining it well :) You're right that tar has no overhead, so restoration is as fast as copying the file from one place to another. However, what I'm looking for is the ability to, rather than restore, simply turn off the VM and attach the backed-up .bin straight out of...
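
    A rough sketch of that idea with standard tools (paths are placeholders, and this is not an existing Proxmox feature): loop-attach the raw backup image read-only and mount a partition from it for inspection:
      losetup --find --show --read-only --partscan /backups/vm-100-disk-1.bin
      mkdir -p /mnt/inspect
      mount -o ro /dev/loop0p1 /mnt/inspect    # assumes the loop device came back as /dev/loop0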
  9. Backup to .bin / .cfg files in a folder

    It is certainly faster than unpacking something with compression, but it can still take hours if you have many 100GB+ images in a VM, especially over a network. To me, having immediate access to a backup for mounting to the vm would be much more convenient and give us peace of mind for...
  10. Backup to .bin / .cfg files in a folder

    Currently, recovering from backup can be extremely time-consuming even if compression is not used, as untarring requires I/O on the entire archive. Would it be possible to have a backup mode that, rather than tarring images and config files, places them in a folder instead?
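
    A rough manual equivalent of such a "folder" backup, assuming an LVM-backed disk for VM 100 (all names and paths are placeholders):
      lvcreate -s -L 1G -n vm-100-snap /dev/array/vm-100-disk-1
      mkdir -p /backup/100
      dd if=/dev/array/vm-100-snap of=/backup/100/vm-100-disk-1.raw bs=1M
      cp /etc/pve/qemu-server/100.conf /backup/100/
      lvremove -f /dev/array/vm-100-snap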
  11. i/o problem

    FYI, this seems somewhat similar to our issue in the last half of this thread: http://forum.proxmox.com/threads/2649-KVM-machines-very-slow-unreachable-during-vmtar-backup
  12. KVM machines very slow /unreachable during vmtar backup

    That's really interesting - I have no idea what this means, but a few of my disks have multiple entries:
      array-vm--107--disk--1: 0 419430400 linear 8:0 5460984320
      array-vm--106--disk--1: 0 209715200 linear 8:0 4538237440
      array-vm--105--disk--1: 0 209715200 linear 8:0 5251269120...
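
    Those are device-mapper "linear" segments; an LV that was grown or allocated non-contiguously shows one line per segment. One way to cross-check (a sketch; the VG name 'array' is inferred from the output above):
      dmsetup table | grep 'vm--107--disk'
      lvs -o lv_name,seg_count,devices array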
  13. KVM machines very slow /unreachable during vmtar backup

    atran, any results? Any more ideas from the proxmox folks on this?
  14. Multi-vm network issues (QoS, I guess?)

    We have a node with 7-8 VMs active at a given time, and a major network performance issue (I'm not a network guru, and have no clue where to start):
      - The node's physical connection is 10 Mbps to our ISP
      - Each VM is Linux and uses VirtIO
      - When one VM is sending data to an internet host over the ISP link...
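
    One possible mitigation, not something suggested in the thread: shape each VM's tap interface on the host so a single sender cannot saturate the 10 Mbps uplink. tap101i0 is a placeholder for one VM's net0 device:
      tc qdisc add dev tap101i0 root tbf rate 2mbit burst 32kb latency 400ms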
  15. KVM machines very slow /unreachable during vmtar backup

    Does everyone just use gzip and thus not have this problem?
  16. KVM machines very slow /unreachable during vmtar backup

    Just tested on a few pve 2.0 beta boxes, completely stock (no vzdump tweaks), on the same fast hardware, running backup w/ no gzip:
      pve-manager: 2.0-14 (pve-manager/2.0/6a150142)
      running kernel: 2.6.32-6-pve
      proxmox-ve-2.6.32: 2.0-54
      pve-kernel-2.6.32-6-pve: 2.6.32-54
      lvm2: 2.02.86-1pve2
      clvm...
  17. KVM machines very slow /unreachable during vmtar backup

    2.6.35 is not really an upgrade (the 2.6.32 Proxmox kernel has lots of improvements/fixes over the 2.6.35 Proxmox kernel, which has not been maintained for some time), but as a test it would be worthwhile to see if it helps.
