Search results

  1. Feature request - Option for vzdump to only e-mail if any errors occur

    This would be awesome. We're now managing a good number of Proxmox servers, and while we could set an e-mail rule in our NOC to do this, it would be great to be able to do so server-side...
  2. Allow different compression binaries and configuration for backups

    Not sure other than pigz... I'm not an expert on this stuff, I just know there are tons of people out there who use Proxmox w/ 8+ cores, and given that backups are usually done at off-hours, why not use more than one core (even if only 2 or 3)?
  3. Allow different compression binaries and configuration for backups

    We had great success adding pigz (multithreaded gzip), symlinking gzip to pigz, then turning off --rsyncable in our backup scripts (it's not compatible with pigz) and adding a core limit (in our case 2) to the gzip command line in those scripts. This sped up backups dramatically (almost linear...
  4. Migrated Windows VM via qemu-nbd and selfimage, no boot

    Hrm, so I imaged the whole disk this time. However, now when I boot I get a BSOD and it reboots every time (about 3 seconds into loading Windows). I loaded the IDE drivers as displayed in the wiki!?
  5. Migrated Windows VM via qemu-nbd and selfimage, no boot

    Proxmox 1.5 w/ latest updates, 2.6.32. Used the following guide: http://pve.proxmox.com/wiki/Migration_of_servers_to_Proxmox_VE, section "Physical server to Proxmox VE using SelfImage". However, when I try to boot the server, the SeaBIOS comes up, says booting from hard drive, then just sits...
  6. md raid 1 + lvm2 + snapshot => volumes hang

    LVM2 doesn't perform nearly as well as mdadm, especially in large stripe+mirror arrays. And actually, because we boot off SSD and use LVM on top of mdadm for VM storage / backup, there are _no_ configuration changes required upon update. That's an odd statement. Other than Proxmox, we use mdadm...
  7. Memory balloon device, other conf file uses

    I've been successfully using this via "args: -balloon virtio" in my qemu vm conf files, but I think it makes sense to add it to the proxmox web interface. With this, full storage cache options, and time-skew hacks available in the web interface, I wouldn't have to go into the conf files at...
  8. Time drift in Windows 2008 R2 64-bit SMP

    Yeah, 3 different hosts showing these issues... The "bleeding edge test" one in the first post: a Core i7-860 on a Biostar TP55. And our two "test production servers": Intel S5000VSA boards, each with dual Xeon E5320s. The two test production servers are the same hardware as our two production servers.
  9. Time drift in Windows 2008 R2 64-bit SMP

    1 vCPU makes a big difference; there is minor clock instability, but it is very close to real-time for the most part. However, we have 16 1.86 GHz cores, and 1 core on a fairly busy Windows 2008 R2 server makes things fairly slow :)
  10. Time drift in Windows 2008 R2 64-bit SMP

    Have tried with -rtc-td-hack, -no-kvm-pit-reinjection, and nothing for args. Have tried with IDE and e1000 drivers instead of virtio. Nothing seems to help.
  11. Slow clock - time drift in windows guests

    Having a similar problem: http://forum.proxmox.com/threads/3952-Time-drift-in-Windows-2008-R2-64-bit-SMP
  12. Time drift in Windows 2008 R2 64-bit SMP

    And no luck. Things seem to equalize (or internet time sync just makes up for it), but seconds definitely don't 'tick' evenly on the system clock.
  13. Time drift in Windows 2008 R2 64-bit SMP

    Err, looks to be back on the fritz - so I don't think this arg solved it. Trying the rtc-td-hack arg now (as per http://www.linux-kvm.com/content/kvm-84-released-bug-fixes-and-qemu-merge).
  14. Time drift in Windows 2008 R2 64-bit SMP

    Yup, seems to have solved it. Perhaps this should be default for 64-bit Windows guests?
  15. Time drift in Windows 2008 R2 64-bit SMP

    I've been noticing pretty obvious time drift in Win2k8 R2 64-bit (also Win2k8 64-bit), especially when under i/o pressure. Happens w/ both IDE/e1000 and VirtIO/VirtIO driver sets. I see the following RH info on this: https://bugzilla.redhat.com/show_bug.cgi?id=577266...
  16. Feature request: Tunnel VNC over HTTPS

    http://http-tunnel.sourceforge.net/ - written in perl, open source, might help?
  17. Feature request: Tunnel VNC over HTTPS

    As of now, you need to forward both the HTTPS and VNC ports through a firewall if you have a LAN Proxmox VE server and want to manage it via the web from the outside: HTTPS for the web interface, then VNC for the guest GUIs. I'm wondering how difficult it would be to automatically tunnel the VNC traffic...
  18. Request: Allow cache options to be set for storage

    Perhaps, but I don't know enough to say that's always optimal... It's just been my experience. Depending on different storage configurations, memory amount, memory bandwidth, etc, etc, etc - others may have different results.
  19. Request: Allow cache options to be set for storage

    I've found that when running LVM on hardware RAID (at least in our specific environment), performance seems to be improved by setting cache=none on all storage in a VM's qemu conf file. Would it be possible to add the ability to set cache options in the web interface when setting up a VM?
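Several of the threads above describe editing VM conf files by hand. A hypothetical conf fragment combining the tweaks mentioned might look like the following; the VMID and the disk line syntax are invented for illustration and may not match the exact conf format of that Proxmox release:

```
# Hypothetical /etc/qemu-server/101.conf fragment (VMID invented).
# Extra qemu flags go on the "args:" line:
#   -balloon virtio   memory balloon device (result 7)
#   -rtc-td-hack      time-drift workaround that appeared to help (results 13-14)
args: -balloon virtio -rtc-td-hack
```

The cache=none setting from result 19 was similarly applied per disk in the conf file, which is exactly why both threads ask for these options to be exposed in the web interface instead.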
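The pigz approach described in result 3 can be sketched roughly as follows. This is a hypothetical illustration, not the poster's actual script: the variable names, the fallback logic, and the 2-thread cap are assumptions.

```shell
#!/bin/sh
# Sketch of swapping gzip for pigz (multithreaded gzip) in a backup
# script, per result 3. Hypothetical: names and the 2-thread cap are
# illustrative, not values taken from the forum post.

THREADS=2   # cap pigz so backups don't starve running guests

# pigz does not support gzip's --rsyncable flag, so drop it when using
# pigz; fall back to plain gzip --rsyncable when pigz is not installed.
if command -v pigz >/dev/null 2>&1; then
    COMPRESS="pigz -p ${THREADS}"
else
    COMPRESS="gzip --rsyncable"
fi

echo "compressor: ${COMPRESS}"
# A backup pipeline would then invoke it, e.g.:
#   tar cf - /path/to/vm-data | ${COMPRESS} > backup.tar.gz
```

The symlink variant mentioned in the post (pointing gzip at pigz) achieves the same substitution without editing the pipeline, but then --rsyncable must still be removed wherever gzip is invoked.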