Search results

  1. Cpu Scaling - save energy...

    cpufreq should work fine with KVM, but on Proxmox it won't work reliably on some CPUs - i.e. those which don't have the constant_tsc flag. On such CPUs, cpufreq will only work reliably with kvm-84 and later. There is also one catch here: if the host's CPU frequency was low and a guest was started...
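
    A quick way to check for that flag on the host (a standard Linux check, not taken from the quoted post):

      grep -c constant_tsc /proc/cpuinfo    # counts CPUs whose flags include constant_tsc; 0 means the flag is absent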
  2. Changing/Replacing Master node

    It is available as of kvm-85.
  3. kvm-85 release

    kvm-85 was just released: http://article.gmane.org/gmane.comp.emulators.kvm.devel/31046 Would it be possible to build a "pvetest" release for each new kvm version and put it in ftp://pve.proxmox.com/debian/dists/etch/pvetest/binary-amd64/? This way, there would be more feedback from users...
  4. Stability issues with KVM-83 is rollback to 75 available?

    BTW, this issue (slow network after some time) is a weird bug in virtio_net and it happens with both kvm-83 and kvm-84 (and perhaps older releases); virtio_blk is not affected.
  5. "nosync": pveca -l -> Use of uninitialized value in string

    Indeed, this is where I made some changes lately. This is one of those rare systems which has its network configured in initrd, and the network breaks if I have these (already configured) interfaces mentioned in /etc/network/interfaces. Therefore - bond0, vmbr0 - commented out, although up and...
  6. "nosync": pveca -l -> Use of uninitialized value in string

    Something happened to my node and it is shown as "nosync" in the master interface. This is what the master shows:

      # pveca -l
      CID----IPADDRESS----ROLE-STATE--------UPTIME---LOAD----MEM---ROOT---DATA
      1 : 192.168.15.67   M     S     21 days 02:12   4.10   40%   10%   10%
      2 : 192.168.15.68   N...
  7. Ubuntu Apache KVM VM's very slow after 2 days up

    Most probably virtio_net was causing these problems.
  8. no keyboard in VM vnc console?

    I'm seeing this sometimes, too. I.e. the keyboard works in the "BIOS" bootup and in the bootloader (i.e. GRUB), but after the Linux guest boots, no keypresses work in the VNC console. Sending ctrl+alt+del doesn't work either. What helps is stopping and starting the guest. I've no idea how to trigger it reliably.
  9. Memory usage of the Proxmox itself

    How do you check the host's memory usage?
  10. Proxmox on centralized storage

    Use:
      scsi0: /dev/disk/by-path/ip-192.168.112.66:3260-iscsi-iqn.2008-12.net.....-lun-1
    or:
      ide0: /dev/disk/by-path/ip-192.168.112.66:3260-iscsi-iqn.2008-12.net.....-lun-1
    then...
  11. IO-Error after life migration

    It seems that migration in KVM is generally not very safe at the moment - see this thread: http://thread.gmane.org/gmane.comp.emulators.kvm.devel/29822 http://thread.gmane.org/gmane.comp.emulators.kvm.devel/29822/focus=29829 "The LSI logic scsi device model doesn't implement device state...
  12. Proxmox on centralized storage

    If you use iSCSI, just set a proper path to the iSCSI disk in /etc/qemu/<VMID>.conf, e.g.: virtio0: /dev/disk/by-path/ip-192.168.112.66:3260-iscsi-iqn.2008-12.net.....-lun-1 instead of pointing it to a file.
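
    For illustration only, a minimal sketch of such a config entry - the VMID, names and IQN below are made-up placeholders, and the elided "....." in the paths above is left as the posts show it:

      # hypothetical /etc/qemu/<VMID>.conf fragment - adjust name, memory, IP and IQN to your setup
      name: iscsi-test
      memory: 1024
      virtio0: /dev/disk/by-path/ip-192.168.112.66:3260-iscsi-iqn.2008-12.net.example:storage.lun1-lun-1

    The by-path device only appears after the host has logged into the iSCSI target (e.g. with open-iscsi), and the exact IQN/LUN suffix depends on the target.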
  13. Extremly high Harddisk read/write speed in Win 2003 VM

    It is likely that everything was served from cache, either:
    - the Proxmox VE host's cache, or
    - the guest system's cache - if it's this one, your benchmarking program is really bad.
    You can empty the cache on the host with:
      echo 3 > /proc/sys/vm/drop_caches
    prior to doing any benchmark.
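
    A typical sequence on the host before each run - the sync is not in the quoted post, but it is commonly run first so that dirty pages are written back before the caches are dropped:

      sync
      echo 3 > /proc/sys/vm/drop_caches    # drops the page cache plus dentries and inodes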
  14. cluster master died - what now?

    I can't. VNC, stopping, starting - if I try to do any of it on the other nodes, I get "You do not have write access". Do I have something screwed up in my config?
  15. cluster master died - what now?

    That's just a hypothetical situation. Imagine that your Proxmox VE cluster master died and you can't start it any more. Without a cluster master, one can't start/stop/VNC guests on the other nodes (at least in the web interface). What is the recommended way to recover from such a situation...
  16. KVM memory use

    Are you sure your XP uses just 65 MB, and exactly 0 MB for cache/buffers? Pretty unlikely situation.
  17. Cluster Failover

    You would need two SANs :) DRBD for data replication, use multipath on Proxmox hosts... Or heartbeat for failover. Or...
  18. Cluster Failover

    You can use shared storage right now - although you will have to configure bits and pieces on the command line - it's not very complicated. But thinking further - do you know the answer to the question: "what happens if your shared storage (SAN) fails, not the Proxmox VE box"?
  19. IO-Error after life migration

    Yes, save/restore memory could save some IO - here, we only copy memory to another host. When guests have 1 or 2 GB RAM, it can make a difference. But the advantage would, of course, be for shared storage. With "live migration", the guest is not "paused" at all. It still works as its pages are...
  20. IO-Error after life migration

    I imagine that could work if the guest was paused before migration and "cont" (continued) after the migration is done. I didn't test whether it works, though.
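
    A rough sketch of how that could be tried in the QEMU/KVM monitor - untested, as the post says, and the host name and port are placeholders:

      (qemu) stop                            # pause the guest on the source host
      (qemu) migrate tcp:target-host:4444    # destination kvm must have been started with -incoming tcp:0:4444
      (qemu) cont                            # resume the guest from the destination host's monitor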