Search results

  1. Can't live migrate after dist-upgrade

    Hi, my problem is: I have stopped and started the virtual machine. I have double-checked in the qemu monitor with info version that my VM is running 2.5.0 (pve-qemu-kvm_2.5-8). The server I migrate to also has the same KVM version. Still I cannot do live migrations. I receive no useful debug info, just: What...
  2. Can't live migrate after dist-upgrade

    Hi, I am running the most current versions on all my servers and migration is not working, neither from 2.4 to 2.5 nor from 2.5 to 2.5. I have tried with and without migration_unsecure. What packages should I install? The qemu-server 4.0-62 from the repo or the older privately patched...
  3. Can't live migrate after dist-upgrade

    Hi, I am running the most current version 4.1-15 of the pve-no-subscription repo. Today I have installed the newest Proxmox packages: pve-qemu-kvm_2.5-8_amd64.deb and qemu-server_4.0-62_amd64.deb. I now cannot migrate running VMs from KVM 2.4 to 2.5. Even if I restart the vms and they are running...
  4. Some Cluster Nodes marked red in Web-Interface

    Yes, it was a stale NFS mount which made some pve processes hang. Thanks Dietmar
  5. Some Cluster Nodes marked red in Web-Interface

    Hi there, I have a Proxmox cluster with 10 machines, all running the same software: Proxmox 4.1-2, the newest release from the pve-no-subscription repository. 3 of the 10 nodes are shown red in the Proxmox web interface instead of green. When I restart pvestatd on these machines they become green for some minutes...
  6. Anyone already running ceph-infernalis in proxmox?

    Hi there, we are thinking about upgrading our Proxmox hosts to Ceph Infernalis. Has anyone already done this using Proxmox 4.1? Were there any problems?
  7. Migrations fail with HTB: quantum of class 10001 is big. Consider r2q change

    Hi there, I am running Proxmox 4.0 in a cluster with kernel 4.2.2-1-pve. My virtual machines use traffic shaping like: net0: virtio=96:89:34:1C:AC:C6,bridge=vmbr0,rate=12. The network is quite fast (10 Gigabit). I see strange effects: live migrations sometimes fail, sometimes they work...
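    The HTB warning in this thread's title comes from how the kernel sizes each class's quantum: quantum = rate / r2q (rate in bytes per second, default r2q = 10), and sch_htb complains when the quantum falls outside roughly 1000-60000 bytes. A quick sanity check, assuming Proxmox's rate=12 means 12 MB/s:

    ```python
    # HTB computes quantum = rate / r2q; the kernel prints
    # "quantum of class ... is big" when the result exceeds ~60000 bytes.
    # Assumption: Proxmox's "rate=12" means 12 megabytes per second.
    rate_bytes_per_sec = 12 * 1024 * 1024
    r2q_default = 10                         # kernel default r2q
    quantum = rate_bytes_per_sec // r2q_default
    print(quantum)                           # 1258291 bytes, far above 60000

    # Raising r2q brings the quantum back into the valid window,
    # e.g. r2q = 300 yields ~41943 bytes and should silence the warning.
    print(rate_bytes_per_sec // 300)
    ```

    In practice this means passing a larger r2q when the HTB root qdisc is created (tc's htb qdisc accepts an r2q parameter), or setting an explicit quantum on the class.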
  8. Hi spirit, I made a mistake upgrading a running proxmox 3 cluster to proxmox 4 using your today's...

    Hi spirit, I made a mistake upgrading a running Proxmox 3 cluster to Proxmox 4 using your guide from today in the forum. Now my 4.0 cluster is broken and I cannot add a second node to it. Are you willing to support me? You can write a bill for this. Please e-mail support@gatworks.de, thanks
  9. Proxmox VE 4.0 released!

    Hi Spirit, did you have a chance to write down how you did your upgrade? I am also looking for the best method to upgrade a 5-node cluster from Proxmox 3 to Proxmox 4 on the fly.
  10. Linux VMs with virtio-scsi Driver are randomly crashing

    Hi, I am using a cluster with Proxmox 3.4.9. The kernel on the Proxmox hosts is 2.6.32-37-pve. I store the VM images in a ceph-hammer cluster. Recently I changed some Linux machines from the pure virtio driver to the virtio-scsi driver, so I can use the fstrim feature on the VMs from time to time. Now I...
  11. ceph.com down?

    I successfully used eu.ceph.com instead
  12. Ceph 0.94 (Hammer) - anyone cared to update?

    Hi all, what is your experience with the updated Ceph cluster? Is Ceph Hammer in general faster, or does it have lower latency for VMs, compared to Ceph Giant? Did you notice any differences?
  13. Ceph - High latency on VM before OSD marked down

    Hi, your main problem is: Ceph is not really designed for 3 OSD nodes with 3 disks each. Ceph begins to shine the more OSD nodes and the more disks you have. In your case you suddenly lose 33% of your disks. Use more hosts and more disks and you won't have problems of the size of suddenly...
  14. Problems with pve-kernel-3.10.0-5-pve and live migration

    Hi, we recently switched to pve-kernel-3.10.0-5-pve from the pve-no-subscription repository because the Ceph documentation states that more recent kernels are better for Ceph performance. Unfortunately live migration is not reliable when migrating VMs between Proxmox hosts running...
  15. Use different kvm binaries / kvm versions for different VMs?

    Hi, we still have big Solaris 10 performance problems after upgrading the Proxmox KVM version. On older KVM binaries the machines perform much better. Is there an elegant way or workaround to run some VMs under a different KVM hypervisor version/binary than the rest in Proxmox? Thanks ado
  16. Solaris 10 Guest very slow with proxmox 3.3

    Hi, we have run some Solaris hosts since Proxmox 3.0 with OK performance for more than a year. We never rebooted them. Yesterday we restarted the Solaris boxes under Proxmox 3.3 and since then we have very bad performance: booting takes 20 minutes. What changed in the meantime might be the kvm...
  17. Qemu Guest Agent Support?

    Hi, I would like to use the qemu guest agent and commands like guest-fsfreeze-freeze to get more consistent snapshot backups with Proxmox. What has to be done for guest agent support in Proxmox? Thanks
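    For reference, in current Proxmox VE releases enabling the agent is a two-step config change (a sketch; option names may differ in the older 3.x versions this thread discusses): install the agent inside the guest, then turn on the agent option for the VM.

    ```shell
    # Inside the guest (Debian/Ubuntu example):
    apt-get install qemu-guest-agent

    # On the Proxmox host: enable the agent for VM 100 ("agent: 1" lands in
    # /etc/pve/qemu-server/100.conf), then stop/start the VM so the
    # virtio-serial device the agent talks over is actually added.
    qm set 100 --agent 1
    ```

    Once the agent is running, vzdump snapshot backups can use fsfreeze through it for more consistent guest filesystems.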
  18. KVM Migration - 3.0 to 3.1

    I verified it is possible to upgrade a cluster from 3.0 to 3.1 with all machines running: just upgrade all machines in the cluster to 3.1 while the VMs are running. Live migration is then possible and we can reboot the cluster members one by one.
  19. KVM Migration - 3.0 to 3.1

    I experience the same problem: I cannot upgrade from 3.0 to 3.1 without rebooting all VMs, because live migration from 3.0 to 3.1 does not work. Any idea for a workaround so that I can migrate a cluster without rebooting the VMs?
  20. Script to start a VM in a cluster, don't know on which cluster member the VM is

    Hi, I want to write a script that stops and starts a VM in a Proxmox cluster. When I use qm start I have to know on which of the cluster nodes the VM xx is defined and running. Is there a kind of cluster "qm" command that lets me stop and start a VM when I do not know on which cluster member it is...
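    One workaround (a sketch, not a built-in command): ask the cluster API which node hosts the VMID, then run qm start on that node. With pvesh get /cluster/resources --type vm --output-format json (assuming a pvesh version that supports JSON output), the node lookup could look like:

    ```python
    import json

    def node_of_vmid(resources_json, vmid):
        """Return the cluster node hosting `vmid`, or None if not found.

        `resources_json` is the JSON emitted by
        `pvesh get /cluster/resources --type vm --output-format json`
        (assumed shape: a list of objects with "vmid" and "node" keys).
        """
        for res in json.loads(resources_json):
            if res.get("vmid") == vmid:
                return res.get("node")
        return None

    # Example with mocked pvesh output:
    sample = '[{"vmid": 101, "node": "pve1"}, {"vmid": 102, "node": "pve3"}]'
    print(node_of_vmid(sample, 102))  # -> pve3
    ```

    The script would then ssh to the returned node and run qm start there (or call the corresponding REST endpoint), since qm itself only operates on the local node.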