Search results

  1. PVE 4 KVM live migration problem

    Dear Proxmox developers, please help to get to the bottom of this. I have this problem on Dell servers PE630 and PE610. Is there a way to debug QEMU and see what causes the errors? I also noticed that after a reboot migration works once and then it doesn't. It really makes no sense at this...
  2. PVE 4 KVM live migration problem

    Perhaps I would agree if the nodes were different, but I have 2 pairs of servers and each pair has the same type of CPU, i.e. two older and two newer servers. So the problem happens regardless; it just differs a bit when I switch between the old and new machines. I cannot run...
  3. PVE 4 KVM live migration problem

    How is it a problem? I have no issues in 3.4; would you care to explain? Also, what CPU would you use? Sent from my SM-G900V using Tapatalk
  4. PVE 4 KVM live migration problem

    Seems to be somewhat different behavior on different CPUs. 2 nodes have older CPUs, and there, while the error is the same, resuming puts KVM into the error state; I have to reset KVM and then resuming works, i.e. the VM starts running again. I just tried the same setup on 3.4.11 and migration went...
  5. PVE 4 KVM live migration problem

    Tried and it made no difference.
  6. PVE 4 KVM live migration problem

    I am at a loss. Can at least someone tell me what this error could mean?

        Oct 10 02:44:26 ERROR: unable to find configuration file for VM 100 - no such machine
        Oct 10 02:44:26 ERROR: command '/usr/bin/ssh -o 'BatchMode=yes' root@38.102.250.228 qm resume 100 --skiplock' failed: exit code 2
  7. PVE 4 Another migration problem

    Sometimes after running the migrate command I get this:

        Executing HA migrate for VM 100 to node virt2n3-la
        unable to open file '/etc/pve/ha/crm_commands.tmp.19096' - No such file or directory
        TASK ERROR: command 'ha-manager migrate vm:100 virt2n3-la' failed: exit code 2

    In syslog: Oct 10...
  8. PVE 4 KVM live migration problem

    Testing live migration on a 4-node quorate cluster. It doesn't happen in 100% of cases, but it is reproducible. I migrate a VM from one node to another and I get this:

        task started by HA resource agent
        Oct 09 22:04:22 starting migration of VM 100 to node 'virt2n2-la' (38.102.250.229)
        Oct 09 22:04:22 copying...
  9. New Ceph KRBD setting on PVE 4

    I would be interested in running the same tests on my cluster. Can you post them here? I also wonder if two different storage objects can be created against a single RBD pool, i.e. ceph-kvm and ceph-lxc. Sent from my SM-G900V using Tapatalk
  10. Ceph crush map retrieval on PVE 4

    Thank you Wolfgang, I see what happened. I assumed Ceph was installed, but the only thing installed was the ceph-common package. When I enabled the Ceph source in /etc/apt I saw some Ceph packages updated among others. Sorry for the wrong assumption. The crush map tool is part of...
  11. watchdog issues

    I had to use a rescue CD to revive the node. It seems the IPMI watchdog doesn't work very well with Proxmox and Dell PE; I had trouble even initializing the device. I am sticking with iTCO_wdt for now.
  12. New Ceph KRBD setting on PVE 4

    I understand, but can I use the storage for both KVM and LXC with KRBD enabled?
  13. New Ceph KRBD setting on PVE 4

    Coming from 3.4, I noticed a new KRBD checkbox on the RBD storage form. Considering a mix of KVM and LXC on my new cluster nodes, what is the recommended setting on the RBD storage, i.e. should KRBD be checked or not?
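
    A minimal /etc/pve/storage.cfg sketch of what the split asked about above could look like, assuming an external Ceph cluster and that two storage entries against one pool are allowed: the storage IDs ceph-kvm and ceph-lxc are taken from the earlier post, while the monitor addresses, pool and user are placeholders. As I understand it, QEMU guests can use librbd without KRBD, whereas containers need the kernel client, so only the container-facing entry sets krbd.

        rbd: ceph-kvm
            monhost 10.10.10.1 10.10.10.2 10.10.10.3
            pool rbd
            username admin
            content images

        rbd: ceph-lxc
            monhost 10.10.10.1 10.10.10.2 10.10.10.3
            pool rbd
            username admin
            content rootdir
            krbd 1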
  14. Ceph crush map retrieval on PVE 4

    I have a Ceph cluster built on a separate set of hardware, so I use the Ceph client configuration on Proxmox to access RBD storage. On the web interface, entering the Ceph tab on each node and selecting Crush returns:

        Error command 'crushtool -d /var/tmp/ceph-crush.map.1930 -o...
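
    For reference, a sketch of pulling and decompiling the crush map by hand with the standard Ceph CLI from any node that has the client keyring; per the follow-up above, crushtool is not part of ceph-common alone, so the fuller Ceph packages need to be installed first:

        # fetch the compiled crush map from the monitors
        ceph osd getcrushmap -o /tmp/crush.map
        # decompile it into readable text (the step the error above comes from)
        crushtool -d /tmp/crush.map -o /tmp/crush.txt
        less /tmp/crush.txt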
  15. watchdog issues

    I enabled ipmi_watchdog per the PVE 4 HA article and now my server cannot boot. I get to the network stage (no limit) and then the server reboots. Disabling the watchdog in the BIOS doesn't work. I also noticed that there is no recovery kernel in PVE 4 (similar to Ubuntu); booting with the single option doesn't...
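
    As far as I know, the HA article refers to the watchdog module selection in /etc/default/pve-ha-manager; a sketch of that file, where reverting means commenting the line out again (for example from a rescue CD) so the default softdog is used:

        # /etc/default/pve-ha-manager
        # select watchdog module (default is softdog)
        WATCHDOG_MODULE=ipmi_watchdog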
  16. Cluster questions on PVE 4

    Since there is corosync version 2, can you add the ability to add a redundant ring from the command line during cluster creation and when adding a node? Also, I think it is a good idea to document a manual restart of the cluster if needed. I know it is something to avoid, but I am sure people will need...
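
    Until that is scriptable, corosync 2 itself accepts a second ring directly in /etc/pve/corosync.conf; a sketch of just the relevant pieces, with the bind networks, node name and addresses as placeholders:

        totem {
          version: 2
          rrp_mode: passive
          interface {
            ringnumber: 0
            bindnetaddr: 10.10.1.0
          }
          interface {
            ringnumber: 1
            bindnetaddr: 10.10.2.0
          }
        }

        nodelist {
          node {
            ring0_addr: 10.10.1.11
            ring1_addr: 10.10.2.11
          }
        }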
  17. Drbd9

    Can a DRBD 9 volume limit redundancy to 2 specific nodes on a 4-node cluster? I see that you can specify redundancy, and it cannot be more than the number of nodes in the cluster, but can it be less?
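
    Whether placement can be pinned to 2 specific nodes is the open question here, but the redundancy count itself is set in /etc/pve/storage.cfg with the drbdmanage-based plugin that ships with PVE 4; a sketch, with the storage ID made up:

        drbd: drbd-r2
            content images
            redundancy 2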
  18. PVE 4.0 LXC on ZFS-Storage fails

    Hello, my English has unfortunately gotten a little rusty. I have upgraded to version 4.0. Now I wanted to see how things look with LXC. I created a container through the web interface and got an error with respect to the mount points. As storage I chose ZFS, which is also enabled in...
  19. Proxmox VE 4.0 released!

    So you are saying I cannot install 8.4 and use it with LVM as I did in 3.4? Is anything in the new system preventing me from doing it?