Dear Proxmox developers, please help me get to the bottom of this. I have this problem on Dell servers PE630 and PE610. Is there a way to debug QEMU and see what causes the errors? I also noticed that after a reboot migration works once and then it doesn't. It really makes no sense at this...
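In case it helps to narrow things down, this is how I have been poking at it so far (just my own debugging sketch, using VMID 100 as in the logs below):
# on the source node, during the migration
qm monitor 100
# inside the monitor:
info status
info migrate
# on both nodes, in a second shell, while the migration task runs
journalctl -f
tail -f /var/log/syslog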
While I might agree if the nodes were different, I have two pairs of servers and each pair has the same type of CPU, i.e. two older and two newer servers. So the problem I started with happens regardless; it just differs a bit when I switch between the old and new machines. I cannot run...
There seems to be somewhat different behavior on different CPUs. Two nodes have older CPUs, and there, while the error is the same, resuming puts KVM into an error state; I have to reset KVM, and then resuming works, i.e. the VM starts running again.
I just tried the same setup on 3.4.11 and migration went...
I am at a loss.
Can someone at least tell me what this error could mean?
Oct 10 02:44:26 ERROR: unable to find configuration file for VM 100 - no such machine
Oct 10 02:44:26 ERROR: command '/usr/bin/ssh -o 'BatchMode=yes' root@38.102.250.228 qm resume 100 --skiplock' failed: exit code 2
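For reference, this is what I check by hand after the failure (paths as I understand the pmxcfs layout; the wildcard just covers whichever node the VM ended up on):
# on the target node: did the config actually move there?
ls -l /etc/pve/qemu-server/100.conf
ls -l /etc/pve/nodes/*/qemu-server/100.conf
# then retry exactly what the migration task does
qm resume 100 --skiplock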
Sometimes after running the migrate command I get this:
Executing HA migrate for VM 100 to node virt2n3-la
unable to open file '/etc/pve/ha/crm_commands.tmp.19096' - No such file or directory
TASK ERROR: command 'ha-manager migrate vm:100 virt2n3-la' failed: exit code 2
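When that happens I check the HA stack like this (service names as shipped with PVE 4, as far as I can tell):
ha-manager status
systemctl status pve-ha-crm pve-ha-lrm
# /etc/pve/ha/crm_commands lives on pmxcfs, so the cluster filesystem must be up too
systemctl status pve-cluster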
In syslog:
Oct 10...
Testing live migration on a 4-node quorate cluster. It doesn't happen in 100% of cases, but it is reproducible. I migrate a VM from one node to another and I get this:
task started by HA resource agent
Oct 09 22:04:22 starting migration of VM 100 to node 'virt2n2-la' (38.102.250.229)
Oct 09 22:04:22 copying...
I would be interested in running the same tests on my cluster. Can you post them here? I also wonder whether two different storage objects can be created against a single RBD pool, e.g. ceph-kvm and ceph-lxc.
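To make the question concrete, I mean something like this in /etc/pve/storage.cfg, with both entries pointing at the same pool (IDs, monitor addresses and pool name are only placeholders):
rbd: ceph-kvm
    monhost 10.0.0.1;10.0.0.2;10.0.0.3
    pool rbd
    username admin
    content images
rbd: ceph-lxc
    monhost 10.0.0.1;10.0.0.2;10.0.0.3
    pool rbd
    username admin
    content rootdir
    krbd 1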
Thank you Wolfgang,
I see what happened. I assumed Ceph was installed, but the only thing installed was the ceph-common package. When I enabled the Ceph source in /etc/apt, I saw some Ceph packages updated among others. Sorry for the wrong assumption.
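For the record, what I added is roughly this (the repo line is written from memory for the jessie-based PVE 4, so please double-check it):
# /etc/apt/sources.list.d/ceph.list
deb http://download.ceph.com/debian-hammer jessie main
followed by apt-get update and apt-get dist-upgrade.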
The crush map tool is part of...
I had to use a rescue CD to revive the node. It seems that the IPMI watchdog doesn't work very well with Proxmox and Dell PE; I had trouble even initializing the device. I am sticking with iTCO_wdt for now.
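For anyone else on Dell PE boxes, the switch itself is just this (at least that is where I set it, assuming I read the HA article correctly):
# /etc/default/pve-ha-manager
WATCHDOG_MODULE=iTCO_wdt
plus a reboot so watchdog-mux picks up the new module.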
Coming from 3.4, I noticed a new KRBD checkbox on the RBD storage form.
Considering a mix of KVM and LXC on my new cluster nodes, what are the recommended settings on the RBD storage, i.e. should KRBD be checked or not?
I have a Ceph cluster built on a separate set of hardware, so I use the Ceph client configuration on Proxmox to access the RBD storage.
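For context, I add the storage roughly like this from the CLI (storage ID, pool and monitor list are placeholders; --krbd appears to be what the new checkbox maps to):
pvesm add rbd ceph-lxc --pool rbd --monhost "10.0.0.1;10.0.0.2;10.0.0.3" --username admin --content rootdir --krbd 1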
On the web interface, entering the Ceph tab on each node and selecting Crush returns:
Error command 'crushtool -d /var/tmp/ceph-crush.map.1930 -o...
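As a cross-check outside the GUI, I can decompile the map by hand against the external cluster:
ceph osd getcrushmap -o /tmp/crush.map
crushtool -d /tmp/crush.map -o /tmp/crush.txt
less /tmp/crush.txt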
I enabled ipmi_watchdog per the PVE 4 HA article and now my server cannot boot. I get to the network stage (no limit) and then the server reboots. Disabling the watchdog in the BIOS doesn't work. I also noticed that there is no recovery kernel in PVE 4 (similar to Ubuntu); booting with the single option doesn't...
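My plan from a rescue CD, in case that is the right direction (file names below are just my guess at where to revert things):
# stop the module from loading at boot
echo "blacklist ipmi_watchdog" > /etc/modprobe.d/blacklist-ipmi-watchdog.conf
# revert the HA watchdog selection
sed -i 's/^WATCHDOG_MODULE=/#WATCHDOG_MODULE=/' /etc/default/pve-ha-manager
# rebuild the initramfs so the blacklist takes effect early
update-initramfs -u -k all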
Since there is now corosync version 2, can you add the ability to add a redundant ring from the command line during cluster creation and when adding a node?
Also, I think it is a good idea to document a manual restart of the cluster if needed. I know it is something to avoid, but I am sure people may well need...
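Right now I would have to do it by hand after pvecm create / pvecm add, editing /etc/pve/corosync.conf along these lines (addresses, names and values are placeholders, and config_version has to be bumped):
totem {
  version: 2
  config_version: 2
  cluster_name: mycluster
  rrp_mode: passive
  interface {
    ringnumber: 0
    bindnetaddr: 10.0.0.0
  }
  interface {
    ringnumber: 1
    bindnetaddr: 10.1.0.0
  }
}
nodelist {
  node {
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 10.0.0.11
    ring1_addr: 10.1.0.11
  }
}
followed by the manual restart I mentioned (systemctl restart corosync on each node), which is exactly the part I would like to see documented.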
Can a DRBD 9 volume limit redundancy to 2 specific nodes on a 4-node cluster? I see that you can specify the redundancy and that it cannot be more than the number of nodes in the cluster, but can it be less?
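To make it concrete, what I am hoping is possible is something like this with drbdmanage (I may be misremembering the exact subcommand names, so treat them as assumptions):
# let drbdmanage pick any 2 nodes
drbdmanage deploy-resource r0 2
# versus pinning the resource to two specific nodes
drbdmanage assign-resource r0 nodeA
drbdmanage assign-resource r0 nodeB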
Hello, my English is unfortunately a little rusty. I have upgraded to version 4.0. Now I wanted to see how things look with LXC. I created a container through the web interface and get an error regarding the mount points. I chose ZFS as the storage, and it is also enabled in...