We are just starting to evaluate Proxmox VE and are testing the various features it provides. The setup uses 3 Intel-based nodes with GlusterFS. Since we do not have an enterprise subscription (yet), we installed Debian 7.5 + GlusterFS 3.5 and then converted to the Proxmox kernel using the pve-no-subscription repo. Perhaps it is important to point out that we did NOT first install Proxmox VE 3.1 from the standard pve repo as the wiki indicates; we instead went straight to 3.2 via pve-no-subscription, which we assume is supported.
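In case it matters, the repository entry we used for the conversion is just the standard pve-no-subscription one for wheezy (quoting from memory, so the exact line may differ slightly):

deb http://download.proxmox.com/debian wheezy pve-no-subscription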
So far we have only configured the pvecm cluster and joined the nodes. No fencing is enabled yet.
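For reference, the cluster setup was the usual sequence, roughly (<clustername> and the IP are placeholders here):

pvecm create <clustername>      # on the first node
pvecm add <ip-of-first-node>    # on each of the other two nodes
pvecm status                    # to verify membership afterwards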
We created a simple CentOS 6.5 minimal VM (1 GB RAM, 1 CPU (kvm64), 32 GB qcow2 disk on virtio, virtio network) and verified that we can perform _offline_ migrations between the various hosts, to confirm that the GlusterFS shared storage is indeed working.
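(For the record, an offline migration boils down to something like this, run on the node currently hosting the VM, and those completed without problems:

qm migrate 100 proxmox1
)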
Then we tried a live migration from proxmox2 to proxmox1; the task hangs, and the last line of output we get is:
Jun 13 08:19:42 starting VM 100 on remote node 'proxmox1'
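(If it helps to reproduce, the CLI equivalent of what we are attempting should be roughly the following, started from proxmox2:

qm migrate 100 proxmox1 --online
)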
On that proxmox1 system, I can see the kvm process running along with the qm start process that was launched for the incoming migration:
root 111359 111357 0 08:19 ? 00:00:00 /usr/bin/perl /usr/sbin/qm start 100 --stateuri tcp --skiplock --migratedfrom proxmox2 --machine pc-i440fx-1.7
root 111376 1 0 08:19 ? 00:00:00 /usr/bin/kvm -id 100 -chardev socket,id=qmp,path=/var/run/qemu-server/100.qmp,server,nowait -mon chardev=qmp,mode=control -vnc unix:/var/run/qemu-server/100.vnc,x509,password -pidfile /var/run/qemu-server/100.pid -daemonize -name centostest -smp sockets=1,cores=1 -nodefaults -boot menu=on -vga cirrus -cpu Westmere,+x2apic -k en-us -m 1024 -device piix3-usb-uhci,id=uhci,bus=pci.0,addr=0x1.0x2 -device usb-tablet,id=tablet,bus=uhci.0,port=1 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3 -drive if=none,id=drive-ide2,media=cdrom,aio=native -device ide-cd,bus=ide.1,unit=0,drive=drive-ide2,id=ide2,bootindex=200 -drive file=gluster://localhost/vms/images/100/vm-100-disk-1.qcow2,if=none,id=drive-virtio0,format=qcow2,aio=native,cache=none -device virtio-blk-pci,drive=drive-virtio0,id=virtio0,bus=pci.0,addr=0xa,bootindex=100 -netdev type=tap,id=net0,ifname=tap100i0,script=/var/lib/qemu-server/pve-bridge -device virtio-net-pci,mac=DE:8E:BA:607:B4,netdev=net0,bus=pci.0,addr=0x12,id=net0,bootindex=300 -machine type=pc-i440fx-1.7 -incoming tcp:localhost:60000 -S
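(If I understand that command line correctly, the -S and -incoming tcp:localhost:60000 options mean the target VM is started paused and is waiting for the migration stream, which Proxmox sends over an SSH tunnel from the source node. One thing that could be checked on proxmox1 is whether that port is actually listening, e.g.:

netstat -tlnp | grep 60000

but I'm not sure that is the right place to look.)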
I've tried scanning the various log files in /var/log but haven't found any errors on either server.
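Roughly what I've been checking so far, in case I'm simply looking in the wrong place:

grep -i error /var/log/syslog /var/log/daemon.log
ls -lt /var/log/pve/tasks/      # the per-task logs

Nothing obviously wrong in any of them.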
One last comment: on each of the hosts, under Services, RGManager shows as stopped. I'm not sure what that service does, but perhaps it is relevant.
Any help would be much appreciated, Thanks!
-Brad