Hello,
I'm a bit of a newbie at DRBD-backed cluster configuration, and I'm facing some pretty strange behaviour in a cluster that was built strictly by the manual:
I set up two identical machines (they are desktop PCs that I took for test purposes, but the hardware is good, so I believe it should work without problems):
- Phenom II X4
- 8 GB of RAM
- 1 x 500 GB HDD (Proxmox, ISO storage)
- 1 x 1000 GB HDD (KVM image storage)
- 2 x NICs (one built-in, the other a 1 Gbit PCI card by D-Link)
So I installed Proxmox on both PCs, set up different IPs and hostnames, set up the cluster as per this page, and set up DRBD as per this page on the 1 TB disks (and allowed it to sync; it shows up as an LVM volume named 'onetb', after 'one TB').
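For reference, this is roughly how I sanity-check the DRBD state on both nodes. The helper below is my own little sketch, and the status line fed to it is a sample for illustration, not copied from my machines; as far as I understand, live migration needs the resource to report Primary/Primary.

```shell
# Small helper: check whether a /proc/drbd status line reports
# dual-primary mode (ro:Primary/Primary), which I understand is
# needed for the LV to be activatable on both nodes at once.
drbd_dual_primary() {
  if grep -q 'ro:Primary/Primary'; then
    echo "dual-primary OK"
  else
    echo "NOT dual-primary"
  fi
}

# On a real node you would run:  grep 'ro:' /proc/drbd | drbd_dual_primary
# Sample status line for illustration only:
echo " 0: cs:Connected ro:Primary/Primary ds:UpToDate/UpToDate C r----" \
  | drbd_dual_primary
# prints: dual-primary OK
```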
Now I set up a VM (KVM, with a FreeNAS i386 8.0 instance in it, just to test), and it runs perfectly on the 1st host. But when I try to 'migrate' it to the 2nd host, the VM stops and the log reads:
/usr/sbin/qmigrate --online 192.168.161.59 101
Jun 17 17:42:35 starting migration of VM 101 to host '192.168.161.59'
Jun 17 17:42:35 copying disk images
Jun 17 17:42:35 starting VM on remote host '192.168.161.59'
device-mapper: create ioctl failed: Device or resource busy
device-mapper: create ioctl failed: Device or resource busy
device-mapper: create ioctl failed: Device or resource busy
command '/sbin/lvchange -aly /dev/drbdvg/vm-101-disk-1' failed with exit code 5
volume 'onetb:vm-101-disk-1' does not exist
Jun 17 17:42:36 online migrate failure - command '/usr/bin/ssh -c blowfish -o BatchMode=yes root@192.168.161.59 /usr/sbin/qm --skiplock start 101 --incoming tcp' failed with exit code 2
Jun 17 17:42:36 migration finished with problems (duration 00:00:01)
VM 101 migration failed -
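From what I could piece together in the DRBD docs, the failing 'lvchange -aly' and the 'Device or resource busy' errors could mean the DRBD resource is not Primary on the target node. For what it's worth, I understood the dual-primary part of a DRBD 8.3 resource config should look something like the sketch below (the resource name 'r0' is a placeholder, and I have not verified these options against my own file):

resource r0 {
        protocol C;
        startup {
                become-primary-on both;   # both nodes Primary at boot
        }
        net {
                allow-two-primaries;      # needed so the LV can be active on both nodes
                after-sb-0pri discard-zero-changes;
                after-sb-1pri discard-secondary;
        }
}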
The line volume 'onetb:vm-101-disk-1' does not exist makes me nervous, as I see this 'onetb' storage in the admin interface of both the 1st and the 2nd host. It looks like I need to somehow 'attach' or 'mount' onetb on host 2, but the admin page says it is already seen by the system.
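For completeness, this is my guess at what the relevant entry in /etc/pve/storage.cfg should look like (the VG name 'drbdvg' is taken from the lvchange line in the log above; 'shared' is, as far as I understand, what tells Proxmox the volume group is visible on all nodes):

lvm: onetb
        vgname drbdvg
        shared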
When I set up the VM on the 1st host, I see traffic on the NIC that connects to the 2nd host, so it looks like the sync is working well.
Strange. May I ask for your help with this? The wiki doesn't help me at this point, and I've found very little information on such a case (DRBD + Proxmox + migration).
Thank you in advance for taking the time to help a newbie like me; this case is really important to me.