But my version on all nodes is:
pve-manager/4.1-1/2f9650d4 (running kernel: 4.2.6-1-pve)
I upgraded the nodes via the GUI, without result.
The VM still does not automatically migrate to another node!
What else can I try?
I installed a Proxmox 4.1 cluster of 3 nodes with iSCSI Nexenta storage.
Then I added VM:104 to an HA group. All was OK.
But when I reboot (or shut down via "ipmitool power off") node2 (pve2), which runs VM:104, the VM does not automatically migrate to another node!
In HA status:
Yes, my bad! Thanks!
My mistake: the name of the pool is volume1:
root@nexenta:~# zpool status
scan: none requested
NAME       STATE   READ WRITE CKSUM
volume1    ONLINE     0     0     0
  c0t1d0   ONLINE     0     0     0...
I did that at the beginning, according to the manual.
root@pve1:~# ssh -i /etc/pve/priv/zfs/172.16.2.150_id_rsa firstname.lastname@example.org
Connection to 172.16.2.150 closed.
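For anyone hitting the same point: as far as I remember, the key setup from the wiki boils down to something like the commands below, run on the Proxmox node (the portal IP matches this thread; the remote user on the appliance is usually root, so treat that as an assumption for your setup):

```
# directory expected by the ZFS-over-iSCSI plugin (assumption: default layout)
mkdir -p /etc/pve/priv/zfs
# passwordless key named after the portal IP
ssh-keygen -f /etc/pve/priv/zfs/172.16.2.150_id_rsa -N ''
# install the public key on the storage appliance
ssh-copy-id -i /etc/pve/priv/zfs/172.16.2.150_id_rsa.pub root@172.16.2.150
```

After that, the ssh test you show should log in without a password prompt before closing the connection.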
What can I do?
I added ZFS over iSCSI following the instructions: https://pve.proxmox.com/wiki/Storage:_ZFS_over_iSCSI
I have set up an iSCSI target on NexentaStor.
Created the data store, zvol, etc. as in the manual.
But in Proxmox I have a problem.
When I connect to the iSCSI target, I see the LUN as an image, and I cannot do anything with it.
The option "Use LUN directly" has no effect.
For a test, with the MS iSCSI...
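For comparison, my understanding is that ZFS over iSCSI is its own storage type in /etc/pve/storage.cfg, not a plain iSCSI LUN attachment, so the entry should look roughly like this (the storage ID and target IQN below are placeholders; only the portal and pool come from this thread):

```
zfs: nexenta-zfs
        iscsiprovider nexenta
        portal 172.16.2.150
        pool volume1
        target iqn.1986-03.com.sun:02:exampletarget
        blocksize 4k
        content images
```

If the storage was instead added via the plain "iSCSI" type, Proxmox will only ever see the existing LUN as a raw image, which matches the symptom described.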
Thank you for the answer.
When I pulled the power cable of the node, the VM was fenced and migrated after about 1.5 minutes.
But how can I reduce the migration time, down to about 15 seconds?
Is it possible?
And do I understand correctly that Proxmox has a built-in mechanism that detects how the node went down, whether via an IPMI shutdown or a pulled power cable...?
I installed a cluster of 3 nodes, with iSCSI shared storage.
Then I added VM:100 to an HA group. All is OK:
root@pve:~# ha-manager status
master pve (active, Thu Dec 17 22:59:03 2015)
lrm pve (active, Thu Dec 17 22:59:04 2015)
lrm pve2 (active, Thu Dec 17 22:59:03 2015)
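For reference, adding a VM as an HA resource can also be done from the CLI with `ha-manager add vm:100`; on Proxmox 4.x the result lands in /etc/pve/ha/resources.cfg, roughly like this (the group name here is a placeholder, not from this thread):

```
vm: 100
        group mygroup
        state enabled
```

If the resource is missing from that file, the CRM has nothing to relocate when a node is fenced, which would explain a VM that never migrates.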