Migration Error of VM

sohaib

May 14, 2011
Mar 31 17:11:18 # /usr/bin/ssh -c blowfish -o 'BatchMode=yes' root@192.168.1.3 /bin/true
Mar 31 17:11:18 @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
Mar 31 17:11:18 @ WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED! @
Mar 31 17:11:18 @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
Mar 31 17:11:18 IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
Mar 31 17:11:18 Someone could be eavesdropping on you right now (man-in-the-middle attack)!
Mar 31 17:11:18 It is also possible that the RSA host key has just been changed.
Mar 31 17:11:18 The fingerprint for the RSA key sent by the remote host is
Mar 31 17:11:18 6c:a6:31:28:bb:ac:0e:a4:f5:c8:ae:e7:da:2c:73:5f.
Mar 31 17:11:18 Please contact your system administrator.
Mar 31 17:11:18 Add correct host key in /root/.ssh/known_hosts to get rid of this message.
Mar 31 17:11:18 Offending key in /root/.ssh/known_hosts:3
Mar 31 17:11:18 RSA host key for 192.168.1.3 has changed and you have requested strict checking.
Mar 31 17:11:18 Host key verification failed.
Mar 31 17:11:18 ERROR: migration aborted (duration 00:00:00): Can't connect to destination address using public key
TASK ERROR: migration aborted

But I just created the cluster, and I can see both nodes in the list, so I don't understand why I can't migrate the VM and why it says the key is not the same. Am I doing something wrong?
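For anyone comparing fingerprints from this warning: the log prints the remote key's fingerprint in the old colon-separated MD5 form, and you can compute the same form for any stored key with `ssh-keygen -l -E md5` to see which side actually changed. A minimal sketch on a throwaway demo key (the path and key below are made up for illustration, not your real host key):

```shell
# Generate a scratch RSA key pair just for the demo, so no real
# host key is touched.
mkdir -p /tmp/fp_demo
ssh-keygen -q -t rsa -b 2048 -N '' -f /tmp/fp_demo/host_key

# -l prints the key's fingerprint; -E md5 selects the colon-separated
# MD5 form that older logs (like the one above) display.
ssh-keygen -l -E md5 -f /tmp/fp_demo/host_key.pub
```

If the fingerprint of the destination node's real key (e.g. `/etc/ssh/ssh_host_rsa_key.pub` on that node) matches the one in the warning, the key simply changed legitimately, for example during reinstall or cluster setup.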
 
I fixed that by running pvecm updatecerts - now I get this error:

Mar 31 17:39:09 starting migration of CT 103 to node 'NOD2' (192.168.1.3)
Mar 31 17:39:09 container data is on shared storage 'local'
Mar 31 17:39:09 dump 2nd level quota
Mar 31 17:39:09 initialize container on remote node 'NOD2'
Mar 31 17:39:09 initializing remote quota
Mar 31 17:39:10 # /usr/bin/ssh -c blowfish -o 'BatchMode=yes' root@192.168.1.3 vzctl quotainit 103
Mar 31 17:39:10 vzquota : (error) quota check : stat /var/lib/vz/private/103: No such file or directory
Mar 31 17:39:10 ERROR: Failed to initialize quota: vzquota init failed [1]
Mar 31 17:39:10 start final cleanup
Mar 31 17:39:10 ERROR: migration finished with problems (duration 00:00:01)
TASK ERROR: migration problems
 
It's fixed - I unchecked the shared option on the storage and everything is working now. But do I need shared storage for HA?
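Worth noting: marking a directory storage like 'local' as shared only tells Proxmox that the same path already exists with the same content on every node - it does not replicate anything, which is why the migration above failed looking for /var/lib/vz/private/103 on the other node. For HA you do want genuinely shared storage. As a sketch only, an NFS entry in /etc/pve/storage.cfg could look roughly like this (the server address and export path here are invented examples, not values from this thread):

```
nfs: shared-nfs
        server 192.168.1.10
        export /export/pve
        path /mnt/pve/shared-nfs
        content images,rootdir
```

With a storage like that, all nodes see the same container/VM data, so migration does not need to copy anything and HA recovery can start the guest on another node.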
 
I installed today and received the same message during a test migration of a VM.

Jan 14 16:10:21 # /usr/bin/ssh -o 'BatchMode=yes' root@192.168.202.3 /bin/true

Jan 14 16:10:21 @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
Jan 14 16:10:21 @ WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED! @
Jan 14 16:10:21 @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
Jan 14 16:10:21 IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
Jan 14 16:10:21 Someone could be eavesdropping on you right now (man-in-the-middle attack)!
Jan 14 16:10:21 It is also possible that the RSA host key has just been changed.
Jan 14 16:10:21 The fingerprint for the RSA key sent by the remote host is
Jan 14 16:10:21 60:b5:64:12:bd:44:91:b7:79:0d:58:a8:f9:56:97:d0.
Jan 14 16:10:21 Please contact your system administrator.
Jan 14 16:10:21 Add correct host key in /root/.ssh/known_hosts to get rid of this message.
Jan 14 16:10:21 Offending key in /root/.ssh/known_hosts:1
Jan 14 16:10:21 RSA host key for 192.168.202.3 has changed and you have requested strict checking.
Jan 14 16:10:21 Host key verification failed.
Jan 14 16:10:21 ERROR: migration aborted (duration 00:00:00): Can't connect to destination address using public key
TASK ERROR: migration aborted

How do I update these keys to the correct ones?
 
I moved the old known_hosts file aside as follows:

mv /root/.ssh/known_hosts /root/.ssh/known_hosts.not

Then, as root on the node that gave the error, I logged in to the second node over ssh. After accepting the new host key during that login, it's working fine now.
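A more targeted alternative to moving the whole file aside is `ssh-keygen -R <host>`, which deletes only the stale entry for one host and leaves the rest of known_hosts intact. A sketch on a scratch file, so the real /root/.ssh/known_hosts is untouched (the IP is the one from the log above; substitute your node's address):

```shell
# Build a demo known_hosts containing one valid entry for the node IP.
mkdir -p /tmp/kh_demo
ssh-keygen -q -t ed25519 -N '' -f /tmp/kh_demo/demo_key
printf '192.168.202.3 %s\n' "$(cut -d' ' -f1,2 /tmp/kh_demo/demo_key.pub)" \
    > /tmp/kh_demo/known_hosts

# -R removes every entry recorded for the named host; -f selects which
# known_hosts file to edit. A backup is written as known_hosts.old.
ssh-keygen -R 192.168.202.3 -f /tmp/kh_demo/known_hosts
```

On a real node you would just run `ssh-keygen -R 192.168.202.3` (it defaults to ~/.ssh/known_hosts), then ssh to the node once to accept the new key.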