Chris Rivera
Guest
We have had to take some nodes offline due to bad hardware and bring new nodes online to replace them, and we are now having issues migrating containers from one node to another.
some nodes complain:
Dec 05 10:24:16 # /usr/bin/ssh -c blowfish -o 'BatchMode=yes' root@63.***.***.153 /bin/true
Dec 05 10:24:16 Host key verification failed.
Dec 05 10:24:16 ERROR: migration aborted (duration 00:00:01): Can't connect to destination address using public key
TASK ERROR: migration aborted
#################
Dec 05 10:23:45 starting migration of CT 440 to node 'proxmox6' (63.***.***.156)
Dec 05 10:23:45 starting rsync phase 1
Dec 05 10:23:45 # /usr/bin/rsync -aHAX --delete --numeric-ids --sparse /var/lib/vz/private/440 root@63.***.***.156:/var/lib/vz/private
Dec 05 10:23:46 dump 2nd level quota
Dec 05 10:23:46 # vzdqdump 440 -U -G -T > /var/lib/vz/dump/quotadump.440
Dec 05 10:23:46 ERROR: Failed to dump 2nd level quota: sh: cannot create /var/lib/vz/dump/quotadump.440: Directory nonexistent
Dec 05 10:23:46 aborting phase 1 - cleanup resources
Dec 05 10:23:46 removing copied files on target node
Dec 05 10:23:46 start final cleanup
Dec 05 10:23:46 ERROR: migration aborted (duration 00:00:02): Failed to dump 2nd level quota: sh: cannot create /var/lib/vz/dump/quotadump.440: Directory nonexistent
TASK ERROR: migration aborted
####################
Is it possible to run a command to remove all old keys and have the cluster regenerate the ssh keys to clear this up?
This is holding us back from migrating and alleviating client issues.
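For reference, a minimal sketch of the kind of cleanup being asked about, assuming a Proxmox VE cluster. The IP below is a placeholder, not one of the real nodes, and the `pvecm` step is guarded so the sketch is harmless outside a cluster:

```shell
#!/bin/sh
# Hypothetical cleanup sketch -- adjust the address and paths to your cluster.

# 1) "Host key verification failed" usually means a stale entry for the
#    replaced node is still in known_hosts. ssh-keygen -R removes it.
DEST_IP="192.0.2.156"   # placeholder for the destination node's address
ssh-keygen -R "$DEST_IP" 2>/dev/null || true

# 2) On Proxmox VE, pvecm updatecerts regenerates node certificates and
#    refreshes the cluster-wide known_hosts/authorized_keys files.
#    Guarded so the script does not fail on a non-cluster machine.
command -v pvecm >/dev/null 2>&1 && pvecm updatecerts

# 3) The second failure ("cannot create .../quotadump.440: Directory
#    nonexistent") is simpler: the dump directory is missing on the
#    source node, so recreate it before retrying the migration.
mkdir -p /var/lib/vz/dump 2>/dev/null || true
```

This is only a sketch of the likely fix for each error, not a confirmed procedure; the `pvecm updatecerts` step in particular should be run on each affected node.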