I reviewed a bit and went with the Thunderbolt networking setup provided here for my MS-01 cluster:
https://gist.github.com/scyto/67fdc9a517faefa68f730f82d7fa3570
&
https://gist.github.com/scyto/4c664734535da122f4ab2951b22b2085
The difference that's giving me grief is that I already had the base cluster up and running (latest version) on the 2.5G interfaces - no ring, just a plain network, with Ceph on the same network (to get things up and running).
The isolated Thunderbolt network uses the 10.99.99.21-23 IPs.
However, if I change the Migration network in Datacenter > Options to the 10.99.99.0/24 network, I get these errors on normal migrations:
98% sure I missed something! ;p
Code:
could not get migration ip: no IP address configured on local node for network '10.99.99.21/32'
TASK ERROR: command '/usr/bin/ssh -e none -o 'BatchMode=yes' -o 'HostKeyAlias=pve03' -o 'UserKnownHostsFile=/etc/pve/nodes/pve03/ssh_known_hosts' -o 'GlobalKnownHostsFile=none' root@10.22.20.23 pvecm mtunnel -migration_network 10.99.99.21/32 -get_migration_ip' failed: exit code 255
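For reference, my understanding is that this option lands in /etc/pve/datacenter.cfg, so the working vs. failing entries should look roughly like this (syntax as I read it from the pve-docs; treat it as a sketch):
Code:
# /etc/pve/datacenter.cfg
# working entry, slow 2.5G net:
migration: secure,network=10.22.20.0/24
# what I'm switching it to - note the task error above reports
# '10.99.99.21/32' rather than the /24 I selected:
migration: secure,network=10.99.99.0/24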
Switching the Migration network back to 10.22.20.0/24 does work - it just puts me back on the slower net.
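A quick per-node sanity check on the Thunderbolt addresses would be something like this (pve01 shown; adjust the peer IPs per node):
Code:
# confirm the local 10.99.99.x address is actually assigned
ip -4 addr show | grep 10.99.99
# confirm the peers answer over the Thunderbolt net
ping -c 2 10.99.99.22
ping -c 2 10.99.99.23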
I ran these on each host, to every other host:
ssh -o 'HostKeyAlias=pve01' root@10.99.99.21
ssh -o 'HostKeyAlias=pve02' root@10.99.99.22
ssh -o 'HostKeyAlias=pve03' root@10.99.99.23
and they connect without issue using keys (I just have to accept the fingerprint once for each).
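Since the TASK ERROR above comes from pvecm mtunnel rather than from ssh itself, I assume the same lookup can be reproduced by hand on each node, both with the /24 I selected and with the exact /32 from the error:
Code:
# this is what the failing task invokes on the target node (via ssh)
pvecm mtunnel -migration_network 10.99.99.0/24 -get_migration_ip
pvecm mtunnel -migration_network 10.99.99.21/32 -get_migration_ip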
What do I need to correct to get migrations to actually use that network?
Additionally, it looks like I'd follow these steps to move the Ceph network over to that same 10G+ net as smoothly as possible, with limited outage - doing it one node at a time seems feasible if I'm reading correctly:
https://forum.proxmox.com/threads/ceph-changing-public-network.119116/
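From my reading of that thread, the procedure is roughly: change public_network in /etc/pve/ceph.conf, then destroy and recreate each monitor one node at a time so it re-binds on the new network, waiting for health in between. A sketch of what I think that looks like (node names and IPs from my setup above; please correct me if I've misread it):
Code:
# 1) edit /etc/pve/ceph.conf (it's cluster-wide) and set:
#      public_network = 10.99.99.0/24
# 2) then, one node at a time, e.g. on pve01:
pveceph mon destroy pve01   # drop the mon still bound to 10.22.20.x
pveceph mon create          # recreate it; should pick an IP from the new public_network
ceph -s                     # wait for full quorum / HEALTH_OK before the next node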