Migration between ipv4-only and dual-stack hosts

Discussion in 'Proxmox VE: Networking and Firewall' started by Altinea, Apr 16, 2018.

  1. Altinea

    Altinea New Member

    Hello,
    We just added two PVE 5.1 hosts to our 'old' 4.2 cluster in order to migrate VMs off the old nodes and upgrade them.

    These old nodes are IPv4-only and the new ones are dual-stacked (IPv4 & IPv6). We have set up a dedicated 10 Gbps network between all nodes for DRBD replication, and it would be great to use it for migration as well.

    My first attempt at an online migration failed with:
    'ssh: connect to host 2a06:ac00:xxxx:xxxx::1 port 22: Network is unreachable'

    So it seems the migration is trying to use the IPv6 address on the public interface.

    I tried changing datacenter.cfg with:
    migration: type=secure, network=10.152.12.0/24
    migration_unsecure: 0

    But the first option is not recognized by PVE 4.2 and the second doesn't seem to have any effect.
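
    For reference (a sketch, assuming the same 10.152.12.0/24 subnet), my understanding is that on 4.4 / 5.x both settings go on a single line in /etc/pve/datacenter.cfg:

    migration: secure,network=10.152.12.0/24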

    The Proxmox cluster was created using the private IP subnet:
    Membership information
    ----------------------
    Nodeid Votes Name
    0x00000002 1 10.152.12.57
    0x00000001 1 10.152.12.58
    0x00000004 1 10.152.12.60
    0x00000003 1 10.152.12.61
    0x00000005 1 10.152.12.62 (local)

    And the server names are all defined in /etc/hosts with addresses from the 10.152.12.0/24 subnet.
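
    (For clarity, each node has an /etc/hosts entry of this shape; 'pve01' here is just a placeholder name:)

    10.152.12.57    pve01.example.com    pve01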

    Any idea how I can force the migration process to use this subnet between 4.2 and 5.1?

    Thanks for your help,
    Julien
     
  2. dcsapak

    dcsapak Proxmox Staff Member

    if you read https://pve.proxmox.com/wiki/Upgrade_from_4.x_to_5.0
    it states that you have to upgrade to the latest 4.4 before doing an in-place upgrade

    the migration network option was only introduced later in the 4.x lifetime, so if you upgrade those nodes, this should work
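
    if i remember correctly, once a node is on 4.4 you can also pass the migration network per migration on the cli (just a sketch, vmid 100 and node name 'pve-new1' are made-up examples):

    qm migrate 100 pve-new1 --online --migration_network 10.152.12.0/24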
     
  3. Altinea

    Altinea New Member

    Hello,
    So I upgraded one of the old nodes to 4.4. The migration process still needed to connect to the IPv6 address first, but once I added an IPv6 address to that node, the migration worked flawlessly over the private subnet.

    So if a node is identified by its IPv6 address, no matter which subnet is configured for migration, it still needs an initial connection over IPv6 before the migration starts.
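
    (For anyone hitting the same thing: a quick way to see which addresses a node name resolves to, and therefore where that first connection seems to go, is getent; 'pve-new1' is just a placeholder node name:)

    getent ahosts pve-new1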

    Never mind, I'm not stuck anymore.

    Best regards,
    Julien Escario
     