Hello,
Has anyone set up a dual stack so that VMs can be provisioned with both IPv4 and IPv6? I want to provision VMs with both, and I need to use a Hurricane Electric tunnelbroker tunnel because my datacenter does not currently offer native IPv6 networking.
I added the IPv6 tunnel configuration to the host (he-ipv6), but vmbr0 is only configured for IPv4... is this correct?
ipv6.google.com is pingable.
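For reference, I verified connectivity from the host with something like the following (ping6 on older iputils, ping -6 on newer):

ping6 -c 3 ipv6.google.com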
What are the next steps?
This is my current configuration (I have redacted the IPs for security reasons):
auto lo
iface lo inet loopback

auto eno4
iface eno4 inet manual
    post-up echo 1 > /proc/sys/net/ipv4/ip_forward
    post-up echo 1 > /proc/sys/net/ipv4/conf/eno1/proxy_arp

iface eno1 inet manual

iface eno2 inet manual

iface eno3 inet manual

auto vmbr0
iface vmbr0 inet static
    address xxx.xxx.xx.xxx
    netmask 29
    gateway xxx.xxx.xx.xxx
    bridge-ports eno4
    bridge-stp off
    bridge-fd 0

auto he-ipv6
iface he-ipv6 inet6 v4tunnel
    address 2001:xx:xxxx:xx::2
    netmask 64
    endpoint xx.xxx.x.xx
    local xxx.xxx.xx.xxx
    ttl 255
    gateway 2001:xx:xxxx:xx::1
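My guess at the next step (please correct me if this is wrong) is to give vmbr0 an IPv6 address from the routed /64 that tunnelbroker allocates alongside the tunnel's point-to-point /64, and to enable IPv6 forwarding so VMs can use the bridge as their IPv6 gateway. A rough sketch, where 2001:yy:yyyy:yy::/64 is a placeholder for the routed prefix shown in the tunnelbroker panel:

iface vmbr0 inet6 static
    address 2001:yy:yyyy:yy::1
    netmask 64
    post-up echo 1 > /proc/sys/net/ipv6/conf/all/forwarding

Each VM would then be provisioned with a static address from that routed /64, with 2001:yy:yyyy:yy::1 as its IPv6 gateway. Does that sound right, or am I missing something?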