IPv6 issues with Let's Encrypt and activating the license

SpaceJelly

New Member
Jan 11, 2024
Ver: Mail Gateway 8.1.2

I have a strange issue with a newly built PMG where IPv6 works in general but certain admin pages in the web interface don't operate correctly. I've done plenty of searching, and while there are plenty of threads about IPv6, those are mostly about it not working at all; here it's only the web interface that's affected.

Yesterday I tried to activate the subscription and it kept timing out; adding an ACME account for certificates timed out as well.

It's dual stack with a static IPv4 address and inet6 set to auto; see the interfaces file:


auto lo
iface lo inet loopback

auto ens192
iface ens192 inet static
address 172.16.10.51/24
gateway 172.16.10.254

iface ens192 inet6 auto

source /etc/network/interfaces.d/*

ip address returns:
ens192: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 00:50:56:84:47:d4 brd ff:ff:ff:ff:ff:ff
altname enp11s0
inet 172.16.10.51/24 scope global ens192
valid_lft forever preferred_lft forever
inet6 dead:beef:a001:0:250:56ff:dead:beef/64 scope global dynamic mngtmpaddr
valid_lft 86329sec preferred_lft 14329sec
inet6 dead:beef:a001:100:250:56ff:dead:beef/64 scope global dynamic mngtmpaddr
valid_lft 86180sec preferred_lft 14180sec
inet6 fe80::250:56ff:fe84:47d4/64 scope link
valid_lft forever preferred_lft forever

ip -6 route returns:
dead:beef:a001::/64 dev ens192 proto kernel metric 256 expires 86179sec pref medium
dead:beef:a001:100::/64 dev ens192 proto kernel metric 256 expires 86292sec pref medium
fe80::/64 dev ens192 proto kernel metric 256 pref medium
default via fe80::250:56ff:fe84:a23a dev ens192 proto ra metric 1024 expires 1692sec hoplimit 64 pref medium
default via fe80::250:56ff:fe84:4cfa dev ens192 proto ra metric 1024 expires 1579sec hoplimit 64 pref medium

The boxes (there are two in a cluster and both are affected by the same issue, but I'm focusing on one) are behind two pfSense firewalls, with the WAN getting IPv6 via DHCPv6 with a /56 prefix delegation. The LAN interface is tracking WAN. The firewalls' RA was configured as Assisted ("Will advertise this router with configuration through a DHCPv6 server and/or SLAAC.") and the interfaces file had 'iface ens192 inet6 dhcp'.

However, I have just switched to Stateless DHCP ("Will advertise this router with SLAAC and other configuration information available via DHCPv6.") and adjusted the interfaces file to 'iface ens192 inet6 auto' to see if that made any difference (rebooting each time just to make sure), but sadly it didn't.
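
For clarity, the only change in /etc/network/interfaces was this stanza (a sketch of the before/after; the rest of the file is as posted above):

# before, with RA set to Assisted on pfSense:
# iface ens192 inet6 dhcp

# after, with RA set to Stateless DHCP (SLAAC):
iface ens192 inet6 auto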

Now, general IPv6 operation is fine: email works great over IPv6, and I can ping the shop without issue:
root@mg1:~# ping shop.proxmox.com
PING shop.proxmox.com(shop.proxmox.com (2a01:7e0:0:424::2)) 56 data bytes
64 bytes from shop.proxmox.com (2a01:7e0:0:424::2): icmp_seq=1 ttl=57 time=11.5 ms
64 bytes from shop.proxmox.com (2a01:7e0:0:424::2): icmp_seq=2 ttl=57 time=11.6 ms

but trying to activate the license was having none of it and just kept coming up with timeout errors. Adding ACME accounts does the same: it sits there loading and then times out.

I've sent emails from my Google account through to the email server behind PMG and it's IPv6 all the way, so routing is absolutely fine. It just seems to be certain admin functions in the web interface that misbehave with IPv6.

To get ACME configured, I disabled IPv6 temporarily, added the account, set up the certs, then re-enabled IPv6. Still waiting to see if the auto-renew runs OK.
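
For reference, roughly how I toggled IPv6 for that window (a sketch only; the disable_ipv6 sysctl is one way to do it without touching the config files):

# temporarily disable IPv6 on the interface
sysctl -w net.ipv6.conf.ens192.disable_ipv6=1

# ... add the ACME account and order the certificates in the GUI ...

# re-enable IPv6 afterwards
sysctl -w net.ipv6.conf.ens192.disable_ipv6=0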

For the shop, I just added shop.proxmox.com to the hosts file, resolving to its IPv4 address, so I could get the subscriptions activated.
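
That is, an entry along these lines (the address is simply what shop.proxmox.com resolved to at the time; check it with dig or host before copying):

# /etc/hosts - temporary pin so the subscription check goes over IPv4
79.133.36.249   shop.proxmox.com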

Is any further info required to help, or is there anything else I can check?
 
Hi Stoiko,

Thanks for the reply. curl does return data, but it does appear to be using IPv4:

root@mg1:~# curl -v https://shop.proxmox.com
* Trying [2a01:7e0:0:424::2]:443...
* Trying 79.133.36.249:443...
* Connected to shop.proxmox.com (79.133.36.249) port 443 (#0)
* ALPN: offers h2,http/1.1
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* CAfile: /etc/ssl/certs/ca-certificates.crt
* CApath: /etc/ssl/certs
* TLSv1.3 (IN), TLS handshake, Server hello (2):
* TLSv1.3 (IN), TLS handshake, Encrypted Extensions (8):
* TLSv1.3 (IN), TLS handshake, Certificate (11):
* TLSv1.3 (IN), TLS handshake, CERT verify (15):
* TLSv1.3 (IN), TLS handshake, Finished (20):
* TLSv1.3 (OUT), TLS change cipher, Change cipher spec (1):
* TLSv1.3 (OUT), TLS handshake, Finished (20):
* SSL connection using TLSv1.3 / TLS_AES_256_GCM_SHA384
* ALPN: server accepted http/1.1
* Server certificate:
* subject: CN=shop.proxmox.com
* start date: May 12 21:00:10 2024 GMT
* expire date: Aug 10 21:00:09 2024 GMT
* subjectAltName: host "shop.proxmox.com" matched cert's "shop.proxmox.com"
* issuer: C=US; O=Let's Encrypt; CN=R3
* SSL certificate verify ok.
* using HTTP/1.1
> GET / HTTP/1.1
> Host: shop.proxmox.com
> User-Agent: curl/7.88.1
> Accept: */*
>
* TLSv1.3 (IN), TLS handshake, Newsession Ticket (4):
* TLSv1.3 (IN), TLS handshake, Newsession Ticket (4):
* old SSL session ID is stale, removing
< HTTP/1.1 200 OK
< Date: Mon, 08 Jul 2024 15:30:22 GMT
< Server: Apache
< X-Frame-Options: SAMEORIGIN
< Content-Security-Policy: frame-ancestors 'self'
< Set-Cookie: WHMCSvVl9CFfEzwuY=7sdfngkauud9vu50f21q44if2a; path=/; secure; HttpOnly
< Expires: Thu, 19 Nov 1981 08:52:00 GMT
< Cache-Control: no-store, no-cache, must-revalidate
< Pragma: no-cache
< Vary: Accept-Encoding
< Transfer-Encoding: chunked
< Content-Type: text/html; charset=utf-8
<

As for the firewall, outbound traffic is allowed for these boxes and inbound is restricted to the required ports. It's odd that IPv6 is fine for sending and receiving email, but accessing the shop and ACME services via the web front end was timing out.
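
One more check I can run from the PMG itself is forcing curl onto IPv6, so the IPv4 fallback doesn't hide the failure (a quick sketch; the second URL is just the default Let's Encrypt v2 directory used when adding an ACME account):

# force IPv6 only, with a short timeout, to see whether TCP/443 over v6 completes at all
curl -6 -v --connect-timeout 10 https://shop.proxmox.com

# same idea for the ACME directory
curl -6 -v --connect-timeout 10 https://acme-v02.api.letsencrypt.org/directory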

Anything else I can check? Could it be causing an issue that the IPs are auto-assigned via DHCP/SLAAC and the web interface doesn't show any IPv6 address in the Configuration tab under Interfaces?

Thank you
 
curl does return data, but it does appear to be using IPv4
That seems to indicate that the PMG cannot connect to port 443 on our shop over IPv6. As the shop works over IPv6 in general (I tried it just now, and we'd have quite a few more reports if this wasn't the case), I think this might be the root cause. Usually this boils down to some difference in routing/firewall policies on the network of the PMG, especially if icmp6 (ping) works!

It's odd that IPv6 is fine for sending and receiving email, but accessing the shop and ACME services via the web front end was timing out.
Check the logs to see if there's indeed outbound IPv6 traffic (postfix/smtp would be the service that connects to the internet, and it should also log the addresses it connects to). If that's the case, I'd verify the firewall policies again.
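
Something along these lines should show it (adapt the log path or unit name if yours differ):

# outbound SMTP connections postfix made, including the relay addresses it connected to
grep 'postfix/smtp' /var/log/mail.log | grep 'relay='

# or via the journal (unit name on a default install)
journalctl -u postfix@- | grep 'relay='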
 
OK, here's the strange thing with this. Reboot the MG host, wait for IPv6 addressing to complete, then test with curl:

root@mg2:~# curl -v https://shop.proxmox.com
* Trying [2a01:7e0:0:424::2]:443...
* Connected to shop.proxmox.com (2a01:7e0:0:424::2) port 443 (#0)
* ALPN: offers h2,http/1.1
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* CAfile: /etc/ssl/certs/ca-certificates.crt
* CApath: /etc/ssl/certs

Connects via IPv6 fine!

However, wait a few minutes and the IPv6 attempt is back to failing, falling back to IPv4:

root@mg2:~# curl -v https://shop.proxmox.com
* Trying [2a01:7e0:0:424::2]:443...
* Trying 79.133.36.249:443...
* Connected to shop.proxmox.com (79.133.36.249) port 443 (#0)
* ALPN: offers h2,http/1.1
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* CAfile: /etc/ssl/certs/ca-certificates.crt
* CApath: /etc/ssl/certs


I am checking the firewall settings, but it's odd that it works at first and then stops. Just updating in case there are any more ideas out there!
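
In the meantime, a crude watch like this should catch exactly when it flips and whether that lines up with a route expiring (rough sketch, untested):

# test IPv6 reachability of the shop once a minute and log the current default routes
while true; do
    date
    curl -6 -s -o /dev/null -m 5 https://shop.proxmox.com && echo "v6 OK" || echo "v6 FAIL"
    ip -6 route show default
    sleep 60
done | tee -a /root/v6-watch.log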
 
I am checking the firewall settings, but it's odd that it works at first and then stops. Just updating in case there are any more ideas out there!
Hm, IPv6 networking failing after a few minutes could also mean that you're not getting any router advertisements anymore, so your node no longer has a valid route to the internet.

* make sure nothing in your network blocks icmp6 or NDP packets
* make sure that you've configured all relevant aspects correctly (e.g. the /etc/network/interfaces file and the accept_ra sysctls)
* make sure you don't have any other service on your PMG that tries to configure your network (systemd-networkd, NetworkManager, ...); a few quick checks for the last two points are sketched below
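
Something like this, for example (adjust the interface name to yours; rdisc6 comes from the ndisc6 package):

# is the kernel (still) accepting router advertisements on the interface?
sysctl net.ipv6.conf.ens192.accept_ra net.ipv6.conf.all.accept_ra

# is anything else trying to manage the network? (prints active/inactive per unit)
systemctl is-active systemd-networkd NetworkManager

# are router advertisements actually still arriving?
rdisc6 ens192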

I hope this helps!
 
Routes are good:
root@mg1:~# ip -6 route
dead:beef:a001::/64 dev ens192 proto kernel metric 256 expires 86003sec pref medium
dead:beef:a001:100::/64 dev ens192 proto kernel metric 256 expires 86299sec pref medium
fe80::/64 dev ens192 proto kernel metric 256 pref medium
default via fe80::250:beef:dead:4cfa dev ens192 proto ra metric 1024 expires 1403sec hoplimit 64 pref medium
default via fe80::250:beef:dead:a23a dev ens192 proto ra metric 1024 expires 1699sec hoplimit 64 pref medium

Pinging is also fine, as before. It really does have me scratching my head, as by all accounts everything looks great. Nothing else is configuring the network that I'm aware of; it's a default install from the ISO.

root@mg1:~# ping shop.proxmox.com
PING shop.proxmox.com(shop.proxmox.com (2a01:7e0:0:424::2)) 56 data bytes
64 bytes from shop.proxmox.com (2a01:7e0:0:424::2): icmp_seq=1 ttl=57 time=11.6 ms
64 bytes from shop.proxmox.com (2a01:7e0:0:424::2): icmp_seq=2 ttl=57 time=11.5 ms
 
OK, parking this for now because it's possibly this: https://forum.netgate.com/topic/184...after-2-7-2-update?_=1720879134633&lang=en-GB

ULA routing broke after 2.7.2 update

I'm reading to see if there's a patch and will update in due course, but this could help others if they use pfSense!

So, it's not the above. However, I have found out what it is. There are two pfSense firewalls in HA mode, so RA is advertising two default routes: one with normal preference and one with low.

However, it seems that the OS still decides to use both routes rather than just the normal-preference one. Windows doesn't have that problem, and its IPv6 connectivity is fine.

If I shut down one firewall, it all works great. Bring the firewall back and it stops working.

I've added state synchronisation to the firewalls, but that hasn't resolved it. The only way to fix this is to turn off RA on the second firewall and just run with one IPv6 gateway, which really defeats the object of having two firewalls for resilience.

I've got an Ubuntu dev box that I use, so I moved that onto the same LAN network and checked it. ip -6 route there shows just the pref medium default route, which is great and what I would expect.
The same command on the Proxmox boxes shows both the low and medium routes.
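
To confirm that theory, the learned defaults can be compared directly and one of them dropped as a test (a sketch; the link-local address is the second gateway from the route output above, and a deleted route comes back with the next RA anyway):

# list the default routes learned via RA, with their preference
ip -6 route show default

# temporarily remove one of the two defaults and retry curl
ip -6 route del default via fe80::250:beef:dead:4cfa dev ens192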

Anyway, it's resolved now. The default IPv6 rules for each subnet allow traffic from that subnet only. However, as firewall 1's subnet is one /64 and firewall 2's subnet is another /64, traffic can get blocked if it goes out via one firewall and comes back via the other. So I added an alias containing both subnets and set the rule up to allow traffic from the alias. It seems to be working nicely, but I will keep monitoring!
 