proxmox v7.1 disable ipv6

bars

New Member
Dec 19, 2021
Hello.
How do I disable IPv6 in Proxmox v7.1?
On the Proxmox server itself, IPv6 is completely disabled:
Code:
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.lo.disable_ipv6 = 1
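A minimal sketch for applying these settings without a reboot and verifying them (assuming they live in /etc/sysctl.conf as shown above):
Code:
# reload /etc/sysctl.conf
sysctl -p /etc/sysctl.conf
# a value of 1 means IPv6 is disabled for that interface class
sysctl net.ipv6.conf.all.disable_ipv6
cat /proc/sys/net/ipv6/conf/default/disable_ipv6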
But when I create a VPS (container), I still see an IPv6 entry in its /etc/hosts:
Code:
# cat /etc/hosts
127.0.0.1       localhost
10.10.2.11 serv1c.mydom.lan serv1c
# --- BEGIN PVE ---
::1 localhost.localnet localhost
127.0.1.1 serv1c.mydom.lan serv1c
# --- END PVE ---
 
But when I create a VPS (container), I still see an IPv6 entry in its /etc/hosts:
This line gets added unconditionally, but an entry in /etc/hosts does not mean that the container gets an IPv6 address or can use one.

Does the container have any IPv6 address configured (`ip -6 addr`) or a routing entry (`ip -6 route`)?
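For example, assuming the container's VMID is 100 (a placeholder), you could run both checks from the host with `pct exec` - a sketch:
Code:
pct exec 100 -- ip -6 addr
pct exec 100 -- ip -6 route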


I hope this helps!
 
Proxmox -> Datacenter -> Syslog
I see a lot of records like these.
How do I remove / disable them?
Code:
Dec 28 12:36:37 testvirt1c pve-firewall[923]: status update error: iptables_restore_cmdlist: Try `ip6tables-restore -h' or 'ip6tables-restore --help' for more information.

Dec 28 12:36:47 testvirt1c pve-firewall[923]: status update error: iptables_restore_cmdlist: Try `ip6tables-restore -h' or 'ip6tables-restore --help' for more information.

Dec 28 12:36:57 testvirt1c pve-firewall[923]: status update error: iptables_restore_cmdlist: Try `ip6tables-restore -h' or 'ip6tables-restore --help' for more information.

Dec 28 12:37:07 testvirt1c pve-firewall[923]: status update error: iptables_restore_cmdlist: Try `ip6tables-restore -h' or 'ip6tables-restore --help' for more information.
 
Proxmox -> Datacenter -> Syslog
I see a lot of records like these.
How do I remove / disable them?
this was fixed a while ago - what's your `pveversion -v`?
make sure you've updated to the latest available versions
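for example, with the repositories already set up correctly, an update is roughly:
Code:
apt update
apt dist-upgrade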
I hope this helps!
 
this was fixed a while ago - what's your `pveversion -v`?
make sure you've updated to the latest available versions
I hope this helps!
Code:
pveversion -v
proxmox-ve: 7.1-1 (running kernel: 5.13.19-2-pve)
pve-manager: 7.1-8 (running version: 7.1-8/5b267f33)
pve-kernel-helper: 7.1-6
pve-kernel-5.13: 7.1-5
pve-kernel-5.13.19-2-pve: 5.13.19-4
ceph-fuse: 14.2.21-1
corosync: 3.1.5-pve2
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown: residual config
ifupdown2: 3.1.0-1+pmx3
libjs-extjs: 7.0.0-1
libknet1: 1.22-pve2
libproxmox-acme-perl: 1.4.0
libproxmox-backup-qemu0: 1.2.0-1
libpve-access-control: 7.1-5
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.0-14
libpve-guest-common-perl: 4.0-3
libpve-http-server-perl: 4.0-4
libpve-storage-perl: 7.0-15
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 4.0.11-1
lxcfs: 4.0.11-pve1
novnc-pve: 1.2.0-3
proxmox-backup-client: 2.1.2-1
proxmox-backup-file-restore: 2.1.2-1
proxmox-mini-journalreader: 1.3-1
proxmox-widget-toolkit: 3.4-4
pve-cluster: 7.1-2
pve-container: 4.1-3
pve-docs: 7.1-2
pve-edk2-firmware: 3.20210831-2
pve-firewall: 4.2-5
pve-firmware: 3.3-3
pve-ha-manager: 3.3-1
pve-i18n: 2.6-2
pve-qemu-kvm: 6.1.0-3
pve-xtermjs: 4.12.0-1
qemu-server: 7.1-4
smartmontools: 7.2-1
spiceterm: 3.2-2
swtpm: 0.7.0~rc1+2
vncterm: 1.7-1
zfsutils-linux: 2.1.1-pve3
 
* any other changes on your system regarding ipv6 disablement?
* I cannot reproduce the issue here on a system with pve-firewall enabled (default ruleset) and the sysctl.conf settings you posted above
 
* any other changes on your system regarding ipv6 disablement?
* I cannot reproduce the issue here on a system with pve-firewall enabled (default ruleset) and the sysctl.conf settings you posted above
My system settings:
/etc/default/grub -> GRUB_CMDLINE_LINUX="ipv6.disable=1"
and grub-mkconfig -o /boot/grub/grub.cfg, and reboot
In /etc/sysctl.conf
Code:
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.lo.disable_ipv6 = 1
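A quick check after the reboot that both mechanisms took effect - nothing Proxmox-specific, just a sketch:
Code:
# the kernel command line should contain ipv6.disable=1 if the GRUB change was applied
cat /proc/cmdline
# the sysctl keys should read 1 (note: with ipv6.disable=1 on the kernel command line,
# the net.ipv6.* keys may not exist at all, since the IPv6 stack is not loaded)
sysctl net.ipv6.conf.all.disable_ipv6 net.ipv6.conf.default.disable_ipv6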
I updated the packages on the system and rebooted the server.
My system is Linux 5.13.19-2-pve x86_64.
Code:
pveversion -v
proxmox-ve: 7.1-1 (running kernel: 5.13.19-2-pve)
pve-manager: 7.1-8 (running version: 7.1-8/5b267f33)
pve-kernel-helper: 7.1-6
pve-kernel-5.13: 7.1-5
pve-kernel-5.13.19-2-pve: 5.13.19-4
ceph-fuse: 14.2.21-1
corosync: 3.1.5-pve2
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown: residual config
ifupdown2: 3.1.0-1+pmx3
libjs-extjs: 7.0.0-1
libknet1: 1.22-pve2
libproxmox-acme-perl: 1.4.0
libproxmox-backup-qemu0: 1.2.0-1
libpve-access-control: 7.1-5
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.0-14
libpve-guest-common-perl: 4.0-3
libpve-http-server-perl: 4.0-4
libpve-storage-perl: 7.0-15
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 4.0.11-1
lxcfs: 4.0.11-pve1
novnc-pve: 1.3.0-1
proxmox-backup-client: 2.1.2-1
proxmox-backup-file-restore: 2.1.2-1
proxmox-mini-journalreader: 1.3-1
proxmox-widget-toolkit: 3.4-4
pve-cluster: 7.1-3
pve-container: 4.1-3
pve-docs: 7.1-2
pve-edk2-firmware: 3.20210831-2
pve-firewall: 4.2-5
pve-firmware: 3.3-4
pve-ha-manager: 3.3-1
pve-i18n: 2.6-2
pve-qemu-kvm: 6.1.0-3
pve-xtermjs: 4.12.0-1
qemu-server: 7.1-4
smartmontools: 7.2-1
spiceterm: 3.2-2
swtpm: 0.7.0~rc1+2
vncterm: 1.7-1
zfsutils-linux: 2.1.1-pve3
I still see these records:
Code:
Dec 28 15:20:11 testvirt1c pve-firewall[922]: status update error: iptables_restore_cmdlist: Try `ip6tables-restore -h' or 'ip6tables-restore --help' for more information.
Dec 28 15:20:21 testvirt1c pve-firewall[922]: status update error: iptables_restore_cmdlist: Try `ip6tables-restore -h' or 'ip6tables-restore --help' for more information.
Dec 28 15:20:31 testvirt1c pve-firewall[922]: status update error: iptables_restore_cmdlist: Try `ip6tables-restore -h' or 'ip6tables-restore --help' for more information.
Dec 28 15:20:41 testvirt1c pve-firewall[922]: status update error: iptables_restore_cmdlist: Try `ip6tables-restore -h' or 'ip6tables-restore --help' for more information.
 
My system settings:
/etc/default/grub -> GRUB_CMDLINE_LINUX="ipv6.disable=1"
I think this is at fault here - if you completely disable IPv6, then ip6tables cannot run, and this is hardcoded.
The recommended way of disabling IPv6 is described in the reference documentation:
https://pve.proxmox.com/pve-docs/chapter-sysadmin.html#_disabling_ipv6_on_the_node
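as a sketch of that sysctl-based approach (the file name below is only an example - see the linked documentation for the exact recommendation):
Code:
cat <<'EOF' > /etc/sysctl.d/disable-ipv6.conf
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.lo.disable_ipv6 = 1
EOF
sysctl --system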

and grub-mkconfig -o /boot/grub/grub.cfg, and reboot
on Debian-based systems (such as PVE) `update-grub` is preferred (though it should not make much of a difference)

I hope this helps
 
I think this is at fault here - if you completely disable IPv6, then ip6tables cannot run, and this is hardcoded.
The recommended way of disabling IPv6 is described in the reference documentation:
https://pve.proxmox.com/pve-docs/chapter-sysadmin.html#_disabling_ipv6_on_the_node


on Debian-based systems (such as PVE) `update-grub` is preferred (though it should not make much of a difference)

I hope this helps
If I remove ipv6.disable=1 from GRUB, then IPv6 will show up again in Proxmox.
I DO NOT NEED IPv6 AT ALL!!!
Why such a rigid, opaque framework that keeps unnecessary things active on the server!?
I disabled the firewall in Proxmox - I don't use it:
pve-firewall stop
systemctl disable pve-firewall
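A quick sanity check that the unit is really disabled (plain systemd commands):
Code:
systemctl is-enabled pve-firewall
systemctl is-active pve-firewall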

I use iptables on the host system itself.
This entry no longer appears now:
Code:
Dec 28 15:45:25 testvirt1c pve-firewall[925]: status update error: iptables_restore_cmdlist: Try `ip6tables-restore -h' or 'ip6tables-restore --help' for more information.
 
I disabled it, but it still starts when the server is rebooted.
Code:
 service pve-firewall status
● pve-firewall.service - Proxmox VE firewall
     Loaded: loaded (/lib/systemd/system/pve-firewall.service; disabled; vendor preset: enabled)
     Active: active (running) since Tue 2021-12-28 16:00:09 MSK; 7min ago
   Main PID: 927 (pve-firewall)

Screenshot_2021-12-28_16-16-02.png
 
It turned out I could only switch it off by masking the unit:
systemctl mask pve-firewall.service
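For reference, a masked unit can be checked and reverted later with plain systemd commands - a sketch:
Code:
systemctl status pve-firewall           # "Loaded:" should now report the unit as masked
systemctl unmask pve-firewall.service   # undo the mask later, if ever needed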
 
did the messages in the log still show up?
Yes.
The pve-firewall service started automatically after the system reboot.
Code:
systemctl mask pve-firewall.service
After masking pve-firewall.service, the service is no longer loaded and the log entry no longer appears (status update error: iptables_restore_cmdlist: Try `ip6tables-restore -h').
 
yes - that's correct - even if the firewall is disabled, the daemon still wants to read the current rule-set (in order to remove its rules if they were still present), and this does not work for ip6tables without the IPv6 module loaded

this is one of the reasons why we really suggest disabling IPv6 via sysctl
 
this is one of the reasons why we really suggest disabling IPv6 via sysctl

FYI: Unfortunately, disabling IPv6 via the prescribed method doesn't disable it in LXC containers that existed before IPv6 was disabled via sysctl on the Proxmox host...

- Proxmox host set to a static IPv4 address via netplan.
- Moved the Proxmox host from a network that supported DHCPv6 to one that does not.
- Proxmox host DNS continues to function via IPv4 as expected.
- LXC container set to IPv6 DHCP still has the old IPv6 DNS servers in resolv.conf, so DNS does not work inside the LXC container.
- Performed the procedure to disable IPv6 on the host via sysctl:
https://pve.proxmox.com/pve-docs/chapter-sysadmin.html#_disabling_ipv6_on_the_node
- On the host, IPv6 is indeed disabled afterward:
Code:
root@richie:~# ip -4 a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
3: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    inet 10.0.111.12/24 brd 10.0.111.255 scope global vmbr0
       valid_lft forever preferred_lft forever
root@richie:~# ip -6 a
root@richie:~#

- But inside a privileged LXC container it is still enabled:
Code:
root@portainer:/etc# ip -6 a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 state UNKNOWN qlen 1000
    inet6 ::1/128 scope host valid_lft forever preferred_lft forever
2: eth0@br-bffbd40cd223: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 state UP qlen 1000
    inet6 2601:640:8900:3dc0::e2ac/128 scope global valid_lft forever preferred_lft forever
    inet6 fe80::d4c9:6fff:fe79:ca63/64 scope link valid_lft forever preferred_lft forever
3: br-beaa20a1e139: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 state UP
    inet6 fe80::42:a5ff:fe10:8f0b/64 scope link valid_lft forever preferred_lft forever
4: br-bffbd40cd223: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 state UP
    inet6 fe80::42:14ff:fef8:d040/64 scope link valid_lft forever preferred_lft forever
5: br-c61c728f64eb: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 state UP
    inet6 fe80::42:e7ff:febf:4b52/64 scope link valid_lft forever preferred_lft forever
6: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 state UP
    inet6 fe80::42:5cff:fe9b:eba6/64 scope link valid_lft forever preferred_lft forever
7: br-ebb8e9d0917b: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 state UP
    inet6 fe80::42:cff:fe2b:8bec/64 scope link valid_lft forever preferred_lft forever
8: br-057f3acf439d: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 state UP
    inet6 fe80::42:d3ff:fe44:24eb/64 scope link valid_lft forever preferred_lft forever
9: br-1ec39feb68f4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 state UP
    inet6 fe80::42:30ff:fe8b:d696/64 scope link valid_lft forever preferred_lft forever
10: br-b0022bc99256: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 state UP
    inet6 fe80::42:deff:fe4a:2f/64 scope link valid_lft forever preferred_lft forever
12: veth8491b30@if11: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 state UP
    inet6 fe80::c0b7:5dff:fe89:c25c/64 scope link valid_lft forever preferred_lft forever
14: vetha3de996@if13: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 state UP
    inet6 fe80::40f8:10ff:fe83:fb09/64 scope link valid_lft forever preferred_lft forever
16: vethd586943@if15: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 state UP
    inet6 fe80::e08e:96ff:fe77:4b1b/64 scope link valid_lft forever preferred_lft forever
18: veth4bd6221@if17: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 state UP
    inet6 fe80::5886:7ff:fe85:e168/64 scope link valid_lft forever preferred_lft forever
20: veth9064199@if19: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 state UP
    inet6 fe80::543b:89ff:fe7e:e743/64 scope link valid_lft forever preferred_lft forever
22: veth5afed1a@if21: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 state UP
    inet6 fe80::ace4:f0ff:fef7:7efd/64 scope link valid_lft forever preferred_lft forever
24: veth5c625d5@if23: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 state UP
    inet6 fe80::9095:35ff:fe94:fa0d/64 scope link valid_lft forever preferred_lft forever
26: veth75cf84f@if25: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 state UP
    inet6 fe80::5e:c7ff:fe40:d604/64 scope link valid_lft forever preferred_lft forever
28: veth9d45475@if27: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 state UP
    inet6 fe80::3c67:5dff:fe7b:77a9/64 scope link valid_lft forever preferred_lft forever
30: veth1be098b@if29: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 state UP
    inet6 fe80::505a:44ff:fe0a:b514/64 scope link valid_lft forever preferred_lft forever
32: veth8c84b5b@if31: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 state UP
    inet6 fe80::88a3:67ff:fef6:79b5/64 scope link valid_lft forever preferred_lft forever
root@portainer:/etc

- And DNS resolution continues to fail:
Code:
root@portainer:/etc# cat /etc/resolv.conf
nameserver 2001:558:feed::1
nameserver 2001:558:feed::2
root@portainer:/etc#

So the "approved" method to disable IPV6 doesn't propagate to existing LXC containers

Not sure what the "approved" method to disable IPV6 is for existing LXC containers but still looking...

-=dave
 
Update:

In addition to the procedure to disable IPv6 on the host, any LXC container whose network interface is set to DHCP for IPv6 must be changed to Static with the IPv6/CIDR field left at the default "None"; otherwise the container will continue to use IPv6. This seems like a bug to me, since there is no IPv6 DHCP server to answer the container's DHCPv6 request - it never even gets out of the host.
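For reference, a hedged sketch of making that change from the CLI - the VMID 101 and the rest of the net0 string are placeholders, and to my understanding ip6=manual corresponds to "Static" with no address in the GUI:
Code:
# example only: adjust the VMID and the rest of the net0 definition to match your container
pct set 101 -net0 name=eth0,bridge=vmbr0,ip=dhcp,ip6=manual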

-=dave
 
I am encountering exactly the same issue - IPv6 disabled on the node as per the recommended sysctl.d configuration, and yet my LXC containers still fire up with IPv6 enabled, even though the interface is already set to Static and the address to "None".

Has anyone else experienced this, and if so do you have a solution?

Note: Replicating the sysctl.d approach within the container does little to help.
 
