[SOLVED] Error building Wireguard module on latest Proxmox update

reckless

This popped up when doing a standard apt update and apt dist-upgrade -y:

Code:
Examining /etc/kernel/postinst.d.
run-parts: executing /etc/kernel/postinst.d/apt-auto-removal 5.3.18-2-pve /boot/vmlinuz-5.3.18-2-pve
run-parts: executing /etc/kernel/postinst.d/dkms 5.3.18-2-pve /boot/vmlinuz-5.3.18-2-pve
Error! Bad return status for module build on kernel: 5.3.18-2-pve (x86_64)
Consult /var/lib/dkms/wireguard/0.0.20190905/build/make.log for more information.

I do have Wireguard installed on the host Proxmox server, because I have an LXC container using Wireguard, so I need to keep the Wireguard module.
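For anyone hitting the same error: you can see which kernels the DKMS module actually built and installed for before deciding how urgent it is (a sketch; module versions will differ on your box):

```shell
# Lists the wireguard DKMS module with its state per kernel
# (built/installed, or missing for kernels where the build failed)
dkms status wireguard
```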

This is the full log:
Code:
root@proxmox:~# apt dist-upgrade -y
Reading package lists... Done
Building dependency tree
Reading state information... Done
Calculating upgrade... Done
The following packages were automatically installed and are no longer required:
  pve-headers-5.3.13-2-pve pve-headers-5.3.13-3-pve pve-headers-5.3.18-1-pve pve-kernel-5.3.13-1-pve pve-kernel-5.3.13-2-pve
Use 'apt autoremove' to remove them.
The following NEW packages will be installed:
  pve-headers-5.3.18-2-pve pve-kernel-5.3.18-2-pve
The following packages will be upgraded:
  pve-docs pve-headers-5.3 pve-kernel-5.3 pve-kernel-helper pve-qemu-kvm qemu-server
6 upgraded, 2 newly installed, 0 to remove and 0 not upgraded.
Need to get 102 MB of archives.
After this operation, 352 MB of additional disk space will be used.
Get:1 http://download.proxmox.com/debian/pve buster/pve-no-subscription amd64 pve-docs all 6.1-6 [10.6 MB]
Get:2 http://download.proxmox.com/debian/pve buster/pve-no-subscription amd64 pve-headers-5.3.18-2-pve amd64 5.3.18-2 [9,873 kB]
Get:3 http://download.proxmox.com/debian/pve buster/pve-no-subscription amd64 pve-headers-5.3 all 6.1-5 [2,948 B]
Get:4 http://download.proxmox.com/debian/pve buster/pve-no-subscription amd64 pve-kernel-5.3.18-2-pve amd64 5.3.18-2 [59.6 MB]
Get:5 http://download.proxmox.com/debian/pve buster/pve-no-subscription amd64 pve-kernel-5.3 all 6.1-5 [3,280 B]
Get:6 http://download.proxmox.com/debian/pve buster/pve-no-subscription amd64 pve-kernel-helper all 6.1-5 [9,156 B]
Get:7 http://download.proxmox.com/debian/pve buster/pve-no-subscription amd64 pve-qemu-kvm amd64 4.1.1-3 [21.4 MB]
Get:8 http://download.proxmox.com/debian/pve buster/pve-no-subscription amd64 qemu-server amd64 6.1-6 [206 kB]
Fetched 102 MB in 2s (61.1 MB/s)
apt-listchanges: Reading changelogs...
(Reading database ... 163478 files and directories currently installed.)
Preparing to unpack .../0-pve-docs_6.1-6_all.deb ...
Unpacking pve-docs (6.1-6) over (6.1-4) ...
Selecting previously unselected package pve-headers-5.3.18-2-pve.
Preparing to unpack .../1-pve-headers-5.3.18-2-pve_5.3.18-2_amd64.deb ...
Unpacking pve-headers-5.3.18-2-pve (5.3.18-2) ...
Preparing to unpack .../2-pve-headers-5.3_6.1-5_all.deb ...
Unpacking pve-headers-5.3 (6.1-5) over (6.1-4) ...
Selecting previously unselected package pve-kernel-5.3.18-2-pve.
Preparing to unpack .../3-pve-kernel-5.3.18-2-pve_5.3.18-2_amd64.deb ...
Unpacking pve-kernel-5.3.18-2-pve (5.3.18-2) ...
Preparing to unpack .../4-pve-kernel-5.3_6.1-5_all.deb ...
Unpacking pve-kernel-5.3 (6.1-5) over (6.1-4) ...
Preparing to unpack .../5-pve-kernel-helper_6.1-5_all.deb ...
Unpacking pve-kernel-helper (6.1-5) over (6.1-4) ...
Preparing to unpack .../6-pve-qemu-kvm_4.1.1-3_amd64.deb ...
Unpacking pve-qemu-kvm (4.1.1-3) over (4.1.1-2) ...
Preparing to unpack .../7-qemu-server_6.1-6_amd64.deb ...
Unpacking qemu-server (6.1-6) over (6.1-5) ...
Setting up pve-headers-5.3.18-2-pve (5.3.18-2) ...
Setting up pve-qemu-kvm (4.1.1-3) ...
Setting up pve-kernel-5.3.18-2-pve (5.3.18-2) ...
Examining /etc/kernel/postinst.d.
run-parts: executing /etc/kernel/postinst.d/apt-auto-removal 5.3.18-2-pve /boot/vmlinuz-5.3.18-2-pve
run-parts: executing /etc/kernel/postinst.d/dkms 5.3.18-2-pve /boot/vmlinuz-5.3.18-2-pve
Error! Bad return status for module build on kernel: 5.3.18-2-pve (x86_64)
Consult /var/lib/dkms/wireguard/0.0.20190905/build/make.log for more information.
run-parts: executing /etc/kernel/postinst.d/initramfs-tools 5.3.18-2-pve /boot/vmlinuz-5.3.18-2-pve
update-initramfs: Generating /boot/initrd.img-5.3.18-2-pve
run-parts: executing /etc/kernel/postinst.d/pve-auto-removal 5.3.18-2-pve /boot/vmlinuz-5.3.18-2-pve
run-parts: executing /etc/kernel/postinst.d/zz-pve-efiboot 5.3.18-2-pve /boot/vmlinuz-5.3.18-2-pve
Re-executing '/etc/kernel/postinst.d/zz-pve-efiboot' in new private mount namespace..
No /etc/kernel/pve-efiboot-uuids found, skipping ESP sync.
run-parts: executing /etc/kernel/postinst.d/zz-update-grub 5.3.18-2-pve /boot/vmlinuz-5.3.18-2-pve
Generating grub configuration file ...
Found linux image: /boot/vmlinuz-5.3.18-2-pve
Found initrd image: /boot/initrd.img-5.3.18-2-pve
Found linux image: /boot/vmlinuz-5.3.18-1-pve
Found initrd image: /boot/initrd.img-5.3.18-1-pve
Found linux image: /boot/vmlinuz-5.3.13-3-pve
Found initrd image: /boot/initrd.img-5.3.13-3-pve
Found linux image: /boot/vmlinuz-5.3.13-2-pve
Found initrd image: /boot/initrd.img-5.3.13-2-pve
Found linux image: /boot/vmlinuz-5.3.13-1-pve
Found initrd image: /boot/initrd.img-5.3.13-1-pve
Found linux image: /boot/vmlinuz-5.0.21-5-pve
Found initrd image: /boot/initrd.img-5.0.21-5-pve
Found linux image: /boot/vmlinuz-5.0.15-1-pve
Found initrd image: /boot/initrd.img-5.0.15-1-pve
Found memtest86+ image: /boot/memtest86+.bin
Found memtest86+ multiboot image: /boot/memtest86+_multiboot.bin
Adding boot menu entry for EFI firmware configuration
done
Setting up pve-kernel-helper (6.1-5) ...
Setting up qemu-server (6.1-6) ...
Setting up pve-headers-5.3 (6.1-5) ...
Setting up pve-docs (6.1-6) ...
Setting up pve-kernel-5.3 (6.1-5) ...
Processing triggers for mime-support (3.62) ...
Processing triggers for pve-manager (6.1-7) ...
Processing triggers for man-db (2.8.5-2) ...
Processing triggers for pve-ha-manager (3.0-8) ...
root@proxmox:~#

Can this be ignored or is this going to be an issue down the line?

This is the full output of /var/lib/dkms/wireguard/0.0.20190905/build/make.log:

Code:
DKMS make.log for wireguard-0.0.20190905 for kernel 5.3.18-2-pve (x86_64)
Fri 21 Feb 2020 02:41:01 AM CST
make: Entering directory '/usr/src/linux-headers-5.3.18-2-pve'
  CC [M]  /var/lib/dkms/wireguard/0.0.20190905/build/main.o
  CC [M]  /var/lib/dkms/wireguard/0.0.20190905/build/noise.o
  CC [M]  /var/lib/dkms/wireguard/0.0.20190905/build/device.o
  CC [M]  /var/lib/dkms/wireguard/0.0.20190905/build/peer.o
  CC [M]  /var/lib/dkms/wireguard/0.0.20190905/build/timers.o
  CC [M]  /var/lib/dkms/wireguard/0.0.20190905/build/queueing.o
  CC [M]  /var/lib/dkms/wireguard/0.0.20190905/build/send.o
  CC [M]  /var/lib/dkms/wireguard/0.0.20190905/build/receive.o
  CC [M]  /var/lib/dkms/wireguard/0.0.20190905/build/socket.o
  CC [M]  /var/lib/dkms/wireguard/0.0.20190905/build/peerlookup.o
  CC [M]  /var/lib/dkms/wireguard/0.0.20190905/build/allowedips.o
  CC [M]  /var/lib/dkms/wireguard/0.0.20190905/build/ratelimiter.o
  CC [M]  /var/lib/dkms/wireguard/0.0.20190905/build/cookie.o
  PERLASM /var/lib/dkms/wireguard/0.0.20190905/build/crypto/zinc/chacha20/chacha20-x86_64.S
  CC [M]  /var/lib/dkms/wireguard/0.0.20190905/build/netlink.o
  PERLASM /var/lib/dkms/wireguard/0.0.20190905/build/crypto/zinc/poly1305/poly1305-x86_64.S
  CC [M]  /var/lib/dkms/wireguard/0.0.20190905/build/crypto/zinc/chacha20/chacha20.o
  CC [M]  /var/lib/dkms/wireguard/0.0.20190905/build/crypto/zinc/poly1305/poly1305.o
  CC [M]  /var/lib/dkms/wireguard/0.0.20190905/build/crypto/zinc/chacha20poly1305.o
  AS [M]  /var/lib/dkms/wireguard/0.0.20190905/build/crypto/zinc/blake2s/blake2s-x86_64.o
  CC [M]  /var/lib/dkms/wireguard/0.0.20190905/build/crypto/zinc/blake2s/blake2s.o
  CC [M]  /var/lib/dkms/wireguard/0.0.20190905/build/crypto/zinc/curve25519/curve25519.o
  AS [M]  /var/lib/dkms/wireguard/0.0.20190905/build/crypto/zinc/poly1305/poly1305-x86_64.o
  AS [M]  /var/lib/dkms/wireguard/0.0.20190905/build/crypto/zinc/chacha20/chacha20-x86_64.o
/var/lib/dkms/wireguard/0.0.20190905/build/socket.c: In function ‘send6’:
/var/lib/dkms/wireguard/0.0.20190905/build/socket.c:145:20: error: ‘const struct ipv6_stub’ has no member named ‘ipv6_dst_lookup’; did you mean ‘ipv6_dst_lookup_flow’?
   ret = ipv6_stub->ipv6_dst_lookup(sock_net(sock), sock, &dst,
                    ^~~~~~~~~~~~~~~
                    ipv6_dst_lookup_flow
make[1]: *** [scripts/Makefile.build:290: /var/lib/dkms/wireguard/0.0.20190905/build/socket.o] Error 1
make[1]: *** Waiting for unfinished jobs....
/var/lib/dkms/wireguard/0.0.20190905/build/allowedips.c: In function ‘root_remove_peer_lists’:
/var/lib/dkms/wireguard/0.0.20190905/build/allowedips.c:72:1: warning: the frame size of 1040 bytes is larger than 1024 bytes [-Wframe-larger-than=]
 }
 ^
/var/lib/dkms/wireguard/0.0.20190905/build/allowedips.c: In function ‘root_free_rcu’:
/var/lib/dkms/wireguard/0.0.20190905/build/allowedips.c:59:1: warning: the frame size of 1040 bytes is larger than 1024 bytes [-Wframe-larger-than=]
 }
 ^
/var/lib/dkms/wireguard/0.0.20190905/build/allowedips.c: In function ‘walk_remove_by_peer.isra.5’:
/var/lib/dkms/wireguard/0.0.20190905/build/allowedips.c:126:1: warning: the frame size of 1032 bytes is larger than 1024 bytes [-Wframe-larger-than=]
 }
 ^
make: *** [Makefile:1655: _module_/var/lib/dkms/wireguard/0.0.20190905/build] Error 2
make: Leaving directory '/usr/src/linux-headers-5.3.18-2-pve'
 
wireguard-0.0.20190905
That's an old version of wireguard-dkms which cannot cope with changes in newer kernels (kernel 5.3 was released after 2019-09-05); update yours.
I'm using a priority-pinned package from sid just fine; it's at version "0.0.20200128-1" at the time of writing, though.
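For reference, a minimal sketch of that pinning setup (the file names and priority values here are my own choice, not the poster's exact config; this assumes a Debian Buster base as Proxmox 6.x uses):

```shell
# Add sid as an extra package source
echo "deb http://deb.debian.org/debian sid main" > /etc/apt/sources.list.d/sid.list

# Pin everything from sid below the default (500), except the wireguard packages,
# so a plain 'apt upgrade' never pulls random packages from sid
cat > /etc/apt/preferences.d/limit-sid <<'EOF'
Package: *
Pin: release a=sid
Pin-Priority: 100

Package: wireguard wireguard-dkms wireguard-tools
Pin: release a=sid
Pin-Priority: 500
EOF

apt update && apt install wireguard
```

With priorities set up like this, only the explicitly listed packages track sid; everything else stays on the stable release.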
 
EDIT: Since regular apt update wasn't properly updating wireguard to the latest version, I successfully updated it using apt install -t unstable wireguard.
 
I am also interested in running a wireguard container on pve. Did you initially install wireguard as instructed on its web page?
 
So since the new Proxmox update (6.2), wireguard doesn't work anymore in LXC containers. I haven't changed anything else in either the containers or the host. It had been working flawlessly for months before this update.

On the Proxmox host, I have installed the latest version from the buster-backports (as of writing, 1.0.20200506-1) and rebooted the host after the new version was installed. Once rebooted, I ran modprobe wireguard, which returned nothing, meaning the module loads.
In the containers, I can still connect to the Wireguard tunnels, but I can't ping any outside IP address, and DNS requests don't return either. With the tunnel down, everything works. Again, before I updated to the new kernel everything worked just fine.

Not sure where I can get more logs, because the tunnels do come up, but even pinging 1.1.1.1 fails with the tunnel up. Any ideas on what broke in the latest Proxmox update?
 
It doesn't seem directly related to the kernel if the module can be loaded and the interfaces come up.

Possible issues:
  • The other side of your wireguard tunnel is not forwarding anything. Maybe it worked earlier because you set up ip_forward, NAT, routing, ... manually, and the reboot into the new kernel dropped that configuration?
  • The CT's wireguard tools package is outdated and cannot work with the module from the host. What distro/version do you use here?
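To rule out the first point, a few things worth checking on the tunnel endpoint (a sketch; rule and interface names depend on your setup):

```shell
# Is IPv4 forwarding still enabled on the endpoint that routes your traffic?
sysctl net.ipv4.ip_forward

# Is the NAT/masquerade rule for the wireguard subnet still present?
iptables -t nat -S POSTROUTING

# Do the peers handshake, and are bytes actually flowing in both directions?
wg show
```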
 

The container is using the latest version of Debian. I'm also on the most recent version of Proxmox as of today.

Code:
root@ctbox:~# apt install -t unstable wireguard-tools
Reading package lists... Done
Building dependency tree
Reading state information... Done
wireguard-tools is already the newest version (1.0.20200513-1).
0 upgraded, 0 newly installed, 0 to remove and 369 not upgraded.
root@ctbox:~#

The wireguard tunnel has worked fine with multiple reboots for months. Only since updating to latest Proxmox version has it stopped working.
What else can I try here?

Here's what happens when the tunnel goes up, starting from no tunnel up:

Code:
root@ctbox:~# ping 1.1.1.1
PING 1.1.1.1 (1.1.1.1) 56(84) bytes of data.
64 bytes from 1.1.1.1: icmp_seq=1 ttl=56 time=7.14 ms
64 bytes from 1.1.1.1: icmp_seq=2 ttl=56 time=6.76 ms
64 bytes from 1.1.1.1: icmp_seq=3 ttl=56 time=6.89 ms
^C
--- 1.1.1.1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 5ms
rtt min/avg/max/mdev = 6.755/6.926/7.136/0.171 ms
root@ctbox:~# wg-quick up tunnel1
[#] ip link add tunnel1 type wireguard
[#] wg setconf tunnel1 /dev/fd/63
[#] ip -4 address add 10.64.27.41/32 dev tunnel1
[#] ip -6 address add fc00:bbbb:bbbb:bb01::1:1b28/128 dev tunnel1
[#] ip link set mtu 8920 up dev tunnel1
[#] resolvconf -a tun.tunnel1 -m 0 -x
Too few arguments.
Too few arguments.
[#] wg set tunnel1 fwmark 51820
[#] ip -6 route add ::/0 dev tunnel1 table 51820
[#] ip -6 rule add not fwmark 51820 table 51820
[#] ip -6 rule add table main suppress_prefixlength 0
[#] ip6tables-restore -n
[#] ip -4 route add 0.0.0.0/0 dev tunnel1 table 51820
[#] ip -4 rule add not fwmark 51820 table 51820
[#] ip -4 rule add table main suppress_prefixlength 0
[#] sysctl -q net.ipv4.conf.all.src_valid_mark=1
[#] iptables-restore -n
root@ctbox:~# ping 1.1.1.1
PING 1.1.1.1 (1.1.1.1) 56(84) bytes of data.
^C
--- 1.1.1.1 ping statistics ---
6 packets transmitted, 0 received, 100% packet loss, time 120ms

root@ctbox:~#
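When pings die like this with the tunnel up, wg show usually tells you whether it's a handshake problem or a routing problem (a sketch, using the tunnel name from the output above):

```shell
# If 'latest handshake' never appears, the UDP path to the peer is broken;
# if handshakes succeed but 'transfer: ... received' stays at 0 B, the far
# side isn't routing anything back (forwarding/NAT on the remote endpoint)
wg show tunnel1

# Check which route and rule the probe actually hits with the
# wg-quick policy-routing rules (fwmark 51820) in place
ip route get 1.1.1.1
ip rule show
```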
 
It was solved after another reboot and redoing the configuration, which tells me it's not Proxmox after all! All working fine now with the latest Proxmox version.
 