ACME nsupdate seems to be broken in VE 9.1.9?

CRCinAU
Trying to get a Let's Encrypt cert via the ACME nsupdate plugin and it's failing with FORMERR. TSIG key works fine if I run nsupdate by hand against the same server, so it's not an auth or ACL issue.
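For reference, the manual test that works looks roughly like this (key path, server and names are placeholders here - the real ones are redacted):

Code:
    # hypothetical key file and names - substitute your own
    nsupdate -k /etc/bind/tsig.key <<'EOF'
    server ns1.example.com
    update add _acme-challenge.host.example.com. 60 TXT "test-token"
    send
    EOF

That goes through cleanly and the TXT record appears, so the key and the server's update-policy are fine.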

Versions:

Code:
    libproxmox-acme-perl       1.7.1
    libproxmox-acme-plugins    1.7.1

Proxmox task log:

Code:
    The validation for host.example.com is pending!
    [Fri May  1 17:44:43 AEST 2026] adding _acme-challenge.host.example.com. 60 in txt "REDACTED"
    ; Communication with 2001:db8::53#53 failed: timed out
    update failed: FORMERR
    [Fri May  1 17:44:55 AEST 2026] error updating domain
    [Fri May  1 17:44:55 AEST 2026] Error add txt for domain:_acme-challenge.host.example.com
    TASK ERROR: command '...proxmox-acme setup nsupdate host.example.com' failed: exit code 1

BIND on the other end logs:

Code:
    update: info: client 198.51.100.2#39996/key tsig-key-name: update failed: update zone section empty (FORMERR)

tcpdump of the actual packet:

Code:
    IP 198.51.100.2.35599 > 198.51.100.82.53: 26452 update [0q] [1n] [1au] (237)
    IP 198.51.100.82.53 > 198.51.100.2.35599: 26452 update FormErr- [0q] 0/0/1 (141)

The `[0q]` is the problem - in an UPDATE message the question slot holds the zone, and there's nothing in it. So the plugin is signing and sending a packet that has the TXT record and a valid TSIG but never says which zone the update belongs to, and BIND quite rightly throws it back.
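For comparison, an explicit `zone` statement in the nsupdate input is what populates that slot - with it, the packet goes out as `[1q]` (names are placeholders):

Code:
    # with an explicit zone, nsupdate fills the zone/question
    # section of the UPDATE message itself; without one, it
    # derives the zone from an SOA lookup - which is exactly
    # the step that's timing out over v6 here
    zone example.com
    update add _acme-challenge.host.example.com. 60 TXT "REDACTED"
    send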

Worth noting that the IPv6 SOA lookup times out first - I suspect that's related: as if nsupdate loses its zone context after the v6 attempt fails, and then sends the v4 packet without it. I haven't proven that, though.
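One way to check whether the v6 path is the trigger is to probe the SOA over each transport separately, against the same server (addresses taken from the captures above):

Code:
    # does the SOA probe succeed over v6 at all?
    dig -6 @2001:db8::53 +time=2 example.com SOA
    # and over v4, against the same server
    dig -4 @198.51.100.82 +time=2 example.com SOA

In my case the v6 probe times out while the v4 one answers, which matches the timestamps in the task log.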

Has anyone else seen this with libproxmox-acme 1.7.1? Is there a way to pin the zone explicitly in the plugin config, or to force IPv4 only for the SOA probe?
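I haven't found a plugin option for either, but plain nsupdate supports both workarounds: `-4` restricts it to IPv4 (skipping the v6 probe entirely), and an explicit `zone` statement avoids relying on the SOA probe at all. A hand-rolled equivalent of what the plugin should be sending (placeholder names again):

Code:
    nsupdate -4 -k /etc/bind/tsig.key <<'EOF'
    server ns1.example.com
    zone example.com
    update add _acme-challenge.host.example.com. 60 TXT "token"
    send
    EOF

If the plugin exposed either of those knobs, this would presumably work even with v6 blocked.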

(hostnames, IPs, key name and token all redacted)
 
Edit: once I unblocked IPv6 access to the master DNS server, everything worked as it should - so this issue is purely about the fallback to IPv4.