proxmox-firewall (nftables) and conntrack

Nov 23, 2023
Hi,

the topic of "conntrack state migration not supported or disabled, active connections might get dropped" has been discussed multiple times, and there are a few posts regarding fixes in qemu-server; the last fix is in 9.1.3 if I understood correctly - I read a lot of those threads. This issue/question is NOT related to those bugs.

I understood that I have to use "nftables" in Proxmox 9 to properly support conntrack, which is enabled on each host in Firewall -> Options -> nftables = yes.

What else do I need to do?

- Restart all VMs? Or does conntrack only work for VMs that were restarted?
- Do I need to enable the firewall on each virtual machine in Firewall -> Options?
- Do I need to enable firewalling on the respective interface of a VM in the interface settings?

What I confirmed already is

- nftables enabled
- the proxmox-firewall systemd unit is active and running

What I tested is

- enabled the firewall on one VM
- set the INPUT and OUTPUT policy to the default ACCEPT
- enabled the firewall on the interface
- tried a live migration with the "conntrack" checkbox ticked

and I still get the conntrack error notification.

---

I am not sure how to proceed - maybe someone can point me in the right direction.

Thanks already

Soeren
 
Restart all VMs? Or does conntrack only work for VMs that were restarted?
Restarting VMs should not be necessary.

- Do I need to enable the firewall on each virtual machine in Firewall -> Options?
- Do I need to enable firewalling on the respective interface of a VM in the interface settings?
Yes & yes. Conntrack migration depends on the firewall being active, as that sets up traffic marking required for state migration.
(This isn't documented as such in the admin guide, will improve that.)
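
For reference, a minimal per-VM firewall configuration that satisfies both conditions might look like the sketch below. This is only a sketch assuming the standard PVE config layout; the VMID 100 and the NIC line are example values.

```
# /etc/pve/firewall/100.fw  (per-VM firewall config; 100 is an example VMID)
[OPTIONS]
enable: 1
policy_in: ACCEPT
policy_out: ACCEPT
```

And the VM's network device needs firewall=1 set, e.g. in /etc/pve/qemu-server/100.conf:

```
net0: virtio=BC:24:11:00:00:01,bridge=vmbr0,firewall=1
```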

You can check whether the corresponding nftables rules are created using nft list ruleset | grep -i mark, and whether connections are marked correctly by running conntrack --dump --mark <vmid>, e.g. conntrack --dump --mark 100.
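
Those checks (plus the daemon log mentioned below) can be combined into a small diagnostic sketch. This assumes root on the PVE node, conntrack-tools installed, and uses VMID 100 as an example:

```shell
# Diagnostic sketch: verify that conntrack marking is in place for a VM.
# Assumes: run as root on the PVE node, conntrack-tools installed.
# VMID 100 is just an example ID; replace it with your VM's ID.
VMID=100
if command -v nft >/dev/null 2>&1 && nft list ruleset >/dev/null 2>&1; then
    # "ct mark set" rules for firewalled VMs should show up here:
    nft list ruleset | grep -i mark || echo "no mark rules found"
    # Tracked connections carrying this VM's mark:
    conntrack --dump --mark "$VMID" || echo "no marked connections for VMID $VMID"
    # Firewall daemon log since boot, to spot errors:
    journalctl -b -u proxmox-firewall --no-pager | tail -n 20
else
    echo "nft not usable here; run this on the Proxmox node itself"
fi
```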

You may want to also check the log of the firewall daemon using journalctl -b -u proxmox-firewall, just in case there are any errors.
 
Hi,

thanks for the reply. OK, I am testing with one VM: while it is on the first PVE server and I execute nft list ruleset | grep -i mark, I can see 2 entries with "ct mark set", both with the same number behind them, and when executing conntrack --dump --mark I see some entries with source and destination IPs. Before doing that I see nothing on the second machine.

When migrating I still get the warning "conntrack state migration not supported or disabled, active connections might get dropped", with no warnings in the journal.

After the migration the "ct mark set" output appears only on the second machine (as expected), while the conntrack dump per machine does reveal some leftovers on the first (source) machine: IPv6 link-local entries and a multicast entry.

When SSHing into the machine and then migrating, I see the entry from conntrack --dump --mark 100 on both machines, so it seems to be synchronized, but I cannot verify 100% that this happens in all situations. What worries me, though, is that I still get the warning in the task that the conntrack state cannot be migrated.

Any idea ?

Cheers
Soeren
 
Hello,

I have the exact same issue on a 3-node PVE cluster running v9.1.4, with all nodes running qemu-server v9.1.3:

Code:
$ dpkg -l | grep qemu-server
ii  qemu-server                          9.1.3                                amd64        Qemu Server Tools

I also get the error message "conntrack state migration not supported or disabled, active connections might get dropped" while migrating a VM. It does not matter how the migration was triggered: host reboot, manual migration, or migration due to maintenance mode.
When checking with the 'nft list' and 'conntrack --dump' commands, I see pretty much what @smalchow describes as well.

Anybody else having this issue? Any way to actually fix this?


Regards.
 
Hi,

you need to have the "proxmox-firewall" package installed and the service enabled.

On each node, enable nftables (tech preview) in "Firewall -> Options".

And enable the firewall on each virtual machine; make sure you have the necessary (default) rules in place.

That should do the trick.

Cheers
Soeren
 
Thank you. This is the case for me:

  • proxmox-firewall is installed and enabled/started on all nodes
  • nftables (tech preview) is enabled on every node
  • The firewall is enabled at the datacenter level, on every node and on every VM

I am using a host-based firewall and not the PVE rules, so I set the input/output policy to ACCEPT by default. The warning still appears, but I just tested with some load balancer VMs I have (constant connections) and they remain online.
I don't really understand why this warning appears, though.

Regards.
 
Yes I did. It made no difference whatsoever... I am fine with the feeling that I can just "safely" ignore the error message, because everything seems to work well, but it is an annoyance and it also triggers a lot of support calls from clients.