Good question as to why it happened. In my case I'm just helping a friend manage his system, remotely ... I don't know the exact hardware details. My friend says he did nothing and the backups stopped working. So neither Proxmox itself nor the backup host was updated (he simply isn't capable of doing that himself...
I still run Proxmox 6. On the host in question there is only one VM running, with the following firewall configuration:
SMALL UPDATE: For the sake of completeness... The host also has two VMs with the firewall disabled, but those VMs are used as templates for other hosts and were never up...
It doesn't. Well, at least it never worked for me. I applied this and messaged DC support; they said everything was fine and that they didn't see any wrong-MAC traffic, but after a few days it was the same story. So the issue comes and goes.
Today they even locked my server (!!!). I applied a firewall rule as suggested...
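For anyone hitting the same "unauthorized MAC" abuse reports: the kind of rule datacenters usually want can be approximated with ebtables on the VM's tap interface. This is only a sketch; `tap100i0` and the MAC address are placeholders, not values from my setup (Proxmox's own firewall with the MAC filter option enabled achieves the same thing via its fwbr bridges):

```shell
# Hypothetical example: drop any frame leaving the VM's tap interface
# whose source MAC is not the MAC assigned to the VM's virtual NIC.
# Replace tap100i0 and aa:bb:cc:dd:ee:ff with your own values.
ebtables -A FORWARD -i tap100i0 -s ! aa:bb:cc:dd:ee:ff -j DROP
```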
I also have the same problem. Proxmox VE 6.3-3.
Firewall INPUT policy was set to DROP, as per the default:
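For context, the datacenter-level policy lives in `/etc/pve/firewall/cluster.fw` (VM-level `.fw` files use the same format); a minimal fragment with the stock DROP input policy looks roughly like this:

```
[OPTIONS]
enable: 1
policy_in: DROP
policy_out: ACCEPT
```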
I also altered the sysctl value:
Just received another abuse message :(
I have recently updated a cluster where a few nodes have pretty similar network setups. Each node is connected to a few external networks over IPsec.
And just one node behaves strangely (this is really odd): I can't ping any of the networks that are tunneled through IPsec. Tunnels are...
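When only one node out of an otherwise identical set misbehaves after an upgrade, the kernel-side IPsec state is the first thing I'd diff between a good node and the bad one. This assumes a strongSwan/xfrm setup, which the post doesn't confirm:

```shell
# Run on both a working and the broken node, and compare.
ipsec statusall          # strongSwan: are the tunnels actually established?
ip xfrm state            # kernel SAs: does an SPI/key pair exist per tunnel?
ip xfrm policy           # kernel policies: is traffic selected into the tunnel?
ip route show table all  # any stale or changed routes after the upgrade?
```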
For more than a week I have been trying to determine the reason for the following IO performance degradation between the Proxmox host and a Windows Server 2019 VM (or several of them).
I have to ask for your help, guys, because I've run out of ideas.
Single Proxmox host, no cluster, pve 6.1-8 with ZFS.
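To make "IO performance degradation" measurable, I'd first get raw numbers from the same simple test on both sides (host shell vs. inside the guest). `fio` is the proper tool, but even a quick `dd` run gives comparable figures; the path below is a placeholder, and `conv=fsync` forces the data to disk so the cache doesn't flatter the result:

```shell
# Same sequential-write test on the host and inside the VM, then compare.
# Point the output file at the datastore you actually suspect.
dd if=/dev/zero of=/tmp/ddtest bs=1M count=1024 conv=fsync
rm /tmp/ddtest
```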
So, just a short update: I have managed to boot an old kernel and I'm now running 5.0.21-5-pve. It has no problems.
The current kernel, 5.3.18-2-pve, gives me continuous crashes and blue screens with all possible types of messages under WS2019, on two different nodes. Different CPUs, different systems...
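For anyone who wants to stay on the older kernel the same way, on PVE 6 with GRUB you can pin it via the default menu entry. The entry title below is an example; list yours first and copy it exactly:

```shell
# Find the exact menu entry title for the old kernel.
grep menuentry /boot/grub/grub.cfg

# Then in /etc/default/grub set the submenu>entry pair, e.g.:
#   GRUB_DEFAULT="Advanced options for Proxmox VE GNU/Linux>Proxmox VE GNU/Linux, with Linux 5.0.21-5-pve"
# and regenerate the config:
update-grub
```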