Good question as to why it happened. In my case I just help a friend manage his system remotely ... I don't know the exact details of the hardware. My friend says he did nothing and the backups stopped working. So neither Proxmox itself nor the backup host was updated (he is simply not capable of doing it himself...
I am suddenly in the same boat now. Any solutions here?
UPDATE: I mounted the CIFS folder on the server manually and added it as a Directory storage. That way it works.
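For anyone hitting the same wall, this is roughly what the workaround looks like; a minimal sketch, assuming a hypothetical share //backuphost/backups and a storage ID cifs-backup (replace paths, credentials and content types with your own):
# Mount the CIFS share manually on the Proxmox host
mkdir -p /mnt/cifs-backup
mount -t cifs //backuphost/backups /mnt/cifs-backup -o username=backupuser,password=secret,vers=3.0
# Optionally make it survive reboots via /etc/fstab
echo '//backuphost/backups /mnt/cifs-backup cifs username=backupuser,password=secret,vers=3.0 0 0' >> /etc/fstab
# Register the mount point as a plain Directory storage for backups
pvesm add dir cifs-backup --path /mnt/cifs-backup --content backup
After that, the cifs-backup storage can be selected as a backup target just like any other Directory storage.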
I still run Proxmox 6. On the mentioned host only one VM is running, with the following firewall configuration:
SMALL UPDATE: For the sake of completeness... The host has two more VMs with the firewall disabled, but those VMs are used as templates for other hosts and were never up...
1. Never used REJECT at all.
2. I have this as the default rule at the datacenter level (see the sketch after this list for the file format):
3. Done a long time ago.
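For reference, that kind of datacenter-level default lives in /etc/pve/firewall/cluster.fw. The snippet below is only an illustration of the file format with hypothetical rules, not my actual rule set:
# /etc/pve/firewall/cluster.fw (illustrative example only)
[OPTIONS]
enable: 1
policy_in: DROP
policy_out: ACCEPT
[RULES]
# keep SSH and the Proxmox web GUI reachable despite the DROP default
IN ACCEPT -p tcp -dport 22
IN ACCEPT -p tcp -dport 8006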
It seems to be random, different MACs every time; here are examples of the abuse messages:
It doesn't. Well, at least it never worked for me. I applied this and messaged DC support; they said it's fine and they don't see any wrong-MAC traffic, but after a few days it was the same story. So the issue comes and goes.
Today they even locked my server (!!!). I applied a firewall rule as suggested...
I also have the same problem. Proxmox VE 6.3-3.
The firewall INPUT policy was set to DROP, as per the default:
I also altered the sysctl value:
cat /proc/sys/net/ipv4/igmp_link_local_mcast_reports
0
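In case it is useful to anyone, the change can be made persistent across reboots with a sysctl drop-in; a small sketch (the file name 99-igmp.conf is arbitrary):
# Persist the setting so it survives reboots (file name is arbitrary)
echo 'net.ipv4.igmp_link_local_mcast_reports = 0' > /etc/sysctl.d/99-igmp.conf
sysctl -p /etc/sysctl.d/99-igmp.conf
# Verify
cat /proc/sys/net/ipv4/igmp_link_local_mcast_reports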
Just received another abuse message :(
I have recently updated a cluster of a few nodes with pretty similar network setups. Each node is connected to a few external networks over IPsec.
And just one node is behaving strangely (this is really odd). I can't ping any of the networks that are tunneled through IPsec. Tunnels are...
Well, I guess you have to read the documentation, because the questions you ask do not make much sense to me right now... Especially if this is a service some customer will get...
Sorry to resurrect an old thread, but I am experiencing the very same behavior nowadays with the latest ZFS 0.8 and PVE 6.1, as described here. Does anybody have a clue?
For more than a week I have been trying to determine the reason for the following IO performance degradation between the Proxmox host and Windows Server 2019 VM(s).
I have to ask for your help guys because I've run out of ideas.
Environment data:
Single Proxmox host, no cluster, PVE 6.1-8 with ZFS
A...
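For anyone who wants to reproduce the comparison, running the same fio job on the host dataset and then inside the guest is one way to see the gap; the parameters below are only an example, not my exact benchmark:
# On the Proxmox host, against a test file on the ZFS dataset
# (the path /rpool/data/fio.test is only an example)
fio --name=host-randwrite --filename=/rpool/data/fio.test \
    --rw=randwrite --bs=4k --size=2G --iodepth=32 \
    --ioengine=libaio --runtime=60 --time_based --group_reporting
# The same job is then repeated inside the WS2019 guest
# (fio for Windows, with --ioengine=windowsaio) to compare the numbers.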
So, just a short update: I have managed to load an old kernel and am running 5.0.21-5-pve. It has no problems.
The current kernel, 5.3.18-2-pve, produces continuous crashes and blue screens for me, with all possible types of messages, under WS2019 on two different nodes. Different CPUs, different systems...
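For anyone wondering how to do this on a headless box: this is roughly how an older kernel can be selected on a GRUB-based install; a sketch only, the exact menu entry titles must be taken from your own grub.cfg:
# 1. List the boot entries and note the titles of the submenu and of the 5.0.21-5-pve entry
grep -E '^(menuentry|submenu)' /boot/grub/grub.cfg
# 2. In /etc/default/grub set GRUB_DEFAULT to "<submenu title>><entry title>"
#    ('>' separates the submenu from the entry that mentions 5.0.21-5-pve)
# 3. Regenerate the GRUB config and reboot
update-grub
reboot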
Experiencing exactly the same on all WS2019 machines on 2 different nodes since the update yesterday :( Does anybody have a clue how I can load the previous kernel on a headless machine?