Thanks, Hannes. I had something similar set up that kept refusing to start.
I could run tcpdump from the command line with the following options:
tcpdump -i ens18 -K -n port 53 -s 0 -w /var/log/tcpdump_$(hostname -s)_port53_$(date +%Y%m%d-%H%M%S).pcap
I had to tweak this in the...
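In case it's useful, a minimal unit along these lines is a reasonable starting point (a sketch, not my exact file — adjust the interface, output path and tcpdump location for your host):

[Unit]
Description=tcpdump capture of port 53 traffic
After=network-online.target
Wants=network-online.target

[Service]
Type=simple
# systemd does not perform $( ) shell expansion itself, and a literal % must be
# escaped as %% in unit files, so wrap the command in a shell
ExecStart=/bin/sh -c 'exec /usr/sbin/tcpdump -i ens18 -K -n port 53 -s 0 -w /var/log/tcpdump_$(hostname -s)_port53_$(date +%%Y%%m%%d-%%H%%M%%S).pcap'
Restart=on-failure

[Install]
WantedBy=multi-user.target

Save it as e.g. /etc/systemd/system/tcpdump-dns.service (the name is just my example), then run systemctl daemon-reload and systemctl enable --now tcpdump-dns.service.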
Has anyone here attempted to run tcpdump as a systemd service on Proxmox hosts? Can you please share your tcpdump.service file and any other configuration info?
I'm struggling to get it running and would like tcpdump to start on boot to monitor some DNS issues I'm logging on port 53.
Painless upgrade from 6.4 to 7.0 for me on four clusters and clean install on a fifth cluster. Thanks for the great work!
One bug I've found is with Task History:
- Cluster-wide Task History is correct.
- Node Task History is correct for the web GUI node I'm logged into, but Task History for...
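If it helps narrow things down, the per-node history can also be pulled on the CLI (run on each node) to check whether it's just the web GUI or the API underneath:

pvenode task list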
I can confirm the problem is solved in the latest release:
Kernel Version: Linux 5.4.101-1-pve #1 SMP PVE 5.4.101-1 (Fri, 26 Feb 2021 13:13:09 +0100)
PVE Manager Version: pve-manager/6.3-4/0a38c56f
The test repo deb http://download.proxmox.com/debian/pve buster pvetest has qemu-server 6.3-5 today, which I'm happy to say has fixed my reported problem. Once this reaches the production repo, I will retest and confirm the fix in my live environment.
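For anyone following along, pulling and checking it on the test box is roughly (assuming the pvetest repo is already configured):

apt update
apt install qemu-server
pveversion -v | grep qemu-server   # should now report 6.3-5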
Thanks again
Thanks for your swift action on this issue.
I have set up a test server with your test repo deb http://download.proxmox.com/debian/pve buster pvetest, which currently has qemu-server 6.3-4 and still exhibits the problem. I will retest when upcoming versions become available.
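For reference, enabling the test repo on that box was just a one-line sources entry, something like this (the file name is my choice, not prescribed):

echo "deb http://download.proxmox.com/debian/pve buster pvetest" > /etc/apt/sources.list.d/pvetest.list
apt update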
On this host, running 100 VMs increased lsof | grep qmevent | wc -l from 20 to 120. Then after shutting these down, it was still at 120. Output of lsof | grep qmevent after this event is:
# lsof | grep qmevent
qmeventd 9626 root cwd DIR 253,1...
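For anyone wanting to watch this, a quick (hypothetical) loop to track qmeventd's descriptor count without running a full lsof each time:

PID=$(pidof qmeventd)
while true; do
    # count entries in /proc/<pid>/fd, i.e. currently open descriptors
    echo "$(date +%T): $(ls /proc/$PID/fd | wc -l) open fds"
    sleep 60
done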
I cannot shut down VMs. This happens after 1032 iterations of VMs being started and shut down, and occurs on both PVE 6.2 and 6.3.
The client OS shuts down but the VM fails to stop. The VNC Console for Ubuntu VMs shows "systemd-shutdown[1]: Failed to finalize DM devices, ignoring" while the VM...
After poking around in Syslog I see thousands of 'qmeventd[962]: accept: Too many open files' entries and also 'qmeventd[962]: error opening /proc/20196/cmdline: Too many open files'.
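1032 iterations is suspiciously close to the usual 1024 per-process open-file limit, which would fit those errors. A quick way to check qmeventd's limit against its current usage (sketch):

grep 'open files' /proc/$(pidof qmeventd)/limits
ls /proc/$(pidof qmeventd)/fd | wc -l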
The upgrade to PVE 6.3.3 also coincided with installation of extra RAM on the servers to allow hosting many more...
I'm having trouble with qm shutdown on PVE 6.3.3, where the client operating system shuts down but the VM does not stop. I then have to wait for the qm shutdown command to time out (or cancel it via the web GUI) and issue a further qm stop command to switch off the VM. This wasn't happening on...
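What I end up doing, expressed as a one-liner (VMID 100 is just a placeholder; the || should fall through to qm stop when the shutdown times out):

qm shutdown 100 --timeout 60 || qm stop 100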
I have seen this many times on our clusters and have to use "swapoff -a" to avoid hard resetting nodes when they get stuck at "Failed deactivating swap /dev/mapper/pve-swap" or waiting out the 30-minute timeout, which forcibly reboots the node.
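In practice that means draining swap just before the reboot, e.g.:

swapoff -a        # move swap contents back to RAM so shutdown doesn't hang on pve-swap
systemctl reboot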
This was still happening when upgrading the cluster...