Hello
I'm getting the following error on Proxmox 5 when trying to access RRD data.
pvesh get /nodes/jx213-s20/lxc/105/rrd -ds cpu -timeframe day
RRD error: Could not save png to ''
It works fine on Proxmox 4.
I ran dist-upgrade just today, but that did not solve the issue.
Has anyone got an idea?
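If it helps anyone narrow this down: the rrddata endpoint should return the raw JSON values without rendering a PNG, which separates a data problem from a rendering problem (assuming the standard PVE API; same node/VMID as above):
pvesh get /nodes/jx213-s20/lxc/105/rrddata -timeframe day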
Hello
I've seen some vulnerabilities in qemu-kvm that were recently patched.
For example, CVE-2017-7980.
In the Red Hat announcements, I saw that they require stopping all VMs for the update to take effect.
Do we need to follow the same procedure when Proxmox updates qemu? Or is it patched in...
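For reference, a rough way to check whether a running VM is still using the pre-update binary is to look at its /proc/<pid>/exe link; it shows up as deleted once the file on disk has been replaced (a sketch, assuming the guest processes are named kvm as usual on Proxmox):
for pid in $(pidof kvm); do ls -l /proc/$pid/exe; done
# a '(deleted)' suffix means that VM is still running the old qemu binary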
Hello
Is there any way to filter ARP replies?
Ex: 09:45:12.141931 ARP, Reply xx.xx.xx.xx is-at b2:cb:9f:21:38:a8, length 46
Today I had a customer attempting to use another user's IP. The firewall blocked TCP/UDP etc., but he still managed to answer ARP requests, making the other...
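Since ARP sits below the IP firewall, the only workaround I can think of is a link-layer rule; an ebtables rule on the bridge should be able to drop ARP replies claiming the protected address (a sketch; xx.xx.xx.xx and veth105i0 are placeholders for the victim's IP and the offender's bridge port):
ebtables -A FORWARD -p ARP --arp-opcode Reply --arp-ip-src xx.xx.xx.xx -i veth105i0 -j DROP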
Very hard.
1. The solution is to be connected on the node, and when the first nmi_watchdog error appears (usually from KVM), to copy the PID from it and check /proc/PID/cgroup to see which container it belongs to before the node dies. It's not 100% foolproof, but in most cases it provides the real...
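A sketch of that check, where 12345 is a placeholder for the PID copied from the watchdog trace:
cat /proc/12345/cgroup
# on PVE the cgroup paths end in something like /lxc/105, which names the container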
I've set 500 by default; 3k seems very large for a container (personal opinion).
But I had some containers that were able to crash the node with the nmi_watchdog issue at more than 150 PIDs. I've manually limited 2-3 such containers to 150.
Since I've set it to 500, I've had the nmi_watchdog...
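For anyone wanting to do the same, the limit goes through the cgroup pids controller (105 and 500 are example values; the lxc.cgroup key assumes cgroup v1 as on PVE 4/5):
# permanent, in /etc/pve/lxc/105.conf:
lxc.cgroup.pids.max = 500
# or at runtime, on a running container:
echo 500 > /sys/fs/cgroup/pids/lxc/105/pids.max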
Weirdly, if I start it using lxc-start -n ID, it starts.
However, pct fails:
Job for lxc@1000000.service failed. See 'systemctl status lxc@1000000.service' and 'journalctl -xn' for details.
root@dx411-s19:/etc/pve/lxc# systemctl status lxc@1000000.service
● lxc@1000000.service - LXC Container...
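Since lxc-start by itself works, a foreground debug run might show where the managed start diverges (standard lxc-start flags; the log path is just an example):
lxc-start -n 1000000 -F -l DEBUG -o /tmp/lxc-1000000.log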
On one of my nodes, I'm unable to start a few containers (that used to work).
The logs show:
lxc-start 20170413215245.766 ERROR lxc_conf - conf.c:send_fd:3794 - Too many references: cannot splice - Error sending tty fd to parent
lxc-start 20170413215245.766 ERROR lxc_conf -...
I'm periodically having issues with LXC containers crashing the host node.
The errors on the node are the classic nmi_watchdog "stuck" messages, and I believe that so far I was treating the symptom instead of the cause.
Today I had a very interesting "customer". His container was using 100% of his CPU (1...
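If throttling rather than blocking is acceptable, I believe the container can be capped without a restart (105 and the 1-core limit are example values):
pct set 105 -cpulimit 1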