This issue has been resolved. Our fix was to restart the server, which is probably what your bug needs as well. This kind of problem can occur after more than a year of continuous operation; earlier releases such as 2.1 had similar issues. So the restart took care of it.
We want to back up the data in the currently failed LXC container and then restart the server or upgrade the PVE version. How can we do that? There are some risky steps involved, so we don't dare to proceed.
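A minimal sketch of backing up the container before touching the host, using vzdump. The CT id 105 and the storage name "local" are examples only; check `vzdump --help` on your own host, since options vary between PVE versions. The script only runs vzdump when it is actually present, and otherwise just prints the command for review:

```shell
# Hypothetical CT id; substitute your own (see `pct list` / `vzlist`).
ctid=105
if command -v vzdump >/dev/null; then
  # Stop-mode backup is the safest choice for an already-broken container.
  vzdump "$ctid" --mode stop --compress lzo --storage local
else
  # Not on a PVE host: just show the command to run there.
  echo "run on the PVE host: vzdump $ctid --mode stop --compress lzo --storage local"
fi
# Afterwards, copy the dump file off the host (e.g. with scp) before
# rebooting or upgrading; /var/lib/vz/dump/ is the default dump directory.
```

Keeping a copy of the dump on a different machine is the important part: if the upgrade or reboot goes wrong, you can restore the CT elsewhere.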
Here are the questions:
The host has been running for more than 400 days.
Today the KVM guests are normal, but the LXC containers cannot be started. After logging in to the web management interface, everything is displayed as "?".
I restarted the relevant services with the following commands; after logging back in, the web management interface shows...
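For reference, the PVE services that usually need restarting when the web UI shows "?" everywhere are the status daemon and the API/proxy daemons. The service names below are my assumption for a PVE 4.x host (the original post truncated the actual commands); verify them locally with `systemctl list-units 'pve*'`:

```shell
# Assumed PVE 4.x service names; adjust after checking your host.
pve_services="pvedaemon pveproxy pvestatd"
if systemctl list-unit-files 2>/dev/null | grep -q '^pvedaemon'; then
  # On a real PVE host: restart the web UI / status services.
  for s in $pve_services; do systemctl restart "$s"; done
else
  # Not a PVE host (dry run): just print what would be executed.
  for s in $pve_services; do echo "systemctl restart $s"; done
fi
```

If the "?" state comes back after a restart, that usually points at pvestatd hanging again rather than a UI problem.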
Sorry, my system is Proxmox VE 4.3.
I would like to set up a VPN inside an LXC container, which requires iptables NAT, but the system reports that this is not supported.
What do I need to do to make the system support it? Thanks.
LXC CT system: CentOS 7
Version: Proxmox 2.1
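One common cause is that the NAT netfilter modules are simply not loaded on the host, so the container sees "not supported". A hedged sketch, to be run on the PVE host (the module names are my assumption; verify with `modinfo`, and note that on the old OpenVZ-based PVE 2.x releases the container must additionally be granted iptables capabilities in its config):

```shell
# Assumed module names for iptables NAT support; verify with modinfo.
nat_modules="iptable_nat nf_nat"
for m in $nat_modules; do
  if [ -d /sys/module/"$m" ]; then
    echo "$m already loaded"
  else
    # Needs root on the host; fall back to printing the command.
    modprobe "$m" 2>/dev/null || echo "run as root on the host: modprobe $m"
  fi
done
# To persist across reboots, list the module names in /etc/modules,
# one per line.
```

After loading the modules, restart the container and test `iptables -t nat -L` inside it.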
I have 7 servers in a cluster. One day, 6 of the nodes went down, and the only remaining server has no quorum (it is not the master).
I can still connect to this server, but I can't modify anything on it.
Example:
adding an IP address gives this error:
ovz: command 'vzctl --skiplock set 105 --ipadd 1.1.1.11 --save' failed: exit code...
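That symptom fits a quorum problem: with 6 of 7 nodes gone, the survivor refuses config changes. A common workaround is to temporarily lower the expected vote count on the surviving node so it regains quorum by itself; a sketch (verify the exact syntax for your PVE version, and undo this once the other nodes are back):

```shell
# Tell the cluster stack that a single vote is enough (temporary measure).
expected_votes=1
if command -v pvecm >/dev/null; then
  pvecm expected "$expected_votes"
  pvecm status   # confirm that quorum is now reported
else
  # Not on a PVE node: just print the command to run there.
  echo "run on the surviving PVE node: pvecm expected $expected_votes"
fi
```

Be careful: while expected votes is lowered, nothing protects you from split-brain if the other nodes come back with diverged configs, so bring them back one at a time.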