This issue has been resolved. Our solution was to restart the server, and that is probably your bug too. This kind of problem can occur after more than a year of uptime; early releases such as 2.1 had similar issues. So the restart took care of it.
:eek:
We want to back up the data in the currently failed LXC and then restart the server or upgrade the PVE version. How can I do that? There are some risky situations involved, so we don't dare to proceed.
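A minimal sketch of how that backup could be taken before touching the node, assuming vzdump is available and the CT can be stopped; the CTID 102 and the storage name backupstore are placeholders:

# Stop-mode backup of the failed CT to a named storage;
# 102 and backupstore are placeholders, adjust to your setup.
vzdump 102 --mode stop --storage backupstore --compress lzo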
Here are the questions:
It has been running for more than 400 days.
Today the KVM guests are normal but the LXC containers cannot be started. After logging in to the web management interface, the status display is all "?".
I restarted the relevant services with the following commands; after logging back in, the web management interface shows...
Sorry, my system is Proxmox VE 4.3.
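The exact commands above were cut off, but here is a typical sketch of the restarts that clear the all-"?" status display on PVE 4.x, assuming the stuck daemon is pvestatd (an assumption, not the original poster's commands):

# pvestatd feeds the status shown in the web UI; restarting it
# (and the web daemons) usually brings the "?" icons back to normal.
systemctl restart pvestatd
systemctl restart pvedaemon
systemctl restart pveproxy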
I would like to set up a VPN in an LXC container, which needs iptables_nat, but the system reports that it is not supported.
What do I need to do so that the system supports it? Thanks.
LXC CT system: CentOS 7
Version: Proxmox 2.1
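A sketch of what typically has to happen on the Proxmox host before NAT and a VPN tun device work inside an LXC CT; the module names are standard Linux, but the CTID 100 and the config lines are assumptions to adapt, not a confirmed recipe:

# Containers share the host kernel, so the NAT modules must be loaded on the host:
modprobe ip_tables
modprobe iptable_nat
echo iptable_nat >> /etc/modules    # persist across reboots

# Let the CT use /dev/net/tun for the VPN; add to /etc/pve/lxc/100.conf
# (100 is a placeholder CTID):
# lxc.cgroup.devices.allow: c 10:200 rwm
# lxc.mount.entry: /dev/net/tun dev/net/tun none bind,create=file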
I have 7 servers in a cluster. One day 6 of the nodes went down; the only server left is not the master.
I can connect to this server fine, but I can't modify anything on it.
Example:
adding an IP address
prompts: ovz: command 'vzctl --skiplock set 105 --ipadd 1.1.1.11 --save' failed: exit code...
Proxmox VE version 3.2-1/1933730b
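With 6 of 7 nodes down, the surviving node has lost quorum, so the cluster filesystem /etc/pve is read-only; that is why the vzctl ... --save call fails. A sketch of the usual workaround on the surviving node, to be used only when the other nodes really are gone for good:

# Tell the cluster stack to expect a single vote, which restores quorum
# and makes /etc/pve writable again:
pvecm expected 1

# Verify:
pvecm status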
I created a new CT, but I can't get a console shell.
starting udev: cp: cannot create special file '/dev/console'
cp: cannot create special file '/dev/core':
....
....
INIT: no more processes left in this runlevel
Why can't I get a command shell?
But I can...
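The cp errors mean the CT cannot create device nodes while booting, so /dev/console never exists and no shell can attach to the console. A sketch of one commonly suggested fix for an OpenVZ/CentOS CT, assuming the missing device nodes are the only problem (the CTID 101 is a placeholder):

# From the host, enter the CT directly (this bypasses the broken console):
vzctl enter 101

# Inside the CT, recreate the missing device nodes:
mknod -m 622 /dev/console c 5 1
mknod -m 666 /dev/null c 1 3

If udev itself keeps failing on every boot, disabling the udev service inside the CT is the other fix commonly suggested for this symptom.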
VM console:
Error starting worker failed: unable to parse worker upid 'UPID:hkvps01:0009A363:100487742:525CD10F:vncproxy:115:root@pam:' (500)
Why? How do I solve this?
My Proxmox is v2.1.
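This looks like the long-uptime problem mentioned in the reply above: the fourth UPID field ('100487742') is the process start time in kernel ticks, and after roughly 497 days of uptime (2^32 ticks at 100 Hz) it grows to 9 hex digits, which the 8-digit pattern in old releases can no longer parse. A sketch of the check and the workaround, assuming a reboot is acceptable:

# Confirm the node has been up for well over a year:
uptime

# A reboot resets the tick counter, so new UPIDs fit the
# 8-hex-digit pattern again for roughly another 497 days:
reboot

Upgrading to a release whose parser accepts the longer field is the permanent fix; the reboot only postpones the problem.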
Hello, my node hvm009 is running fine, and KVM guests on this node run fine, but when I migrate KVM guests from other nodes to this node I get an error.
ERROR: Failed to sync data - mount error: mount.nfs: Unknown error 16384
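mount.nfs error 16384 is not a standard errno, so the first step is usually to take Proxmox out of the picture and try the mount by hand on the destination node; the server name and export path below are placeholders:

# Try the storage mount manually (nfs-server:/export/backup is a placeholder):
mkdir -p /mnt/nfstest
mount -t nfs -o vers=3 nfs-server:/export/backup /mnt/nfstest

# If that fails too, check what the server exports and whether it answers:
showmount -e nfs-server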