Yes, I did not wait long enough... Connection timed out. But ping and telnet to the server's port 8007 succeed from the CLI.
~# time pvesm status --storage bu1
proxmox-backup-client failed: Error: error trying to connect: the handshake failed: Connection timed out (os error 110)
Name Type Status...
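Since plain TCP connect works but the TLS handshake times out, the handshake itself can be tested directly with openssl (the hostname is a placeholder):
openssl s_client -connect backup-server:8007
If this stalls too, something between the nodes may be dropping the larger handshake packets (a firewall or MTU problem, for example).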
I have a similar problem. I had 3 Docker containers on this VM running a Java application and all went well, but after starting the 4th Java container all CPUs jumped to 100% (this one process uses them all). strace on this process "hangs" in FUTEX_WAIT. Normally this Java process just idles.
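For what it's worth, this is roughly how I attached strace (the PID is just an example):
strace -f -e trace=futex -p 12345
With -f it follows all the Java threads, and they just sit in FUTEX_WAIT.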
I have a vague memory that...
Thank you for putting me on the right track... This is how I fixed the problem:
mv /var/lib/dpkg/info/pve-manager.postinst /tmp/   # move the failing postinst scripts aside
mv /var/lib/dpkg/info/proxmox-ve.postinst /tmp/
apt-get install proxmox-ve                          # now the install can finish
mv /tmp/*postinst /var/lib/dpkg/info/               # put the scripts back afterwards
The same problem still exists... I'm trying to clone a running VM to another node with an LVM target (command sketched below the version list)...
proxmox-ve-2.6.32: 3.4-156 (running kernel: 2.6.32-39-pve)
pve-manager: 3.4-6 (running version: 3.4-6/102d4547)
pve-kernel-2.6.32-32-pve: 2.6.32-136
pve-kernel-2.6.32-39-pve: 2.6.32-156
lvm2: 2.02.98-pve4...
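For reference, this is roughly the clone I'm attempting (the VMIDs, node and storage names are placeholders):
qm clone 100 200 --target NODE2 --storage lvm-target --full
As far as I understand, --full is needed here because LVM storage can't hold linked qcow2 clones.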
My Proxmox cluster uses NFS storage. Sometimes the NFS server fails and the VMs remount their disks as read-only.
I'm trying to avoid this by adding werror=stop,rerror=stop to the VM config like this:
ide0: nfs:101/vm-101-disk-1.qcow2,format=qcow2,rerror=stop,werror=stop,size=10G
After this, when NFS fails, it tries to...
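If I understand these options correctly, stop makes QEMU pause the VM on an I/O error instead of letting the guest see it, and once the NFS server is back the VM should be resumable with:
qm resume 101
(101 being this VM's ID.)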
Yes, I tried this already, but it still works like that in the Proxmox 2.6 version... The HTTPS page is working well, but when the Java VNC console starts it loads the self-signed "default" www.proxmox.com certificate somehow from somewhere...
I just changed to an official GoDaddy certificate on my nodes yesterday, and today it took a while to figure out what was wrong. Thanks to a Java update :) The GUI is working great with the GoDaddy certificate, but Java somehow finds the self-signed "www.proxmox.com" certificate. I can't figure out how to change it to use...
I'm having this very same problem on all 4 of my nodes. It appeared right after I lost the NFS mountpoint for my backup and template storage.
Rebooting the node is not a very good option.
:~# /etc/init.d/rgmanager start
Starting Cluster Service Manager: [FAILED]
:~# tail /var/log/syslog
Jan 8 09:46:06 PXX kernel...
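The dead mountpoint itself can usually be cleared without a reboot using a lazy forced unmount (the storage name here is just an example):
umount -f -l /mnt/pve/backup-nfs
That should at least stop processes from hanging on the stale mount.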
Problem solved...
In Ubuntu's nfs-kernel-server there is a default option: RPCMOUNTDOPTS=--manage-gids
Thanks to https://xkyle.com/solving-the-nfs-16-group-limit-problem/comment-page-1/#comment-5294
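For anyone else hitting this: the option lives in /etc/default/nfs-kernel-server on the NFS server. With --manage-gids, mountd ignores the group list the client sends and looks the user's groups up on the server instead, so groups that only exist inside the container are lost. In my case the fix was removing the option and restarting (assuming you don't rely on it to get past the 16-group limit):
RPCMOUNTDOPTS=""
service nfs-kernel-server restart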
When running containers on an NFS share, user permissions don't work as they should. Users' supplementary groups don't have any effect when permissions are checked. I don't know whether this is an NFS client or server issue. Google does not help me with this one at all.
In the container there is this:
drwxrwx--- 2 clamav clamav...
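A quick way to demonstrate it (the path and the test user are made-up examples; the user only needs clamav as a supplementary group):
id testuser                          # groups include clamav
su testuser -c 'ls /var/lib/clamav'  # Permission denied on the NFS mount anyway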
The folder that already existed was the problem. The reason it existed is that all containers are on separate LVM volumes (the main idea was separate snapshots for every container). Everything worked fine when restoring from backup. We have to re-think the file structure now.
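For anyone else hitting this, the leftover private directory can simply be moved aside before creating or restoring the container (the path and CTID are taken from the error message below):
mv /mnt/pve/nfs-server/private/1086 /mnt/pve/nfs-server/private/1086.old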
I have used NFS storage for my OpenVZ containers and they are working just fine, but a few days ago a problem appeared: I can't create new containers anymore.
All I get is this error message:
Private area already exists in /mnt/pve/nfs-server/private/1086
Creation of container private area...
I was migrating an OpenVZ container (it was stopped) to another node. The log shows what happens...
Both systems:
Linux NODE2 2.6.32-11-pve #1 SMP Wed Apr 11 07:17:05 CEST 2012 x86_64 GNU/Linux
May 03 19:20:55 starting migration of CT 129 to node 'NODE2' (10.10.10.2)
May 03 19:20:55 starting rsync phase...