I had to reboot the last node, too, and everything is OK now.
TWIMC:
After rebooting one node, it seems that the remaining vzdump jobs start executing again. There was one job that did not complete as expected, identified by
root@node2:~# pct list
VMID Status...
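In case someone runs into the same thing: a container that was interrupted mid-backup can keep its backup lock. This is roughly what I would check (the VMID 101 is only an example, not my actual container):
root@node2:~# pct list                 # the Lock column shows e.g. "backup" for the stuck container
root@node2:~# pct unlock 101           # clear the stale lock on the affected container
root@node2:~# ps aux | grep [v]zdump   # make sure no leftover vzdump process is still running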
I've rebooted one node. /etc/pve is available, but it does not contain the last change I made.
It currently does not start the VMs.
They did not start because a storage, which I had removed via the frontend, was still present in
/etc/pve/storage.cfg
I've removed the storage from...
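For reference, such a leftover NFS entry in /etc/pve/storage.cfg looks roughly like this (storage name, server and export path here are placeholders, not my real values):
nfs: old-backup-store
        server 10.11.12.99
        export /export/backup
        path /mnt/pve/old-backup-store
        content backup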
node3 has 2 quorum_votes because I started with a two-node cluster some time ago.
The remaining 3 nodes are still quorate, but only one of them has an /etc/pve that is read-writable.
On 2 nodes /etc/pve is not accessible anymore. The containers are running, but e.g. pct list hangs.
I've...
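For completeness, quorum can be checked on one of the remaining nodes with:
root@node2:~# pvecm status
The "Quorate:" line should say "Yes", and comparing "Expected votes" with "Total votes" shows how many votes the offline node is missing.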
I don't think reinstalling the missing node will help. The missing node was not detached (via pvecm remove), it just went down. The bindnetaddr: 10.11.12.1 is the IP of the offline node. I thought it would be easiest to reboot node3. But it might be that you have to make a BIOS configuration...
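For context, the relevant part of the totem section in /etc/pve/corosync.conf looks roughly like this (a sketch; only the bindnetaddr value is from my setup):
totem {
  version: 2
  interface {
    ringnumber: 0
    bindnetaddr: 10.11.12.1
  }
}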
I have a problem with a 4-node Proxmox 4.4 cluster. On one node the board was changed, and since then the system doesn't boot from its ZFS root partition anymore. The remaining nodes can only read from /etc/pve.
I don't need the cluster feature. I wanted to break the cluster and build a new one with...
You have to use the latest pve-container.
I'm using pve-container 2.0-29.
To use the nested feature within a container, I had to make sure that nested virtualization is available:
echo "options kvm-intel nested=Y" > /etc/modprobe.d/kvm-intel.conf
modprobe -r kvm_intel
modprobe kvm_intel (or...
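To double-check that the parameter actually took effect after reloading the module (Intel hardware assumed, as above), this should print Y:
cat /sys/module/kvm_intel/parameters/nested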
Yes, vmbr3 cannot talk to the outside world.
I assume that I have to use routing if I have only one physical device. I've added an IP of my /27 net to vmbr3. I can ping a container on vmbr3 from the host. I can see a ping from the outside world to the container on the eth1 of the...
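For what it's worth, my current attempt at a routed setup looks roughly like this in /etc/network/interfaces (a sketch; the 46.0.0.x addresses stand in for my /27 block, and vmbr3 has no physical port attached):
auto vmbr3
iface vmbr3 inet static
        address  46.0.0.1
        netmask  255.255.255.224
        bridge_ports none
        bridge_stp off
        bridge_fd 0
In addition, net.ipv4.ip_forward=1 has to be set on the host for routing to work.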
I'm running a Proxmox 4.4 cluster in an OVH vRack. Public IPs are assigned to the nodes on eth0; an IP block is assigned to the vRack, which reaches the nodes on eth1.
Everything works fine.
I've added a second network to the vRack, which reaches eth1, and I have the problem that I can't use...
I really double-checked everything before opening the issue. But in the end it was a typo in a firewall rule.
The NFS share is now working as expected.
Regards
Carsten
The showmount command takes 2 minutes.
root@node2:~# time showmount -e node3
Export list for node3:
/var/lib/vz/nfs node3,node2
real 2m7.323s
user 0m0.000s
sys 0m0.004s
If I strace the command, it seems to hang while connecting to the NFS server on the port, which can be configured in...
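To see which ports the NFS server actually registers (and which therefore have to be reachable from the other node), rpcinfo helps:
root@node2:~# rpcinfo -p node3
mountd is the service showmount talks to; its port is the one to compare against the firewall rules.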
Hi,
I'm running a 2-node Proxmox 4.4 cluster at OVH. The NFS server is running on node3. The share can be mounted on both nodes, but the storage is only available on node3 (where the NFS server is running locally).
I noticed that a "pvesm nfsscan" on node2 takes much more time than on node3...
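The comparison is simply running the same scan on both nodes and looking at the runtime, something like:
root@node2:~# time pvesm nfsscan node3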