Finally I've solved my problem: I modified the /etc/pve/storage.cfg file directly on the local servers and removed all references to the defunct NFS storages (stanzas like the example below), and everything is working again. This modification must be done directly on each server; it could not be done over SSH.
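For reference, each NFS storage appears in /etc/pve/storage.cfg as a stanza like the one below (the storage name, server, and paths here are made up for illustration); deleting the whole stanza removes the storage definition:

nfs: dead-nfs
        export /export/vmdata
        path /mnt/pve/dead-nfs
        server 192.168.10.50
        content images,iso
        options vers=3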
Thanks for your help.
The hardware is OK; the whole problem started when I tried to connect to a new NFS storage that had problems. That NFS storage is now broken, and I think that is what caused the problem with the Proxmox cluster.
After a forced reboot, one guest doesn't want to start. I get the following error:
Failed to create message: Input/output error
TASK ERROR: start failed: command '/usr/bin/systemd-run --scope --slice qemu --unit 108 -p 'KillMode=none' -p 'CPUShares=1000' /usr/bin/kvm -id 108 -chardev...
BIG PROBLEM - Now I've lost connection to the guest machines:
# systemctl status pve-cluster.service
● pve-cluster.service - The Proxmox VE cluster filesystem
   Loaded: loaded (/lib/systemd/system/pve-cluster.service; enabled)
   Active: failed (Result: signal) since Fri 2016-08-05 11:34:56 CLT; 5s...
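Since pve-cluster (pmxcfs) has died, here is a recovery sketch, assuming a stale NFS mount is what is blocking it (the mount path below is hypothetical):

# journalctl -u pve-cluster -n 50   # see why pmxcfs exited
# mount | grep nfs                  # look for stale NFS mounts
# umount -f -l /mnt/pve/dead-nfs    # force a lazy unmount of the dead share
# systemctl restart pve-cluster     # try to bring the cluster filesystem back
# systemctl status pve-cluster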
Hi, thanks for your answer. What I mean is that no Proxmox command answers, for example:
pveproxy --debug ---> waits forever, no answer
pvecm status ---> waits forever, no answer
pvecm nodes ---> waits forever, no answer
apt install -f works
Now...
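When every pve command hangs like this, it is worth checking whether the pmxcfs daemon and its /etc/pve mount are still alive; a diagnostic sketch (this assumes pmxcfs is the culprit, which the later posts seem to confirm):

# ps aux | grep [p]mxcfs   # is the cluster filesystem daemon running?
# mount | grep /etc/pve    # /etc/pve should show up as a fuse mount
# ls /etc/pve              # hangs or returns I/O errors if pmxcfs is stuck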
Hi,
I've lost the web GUI. I have access via SSH, but any pve command I try to run never gets an answer. All guest machines are working, but I cannot do anything else.
I'm running PVE version 4.1.
Thanks for your help
Hi, I have the same problem. I just installed Proxmox on 7 IBM Flex x240 nodes; 5 of them work perfectly, but on two that are a little older I have this problem. Please forgive my ignorance, but as I can't boot I don't know how to modify grub.cfg.
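If the node won't boot, kernel parameters can be added once from the GRUB menu instead of editing grub.cfg directly: highlight the Proxmox entry, press 'e', append the parameter to the end of the line starting with 'linux', then press Ctrl+X to boot with it. Once the node is up, the change can be made permanent; a sketch, where <parameter> stands for whatever option was suggested above:

# nano /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet <parameter>"
# update-grub
# reboot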
The switches support LACP and I will not use IPv6. In that case, do you think it is safe to use active-active, or is it still not safe?
Thanks again for your valuable help.
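For what it's worth, with LACP-capable switches the bond would normally be declared in /etc/network/interfaces with mode 802.3ad; a minimal sketch, assuming the two storage NICs are eth2 and eth3 (names are hypothetical):

auto bond0
iface bond0 inet manual
        slaves eth2 eth3
        bond_mode 802.3ad               # LACP; the switch ports must be configured to match
        bond_miimon 100
        bond_xmit_hash_policy layer3+4  # spread flows across links by IP and port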
Hi Mr Holmes,
Many thanks for your comments. So active-active bonding is not a safe configuration for storage? Since my interfaces are only 1G and the switches support LACP and layer 3, I thought it could be a good idea to improve bandwidth; load-balance bonding of two or more interfaces could help, but if...
I have 7 physical servers limited to 4 1G NICs each. Besides, I have received a storage array that supports iSCSI or NFS; it has 2 controllers with 4 1G NICs on each controller. To create an HA cluster with Proxmox 4, I understand that I'll need three different networks:
Network for...
Thanks for your comments.
So you recommend not bonding all NICs, since it is not safe? And instead use, for example, NIC1 for cluster communication, NIC2 for production (user access), and bond NIC3 and NIC4 for storage, for higher speed.
For bonding, is it a good idea to use active-active...
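A sketch of that NIC split in /etc/network/interfaces, assuming interface names eth0-eth3 and made-up addresses:

# NIC1: cluster (corosync) network
auto eth0
iface eth0 inet static
        address 10.10.10.11
        netmask 255.255.255.0

# NIC2: production bridge for guests and user access
iface eth1 inet manual
auto vmbr0
iface vmbr0 inet static
        address 192.168.1.11
        netmask 255.255.255.0
        gateway 192.168.1.1
        bridge_ports eth1
        bridge_stp off
        bridge_fd 0

# NIC3+NIC4: bonded storage network
auto bond0
iface bond0 inet static
        address 10.10.20.11
        netmask 255.255.255.0
        slaves eth2 eth3
        bond_mode 802.3ad
        bond_miimon 100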
Each x240 node has 4 1G NICs. Is it a good idea to create a bond with these 4 NICs, and then create two different VLANs on this bond, one VLAN for storage and the other for production? Will I need another VLAN for cluster communication?
So what's the recommended configuration for networking for...
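For comparison, the single-bond variant with VLANs would look roughly like this (VLAN IDs 10/20 and addresses are invented, and the vlan package must be installed); note that cluster (corosync) traffic is often kept on a dedicated link rather than on a shared bond:

auto bond0
iface bond0 inet manual
        slaves eth0 eth1 eth2 eth3
        bond_mode 802.3ad
        bond_miimon 100

# VLAN 10 on the bond: storage
auto bond0.10
iface bond0.10 inet static
        address 10.10.20.11
        netmask 255.255.255.0

# VLAN 20 on the bond, bridged: production / guest traffic
auto vmbr0
iface vmbr0 inet static
        address 192.168.1.11
        netmask 255.255.255.0
        gateway 192.168.1.1
        bridge_ports bond0.20
        bridge_stp off
        bridge_fd 0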
I've received a donation of a Lenovo Flex chassis with 7 nodes; each node has 4 1G NICs, connected to 2 EN2092 (Layer 3) switches. Besides, we received a VNX5200 storage array with 2 controllers with 4 1G NICs each. I've been using Proxmox for some time but never used HA; with this little monster I'd...