Thanks for the answer :).
But in all that time, no cluster node was rebooted; the uptime of all blades stayed the same. The cluster recovered by itself, right after an unrelated neighboring blade was powered on.
I see that the fencing configuration file ("Datacenter/HA") knows about the...
Any ideas? Perhaps there are hidden cluster configuration files, somehow related to fencing or something of the sort, that are not updated when a node is removed from the cluster normally?
PS: the "Datacenter/HA" config is correct.
It's an amazing story.
I had problems with the cluster: https://forum.proxmox.com/threads/quourum-dissolved.34572/
But suddenly, two weeks later, the cluster recovered by itself. Is that possible?! The only thing I did, not long before the cluster recovered, was power on a blade...
Updating the packages will also require rebooting the node, so I cannot avoid rebooting and stopping the virtual machines. How sad...
I have prepared a plan for stopping the virtual machines and started the approval procedure for it. I hope that in a week or two I will be given such...
Thanks for the advice. But after carefully reading the article https://pve.proxmox.com/wiki/Upgrade_from_3.x_to_4.0 on migrating from version 3 to version 4, I found that the necessary conditions are:
1. healthy cluster
2. no VM or CT running (note: VM live migration from 3.4 to 4.x node or...
Hooray! Someone became interested!
So in detail:
IBM BladeCenter H
4 × BladeServer HS23 (LACP to Nortel-IBM switch)
Fibre Channel Storage IBM DS 3520 (LVM Shared Storage)
pveversion -v output:
problem node (cman can't restart)
healthy node (cman restarts correctly)
problem node (cman...
I have a 4-node cluster with FC LVM shared storage. After rebooting one of the nodes, I lost the cluster.
/var/log/cluster/corosync.log and /var/log/cluster/rgmanager.log say 'Quorum dissolved'.
In accordance with recommendations...
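For anyone hitting the same symptom, here is a minimal diagnostic sketch for a PVE 3.x / cman cluster. These are standard PVE 3.x commands, not the exact steps from the linked thread, and they must be run on the cluster nodes themselves:

```shell
# Check quorum and membership as the cluster stack sees it
pvecm status              # expected votes, total votes, quorum state
# rgmanager's view of the nodes and HA services
clustat
# If the node dropped out, try restarting the stack on the problem node
service cman restart
service rgmanager restart
# Last resort while troubleshooting (caution: overrides quorum protection):
# pvecm expected 1
```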
For security reasons, I want to connect a free physical network interface of the server to a virtual machine. How can I do this?
I believe this can be done with "qm set", but so far I have not been able to figure out how.
Perhaps someone has already encountered this problem.
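As a sketch of the two usual approaches on Proxmox (the VM ID 100, interface eth2, bridge vmbr2, and PCI address 01:00.0 below are hypothetical placeholders, not taken from this setup):

```shell
# Approach 1: dedicate a host bridge to the spare NIC, then point the VM at it.
# In /etc/network/interfaces on the host:
#   auto vmbr2
#   iface vmbr2 inet manual
#       bridge_ports eth2
#       bridge_stp off
#       bridge_fd 0
# Then attach a virtual NIC of VM 100 to that bridge:
qm set 100 -net1 virtio,bridge=vmbr2

# Approach 2: PCI passthrough of the NIC itself (requires IOMMU support
# enabled on the host; find the real PCI address with lspci):
qm set 100 -hostpci0 01:00.0
```

Approach 1 keeps live migration possible; approach 2 hands the VM the raw device but ties it to that host.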
There is an IBM DS3521 storage system, connected over multipath FC (8G) to the IBM BladeCenter H and presented to the blades as a shared drive.
The task is to prepare this shared storage for the blades: it will hold LVM volumes serving as hard drives for the virtual machines (Proxmox).
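The usual sequence is roughly the following (the device name /dev/mapper/mpath0, the volume group name vg_shared, and the storage ID fc-lvm are assumptions; check the real multipath device with `multipath -ll`):

```shell
# Initialize the multipath device as an LVM physical volume
pvcreate /dev/mapper/mpath0
# Create a volume group on top of it
vgcreate vg_shared /dev/mapper/mpath0
# Register the VG in Proxmox as shared LVM storage for VM disk images
pvesm add lvm fc-lvm --vgname vg_shared --shared 1 --content images
```

With `--shared 1` every node in the cluster sees the same volume group, which is what allows VM migration between blades.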
Thank you, udo :-)! The unnecessary disk has been successfully removed and the space released :-)!
Here is the output of 'ls -lsa /dev/mapper/':
root@vnode3:~# ls -lsa /dev/mapper/
0 drwxr-xr-x 2 root root 620 Jul 16 16:45 .
0 drwxr-xr-x 18 root root 7040 Jul...
Thank you for responding, udo.
Here is the requested output of these commands:
root@vnode3:~# pvs
/dev/sdd: read failed after 0 of 4096 at 0: Input/output error
/dev/sdd: read failed after 0 of 4096 at 4804611866624: Input/output error
/dev/sdd: read failed after 0 of...
Thank you, manu, for your replies:).
Unfortunately, I cannot remove virtual machine 138 now; it is almost "production".
Maybe there is some other way? After all, machine 138 does not use vm-138-disk-1.raw.
I interrupted a cloning operation at about 70%, and virtual machine 138 did not exist afterwards. However, the disk vm-138-disk-1.raw appeared and it was impossible to remove. I left the situation alone for a while and forgot about it.
I recently created virtual machine 138, and found that...