pve-4.2 add node fails

Hello, friend wolfang.
Usually, when I add a new node to the cluster, the other nodes restart.
The current storage is FreeNAS over Fibre Channel, but I will be changing this setup to Proxmox + DRBD + Fibre Channel.

root@prox-r1-s1:~# pvecm status
Quorum information
------------------
Date: Thu May 19 11:31:10 2016
Quorum provider: corosync_votequorum
Nodes: 6
Node ID: 0x00000001
Ring ID: 15324
Quorate: Yes

Votequorum information
----------------------
Expected votes: 7
Highest expected: 7
Total votes: 6
Quorum: 4
Flags: Quorate

Membership information
----------------------
Nodeid Votes Name
0x00000001 1 192.168.1.1 (local)
0x00000002 1 192.168.1.2
0x00000003 1 192.168.1.3
0x00000004 1 192.168.1.4
0x00000005 1 192.168.1.5
0x00000006 1 192.168.1.6
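The mismatch in the output above (Expected votes: 7, Total votes: 6) suggests a seventh node is registered in the configuration but never finished joining. Two hedged recovery sketches, assuming the seventh node is not coming back (the node name below is hypothetical):

```shell
# Tell votequorum the real cluster size, so quorum is computed
# from 6 votes instead of 7:
pvecm expected 6

# Or, if the failed node should be removed from the cluster
# entirely (node name "prox-r1-s7" is an assumption):
pvecm delnode prox-r1-s7
```

Both commands must be run on a node that is part of the quorate partition.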

I see there are new updates available today. I will update now.
 
Nodeid Votes Name
0x00000001 1 192.168.1.1 (local)
0x00000002 1 192.168.1.2
0x00000003 1 192.168.1.3
0x00000004 1 192.168.1.4
0x00000005 1 192.168.1.5
0x00000006 1 192.168.1.6

Looks like you have a misconfiguration.
Please post your /etc/pve/corosync.conf.
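For reference, a PVE 4.x /etc/pve/corosync.conf looks roughly like the sketch below (addresses taken from the pvecm output above; the cluster name is a placeholder). The things to check are that every node has a unique nodeid and the correct ring0_addr, that the new node actually appears in the nodelist, and that config_version is bumped whenever the file is edited:

```
nodelist {
  node {
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 192.168.1.1
  }
  # ... one node block per member; the new node must appear here too
}

quorum {
  provider: corosync_votequorum
}

totem {
  cluster_name: mycluster   # placeholder name
  config_version: 7         # must increase on every change
  ...
}
```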
 
I found the problem.
When a VM loses its connection to the storage, it loops at boot time, unable to find its disk, and then powers off.

A VM that was in full operation powers off quickly.

Can I set some kind of timeout using virtio?
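One hedged possibility, assuming a Linux guest with the disk attached via virtio-scsi (virtio-blk exposes no such knob; the disk then shows up as /dev/sdX inside the guest): raise the SCSI command timeout so a short storage outage does not immediately fail the disk.

```shell
# Inside the guest: raise the SCSI command timeout for /dev/sda
# to 180 seconds (device name is an assumption):
echo 180 > /sys/block/sda/device/timeout

# To make it persistent across reboots, a udev rule
# (file path is an assumption), e.g. in
# /etc/udev/rules.d/99-scsi-timeout.rules:
# ACTION=="add", SUBSYSTEM=="block", KERNEL=="sd*", \
#   ATTR{device/timeout}="180"
```

This only delays the guest's reaction to the outage; the underlying storage path still needs to recover within that window.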

Thanks, master.
 