After upgrading to Proxmox 6, I have a problem with the noVNC console from a web browser (Chrome and Firefox tested). After some time I cannot connect the noVNC console to a VM; it fails with the error "failed to connect to server". After a shutdown and start of the VM everything works OK again - for some time. The same problem is on VM...
I have to replace a quorum disk which is connected through iSCSI.
Is it OK to do the following on a running system:
- disconnect the old iSCSI-mapped qdisk (take the quorum disk offline)
- attach the new iSCSI location (the new disk)
- run mkqdisk with the same label
Is there any better/safer solution?
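Roughly what I have in mind, as a sketch (the target names, portal IPs, device path and the "qdisk" label are just placeholders for my setup):

# take the old quorum disk offline by logging out of its iSCSI session
iscsiadm -m node -T iqn.2001-04.com.example:old-qdisk -p 192.168.1.10 --logout
# log in to the new iSCSI target
iscsiadm -m node -T iqn.2001-04.com.example:new-qdisk -p 192.168.1.11 --login
# create the quorum disk on the new LUN, keeping the same label as before
mkqdisk -c /dev/sdX -l qdisk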
Thanks for any reply
I installed a Proxmox cluster following https://pve.proxmox.com/wiki/Two-Node_High_Availability_Cluster
Everything works great.
But I have a problem with network communication between guests on different nodes (between guests on the same node everything is OK).
I am using a bridged interface:
iface vmbr0 inet static...
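For reference, a full bridged setup typically looks something like this (the addresses and eth0 here are only an example, not my real config):

auto vmbr0
iface vmbr0 inet static
        address 192.168.1.2
        netmask 255.255.255.0
        gateway 192.168.1.1
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0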
I tried the new kernel pve-kernel-3.10.0-1-pve_3.10.0-5_amd64.deb with an Areca ARC-1680.
But the kernel freezes when booting: it finds the controller but no partitions on it (photo attached - https://www.dropbox.com/s/auf9p1mbb5x2vkf/IMG_20140228_205506.jpg). Then the kernel reports a HW timeout.
The same Areca driver...
I upgraded to 3.0 RC2 following http://pve.proxmox.com/wiki/Upgrade_from_2.3_to_3.0
Everything works, but when the host shuts down, the VMs don't go through a shutdown process and are hard-killed (like a power-off).
I can shut down a VM from the web interface, directly from the guest OS, or from the console (system_powerdown) - that works OK...
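For example, both of these work fine for me (100 is just an example VMID):

qm shutdown 100      # clean shutdown via the Proxmox CLI
qm monitor 100       # or open the QEMU monitor for the guest...
system_powerdown     # ...and send an ACPI power-down event from there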
I want to use it like this: when the Proxmox host reboots, all KVM guests are saved, and after the Proxmox host comes back up, all KVM guests are resumed (without rebooting the guest OS).
Is there any solution to suspend the KVM guests, reboot Proxmox, and resume the KVM guests?
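Conceptually, something like this (100 is just an example VMID, and I am not sure qm can actually keep the suspended state across a host reboot):

qm suspend 100    # save the guest state before the host reboot
reboot            # reboot the Proxmox host
qm resume 100     # continue the guest where it left off, without rebooting the guest OS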
Thanks for any reply
I plan to use Proxmox with HA + DRBD + cLVM on two nodes in PRIMARY/PRIMARY mode. Each node has only one LAN port.
Thanks for the great manual here: http://pve.proxmox.com/wiki/DRBD
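My planned DRBD resource config follows the wiki roughly like this (node names, IP addresses and the backing device are placeholders, not my real values):

resource r0 {
        protocol C;
        startup {
                become-primary-on both;
        }
        net {
                allow-two-primaries;
                after-sb-0pri discard-zero-changes;
                after-sb-1pri discard-secondary;
                after-sb-2pri disconnect;
        }
        on nodeA {
                device /dev/drbd0;
                disk /dev/sdb1;
                address 10.0.0.1:7788;
                meta-disk internal;
        }
        on nodeB {
                device /dev/drbd0;
                disk /dev/sdb1;
                address 10.0.0.2:7788;
                meta-disk internal;
        }
}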
But how would the cluster behave in this situation?
Situation: both nodes keep running correctly, but the communication link between...