Hi, I have a two-node Proxmox cluster at OVH with sheepdog. Everything works as expected, but I have twice had a network problem inside my vRack at OVH, which led to a disconnection between the nodes.
Corosync reports the new cluster configuration with only one member, and so does sheepdog, but...
I think it is better to have a dual-NIC server at OVH, one public and one inside the vRack. We configured a three-node cluster with 10 Gb/s NICs on the vRack and it works perfectly with sheepdog too.
Hi, we did that. We configured the vRack on the second Proxmox NIC, and configured corosync to use this second interface by changing /etc/hosts to map the Proxmox hostnames to the vRack IPs. As cross-datacenter networking has latency, we decided to use unicast UDP instead of multicast for cluster communication.
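As a rough illustration of the unicast setup described above (the hostnames and subnet are examples, not our real ones), the /etc/hosts mapping plus corosync's udpu transport would look something like this:

```
# /etc/hosts -- map cluster hostnames to the vRack IPs (example addresses)
10.0.0.1    pve1
10.0.0.2    pve2

# corosync totem section -- unicast UDP instead of multicast
totem {
    version: 2
    transport: udpu
}
```

Note that on Proxmox the cluster configuration is generated, so edits like this may be overwritten unless done through the supported configuration mechanism.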
Hi, in the previous version (1.9) ISOs were replicated between servers. I think it would be useful to have the same in the latest version, perhaps discriminating using the 'shared' flag already present.
Hi, I had a look at the Vagrant tool (www.vagrantup.com), which is an interesting project for deploying VMs in a very easy fashion. I think it would be interesting to build a Proxmox provider plugin that could use the new API.
Anyone interested?
The question has been asked on the sheepdog ML, and the answer is to modify the corosync configuration so that totem listens on the desired IP address/interface, but on Proxmox this is not possible as the cluster configuration is managed differently. Even changing the cluster config, I could not find a way to have corosync...
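For reference, what the sheepdog ML answer amounts to is binding totem to a specific network via the interface section of corosync.conf; a minimal sketch (the subnet is an example):

```
totem {
    version: 2
    interface {
        ringnumber: 0
        bindnetaddr: 10.0.0.0    # network of the NIC totem should bind to
    }
}
```

On Proxmox this file is managed by the cluster stack, which is exactly why applying this by hand does not stick.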
Hi, I'm experimenting with sheepdog and I would like to know whether it is possible to redirect sheepdog traffic over a dedicated NIC. Sheepdog relies on corosync, and I think all cluster traffic should go over the same NIC, or not?
Hi, I have a CentOS 6.2 VM with the RT support-ticket system in it, installed with LVM, inside Proxmox, and it suffers from filesystem corruption after a Proxmox backup (which uses LVM snapshots backed by iSCSI). No other VM on the same node suffers from corruption, only the one using LVM. No idea why, but I've disabled...
That's for HA, but it would be useful to have a maintenance mode where all VMs/CTs, including those not HA-enabled, get migrated. I'm thinking of something like the current dialog, but once more than one VM is selected, a loop that creates one task per VM; that way it should be simple to implement, I think.
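The per-VM loop described above could be sketched in shell using the standard `qm migrate` command; this is a dry run that only prints the commands it would issue (the target node and VM IDs are examples; drop the `echo` to actually migrate):

```shell
#!/bin/sh
# Migrate a list of VMs to a target node, one task per VM.
# Dry run: echo prints each command instead of executing it.
TARGET=node2                    # example target node name
for vmid in 101 102 103; do     # example VM IDs
    echo qm migrate "$vmid" "$TARGET" --online
done
```

A real implementation would read the VM list from the selection in the GUI and submit each `qm migrate` as a separate task, matching the one-task-per-VM idea.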
Hi,
I think it would be useful to be able to live migrate more than one VM in one step: just select multiple VMs on one node and click migrate to another one.
One little issue I found with live migration is that the select box of available nodes does not take the currently running node into account; this...
Hi,
I found that if I open multiple VNC consoles on different VMs, only the last one opened gets the keyboard. If I switch focus to a previously opened one, I can only use the mouse; no keypresses are sent to the VM.
This has been tested on Mac OS X 10.7 with Safari and Firefox.
One more problem arose today: backup. vzdump reported:
INFO: Starting Backup of VM 123 (qemu)
INFO: status = running
ERROR: Backup of VM 123 failed - no such volume 'SD:vm-123-disk-1'
INFO: Backup job finished with errors
where SD is a sheepdog storage