Discussion in 'Proxmox VE: Installation and configuration' started by martin, Jun 23, 2015.
You can simply run that inside a VM.
I know about the risk of running "two_node" with hardware fence devices, so instead of configuring a hardware fence device I enable manual fencing (fence_ack_manual). This way I can first see what is actually going on and then make a wiser decision about my next actions.
An example of this scenario exists in my mini-lab, and I also know of small companies that run only 2 PVE nodes (just look at how many questions people have asked in this forum about setting up HA with only 2 PVE nodes).
So I would like to request that PVE support two nodes in HA mode with fence devices, or at worst with manual fencing only (I am sure many people besides me would be grateful).
Re-edit: having the option available can't hurt.
But this would add all the virtualization overhead I wanted to avoid.
I think I'll try giving an LXC container with Kubernetes a spin on the 4.0 beta.
From the missing answers to my other questions I conclude that there are no plans to implement any kind of awareness for Docker containers in Proxmox VE for now. Is this true?
I get an error when starting an LXC container after a full server restart.
I use Debian 7.8/8.1 i686 and Ubuntu 14.10 i686 containers that were created from templates of already existing OpenVZ containers from Proxmox 3.
Workaround for me is:
mount /var/lib/vz/images/110/vm-110-rootfs.raw /mnt/tmp; umount /mnt/tmp
After that, container start/stop works until the next full server restart.
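A hedged sketch of how that workaround could be scripted for all containers at once — the images path and the /mnt/tmp mountpoint are assumptions carried over from the command above; adjust if your storage layout differs:

```shell
#!/bin/sh
# Sketch: print the mount/umount workaround for every container rootfs
# found under a given images directory. Assumes the layout
# <imgdir>/<vmid>/vm-<vmid>-rootfs.raw seen in the command above.
workaround_cmds() {
    imgdir="$1"
    for raw in "$imgdir"/*/vm-*-rootfs.raw; do
        [ -e "$raw" ] || continue
        # Mounting and immediately unmounting replays the filesystem
        # journal so the container can start again.
        echo "mount $raw /mnt/tmp && umount $raw"
    done
}

workaround_cmds "${1:-/var/lib/vz/images}"
```

Review the printed commands, then pipe them into a root shell if they look right.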
I also see an error on screen even when the container starts successfully.
That makes HA simply useless - instead, you can start the VM manually.
AFAIK docker people are working on that (see lxc devel list for details).
I get the same error when restoring an LXC backup or a template from an OpenVZ backup.
That's right, but when only a single command needs to be executed, the task is much easier to carry out.
Moreover, since no one can tell when a PVE host will die, a small group of people in the company should be able to perform such a task and be ready to execute it at any time. And in a small business, those people are not always part of the IT department.
So if I have to explain to a group of people outside the IT department which commands must be executed (always checking first that the failed PVE host is powered off), and then expect them to remember all of those steps at the moment they are needed, I believe that is too much to ask of them.
So I would be very thankful if PVE 4.x offered some way to trigger HA manually with only two nodes (obviously with a single command); it would simplify the lives of many people, mine included.
Moreover, many small businesses cannot afford to buy a third server just to get the three quorum votes required by PVE 4.x.
And if you do accept adding this feature to PVE 4.x, I pledge to be a beta tester.
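For reference, corosync's votequorum already has a two-node mode; a minimal sketch of the relevant quorum section of corosync.conf is below. Whether the PVE 4.x HA stack will honor it is exactly the open question here, and the fencing caveats discussed above still apply:

```
quorum {
  provider: corosync_votequorum
  # two_node lets a 2-node cluster keep quorum when one node dies;
  # wait_for_all avoids a split-brain start-up after a full power loss.
  two_node: 1
  wait_for_all: 1
}
```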
Did a deployment of the beta this weekend on top of Jessie 8, because (a) the Proxmox installer CD doesn't seem to be able to do a native UEFI install and (b) it lacks the flexibility to partition my disks. I did use the Proxmox installer first, and there may be a small glitch in its configuration: I chose Belgian-French as the keyboard layout, but after finishing the install the setting was QWERTY (I didn't check which keymap).
Points to look out for:
LXC container -> I did a restore from an OpenVZ backup as mentioned in the wiki, but it fails on the Debian version check. The system was an Ubuntu 12.04 installed from an OpenVZ template, and apparently those templates ship a debian version file containing wheezy/sid instead of a number. After changing that to just '7', the restore ran further. The machine starts and works fine, but it doesn't run the init scripts and doesn't know which runlevel it is in; init 2 fixes some problems, but I still need to run /etc/init.d/networking manually to get my interface up.
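A hedged sketch of the one-line fix described above, wrapped in a small shell function. The wheezy/sid string and the numeric '7' come from the post; the path argument (the container's etc/debian_version inside its rootfs) is up to you:

```shell
#!/bin/sh
# Sketch: replace a codename-style debian_version file ("wheezy/sid")
# with the plain numeric release that the restore check expects.
fix_debian_version() {
    f="$1"   # path to the container's etc/debian_version
    # -x matches the whole line, -q suppresses output
    if grep -qx 'wheezy/sid' "$f"; then
        echo 7 > "$f"
    fi
}
```

Call it with the path to the container's debian_version file before starting the CT; files that already contain a number are left untouched.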
If someone could help me get around this problem, I can start deploying more CTs.
Apart from that, I have one VM (Sophos UTM) that wouldn't start. On 3.4 I had changed all my NICs from virtio to e1000 because the virtio NICs froze up after a while. On beta 1 of V4 the machine wouldn't even start with e1000; the NICs are now running with virtio again, and so far without any problem.
I am just working on Ubuntu support - the first patches are in git.
Ok great! Any tips on where to look to get the init sequence working on this CT?
Should work out of the box. Any hint in syslog?
I finally found that bug - the fix is here:
So this will be fixed with the next upload.
Nothing except records from named/bind9, and an older log from when this machine was running fine on Proxmox 3.4 has the same content.
I am closing this announcement thread, as it has become confusing and off topic. If you have questions or problems with beta1, please open a new thread with a suitable topic.