Hi all,
Is there a way to start/stop a group of CTs in "one click" from the Proxmox interface?
This would be handy, as we sometimes have to do it on a lot of CTs.
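For now I do it from the CLI with a loop like this as a stopgap (the CT IDs below are just examples), but a button in the GUI would be much nicer:

  for id in 101 102 103; do vzctl start $id; done
  for id in 101 102 103; do vzctl stop $id; done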
Thanks for your help!
My mistake. Adding a line in /etc/vz/vz.conf with FEATURES="nfs:on" can solve the problem.
This file is parsed on CT startup, and the nfs feature will be enabled for the CT even if it does not appear in CTID.conf.
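For reference, the line goes in the global file:

  # /etc/vz/vz.conf
  FEATURES="nfs:on"

and you can check it took effect after restarting the CT with something like this (111 as an example CTID, assuming the nfs modules are loaded on the host):

  vzctl exec 111 grep nfs /proc/filesystems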
I'm not sure of what happen if CTID.conf already set this variable, but I guess...
Online migration fails for a local CT using NFS and autofs inside.
If I stop autofs inside the CT, online migration works fine (with independent NFS mounts inside).
Please help me find a solution (this goes into production soon)!
Many thanks
The log:
Oct 25 00:23:12 starting migration of CT 111 to node...
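In the meantime, my workaround is to stop autofs from the host just before migrating (111 being the CT from the log above, and assuming the CT has the service(8) wrapper):

  vzctl exec 111 service autofs stop
  # ... run the online migration, then restart autofs on the target node:
  vzctl exec 111 service autofs start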
Simple advice, but check the CD or the reader.
The same error happened to me once with an old RW CD (booting was OK, but it then failed to load some drivers).
Using a USB stick solved my problem.
Regards,
In my case it is 8, but I understand what you meant ;).
...
Nodes: 8
Expected votes: 8
Total votes: 8
Node votes: 1
Quorum: 5
...
Anyway, from what I've read, changing the 'expected votes' value won't lower the 'quorum' value (I'll run the test).
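The test I have in mind, assuming the pvecm wrapper from PVE 2.x:

  pvecm expected 4
  pvecm status    # see whether 'Quorum' drops below 5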
So the quorum disk seems the best solution to...
Hi everyone,
Is it possible to configure a quorum disk in an 8-node HA cluster (as described for 2 nodes in http://pve.proxmox.com/wiki/Two-Node_High_Availability_Cluster)?
Would you say it's a good idea for keeping the cluster quorate when losing half of the nodes (problem in a computer...
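What I picture is adapting the <quorumd> stanza from the two-node wiki page to our cluster.conf, something like this (the attribute values are my untested guesses):

  <quorumd votes="1" allow_kill="0" interval="1" label="proxmox_qdisk" tko="10"/>

With one qdisk vote, expected votes would become 9 and the quorum stay at 5, so 4 surviving nodes plus the qdisk should still be quorate.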
Thanks. I looked in man vz.conf and man ctid.conf.
There are a lot of options, but unfortunately nothing for the --features option.
Anyway, I've tried a few things, among them:
1) using the CONFIGFILE variable in vz.conf (equivalent to the vzctl create --config option)
=> got an error...
Hi everyone,
I did a full-upgrade on a 4-node cluster (from 2.6.32-11 to 2.6.32-14) and just hit a problem very similar to http://forum.proxmox.com/threads/8624-How-to-remove-zombie-OpenVZ-container.
My problem was with CTs using NFS mounts inside: they were not able to shut down any...
Hi,
I'm running a 2.1 cluster where I need to activate the "nfs:on" feature by default on most of the OpenVZ CTs I create.
This operation cannot be done via the web GUI (not yet!), so I use a vzctl command (via SSH or the secure console).
It works fine, but I wish I could avoid this...
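For reference, the kind of command I run (111 as an example CTID):

  vzctl set 111 --features nfs:on --save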
Hi,
I need to activate the feature "nfs:on" for every CT based on a given template (otherwise, the CT won't be able to start all its services).
This operation cannot be done via the web GUI (unless I missed something).
So today, I use SSH to log into the HN and update the config for each CT, using...
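What I would really like is to automate it per template; a rough sketch of what I mean (the template match "debian-6" and the PVE 2.x config path are assumptions):

  # enable nfs for every CT whose config references the given template
  for id in $(vzlist -a -H -o ctid); do
    grep -q 'OSTEMPLATE=.*debian-6' /etc/pve/openvz/${id}.conf \
      && vzctl set $id --features nfs:on --save
  done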
Hi,
First of all, thanks for the great 2.0 job; I can't wait to use it in production.
To test it, I've set up a two-node cluster as described at http://pve.proxmox.com/wiki/Proxmox_VE_2.0_Cluster .
Everything seems to work great.
But I found an issue when I tried to migrate VE 101 from...