Hello,
One of our customers with an OpenVZ container wants to enable core dumps for php-fpm.
They write that they need to run the following command:
# echo '/var/tmp/coredump-%e.%p' > /proc/sys/kernel/core_pattern
which results in the following:
bash: /proc/sys/kernel/core_pattern...
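Inside an OpenVZ container, /proc/sys/kernel/core_pattern is typically read-only, because all containers share the host kernel and kernel.core_pattern is a host-wide setting. A minimal sketch of setting it on the hardware node instead, reusing the customer's pattern (the `rlimit_core` line is the standard php-fpm pool directive, not something from the original post):

```shell
# Run on the hardware node, not inside the container --
# kernel.core_pattern is global to the shared kernel:
sysctl -w kernel.core_pattern='/var/tmp/coredump-%e.%p'

# Make it persistent across reboots:
echo "kernel.core_pattern = /var/tmp/coredump-%e.%p" >> /etc/sysctl.conf

# php-fpm additionally needs the core size limit raised,
# e.g. in the pool configuration:
#   rlimit_core = unlimited
```

Note this changes the pattern for every container on the node, not just the customer's.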
I got it working.
I executed the command on all nodes, and after that 'pvecm status' reported that the cluster was not ready, while clustat reported everything as normal.
After rebooting one of the nodes, everything suddenly worked again, and the messages regarding 'rgp_join' in the syslog...
Hello,
We have a 7-node cluster, with 4 servers on Proxmox 2 and 3 servers on Proxmox 3.3.
We are in the process of migrating to Proxmox 3.3, hence the mixed environment. So far everything worked fine, but this morning I shut down one of the Proxmox 2 nodes, and since then /etc/pve is read-only. I...
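/etc/pve is backed by the cluster filesystem (pmxcfs) and goes read-only whenever the cluster loses quorum, which shutting down a node can trigger. A sketch of how to check and, as a temporary workaround, lower the expected vote count (the number here is illustrative; use with care in a mixed-version cluster):

```shell
# Check quorum state and vote counts:
pvecm status

# Temporary workaround: lower the expected votes so the
# remaining nodes regain quorum and /etc/pve becomes writable.
pvecm expected 4
```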
Hello,
We have a cluster of four servers: s01, s02, s03 and s04. s01 through s03 were set up and brought online at the same time and are almost identical; the only difference is that s03 has a larger SATA disk while s01/s02 have smaller SAS disks.
For a couple of weeks now, the routing to external...
Hi Geejay,
Yes, unfortunately it is necessary IMHO, because each DRBD partition is its own resource in the cluster. If a container needs to be started on another node, only the DRBD resource for that particular container needs to be moved along with it, without affecting all the others.
I...
Just for the record:
I voted both, but we mainly use OpenVZ.
KVM is used only for the PBX, because Asterisk apparently has issues in OpenVZ.
regards
-Stephan
Hello,
I had the same problem after updating a VZ container from CentOS 6.2 to 6.3 (with Plesk).
After the update I shut down the container and it would not restart: Can't umount /var/lib/vz/root/150: Device or resource busy
Proxmox showed the container as still mounted. Unmounting failed...
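"Device or resource busy" usually means some process still has files open under the container root. A diagnostic sketch (the CTID 150 path is taken from the error message above):

```shell
# List processes still holding files under the container root:
fuser -vm /var/lib/vz/root/150

# Alternative view with lsof:
lsof +D /var/lib/vz/root/150

# Last resort once nothing legitimate remains:
# detach the mount lazily.
umount -l /var/lib/vz/root/150
```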
Thanks for the reply. I guess it is not possible then with my current setup?
If I configure the pvevm outside the service, migration fails (also when I add 'depend="service:ha_host"' to the <pvevm..>).
Hello,
I have a VZ container running CentOS 6 with Plesk 10. I also installed ASL3 from atomicorp.com for added security.
Now I get reports of various kernel vulnerabilities:
Trusted Path Execution(TPE): not available [CRITICAL]
Disable Privileged I/O: not available...
I didn't change any scripts, but here is our cluster.conf, with an example service (three nodes, two of which share DRBD/containers and provide failover for each other). Each VZ container gets its own DRBD partition (pve-lvm -> DRBD partition -> ext4 -> VZ) and is restarted on the other node if...
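The attached cluster.conf is truncated above. As a rough illustration only, a service of the shape described (one DRBD partition per container, nested filesystem and pvevm resources) could look like the following sketch; all agent names, attributes, and values here are assumptions based on rgmanager's stock resource agents, not the poster's actual file:

```xml
<rm>
  <service name="ct101" autostart="1" recovery="relocate">
    <!-- Hypothetical names and values throughout -->
    <drbd name="drbd-ct101" resource="r101">
      <fs name="fs-ct101" device="/dev/drbd/by-res/r101"
          mountpoint="/var/lib/vz/private/101" fstype="ext4">
        <pvevm autostart="1" vmid="101"/>
      </fs>
    </drbd>
  </service>
</rm>
```

Nesting makes rgmanager promote the DRBD resource and mount the filesystem before starting the container, and tear them down in reverse order on stop or relocation.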
To answer myself:
I have figured it out in the meantime. I edited /etc/pve/cluster.conf with the appropriate resources and can now failover manually (via shell) and automatically. Reading the Red Hat Cluster Docs helped.
Migration via Web-Interface does not work, unfortunately, but I guess we...