Sorry, but I have a question concerning the old PVE 4.4 version. I have a PVE cluster still using this version.
The cluster has 5 nodes. On node one, corosync doesn't want to start. It says that the config is not in sync:
corosync: [CMAP ] Received config version (7) is different...
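From what I've read so far, the check would be to compare the config version between the nodes and copy the newer corosync.conf over ("node2" is just an example name, to be adapted):

```shell
# Compare the config version on the failing node and on a healthy one:
grep config_version /etc/corosync/corosync.conf
ssh node2 grep config_version /etc/corosync/corosync.conf

# If the local copy is outdated, overwrite it with the current one and restart:
scp node2:/etc/corosync/corosync.conf /etc/corosync/corosync.conf
systemctl restart corosync pve-cluster
```

Is this the right procedure, or is there a cleaner way on 4.4?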
Sorry if this question has already been asked, but I've not found the solution.
I have an LXC container with Debian Wheezy (converted from OpenVZ).
The NFS shared directory doesn't want to mount at container startup.
I've added this to /etc/apparmor.d/lxc/lxc-default-cgns:
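For context, the rules I'm referring to are the usual NFS mount permissions found in the LXC examples (I'm not sure they are complete or correct for my case):

```
  mount fstype=nfs,
  mount fstype=nfs4,
  mount fstype=rpc_pipefs,
```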
I'm currently testing a Proxmox 3.4 server with an iSCSI device.
The solution is provided by ONLINE.NET, a French provider.
The iSCSI mount / target is OK.
But the LVM part doesn't want to work.
The "Base volume" menu doesn't show...
It seems that my Proxmox config is broken somewhere.
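As a workaround, could I create the volume group by hand and register it with pvesm? Something like this (the device name /dev/sdb and the storage/VG names are just assumptions on my side):

```shell
# Assuming the iSCSI LUN appeared as /dev/sdb (check dmesg or lsscsi):
pvcreate /dev/sdb
vgcreate vg_iscsi /dev/sdb

# Register the VG as LVM storage in Proxmox:
pvesm add lvm iscsi-lvm --vgname vg_iscsi --content images
```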
When I create a CT, or try to restore one with another CT ID, I get this message:
TASK ERROR: unable to create CT 120 - directory '/var/lib/vz/root/120' already exists
ID 120 is the first available number. I currently do not have a CT...
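The only thing I can think of is a stale leftover directory from an old container. Is it safe to check and remove it like this?

```shell
# See whether something is mounted there or it is just an empty leftover:
ls -la /var/lib/vz/root/120
mount | grep /var/lib/vz/root/120

# If it is only an empty directory, remove it and retry the restore:
rmdir /var/lib/vz/root/120
```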
I upgraded one of my PVE 3.3 nodes to the latest packages, and Xymon monitoring stopped working for HTTPS on this node, with the following error:
red Sun Nov 2 11:24:43 2014: SSL error
&red https://thePVEnode:8006/ - SSL error
The Xymon server is running on a Debian...
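To narrow it down, I suppose I can test the handshake by hand from the Xymon server (hostname is a placeholder):

```shell
# Check what the PVE node answers on port 8006:
openssl s_client -connect thePVEnode:8006 < /dev/null
```

Would the output tell me whether the new pve-manager changed something on the SSL side?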
I tried to remove a crashed node from a 3-node PVE 3.2 cluster. I used the command:
pvecm delnode vs-03
"pvecm nodes" no longer shows the node, but it is still shown in the GUI list.
How can I remove it completely?
Thanks for your help.
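From what I understand, the GUI list comes from the cluster filesystem, so maybe removing the leftover node directory would be enough (to be confirmed):

```shell
# On one of the surviving nodes, drop the dead node's entry:
rm -rf /etc/pve/nodes/vs-03
```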
I've set up a config using multiple gateways, as described on this web page. All is working fine on the PVE node.
On top of this, I mounted a CT with 2 IPs (with venet). One on the left...
I'd like to understand my mistake.
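For reference, the multi-gateway part on the node looks roughly like this (addresses and table number are examples, not the real ones):

```shell
# Route traffic from the second subnet through its own gateway:
ip route add default via 10.50.0.254 dev eth0 table 100
ip rule add from 10.50.0.0/24 table 100
ip route flush cache
```

My question is whether venet traffic from the CT follows these rules at all.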
I have a 2-socket server with an active license, and the last apt-get dist-upgrade moved the node to 3.1. Another testing server only has the open-source apt setup. This node doesn't want to go to 3.1.
The kernel is -114 on the licensed node, but only -109 on...
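If it helps others: I believe the fix is to switch the unlicensed node to the no-subscription repository (lines from memory, to double-check):

```
# /etc/apt/sources.list.d/pve-enterprise.list - disable without a subscription:
# deb https://enterprise.proxmox.com/debian wheezy pve-enterprise

# /etc/apt/sources.list - use the no-subscription repository instead:
deb http://download.proxmox.com/debian wheezy pve-no-subscription
```

Is that the expected behavior for nodes without a license?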
I have a cluster of 3 servers running PVE 3.0, up to date. The servers are Dell R420. I have 13 OpenVZ CTs distributed across the 3 servers.
I've set up ONE backup rule to back up all CTs from the 3 nodes each night (SNAPSHOT + LZO, on a NAS using NFS). Some days, the backup crashes during the night, the...
I have had 2 nodes installed with PVE 2.3 for a few days. All was fine until this morning. One of the nodes became red (the second installed node).
I've found this in syslog (plenty of it) on both nodes:
Mar 12 15:38:53 aweb-vs003 pmxcfs: [status] crit: cpg_send_message failed: 9
I'm pretty lost. Does someone have an idea about this problem?
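The only recovery I've found mentioned so far is restarting the cluster stack on the red node (PVE 2.x commands, untested on my side):

```shell
# Restart pmxcfs and the cluster manager on the affected node:
service pve-cluster stop
service cman restart
service pve-cluster start
```

Could a multicast issue between the two nodes also explain this?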
I have a Proxmox VE 1.8 cluster of 3 nodes (not updated, sorry). The servers are HP DL360 G7 with SAS disks.
On ONE node only, the OpenVZ CT backup generates errors during the scheduled backup (every time). The OpenVZ CT is 40 GB...
It seems Lenny is no longer online on debian.org.
When I tried to upgrade a 1.8 PVE node to 1.9, I got this:
Err http://http.us.debian.org lenny/main libglib2.0-0 2.16.6-3
404 Not Found [IP: 184.108.40.206 80]
Err http://http.us.debian.org lenny/main...
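I suppose sources.list must be pointed at the Debian archive now (untested on my side):

```
# Lenny moved to archive.debian.org; replace the old mirror lines:
deb http://archive.debian.org/debian lenny main contrib
```

Is this the supported path for the 1.8 to 1.9 upgrade?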
It seems difficult to upload a large file in PVE (over an ADSL line). I tried 3 times to upload a 3.2 GB file.
Each time the job ended with "Error 401: permission denied - invalid ticket".
pve-manager: 2.0-59 (pve-manager/2.0/18400f07)
running kernel: 2.6.32-11-pve
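As a workaround, I will probably copy the ISO over SSH instead of using the browser upload (filename and hostname are examples):

```shell
# Copy the image directly into the ISO storage directory on the node:
scp big-image.iso root@thePVEnode:/var/lib/vz/template/iso/
```

But it would be nice if the upload ticket survived a slow transfer.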
I tried the upgrade script on a testing 1.9 node.
The script failed because it configures /etc/apt/sources.list with an unavailable repository:
W: Failed to fetch http://volatile.debian.org/debian-volatile/dists/squeeze/volatile/main/binary-amd64/Packages 404 Not Found [IP...
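My guess is that the volatile line has to be removed or replaced, since that service was merged into the regular updates (untested, to be confirmed):

```
# deb http://volatile.debian.org/debian-volatile squeeze/volatile main
deb http://ftp.debian.org/debian squeeze-updates main
```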
With PVE 1.9, I can have an OpenVZ container with an IP on a different subnet than the PVE node. This doesn't work with PVE 2.0.
Any idea what the problem is?
Example (not the real IPs):
PVE node on eth0: 10.10.0.1
IP 10.50.0.1 is an IP failover mapped on the PVE node's eth0 interface
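On 1.9 the whole setup is only this (CT ID 101 and the addresses are examples), so I expected the same to work on 2.0:

```shell
# Add the failover IP to the container; venet relies on proxy ARP on the node:
vzctl set 101 --ipadd 10.50.0.1 --save
sysctl -w net.ipv4.ip_forward=1
sysctl -w net.ipv4.conf.eth0.proxy_arp=1
```

Did something change in how 2.0 handles venet routing?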
I get this error when trying to create a user in PVE 2:
create user failed: command '/usr/sbin/usermod -p '$5$oa5jUS8w$b2h7PyHoGCmrWmPzln6UrNGmBjpJr060k9BNMGmj1B0' manager' failed: exit code 6 (500)
Any idea what the problem is?
pve-manager: 2.0-33 (pve-manager/2.0/c598d9e1)
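For what it's worth, usermod exit code 6 normally means the Linux account does not exist, so my guess is that the PAM user has to be created on the system first:

```shell
# Create the system account before adding it as a PVE user in the PAM realm:
useradd -m manager
```

Or should I use the PVE authentication realm instead of PAM for this kind of user?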
I'm running PVE 1.8 on a Dell R510 with 2 RAID arrays: one with Proxmox, and a second array mounted in a VM.
I have a KVM guest running Debian 6 with 2 disks: one qcow2, and one special mount of a dedicated local partition (/dev/sdb1 on the server running Proxmox). Both use the VIRTIO driver...
Do you plan to add menus and features in the PVE 2.0 GUI to resize KVM raw and qcow2 disks of a VM?
And also to convert them from one format to another?
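In the meantime I do it with qemu-img on the CLI (with the VM stopped; filenames are examples), so a GUI wrapper around these two commands would already be enough:

```shell
# Grow a qcow2 image by 10 GiB:
qemu-img resize vm-100-disk-1.qcow2 +10G

# Convert a raw image to qcow2:
qemu-img convert -f raw -O qcow2 vm-100-disk-1.raw vm-100-disk-1.qcow2
```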
Sorry if this question has already been asked.
A qemu-img-tired boy! ;-)