I recently set up three 6.1 nodes in a cluster and added some existing NFS exports from my storage server, which I have been using for a couple of years (Debian 7).
As I do not have an internal DNS server, I've added an entry to the hosts file, which I'm using in the PVE storage config for the NFS server, but NFS...
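For reference, roughly what my setup looks like (hostname, IP and export path here are placeholders, not my real values) - an entry in /etc/hosts plus the NFS definition in /etc/pve/storage.cfg:

    # /etc/hosts
    192.168.1.50    storage01

    # /etc/pve/storage.cfg
    nfs: nfs-data
            server storage01
            export /srv/nfs/pve
            path /mnt/pve/nfs-data
            content images,iso,vztmpl,backup
            options vers=3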
In the PVE GUI there are different views defined.
Server View contains all containers, VMs, templates and storages, while there is also a Storage View which contains the same hierarchical listing for storages only.
Would it be possible to remove the storages from Server View, as there is an explicit...
I set up a new 5.3 two-node cluster and created some CTs on a local LVM volume.
When I back up a CT to an NFS storage it starts, says it cannot do a snapshot, continues in suspend mode, but never finishes.
The node, or rather the PVE GUI, becomes unresponsive, all NFS mounts hang, and even a reboot is...
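For what it's worth, the backup is started with roughly this command (CT ID and storage name are just examples):

    vzdump 101 --mode suspend --storage nfs-backup --compress lzo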
I have a cluster running in multicast mode and the provider has stopped supporting multicast - it was turned off during a switch firmware upgrade and they don't want to enable it again.
Is there a procedure to switch from multicast to unicast without breaking things, and ideally with no or minimal...
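From my reading of the corosync documentation (not tested, so please correct me), on a corosync 2.x based cluster the switch would mean setting the transport to unicast UDP in the totem section of /etc/pve/corosync.conf, roughly like this:

    totem {
      version: 2
      transport: udpu
      # cluster_name, config_version and the interface block stay as they are,
      # only config_version has to be increased
    }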
It seems there is a GUI bug:
The GUI in PVE 3.1 always shows 'Containers' as a content option on local storage, but it is not in storage.cfg;
I tried to disable it in the GUI; storage.cfg is written correctly but the GUI still shows Containers;
It's also not possible to disable the local storage - but I'm not sure if...
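For reference, this is roughly how I'd expect the local entry in /etc/pve/storage.cfg to look after disabling Containers in the GUI (values are from a default setup):

    dir: local
            path /var/lib/vz
            content images,iso,vztmpl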
I'm currently testing PVE 3.1 (single-node setup) on Debian 7, following the wiki install guide (http://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_Wheezy).
So far everything has gone fine, but when I start a newly created W2k8 VM and try to open the console I get this error:
I have a production PVE 1.9 cluster and am thinking about setting up a new PVE 3.1 cluster and moving the VMs over to the new 3.1 version;
On PVE 1.9 I'm using LVM on DRBD with 2x 10 Gbit/s between the nodes for replication, because I do not have an external storage like NFS or iSCSI...
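My rough plan for the move (an untested sketch; IDs, paths and storage names are placeholders): back the VMs up on the 1.9 cluster and restore them on the new 3.1 nodes, e.g.:

    # on the old 1.9 node
    vzdump 101 --mode stop --dumpdir /mnt/backup

    # on a new 3.1 node, after copying the dump over
    qmrestore /mnt/backup/vzdump-qemu-101-<timestamp>.tar 101 --storage drbd-lvm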
I updated the test system to 1.8 and found some smaller issues, but I have to say it is running on Squeeze with the Lenny pvetest repository;
One of the strange things is DRBD: after powering on I get a split-brain where DRBD finds no neighbor;
after detaching one node and rebooting it, the node comes...
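For completeness, the manual split-brain recovery I fall back to, as described in the DRBD 8.3 user guide (resource name r0 is just an example) - on the node whose changes should be thrown away:

    drbdadm secondary r0
    drbdadm -- --discard-my-data connect r0

and on the surviving node, if it is not already waiting for a connection:

    drbdadm connect r0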
It looks like there is a problem supporting VLANs on bonding devices with kernel version 2.6.24-11-pve, which works with kernel 2.6.32-1-pve.
Boot error message:
vlan_check_real_dev: VLANs not supported on bond0
Is there a patch available for 2.6.24 to get this working?
Because 2.6.32 has...
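For reference, the part of /etc/network/interfaces that triggers this (interface names, bond mode and VLAN ID are just examples):

    auto bond0
    iface bond0 inet manual
            slaves eth0 eth1
            bond_mode 802.3ad
            bond_miimon 100

    auto bond0.10
    iface bond0.10 inet static
            address 192.168.10.2
            netmask 255.255.255.0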
If I understand it right, for a PVE cluster with live migration I need DRBD active/active + GFS2, or a shared NFS mounted on both nodes, where the VMs are stored - correct?
Or is there another way of doing this, without shared storage?
And what is the recommendation for such a setup...
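Just to make clear what I want to end up with (VM ID and node name are placeholders): the disks on storage both nodes can reach, so that an online migration like

    qm migrate 101 node2 --online

works without copying the disk around.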
I have PVE 1.4 and 1.5 running Linux and Windows VMs virtualized with KVM;
when I reboot or shut down the PVE server, the Linux VMs are shut down successfully, but the Windows VMs are not - they are stopped after some waiting time, which results in an 'unexpected...
I have set up a PVE cluster with the version: pve-manager/1.4/4390.
Yesterday I created a KVM VM and it works; now I have created an OpenVZ container (template debian-5.0-standard_5.0-1_i386.tar.gz) but cannot start it;
I get these entries in /var/log/syslog:
Oct 23 21:23:49 vServer01 pvedaemon...
The default local storage is /var/lib/vz.
The default storage directory is defined in /etc/pve/pve.cfg;
when the default storage is moved to another location and /etc/pve/pve.cfg is configured to use this new location, the web interface still shows the default as /var/lib/vz;
When PVE 1.4b2 is freshly installed on an existing Debian Lenny from the pvetest repository, /etc/vz/vz.conf is missing;
interestingly, the file is in the .deb package but does not get installed, not even after a dpkg-reconfigure....
Upgrading from 1.3 is not a problem;