i haven't really played with openfiler - does openfiler support redundancy with a second server (redundant shared storage)?
i see drbd on the feature list, so i guess it can....
and does anyone know the hardware requirements for a 12-disk redundant shared storage with s-ata disks serving 25 kvm systems?
i know...
ok, then you can only assign 1 cpu with 4 cores;
swappiness 60 is ok, it's the default on most linux systems; try to use up all your memory and see if your system actually starts using the swap partition;
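if you want to check or change it, a quick sketch via the standard sysctl interface (the value 10 below is only an example, not a recommendation):

    # show the current value
    cat /proc/sys/vm/swappiness
    # change it at runtime
    sysctl -w vm.swappiness=10
    # make it survive a reboot by adding a line to /etc/sysctl.conf
    echo "vm.swappiness = 10" >> /etc/sysctl.conf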
did you really have 4 cpus in your physical server, or only 1 quadcore?
you should not assign more cpus than you really have;
the linux kernel will not waste memory by leaving it unused - available free memory is used for disk caching - this is what you see in 'cached';
your output shows...
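as an illustration of how that typically shows up in free -m (the numbers below are made up):

    free -m
    #              total       used       free     shared    buffers     cached
    # Mem:          8003       7800        203          0        150       5200
    # -/+ buffers/cache:       2450       5553
    # Swap:         4095          0       4095
    # the '-/+ buffers/cache' line shows what applications really use;
    # 'cached' is memory the kernel gives back as soon as applications need it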
i had this too when i used vmware converter to migrate physical machines to virtual machines - it automatically split all the data into 2gb files;
i did a system-state backup from within windows, created a new big image file and restored the windows machine into it;
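the new image file itself is just one qemu-img call - a minimal sketch, where size, format and path are only examples:

    # create an empty 50 gb raw image for the restored windows machine
    qemu-img create -f raw /var/lib/vz/images/101/vm-101-disk-1.raw 50G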
hth
hi,
i have pve 1.4 and 1.5 running linux and windows vms virtualized with kvm;
when i reboot or shut down the pve server, the linux vms are shut down successfully, but the windows vms are not - the windows vms get stopped after some waiting time, which results in an 'unexpected...
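for comparison, this is the difference between a clean acpi shutdown and a hard stop on a single guest (vmid 101 is only an example):

    # ask the guest to shut down via acpi - windows needs working acpi support for this
    qm shutdown 101
    # hard stop, like pulling the power plug - this is what leads to the
    # 'unexpected shutdown' message in windows
    qm stop 101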
hummm......i am not sure.....
with the vzctl from the debian repository, you cannot specify the bridge interface within the --netif_add option, so vlans with additional bridge interfaces are not usable....
maybe this is also why the openvz code in the debian standard kernel does...
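for reference, with the pve-patched vzctl the bridge goes into the fifth field of --netif_add (ct id 101 and bridge vmbr0 are only examples):

    # format: ifname[,mac,host_ifname,host_mac,bridge]
    vzctl set 101 --netif_add eth0,,,,vmbr0 --save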
found the problem - vzctl 3.0.23-1pve3 does not work with the debian standard kernel;
i am now using the vzctl from the standard debian repository and it works....
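if someone hits the same thing, checking which vzctl build is actually installed is quick:

    # show the installed package version (pve build vs. debian build)
    dpkg -l vzctl
    vzctl --version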
on another machine i tried the bare-metal installer and it works;
then on the same hardware with debian lenny plus pve, and it doesn't work....
looks like there is something different between the bare-metal install and the standard debian...
is there a way to do more detailed troubleshooting?
i am running pve on a standard debian lenny instead of the bare-metal version because it is a 1U server where hardware raid is not possible, so md devices (software raid) are a must;
maybe there is something different on the bare-metal pve version to...
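for completeness, the state of the md arrays can be checked like this:

    # state of all md arrays
    cat /proc/mdstat
    # details for a single array (/dev/md0 is only an example)
    mdadm --detail /dev/md0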
i have set up a pve cluster with the version: pve-manager/1.4/4390
yesterday i created a kvm guest and it works; now i created an openvz container (template debian-5.0-standard_5.0-1_i386.tar.gz) but cannot start it;
i get these entries in /var/log/syslog:
Oct 23 21:23:49 vServer01 pvedaemon[20640]...
the default local storage is /var/lib/vz
the default storage directory is defined in /etc/pve/pve.cfg;
when the default storage is moved to another location and /etc/pve/pve.cfg is configured to use this new location, the web interface still shows /var/lib/vz as the default;
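a possible workaround, assuming the goal is just to have the default storage live on another disk: bind-mount the new location over /var/lib/vz, so the config and the web interface can stay untouched (the source path is only an example):

    # mount the new storage location over the default path
    mount --bind /srv/newstorage /var/lib/vz
    # make it permanent via /etc/fstab
    echo "/srv/newstorage /var/lib/vz none bind 0 0" >> /etc/fstab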
hello,
when pve 1.4b2 is freshly installed on an existing debian lenny from the pvetest repository, /etc/vz/vz.conf is missing;
the interesting part is that the file is in the .deb file but does not get installed, not even after a dpkg-reconfigure....
upgrading from 1.3 is not a problem;
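as a workaround until this is fixed, the missing file can be pulled out of the .deb by hand - a sketch, assuming it is the vzctl package (the filename is only an example):

    # unpack the package into a temporary directory and copy the missing config
    dpkg-deb -x vzctl_3.0.23-1pve3_amd64.deb /tmp/vzctl
    cp /tmp/vzctl/etc/vz/vz.conf /etc/vz/vz.conf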
greets
i tried some of the virtualization solutions like vmware, xen, virtualbox, vserver, kvm and openvz, and after evaluating them i settled on openvz and kvm;
at the moment openvz and kvm are running on debian, managed via the console, and i have been searching around for guis to simplify management;
there are some tools out there for openvz...