compression, encryption and what you probably do not want in a production environment - write cache;
raw can be restored to iscsi or lvm - qcow2 only on a filesystem like nfs; also, qcow2 is slower than raw;
if you want an encrypted or compressed image you need to use qcow2 - raw gives you more data...
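for illustration, a rough sketch of how the two formats can be handled with qemu-img - the image names and sizes are just placeholders of mine, and the encryption option is the legacy qcow2 aes encryption:

# plain raw image - no extra features, fastest
qemu-img create -f raw vm-101-disk-1.raw 32G

# qcow2 with built-in (legacy aes) encryption
qemu-img create -f qcow2 -o encryption=on vm-102-disk-1.qcow2 32G

# convert an existing raw image into a compressed qcow2 (-c enables compression)
qemu-img convert -c -O qcow2 vm-101-disk-1.raw vm-101-disk-1.qcow2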
there is no real difference except that your interface settings are a little bit inconsistent with each other....
vmbr0 has stp on, the others off
both vmbr0 and vmbr1 have a gateway configured, and the gateway on vmbr1 is the same as the one on vmbr0
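just as an illustration, the second bridge could look like this once the duplicate gateway is dropped and stp matches the other bridges - the address and port below are placeholders of mine, not taken from the posted config:

auto vmbr1
iface vmbr1 inet static
        address 10.10.10.55
        netmask 255.255.255.0
        bridge_ports eth1
        bridge_stp off
        bridge_fd 0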
anyway, whatever i try i can't get rid of these...
hmmm.....
can you please post an example of how you set up your interfaces?
this is what i have configured:
auto bond0
iface bond0 inet manual
        slaves eth0 eth1

auto vmbr0
iface vmbr0 inet static
        address 192.168.100.55
        netmask 255.255.255.0
        gateway 192.168.100.1
        network...
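for comparison, a minimal sketch of how the bond stanza and the bridge on top of it often look - the bonding mode, the miimon value and the bridge_ports line are assumptions of mine, not taken from your config:

auto bond0
iface bond0 inet manual
        slaves eth0 eth1
        bond_miimon 100
        bond_mode active-backup

auto vmbr0
iface vmbr0 inet static
        address 192.168.100.55
        netmask 255.255.255.0
        gateway 192.168.100.1
        bridge_ports bond0
        bridge_stp off
        bridge_fd 0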
in your hard disk setup you have no fault tolerance, so if you have 4 identical drives and a hardware raid controller the proxmox recommendation is a raid10;
anyway, back to your question - you can mount the lvol directly at /var/lib/vz/template, or at another location via fstab, and define it as...
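as a sketch, an fstab entry for such a mount could look like this - the volume group and lvol names (pve/templates) and the ext3 filesystem are placeholders of mine:

# /etc/fstab
/dev/pve/templates  /var/lib/vz/template  ext3  defaults  0  2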
seems that is not used very often.... so i need to temporarily install the kernel from the backports repository until openvz is in the latest pve-kernel....
screenie
the only possible and useful scenario for clustering across different datacenters is when you have a layer2 backend connection between the datacenters and your pve-hosts, and the vm's use a pi address space that is announced by different providers at each datacenter;
screenie
ah cool - would be nice to have this in the pve-gui per vm or per backup job, because you probably do not want this for every test-vm, or you want a different number of kept backups per vm
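in the meantime, if your vzdump version already supports it, the number of kept backups can be set per run on the command line - the vmid and count below are just examples:

# keep only the 3 newest backups of vm 101
vzdump 101 --maxfiles 3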
it is not clear to me why you set up a pve-cluster when the two pve-hosts do not share the same networks - in this setup you cannot migrate vm's from one pve-host to the other;
are the two pve-hosts directly connected to the internet or is there a firewall/router/l3-switch in between...
there is currently no user management, so i assume at the moment you can't change it;
and renaming the system root account should never be done....
ok, drbd without the whole rhcs makes it much easier :-)
am i right that you are using a separate lvol for each vm?
one lvol for all my vm's is enough, because when the backup job runs all vm's can be processed with the same snapshot - or is there something i missed?
so, i will try the drbd...
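for reference, a rough sketch of the kind of snapshot a backup run can take over one shared lvol - the volume group and lvol names (drbdvg, vmdata) are placeholders of mine:

# create a temporary snapshot of the lvol holding all vm images
lvcreate --size 1G --snapshot --name vzsnap /dev/drbdvg/vmdata
# mount it read-only and copy the images out
mount -o ro /dev/drbdvg/vzsnap /mnt/vzsnap
# ... back up /mnt/vzsnap here ...
umount /mnt/vzsnap
lvremove -f /dev/drbdvg/vzsnap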
do you have more details on your entire network configuration/scenario?
are your vmware hosts/guests on the same vlan? is trunking configured on the old switch but not on the new one? what about the interface configuration on the pve host/guest, the stp configuration, ...
looks like there is a problem with vlan support on bonding devices with kernel version 2.6.24-11-pve, which is working with kernel 2.6.32-1-pve
boot error message:
vlan_check_real_dev: VLANs not supported on bond0
is there a patch available for the 2.6.24 kernel to get this working?
because 2.6.32 has...
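for context, such a setup typically looks roughly like this in /etc/network/interfaces - the vlan id 100 and the addresses are just examples of mine, not taken from the original post:

auto bond0
iface bond0 inet manual
        slaves eth0 eth1

auto bond0.100
iface bond0.100 inet manual
        vlan-raw-device bond0

auto vmbr0
iface vmbr0 inet static
        address 192.168.100.55
        netmask 255.255.255.0
        bridge_ports bond0.100
        bridge_stp off
        bridge_fd 0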
hi,
if i understand it right - for a pve cluster with live migration i need drbd active/active + gfs2, or an nfs share mounted on both nodes where the vm's are stored, correct?
or is there an additional way of doing this, without shared storage?
and what is the recommendation for such a setup...
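as a sketch of the nfs variant, a shared nfs storage entry in /etc/pve/storage.cfg could look roughly like this - the storage name, server address and export path are placeholders of mine, and the exact syntax may differ between pve versions:

nfs: shared-vm
        path /mnt/pve/shared-vm
        server 192.168.100.10
        export /export/vmimages
        content images
        options vers=3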
the performance should be better with your hardware.....
CPU BOGOMIPS: 23940.91
REGEX/SECOND: 993583
HD SIZE: 82.50 GB (/dev/sda3)
BUFFERED READS: 175.34 MB/sec
AVERAGE SEEK TIME: 8.61 ms
FSYNCS/SECOND: 2825.50
DNS EXT: 90.10 ms
DNS INT: 20.85 ms...
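(these values come from the pveperf tool; it can also be pointed at a specific path to benchmark a particular storage, e.g. the path below is just an example:)

pveperf /var/lib/vz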