I wish I'd done the same, but for now I'm stuck with a common drbd volume for both servers
Well, that's part of my point: by putting the drbd layer on an lvm volume, rather than having the drbd device as a pv in a volume group, you would have much more flexibility than with the current model. It...
I've been using a couple of pve clusters with drbd between two servers for about a year. My conclusion is that it isn't stable enough for production. From time to time something happens and the mirror breaks; it might be a network problem or something else. I then have to manually resolve the...
Some HP G5-series servers have virtualization turned off in the BIOS as the factory default. I seem to remember that some of the first versions also lacked BIOS support for virtualization. If you don't find anything about virtualization in the BIOS, you should boot the server from a Smart-update...
Does that mean that you won't put it into 2.0, or just that you don't have it ready for production in pve yet?
If you're going to put it into 2.0, does the current version of kvm have support for sheepdog?
Sorry for not making that clear. It could be done from the command line using lvcreate, but I'm not sure how pve picks up logical volumes for use in a virtual machine.
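A rough sketch of what I mean, assuming pve uses LVs named after the vmid; the volume group name vg0 and the vmid 101 are made-up examples, and I haven't verified that pve picks the LV up this way:

```
# Create the logical volume by hand:
lvcreate -L 32G -n vm-101-disk-2 vg0

# Then reference it from /etc/pve/qemu-server/101.conf, assuming vg0
# is defined as an LVM storage in /etc/pve/storage.cfg:
virtio1: vg0:vm-101-disk-2
```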
dietmar: instead of doing a restart of the network, one could do an ifup vmbrX. Also, to be able to remove a bridge, one could do an ifdown vmbrX, and thus add and remove bridges on the fly.
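For example, adding a stanza like this to /etc/network/interfaces (the name vmbr1 is just an example for a new, empty bridge):

```
auto vmbr1
iface vmbr1 inet manual
    bridge_ports none
    bridge_stp off
    bridge_fd 0
```

After adding the stanza, ifup vmbr1 brings up only that bridge, and ifdown vmbr1 takes it down again, without restarting the rest of the network.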
I expect that you might get this to work if you create a bridge between the tun interface and vmbr0, and manage to set the endpoint IP on vmbr0 rather than on the tun interface.
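Something along these lines, completely untested: note that a plain tun (layer 3) device can't be enslaved to a bridge, so this assumes the VPN is set up in tap mode with a device named tap0, and 10.0.0.1/24 is a made-up endpoint address:

```
brctl addif vmbr0 tap0           # enslave the tap device to the bridge
ip addr flush dev tap0           # drop the address from the tap device
ip addr add 10.0.0.1/24 dev vmbr0   # and set the endpoint IP on the bridge instead
```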
There is no automatic way to do this, but this is what I did a couple of hours ago.
1) Stop the virtual machine in question
2) Back up your config file (/etc/pve/qemu-server/<vmid>.conf)
3) Create a new disk with the same size
4) Log in to a console and do "dd if=/dev/<src vg>/<src lv>...
I'm setting up a backup system at home and want to run it in a virtual machine. I've got everything working, but from time to time mtx stops responding inside the vm. I can still do an mtx -f /dev/sg0 status on the host, but in the guest it just hangs. Has anyone had any similar...
We've tested running a virtual server (kvm) with a san disk attached. All seems ok, but if we do some heavy writes to the disk inside the vm (dd if=/dev/zero of=<file on san>), the filesystem goes to pieces. Our setup is like this:
On the host:
san -> fc -> multipath ->...
For testing purposes I've tried to run pve inside a kvm guest, but if I try to create an openvz container with a bridged interface, the pve instance shuts down. To work around this I changed the nic from virtio to e1000, which solved the problem. Where the problem lies I don't know, neither do I...
How do you do that? And would it be possible to use Debian/Etch or Debian/Lenny?
For my company it is not an option to throw away servers without 64-bit support. Even though it would make me happy to replace a couple of hundred 2U boxes with 1U ones (cramped server rooms)...