Ended up renaming the existing volumes on the destination and doing a new replication before migrating. Everything went fine after renaming the volumes, rescheduling the replication (GUI) and then doing the migration, also from the GUI.
However, I might have stumbled upon unexpected behaviour when PVE...
Hi,
I recently reinstalled to PVE 5.1 and have been using a desktop as backup when doing changes to the server.
Migrating between the two has worked flawlessly (two-node cluster with ZFS), but for some reason the replication decided to send a full image even though it already exists on the other side...
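If anyone else hits this: a full send normally means the two sides no longer share a common replication snapshot. Comparing the snapshot lists on both nodes (standard ZFS commands; the dataset name and hostname below are just examples) usually shows whether that is the case:
zfs list -r -t snapshot -o name,guid rpool/data/vm-100-disk-1
ssh othernode zfs list -r -t snapshot -o name,guid rpool/data/vm-100-disk-1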
Before putting it into any production system I would wait for the Proxmox team to update the vzctl release for Proxmox.
That said you can build the debian package yourself by following this guide: https://git.proxmox.com/?p=pve-common.git;a=blob_plain;f=README.dev;hb=HEAD
When you get to step 3...
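Roughly, the build goes something like this - the repository name and build target are from memory, so double-check against the README linked above:
apt-get install build-essential devscripts git
git clone git://git.proxmox.com/git/vzctl.git
cd vzctl
make deb      # or dpkg-buildpackage -rfakeroot -b -us -uc if there is no deb target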
It's not necessary to install upstart. There's a fix in vzctl 3.2 which is easily backported to the current Proxmox vzctl. See https://bugzilla.proxmox.com/show_bug.cgi?id=210
Seems like DRBD resources are started later than qemu-server. Try changing the startup order, e.g. like this, moving drbd to startup order 23 (qemu-server is 24 here):
update-rc.d -f drbd remove
update-rc.d drbd start 23 2 3 4 5 . stop 08 0 1 6 .
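You can check the resulting order afterwards (init script names assumed from a default install):
ls /etc/rc2.d/ | grep -E 'drbd|qemu-server'      # should now show S23drbd before S24qemu-server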
Best regards,
Bo
1. Yes, provided you use shared storage you only need to make sure you rsync the configuration files to a backup directory on the other server. So a little manual work is needed in case of a hardware breakdown on the primary node. For normal maintenance, however, you should be able to live migrate most machines...
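As an illustration, a nightly cron job along these lines would do it (PVE 1.x style config paths; the hostname and backup directory are made up):
rsync -a /etc/qemu-server/ root@backupnode:/root/pve-config-backup/qemu-server/
rsync -a /etc/vz/conf/ root@backupnode:/root/pve-config-backup/vz-conf/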
Hi,
We had a breakdown of a server last Friday due to a missing NFS share for backup. It's a rather big logfile, but it seems the lost connection to the NFS server has something to do with it.
I noticed this is similar to this thread...
Current Proxmox 1.7 allows creation of OpenVZ containers with IDs less than 100. When running vzdump on a container created with ID 15 I get the following error: "ERROR: got reserved VM ID 15".
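For reference, this is just running the backup directly against that ID (the option is only an example):
vzdump --suspend 15
# ERROR: got reserved VM ID 15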
Best regards,
Bo
Hi,
I have a bunch of NICs in my server and want to distribute the VMs across different NICs. However, all the VMs are on the same subnet, which seems to give me some trouble with routing.
The netmask is 255.255.224.0 (/19). If I add a machine to vmbr1 with e.g. 192.168.31.230 I am not able to ping...
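For context, the relevant part of /etc/network/interfaces looks roughly like this (the addresses are examples, the real box has more NICs):
auto vmbr0
iface vmbr0 inet static
        address 192.168.0.10
        netmask 255.255.224.0
        gateway 192.168.0.1
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0
auto vmbr1
iface vmbr1 inet static
        address 192.168.0.11
        netmask 255.255.224.0
        bridge_ports eth1
        bridge_stp off
        bridge_fd 0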
I went for the Myricom NICs so I don't remember much about the Dolphin adapters, sorry :(. Nor did I keep the binaries - they went away with a clean install of Proxmox after my testing :(.
What I have is a "DIS_DX_install_DIS_RELEASE_3_6_1_2_AUGUST_18_2010.sh" which probably was the last one...
Hi Udo,
I extracted the source tarball using "DIS_.......sh --get-tarball". After extracting that tarball you'll find a "build_deb" script located in "adm/bin/Linux_pkgs/". You will need the kernel headers to compile. Otherwise I think it was quite straightforward to sort out the other dependencies...
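In command form the steps were roughly like this - the tarball name and the header package are from memory, so adjust for your kernel:
sh DIS_DX_install_DIS_RELEASE_3_6_1_2_AUGUST_18_2010.sh --get-tarball
tar xzf DIS_*.tar.gz
apt-get install pve-headers-$(uname -r)     # kernel headers needed for the module build
cd DIS*/adm/bin/Linux_pkgs
./build_deb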
Latency is around the same as I get with 10GbE. However I'm able to get around 900 fsyncs/s.
Have you tried setting your sync rate lower? I don't know if it changes anything, but the manual recommends 1/3 of the bandwidth of the bottleneck of the system, which I guess is the disk system.
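For reference, with DRBD 8.3 the rate goes in the syncer section of the resource; for a 1 Gbit/s replication link the one-third rule gives roughly this (the value is just an illustration):
syncer {
        rate 33M;
}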
How is the...
Converted an old W2k server and ended up with the same problem. None of the solutions highlighted worked for me, but I found this page:
https://bugzilla.redhat.com/show_bug.cgi?id=479977
At some point it refers to...
That sounds correct. Almost the same scenario is explained here: http://pve.proxmox.com/wiki/DRBD#Recovery_from_communication_failure. As explained there you could actually have two DRBD devices - one for each server. This gives you a DRBD device for each server's VMs. Then you don't have one...
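A minimal sketch of that two-resource layout (hostnames, disks and addresses are made up):
resource r0 {
        protocol C;
        on node1 { device /dev/drbd0; disk /dev/sdb1; address 10.0.0.1:7788; meta-disk internal; }
        on node2 { device /dev/drbd0; disk /dev/sdb1; address 10.0.0.2:7788; meta-disk internal; }
}
resource r1 {
        protocol C;
        on node1 { device /dev/drbd1; disk /dev/sdb2; address 10.0.0.1:7789; meta-disk internal; }
        on node2 { device /dev/drbd1; disk /dev/sdb2; address 10.0.0.2:7789; meta-disk internal; }
}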
I'm also trying to learn DRBD. I have used http://www.drbd.org/users-guide/p-performance.html for tweaking performance. Take a look at the sections "Measuring throughput" and "Measuring latency". If you use LVM you probably want to make a separate partition for these tests in order not to...
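From memory, the guide's tests boil down to a couple of dd runs against the DRBD device - and they overwrite the data on it, hence the separate test partition/LV:
dd if=/dev/zero of=/dev/drbd0 bs=512M count=1 oflag=direct      # throughput: one big sequential write
dd if=/dev/zero of=/dev/drbd0 bs=512 count=1000 oflag=direct    # latency: many small synchronous writes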
Yes and no. If you run a synchronized setup (protocol C in DRBD) you will have to wait for the second node. However, if you are using a hardware RAID controller with a BBU you might not be hit that hard - although if you try to stream a large file I guess your limit will be the SATA drives...
Currently running 1.4/2.6.24 using both KVM and OpenVZ in production. I have not upgraded, but I'm testing 1.6/2.6.32 on some new hardware. I cannot compare performance-wise as the new hardware is faster. So far the only annoyances are missing live migration of OpenVZ (compared to 2.6.18) and missing...