i think, more or less, i followed that
i had some trouble understanding the master and backup names. these were used later in the setup
"handle_notify: our own notification, ignoring" sounds strange
here are my notes during the setup ... not clear and a little chaotic ;)
While each cluster...
thanks for that, we will do this. i think we hit something ... see attachment
it's the i/o wait of one of our linux vms. on 26.4. we updated, and on 2.5. the problems started
any idea why the space is growing in journal mode?
we had snapshot based replication running for about 50 days without problems.
then we put new ssds in the server and upgraded to pve 7.2.
one week later, our vms got high i/o wait values.
we didn't find anything and had to disable the snapshots.
maybe we hit the bug in:
pve-qemu-kvm=6.2.0-7...
hi rainerle ... we ran into the same problem, but did not fix it.
i saw that we forgot to enable trim in our kvm vms.
after a manual "fstrim -av" we got almost 1.5tb back.
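a minimal sketch of the trim setup inside a linux guest (assuming the virtual disk is attached with discard enabled in the vm config; this is what we did in spirit, not our exact commands):

```shell
# run trim once on all mounted filesystems that support discard
fstrim -av

# enable the periodic trim timer so this happens automatically (weekly by default)
systemctl enable --now fstrim.timer
```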
but we currently run into the problem that, with the journaling feature activated, most of our windows vms do not boot after...
another update ... it makes things even stranger
maybe, since we enabled journal based replication, we never stopped a vm
so, we cleaned up the pool config and removed all the journaling features ...
what should i say ... our vms are booting
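for reference, the cleanup can be sketched like this (pool and image names are placeholders; these are the standard ceph cli commands, not necessarily our exact ones):

```shell
# mirroring must be disabled on the image before the journaling feature
# can be dropped (pool/image names are placeholders)
rbd mirror image disable rbd/vm-100-disk-0
rbd feature disable rbd/vm-100-disk-0 journaling
```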
i think we ran into another problem.
but, we also...
hi ... my update
we tried today to clean up our second cluster, after we moved everything to our rbd mirror
we stopped some vms to make a final backup. some of these vms got stuck with a timeout.
after that, we installed the devel pve-qemu-kvm=6.2.0-8, but this did not help.
at the end, after many...
great to hear that ... thanks for that information
support told us the same yesterday
the answer came in german; translated:
After many attempts (e.g. downgrading pve-qemu-kvm), we switched over to the failover cluster via rbd journaling.
To which version did you downgrade...
some other question
do you all have these problems in combination with pbs?
i updated to 2.2 ... released yesterday :/
i wonder how close to the state of the art we are
hi, we ran into the same problem last night.
many vms stuck with qmp timeouts.
we have an rbd-mirror standby cluster with journal replication. this cluster is now online and has been working up to now.
we have one vm with the same problem.
should we test the new pve-qemu-kvm version?
must we change...
ok, answering myself ... found a solution
you can run multiple rbd-mirror daemons
one is the leader and the other can take over
there is nothing special to do
maybe i was thinking too complex
monitoring will be special, but that's for the future
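a rough sketch of bringing up a second rbd-mirror daemon on another node, assuming a package-based install (the client id and node name are examples; cephadm setups look different):

```shell
# on the second node
apt install rbd-mirror

# create a dedicated cephx user for the daemon (id is an example)
ceph auth get-or-create client.rbd-mirror.node2 \
    mon 'profile rbd-mirror' osd 'profile rbd' \
    -o /etc/ceph/ceph.client.rbd-mirror.node2.keyring

# start the daemon; the systemd instance name matches the client id
systemctl enable --now ceph-rbd-mirror@rbd-mirror.node2
```

the daemons coordinate leadership among themselves, which matches the "nothing special to do" observation above.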
hi,
we are trying to set up snapshot based replication between 2 ceph clusters.
journal based did not work, because it was too slow and ate up the free space.
my question: is it possible to run the rbd-mirror process on more than one node?
currently we have one process on the first node. can we setup...
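in case it helps, snapshot based mirroring for a pool can be sketched like this (pool/image names and the 30m interval are placeholders, not our actual values):

```shell
# enable per-image mirroring on the pool
rbd mirror pool enable rbd image

# switch an image to snapshot-based mirroring
rbd mirror image enable rbd/vm-100-disk-0 snapshot

# schedule mirror snapshots every 30 minutes for the whole pool
rbd mirror snapshot schedule add --pool rbd 30m
```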
@Whatever ... hi, i ran into the same problem.
the usage grew to 5tb within 10 days after enabling the journaling flag.
did you solve the problem?
kind regards,
ronny
hi,
i need some help, because i did not find any solution (or maybe i'm too blind).
we have a proxmox 6.x cluster with ceph and 38tb of data.
now we set up a backup cluster and configured rbd-mirror in journaling mode.
everything worked fine; after we enabled journaling, the replication started.
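roughly what enabling journaling mode looks like, sketched with placeholder names (our exact commands may have differed):

```shell
# enable pool-mode (journal-based) mirroring
rbd mirror pool enable rbd pool

# journaling requires the exclusive-lock and journaling image features
rbd feature enable rbd/vm-100-disk-0 exclusive-lock journaling
```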
but...
hi,
for the next plan, we are trying to find the best storage solution.
i read a lot, but could not find the right information.
we will need multiple 2tb ssds.
i found a lot about the samsung sm863a, and now the pm863a or sm863 ... and now i'm confused. :)
what ssds are you using in your live cluster...
hi and thank you,
so you edit the ceph.conf manually?
is there no other configuration needed for the osds?
i thought i had to configure each osd with an ip address.
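if i understand it right, the osds pick up their networks from ceph.conf instead of per-osd ip settings ... something like this (subnets are placeholders, and i'm assuming the public/cluster network options are what's meant here):

```ini
[global]
public_network  = 10.0.0.0/24
cluster_network = 10.0.1.0/24
```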
ronny