This is very likely a performance problem. I am seeing this occasionally myself on systems with HDDs. If you are replicating bidirectionally, it might help to prevent overlaps: if server A replicates to B at the same time B replicates to A, it is better to keep the two jobs separated in time. After I did this, I get the...
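To stagger the jobs, you can give each direction its own calendar-event schedule so they never run at the same minute. A minimal sketch of what the entries in `/etc/pve/replication.cfg` could look like (VM IDs, node names, and the exact schedules here are just placeholder assumptions, not taken from your setup):

```
# A -> B: runs at minute 0, then every 30 minutes (0:00, 0:30, ...)
local: 100-0
	target nodeB
	schedule 0/30

# B -> A: offset by 15 minutes (0:15, 0:45, ...), so the jobs never overlap
local: 200-0
	target nodeA
	schedule 15/30
```

The same schedules can also be set from the GUI or with `pvesr update <jobid> --schedule '...'`; the point is only that the two directions use offset time slots.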
Thanks for the hint about proxy_arp. I have added this to the configuration. About the IPs: sorry for the confusion, these are not the real IPs, as mentioned above the config files, but I agree, the replacement addresses were poorly chosen. I will edit the configs above to reflect this.
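For reference, proxy_arp can be enabled persistently via sysctl; a minimal sketch (the interface name `vmbr0` is an assumption, substitute the bridge that holds the failover IP):

```
# /etc/sysctl.d/99-proxy-arp.conf
# Answer ARP requests on behalf of hosts behind this bridge
net.ipv4.conf.vmbr0.proxy_arp = 1
```

Apply it without a reboot using `sysctl -p /etc/sysctl.d/99-proxy-arp.conf`.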
Thanks for...
Hi Wolfgang,
thanks a lot for helping out! Meanwhile I have resorted to a much simpler configuration in which I created a new vmbr interface for the failover IP and configured rinetd to forward ports 80 and 443 to the container VM (instead of iptables forwarding, because it is only about a dozen...
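In case it helps anyone else, the rinetd part is just two lines in its config file. A sketch, assuming the failover IP is 203.0.113.10 and the container's internal address is 192.168.100.2 (both placeholders, not the real addresses):

```
# /etc/rinetd.conf
# bindaddress  bindport  connectaddress  connectport
203.0.113.10   80        192.168.100.2   80
203.0.113.10   443       192.168.100.2   443
```

After editing, restart the service (`systemctl restart rinetd`) for the new forwardings to take effect.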
Hi everybody,
I am totally lost with a problem that has been bugging me for a couple of days. Despite reading numerous posts and sites, I wasn't able to solve it. (Sorry in advance if this was answered here somewhere and I just didn't get it.)
We have a small two-node Proxmox cluster running Proxmox 6.2...
Yes, of course. As the error message indicates, there are leftovers from the broken replication process. I wasn't sure if I could delete those directly from the filesystem without messing up Proxmox, but as it turns out, it works.
I deleted the remains on the target server with:
zfs unmount...
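For anyone hitting the same situation, the cleanup generally amounts to finding the stale dataset and its replication snapshots on the target and destroying them. A rough sketch, where the pool and dataset names are purely hypothetical placeholders for whatever `zfs list` shows on your target node:

```
# List all datasets and snapshots to spot the leftovers of the failed job
zfs list -t all -r rpool/data

# Unmount the stale replica dataset, then destroy it together with its snapshots
zfs unmount rpool/data/subvol-101-disk-0
zfs destroy -r rpool/data/subvol-101-disk-0
```

Double-check with `zfs list` that you are destroying the orphaned replica on the *target* and not the live dataset on the source node before running `zfs destroy`.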
Never mind, I decided to be brave and try out what I thought should be the proper solution, and I seem to be lucky today :)
Replication and migration are working again.
Anyway, thanks for listening.
Hi,
I am facing a problem after the replication between two nodes hung. I am now seeing the following error when trying to migrate the two containers that were being replicated when the replication stopped working. Please bear with me, I am a long-time Linux user but new to ZFS. Can you...
Hi,
please forgive me if the following question was answered already and I just didn't get it when reading all the posts.
What I have is a small 2-node cluster without HA.
There are different suggestions for ensuring that the remaining node keeps running in case one node goes down.
Mainly...