I just came across this problem too.
So just to get this right: if we want to back up twice, e.g. once to the local datacenter and once to a remote datacenter, we lose all the goodness of incremental backups and are effectively doing full backups every time?
On my small cluster I also see a huge difference between PBS and NFS backup storage with Ceph.
This is a 25GB VM
To PBS - Over two and a half hours:
Same VM to NFS using a regular backup, no PBS - Just over three minutes:
Yes, no problem. In my case it was because I had just added the nodes to the cluster, which meant every storage was showing on every server. What I had to do was simply go into each storage and edit it to restrict the storage to its own server.
I hope this helps, I assumed everyone else knew this and...
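For reference, the same restriction can be made directly in /etc/pve/storage.cfg on the cluster. A minimal sketch, where the storage name `local-zfs`, the pool, and the node name `pve1` are made-up examples, not taken from the post above:

```
zfspool: local-zfs
        pool rpool/data
        content images,rootdir
        nodes pve1
```

The `nodes` line is what the GUI edit sets; `pvesm set local-zfs --nodes pve1` should achieve the same from the shell.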
When I try this I get:
zfs error: cannot open 'rpool': no such pool
2020-09-27 16:15:47 ERROR: Failed to sync data - could not activate storage 'ZFS', zfs error: cannot open 'rpool': no such pool
2020-09-27 16:15:47 aborting phase 1 - cleanup resources
2020-09-27 16:15:47 ERROR: migration...
My issue with the SQL was actually related to me upgrading from wheezy to jessie to try and fix the issue. The my.cnf file was the problem, so it actually had nothing to do with Proxmox.
My other two issues were networking and a kernel panic relating to the SCSI driver.
My networking issue was fixed...
I can't remember where those keys are; it's a long time since I have done this stuff, but you should be able to generate them manually. You haven't tried to cluster them, have you?
Look in the syslog; there should be some indication. You could also try to start the services manually. OVH itself is fine: I've set up a couple of servers there with 6 on just this past weekend.
I've got myself into a bit of a pickle.
I had a 3-node Ceph cluster running for a good couple of years with no problem. I'm now upgrading to 6. I was going to switch to ZFS but I have changed my mind, though I could be open to persuasion. I was going to reduce my node count in the DC to reduce power...
I am having problems with ZFS; all sorts of things are going wrong, MySQL failing with too many open files, etc. I have had to migrate this VM to a box I just rented and installed 5.4 on. This looks bad, though, as 5.4 is going EOL.
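For the "too many open files" error specifically, one common fix (assuming MySQL runs under systemd inside the guest; the limit value and override path here are illustrative, not from the original post) is to raise the service's file-descriptor limit with a drop-in:

```
# /etc/systemd/system/mysql.service.d/override.conf  (example path)
[Service]
LimitNOFILE=65535
```

After `systemctl daemon-reload` and a service restart, MySQL can then raise `open_files_limit` in my.cnf up to that ceiling. Whether this applies depends on why ZFS pushed the descriptor count up in the first place.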
Is anyone experiencing problems with older VMs?
I've only just installed 6.2 and started migrating VMs, but a Debian Jessie guest is constantly crashing with a kernel panic relating to some SCSI thing or other, and I now have an Ubuntu 8.04 VM that almost immediately after boot-up loses...
I have had a setup similar to this for about three years now without problems. I barely remember setting it up, but I know the corosync was redundant: we have a 1Gb connection through a regular switch and then a couple of Intel X520-DA2 cards in each node, directly attached.
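A redundant corosync layout like that can be expressed with two knet links in corosync.conf (corosync 3 style). A minimal sketch for one node; the cluster name, node name, and addresses are made up for illustration:

```
totem {
  version: 2
  cluster_name: mycluster
  transport: knet
}

nodelist {
  node {
    name: node1
    nodeid: 1
    ring0_addr: 10.0.0.1      # direct-attached X520 link
    ring1_addr: 192.168.1.1   # 1Gb link via the regular switch
  }
}
```

Corosync will fail over between the links automatically; link preference can additionally be tuned with knet link priorities if you want the direct-attached path favoured.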