This is all getting a bit over my head, but what I can tell you is that I used an old spare motherboard (https://www.asrockrack.com/general/productdetail.asp?Model=D1541D4U-2T8R#Specifications), 64GB of ECC RAM, and 4 x Kingston DC600M Series 3.84TB drives in a ZFS striped mirror, as recommended by a proxmox...
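In case anyone wants to copy the layout, this is roughly what the pool creation looks like for a striped mirror (two mirrored pairs, so RAID10-style). The pool name and device paths below are placeholders, check yours with lsblk first (in practice /dev/disk/by-id paths are safer):

# Striped mirror across four SSDs - pool name and devices are examples only
zpool create -o ashift=12 tank mirror /dev/sdb /dev/sdc mirror /dev/sdd /dev/sde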
Thanks for all your replies.
This is for a PBS server. The bottleneck is definitely not the SATA interface. I have been migrating some VMs between various datacenters, and this is where the slowness of the restores came to light.
It's certainly PBS-related, as to speed things up I started...
Tomorrow I have 4 x DC600M 3.84TB disks arriving. The whole reason I am getting these is the very slow restores I was seeing when using HDDs.
The question now is: do I use hardware RAID or ZFS? I have a hardware RAID card with a BBU and also LSI HBAs, so I can go with either.
Also, in terms of RAID level, I...
I just came across this problem too.
So just to get this right: if we want to back up twice, e.g. once to the local datacenter and once to a remote datacenter, we lose all the goodness of incrementals etc. and are effectively doing full backups every time?
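For anyone finding this later: the workaround I've seen suggested is to back up once locally and let the two PBS instances replicate between themselves with a sync job, which keeps the incremental/dedup behaviour. A rough sketch, run on the remote-datacenter PBS, with all the names made up:

# 'primary' must already be configured as a remote; store names are placeholders
proxmox-backup-manager sync-job create pull-from-primary --store offsite-copy --remote primary --remote-store main --schedule daily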
On my small cluster I also see a huge difference between PBS and NFS backup storage with Ceph.
This is a 25GB VM.
To PBS - over two and a half hours:
The same VM to NFS using a regular backup, no PBS - just over three minutes:
Yes, no problem. With me it was that I had just added the nodes to the cluster, which meant every storage was showing on every server. What I had to do was simply go into each storage and edit it to restrict it to its own server.
I hope this helps; I assumed everyone else knew this and...
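For the CLI-inclined, the same restriction is a one-liner; the storage and node names here are just examples:

# Only offer storage 'local-zfs' on node 'pve1'
pvesm set local-zfs --nodes pve1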
When I try this I get:
zfs error: cannot open 'rpool': no such pool
2020-09-27 16:15:47 ERROR: Failed to sync data - could not activate storage 'ZFS', zfs error: cannot open 'rpool': no such pool
2020-09-27 16:15:47 aborting phase 1 - cleanup resources
2020-09-27 16:15:47 ERROR: migration...
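If anyone else hits this, what I'd check first on the target node is whether the pool is actually imported there (using 'rpool' as in the error message):

# Pools currently imported on this node
zpool list
# Pools visible on disk but not yet imported
zpool import
# Import it if it shows up above
zpool import rpool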
My SQL issue was actually related to me upgrading from wheezy to jessie to try and fix it. The my.cnf file was the problem, so it actually had nothing to do with Proxmox.
My other two issues were networking and a kernel panic relating to the SCSI driver.
My networking issue was fixed...
I can't work out where those keys are; it's been a long time since I have done this stuff, but you should be able to generate them manually. You haven't tried to cluster them, have you?
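By manually generating I mean something along these lines; this assumes it's the SSH host keys that have gone missing, which is a guess on my part:

# Recreate any missing SSH host keys in /etc/ssh
ssh-keygen -A
# On a Proxmox node this also refreshes the node certificates
pvecm updatecerts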
Look in the syslog; there should be some indication. You could also try starting the services manually. OVH itself is fine - I set up a couple of servers with 6 on them just this past weekend.
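Roughly what I mean, using the standard PVE service names:

# Look for clues in the logs
journalctl -xe
# Try bringing the core services up by hand and check their status
systemctl start pve-cluster pvedaemon pveproxy
systemctl status pve-cluster pvedaemon pveproxy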
I've got myself into a bit of a pickle.
I had a 3-node Ceph cluster running for a good couple of years with no problems. I'm now upgrading to 6. I was going to switch to ZFS but I have changed my mind, though I could be open to persuasion. I was also going to reduce my node count in the DC to reduce power...