That's what I was referring to: https://forum.proxmox.com/threads/proxmoxve-ceph-clock-issue.20684/#post-153248
I followed the instructions and have had no issues since; at the time there were many other users claiming that systemd was not yet reliable for timekeeping.
Had the same issue as described there...
Right, systemd's time sync is used by default and was not working right, at least on 4.x, where I had clock skew. I had to disable it and use ntpd instead, and it has been working with no issues for the last year or so. Many users complained about it; it also came up as an issue while live-migrating - kind of...
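For anyone finding this later, the workaround was roughly the following (from memory, on PVE 4.x / Debian Jessie, so package and service names may differ slightly on your version):

```
# stop and disable the systemd time sync service
systemctl stop systemd-timesyncd
systemctl disable systemd-timesyncd

# install and enable classic ntpd instead
apt-get install ntp
systemctl enable ntp
systemctl start ntp

# verify the daemon is actually syncing against its peers
ntpq -p
```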
On PVE 5.1 with Ceph, does this issue still persist? Do we still need to switch to an NTP server and disable what I think was the ntpd that came with the system, to prevent clock skew from happening?
Thank you
No :-) I was not planning to install Ceph on the Synology - I was simply wondering if it can be used, and if anybody uses it for VM storage via an NFS mount or iSCSI.
Did anybody do this? We are looking for a backup option for our data center. I see they have VMware, Hyper-V and Citrix options but not KVM. I was wondering if it would work if we converted a hard drive to the VMDK format.
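To be concrete, what I had in mind is something along these lines with qemu-img (the file names are just examples, and I have not tested the result against the Synology tools yet):

```
# convert a KVM disk image (qcow2 or raw) to VMDK
qemu-img convert -p -f qcow2 -O vmdk vm-100-disk-1.qcow2 vm-100-disk-1.vmdk

# check the converted image
qemu-img info vm-100-disk-1.vmdk
```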
Thank you
I was wondering about using it for VM storage via an NFS mount or iSCSI instead of Ceph or any other storage. I know that Ceph is relatively free (you need to have somebody who knows how to set it up), scales better, and has some features that a Synology NAS simply does not have, but with 10Gb...
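For the NFS route I was picturing something as simple as this on the PVE side (storage name, IP and export path are just examples):

```
# register a Synology NFS export as a PVE storage for disk images
pvesm add nfs synology-nfs --server 192.168.1.50 --export /volume1/proxmox --content images

# check that the storage is online
pvesm status
```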
Thanks this is really helpful!
Denny, the NUC is much more expensive but also more powerful than the Odroid - are there any advantages there over going with the $55 Odroid just for quorum? The model you linked is around $299.
Thank you again.
What are you using for a 3rd node in situations where a client can only afford two "real" servers? I need something cheap to offer.
I have read that some people are using a Raspberry Pi; is it working OK? Just for the quorum.
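For what it's worth, what I was picturing is something like the corosync QDevice mechanism (as far as I know this is only available on newer PVE releases, and the IP below is just a placeholder):

```
# on the Pi / small box: run the external vote daemon
apt install corosync-qnetd

# on every PVE cluster node
apt install corosync-qdevice

# from one PVE node: register the Pi as the quorum device
pvecm qdevice setup <PI-IP>
```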
Thank you for any advice.
Thank you for sharing, looks promising. I will keep an eye on it (or two eyes...)
Did you try it in a test environment? ...or when you say it is not yet complete, does that mean it simply does not work yet?
We are currently using Ceph with our PVE and everything works great. However, is there / is anybody using an HA solution that would work immediately and keep mirroring the VMs in real time (DRBD), so that once one VM is down the other one kicks in, preferably preserving all sessions?
I think it can...
I understand, but let me be clear: I am not updating to Jewel. I just want to update my PVE cluster, and I am asking about the client packages' compatibility with Ceph Hammer. I am running Ceph Hammer on a separate cluster and I am not updating it yet.
Thank you
I am preparing to update my PVE cluster from 4.3.71 to the newest version, but I have Ceph storage (a separate cluster) based on Hammer. I see that there are 2 Ceph packages updated; are they backward compatible? Does anybody know? ...or if I update, do I need to update my Ceph cluster to Jewel?
I am...
So it turns out I had a problem with one OSD which was down/out, and nothing I did would bring it back up. I just removed it and successfully re-added it, and it is working fine. During this process, since I had moved all the VMs to a different storage, I rebooted all the nodes, and the RAM and...
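For the record, the remove/re-add was roughly this (the OSD id and device are from my setup, adjust to yours):

```
# take the broken OSD out and remove it from the cluster (example id 3)
ceph osd out 3
systemctl stop ceph-osd@3
ceph osd crush remove osd.3
ceph auth del osd.3
ceph osd rm 3

# re-create the OSD on the same disk (example device)
pveceph createosd /dev/sdd
```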
I have 3 nodes dedicated to storage, with exactly the same hardware spec, set up in the cluster with the same hard drive models for the Ceph OSDs, everything the same. The PVE and Ceph clusters work perfectly fine (a little bit of IO wait, but I was told it is normal). All the VMs work with no problems...