The problem was the SSL check that you can't disable. Why WHMCS introduced a feature like that without letting users turn it off is just incredible. We worked around it by changing the product definitions so that the "Product Type" is set to "Other". Apparently it doesn't try to check the SSL cert...
Just following up here, as there's been some movement on this issue. As mentioned in the bugzilla thread by Tim Marx:
We've rolled our own solution for backups that doesn't use NFS or CIFS so it's not a burning issue for us anymore. But I thought others following this thread would be...
Yes. We saw the problem on 2 of our 5 nodes when upgrading. We opened a support case for it, but we'd forgotten to enable journal persistence on one of the nodes, and that was the second one to have the issue. As we couldn't provide any further details, we closed the case. It's great that you could...
@rahul1985joshi I forgot to mention that Ceph will want 4GB of RAM per OSD for its caching etc., so in the setup outlined above you'd need 292GB RAM per node. I didn't realise that when we started looking at Ceph, but luckily we had a lot of RAM headroom so it wasn't an issue for us.
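To make that arithmetic concrete, here's a back-of-envelope sketch. Note the per-node OSD count is just my inference from 292GB / 4GB, not a quoted spec from the setup above:

```python
# Back-of-envelope Ceph OSD RAM sizing. The 4 GiB/OSD figure is the rule of
# thumb from the post above; the OSD count is hypothetical (292 GiB / 4 GiB
# implies ~73 OSDs per node in the setup being discussed).
GIB_PER_OSD = 4

def osd_ram_gib(osds_per_node: int) -> int:
    """RAM the OSD daemons alone will want on one node, in GiB."""
    return osds_per_node * GIB_PER_OSD

print(f"73 OSDs -> ~{osd_ram_gib(73)} GiB RAM per node")  # ~292 GiB
```

And that's on top of whatever the VMs themselves need, which is why the headroom mattered.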
David
...
Happy to help. I'm pretty new to Proxmox but have been running hosting platforms for a very long time, so ping me back if I can help any further.
David
...
Are you running shared storage (Ceph or NFS or something)? Or are you using local storage and expecting the VM migration to physically copy all that data between the cluster nodes?
You've given no details about the workloads that'll be running on the VMs, their IOPS requirements, CPU loading, or availability expectations, so anything suggested here is a pure guess. But here's something to start with.
You'll need 3 x your storage requirements + 20% as a bare...
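To put a rough number on that rule of thumb, here's a sketch. It assumes the "3x" is Ceph's default size=3 replication and treats the 20% as free-space headroom, which is my reading of the truncated line above:

```python
# Rough raw-capacity estimate for the rule of thumb above: 3x replication
# plus 20% headroom. These figures come from the post, not a general formula.
REPLICAS = 3
HEADROOM = 0.20

def raw_tb_needed(usable_tb: float) -> float:
    """Raw disk to provision for a given usable capacity, in TB."""
    return usable_tb * REPLICAS * (1 + HEADROOM)

print(f"10 TB usable -> ~{raw_tb_needed(10):.0f} TB raw")  # ~36 TB
```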
Hi
I've just experienced this as well. The node is a new one we're using just for Ceph exports (i.e. it isn't running any VMs). It was running the "public / free" code from a few weeks ago under kernel 5.0.15-1-pve. I upgraded it to current from the enterprise repo.
From the console (and in...
There's been no activity on bugzilla. I think it's going to take some time to resolve this one.
We'd rather not use a fileserver for writing the backups anyway, due to the potential for hard lockups if the NFS/CIFS target goes away. We decided to run up another node in the cluster just for...
Using 7.9.1 (or 7.8.3) with the latest version of the module, there are significant delays rendering pages. When you hit a page that uses the module there's a delay of around 12 seconds before it calls the Proxmox APIs. It's taken us almost two weeks to get their support people to even start looking...
Hi
This may be a little off topic, but we can't use Proxmox if it doesn't integrate with a support portal / billing system.
Is there any alternative to the WHMCS module offered by Modules Garden? We've been trying to work through problems with their product for weeks now, and their technical...
Bingo! Thanks wolfgang. That was it. The mgr directory was empty but the mon directory still had an entry on the new node. I removed that and the UI is now happy.
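For anyone else who hits this, here's roughly the check that found it. A minimal sketch only: it assumes the standard /var/lib/ceph layout and just reports leftovers; cross-check against `ceph mon dump` before removing anything.

```python
# List leftover monitor / manager state directories on a node that
# shouldn't be running either daemon. Report-only: deletion is left to you
# after verifying against the cluster's monmap (`ceph mon dump`).
from pathlib import Path

for daemon in ("mon", "mgr"):
    base = Path("/var/lib/ceph") / daemon
    entries = [p.name for p in base.iterdir()] if base.is_dir() else []
    print(f"{daemon}: {entries or 'clean'}")
```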
Thanks
David
...
Hi
I added a new node to our cluster. This node will run Ceph but won't run a monitor or manager or have any OSDs (it's just a 'client' so we can export Ceph volumes to local storage). When installing Ceph and adding it to the cluster, it came up with a monitor. I stopped and destroyed the...
@Alwin There's a problem writing backups to NFS targets that causes guests to become unresponsive. It doesn't occur when writing to the same remote server over CIFS rather than NFS. It's being tracked in bugzilla at
Bug 2554 - Guest slow down while backup to NFS
I'm also...
Thanks @t.lamprecht, I'll run this up in the next day or two. For our purposes it's more about separating the "backup" node from the "compute & storage" nodes, so that an issue with one doesn't impact the other. Thanks for your feedback and the quick response.
Hi
I think this will work fine, but thought I'd ask here before I buy another license to try it out. Can I have a node that is part of the Ceph cluster but doesn't have any OSDs? I want a cluster node that is just a 'Ceph client'. It would not run VMs; it'd just be used for rbd...
Hey @spirit, my comment related to the "bringing his vm guests to their knees" part, which is the problem we're seeing with backups to NFS shares. CPU in the VMs goes through the roof and they drop off the network while they're being backed up.
With all the trouble we've been seeing backing up to...