Yes — since you selected the wrong server, when the client connects via their client area the module creates an account for them in Proxmox on the selected node, scoped to the VM ID specified, which is what gave them access.
This is not a fault in Proxmox; it is simply how the WHMCS module works — different modules handle this in different ways. Hope that makes sense.
From what I have seen and used, most modules, on first contact from a client, use the root login to create a user with the PVEVMUser permissions for that VM, any...
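As a rough sketch of what such a module typically does behind the scenes — the user name, realm, and VM ID below are placeholders, and the exact commands depend on whether the module drives the API or the CLI:

```shell
# Hypothetical provisioning steps a WHMCS module might run via the root login.
# "client123@pve" and VM 100 are illustrative, not from the module itself.
pveum useradd client123@pve --password 'changeme'
# Grant the PVEVMUser role on that one VM only, so the client can
# start/stop/console their own VM but nothing else on the node.
pveum aclmod /vms/100 --users client123@pve --roles PVEVMUser
```

This is why selecting the wrong server matters: the ACL is created on whatever node/VM ID the product is configured with.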
You need to expose these as separate disks; you may be able to do that depending on how the NVMe is attached.
Also, the default journal size is 5 GB, so you would need at least that much space.
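For reference, the journal size is set in ceph.conf (value in MB); a minimal fragment, assuming the stock default of this CEPH era:

```
[osd]
; 5120 MB = 5 GB, the shipped default; raise it if your journal
; partition is larger, but never set it below the partition size.
osd journal size = 5120
```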
Just a heads up: I have tried the latest CEPH release (10.2.*) and it has the same issue via both the GUI and the Proxmox CLI; I had to create the OSD manually using the CEPH commands.
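The post doesn't show the exact commands used; a typical manual OSD creation on a 10.2.* (Jewel) node would look something like this — device names are placeholders:

```shell
# Hypothetical device names; run on the OSD node as root.
# Prepare the data disk (journal is co-located here; pass a second
# device argument to put the journal elsewhere).
ceph-disk prepare --cluster ceph --fs-type xfs /dev/sdb
# Activate the freshly prepared data partition.
ceph-disk activate /dev/sdb1
```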
I am changing the monitors within my CEPH cluster.
The update on the CEPH side is done and all fine; I just need to update Proxmox, which I am looking to do by simply editing /etc/pve/storage.cfg.
1/ Is this the correct method?
2/ Will the KVM KRBD mounts pick-up this change automatically or will I need...
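For context, the monitor addresses live in the RBD entry of /etc/pve/storage.cfg; a sketch with placeholder storage ID, pool, and IPs — updating the monitors would mean editing the monhost line:

```
rbd: ceph-vm
    monhost 10.0.0.1 10.0.0.2 10.0.0.3
    pool rbd
    content images
    username admin
    krbd 1
```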
Fixed!
What I had to do was kill/stop the service on every node, run pmxcfs -f on each node, and then leave everything for a few minutes to sync and clear the backlog.
After Ctrl+C'ing the pmxcfs process, the service then starts fine; it seems the backlog was too big to catch up while doing the...
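The recovery steps above, as a sketch to run on each node (service and binary names as on stock Proxmox):

```shell
systemctl stop pve-cluster    # stop the service; kill pmxcfs if it lingers
pmxcfs -f                     # run the cluster filesystem in the foreground
# ...wait a few minutes for it to sync and clear the backlog, then Ctrl+C...
systemctl start pve-cluster   # the service should now start cleanly
```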
I let it run for a while and it produced no further output, so I exited, tried to start the cluster, and got the same error message.
I have restarted the command again; the only thing I am wondering is whether it's trying to cross-sync with some of the other servers where /etc/pve is offline...
Thanks! I will try that. Will it confirm once the sync is completed? And as there are a couple of servers with /etc/pve down, will it sync from the few that have pve-cluster running?
Last output currently is "[libqb] info: server name: pve2"
Just the grep command itself:
ps faxl | grep pmxcfs
0 0 12981 11801 20 0 12728 1852 pipe_w S+ pts/0 0:00 \_ grep pmxcfs
If I am reading the status output right, and from what df -h shows while the start command is hanging, it does start and mount /etc/pve at "notice...
I'm in a better situation than I was at the start: nodes that have pve-cluster started accept cluster CLI commands and can list all the nodes communicating via corosync.
However, on nodes that don't have pve-cluster started, no matter how many restart commands, after a period the start command...
So corosync is now running fine on every node; below is an example output.
service corosync status
● corosync.service - Corosync Cluster Engine
Loaded: loaded (/lib/systemd/system/corosync.service; enabled)
Active: active (running) since Tue 2017-04-11 10:57:20 BST; 12min ago
Process...
It is still not running; I have just run it again and get the following:
service pve-cluster restart
Job for pve-cluster.service failed. See 'systemctl status pve-cluster.service' and 'journalctl -xn' for details.
root@sn7:/# ^C
root@sn7:/# systemctl status pve-cluster.service
●...
On the first node corosync restarted fine.
pve-cluster hung for a while and then failed to restart with the following output:
service pve-cluster restart
Job for pve-cluster.service failed. See 'systemctl status pve-cluster.service' and 'journalctl -xn' for details.
root@sn7:~# ^C...
Hello,
I had an issue that broke the Proxmox cluster, caused by an extended period of network problems on the cluster communications network.
I have brought all VMs online on a new Proxmox cluster; however, the old broken cluster still has the CEPH cluster attached to it, and this is running...