Search results

  1. Install Ceph Server on Proxmox VE (Video tutorial)

    It can easily be done via the CLI and then used and viewed via the GUI with no issues.
  2. Critical security bug in NoVNC console

    One thing you can do outside of the module is either use a cluster or avoid duplicating VM IDs across the servers.
  3. Critical security bug in NoVNC console

    Yes, as you selected the wrong server: when the client connects via their client area, the module will create them an account in Proxmox on the selected node for the VM ID specified, which would then have given them access. This is not Proxmox's fault but how the module works; hope that makes sense.
  4. Critical security bug in NoVNC console

    This will be down to how the WHMCS module works and not Proxmox at fault at all; depending on the module you use, they work in different ways. But with most that I have seen & used, upon first contact from a client it will use the root login to create a user with the PVEVMUser perms for that VM, any...
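
    As an illustration of that flow, such a module could scope a customer to a single VM roughly like this (a sketch only; the user name and VM ID 100 are hypothetical, not from the post):

      # create the customer account in the pve realm (name is an example)
      pveum useradd customer1@pve -comment "module-created user"
      pveum passwd customer1@pve
      # grant the built-in PVEVMUser role on VM 100 only
      pveum aclmod /vms/100 -user customer1@pve -role PVEVMUser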
  5. Ceph Journal on SSD

    You'd need to expose these as separate disks, which you may be able to do depending on how the NVMe is attached. Also, the default journal size is 5GB, so the journal would need to be at least that size.
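
    For reference, the journal size is set in ceph.conf and expressed in MB; a minimal sketch matching that 5GB filestore default:

      [osd]
      # filestore journal size in MB (5120 MB = 5 GB is the default)
      osd journal size = 5120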
  6. New CEPH OSD

    Just a heads up: I have tried the latest CEPH release 10.2.* and it is the same issue via the GUI / Proxmox CLI; I had to manually create the OSD via the CEPH commands.
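
    The manual route on that release would look roughly like this (a sketch; /dev/sdb as the data disk and /dev/sdc as the journal device are assumed examples):

      # prepare the data disk, placing the journal on the second device
      ceph-disk prepare /dev/sdb /dev/sdc
      # activate the new data partition so the OSD joins the cluster
      ceph-disk activate /dev/sdb1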
  7. Trim virtual drives

    I can see you're using virtio-scsi due to "scsi0". How big was the qcow file before you ran the fstrim command?
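
    One way to compare sizes before and after (a sketch; the image path is an example, and the virtual disk needs discard enabled for the trim to shrink the file):

      # on the host: allocated size of the image
      qemu-img info /var/lib/vz/images/100/vm-100-disk-1.qcow2
      # inside the guest: discard unused blocks and report the amount trimmed
      fstrim -v /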
  8. CEPH Update Monitors

    I am changing the monitors within my CEPH cluster. I have updated this on the CEPH side, which is all fine; I just need to update it in Proxmox, which I am looking to do by editing /etc/pve/storage.cfg. 1/ Is this the correct method? 2/ Will the KVM KRBD mounts pick up this change automatically or will I need...
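
    For reference, the monitor addresses live in the storage definition's monhost line; a sketch of such a stanza with example IDs and addresses:

      rbd: ceph-vm
          monhost 10.0.0.1 10.0.0.2 10.0.0.3
          pool rbd
          content images
          username admin
          krbd 1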
  9. Corosync/Cluster + CEPH Broken

    Fixed! What I had to do was kill/stop the service on every node, run pmxcfs -f on each node, and then leave everything for a few minutes to sync and clear the backlog. After Ctrl+C'ing pmxcfs, the service then starts fine; it seems there was too big a backlog to catch up on while doing the...
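
    Spelled out as commands, that recovery sequence would look roughly like this on each node (a sketch of what the post describes):

      # stop the cluster filesystem service
      systemctl stop pve-cluster
      # run pmxcfs in the foreground and leave it to sync for a few minutes
      pmxcfs -f
      # ...then Ctrl+C the foreground process and start the service normally
      systemctl start pve-cluster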
  10. Corosync/Cluster + CEPH Broken

    I let it run for a while and it didn't produce any further output; I came out and tried to start the cluster and got the same error message. I have restarted the command again; the only thing I am wondering is whether it's trying to cross-sync with some of the other servers where /etc/pve/ is offline...
  11. Corosync/Cluster + CEPH Broken

    Thanks! I will try that. Will it confirm once the sync is completed? As there are a couple of servers with /etc/pve down, will it sync from the few that have pve-cluster running? The last output currently is "[libqb] info: server name: pve2"
  12. Corosync/Cluster + CEPH Broken

    Just the grep command itself: ps faxl | grep pmxcfs returns only 0 0 12981 11801 20 0 12728 1852 pipe_w S+ pts/0 0:00 \_ grep pmxcfs. If I am reading the status output right, and from what df -h shows while the start command is hanging, it does start and mount /etc/pve at "notice...
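
    A quick way to confirm whether pmxcfs has /etc/pve mounted (a general check, not from the post; a fuse filesystem here means pmxcfs is up):

      df -h /etc/pve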
  13. Corosync/Cluster + CEPH Broken

    I am in a better situation than I was at the start: nodes that have pve-cluster started accept cluster CLI commands and can list all the nodes communicating via corosync. However, for nodes that don't have pve-cluster started, no matter how many restart commands, after a period the start command...
  14. Corosync/Cluster + CEPH Broken

    So corosync is running fine now on every node; below is an example output: service corosync status ● corosync.service - Corosync Cluster Engine Loaded: loaded (/lib/systemd/system/corosync.service; enabled) Active: active (running) since Tue 2017-04-11 10:57:20 BST; 12min ago Process...
  15. Corosync/Cluster + CEPH Broken

    It is not still running. I have just run it again and get the following: service pve-cluster restart Job for pve-cluster.service failed. See 'systemctl status pve-cluster.service' and 'journalctl -xn' for details. root@sn7:/# ^C root@sn7:/# systemctl status pve-cluster.service ●...
  16. Corosync/Cluster + CEPH Broken

    On the first node corosync restarted fine. pve-cluster hung for a while and then failed to restart with the following output: service pve-cluster restart Job for pve-cluster.service failed. See 'systemctl status pve-cluster.service' and 'journalctl -xn' for details. root@sn7:~# ^C...
  17. Corosync/Cluster + CEPH Broken

    Should I go around every node and do this one after another? Or is there a particular way?
  18. Corosync/Cluster + CEPH Broken

    Hello, I had an issue which caused the Proxmox cluster to break due to an extended period of network issues on the cluster communications network. I have brought all VMs online on a new Proxmox cluster; however, the old broken cluster still has the CEPH cluster attached to it, and this is running...
  19. Proxmox 5 beta OSD trouble...

    Before you used v5, how did your servers sync after a few minutes? This is exactly what NTP is for and what it does.
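
    To verify time sync on a node, a couple of general options (a sketch, not from the post):

      # on systemd-based nodes: shows whether NTP is enabled and synchronised
      timedatectl status
      # or, if ntpd is installed, list the peers it is syncing against
      ntpq -p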
