Hi!
I've followed the instructions from https://pve.proxmox.com/wiki/Ceph_Nautilus_to_Octopus
Everything went well until the OSD restart step (https://pve.proxmox.com/wiki/Ceph_Nautilus_to_Octopus#Restart_the_OSD_daemon_on_all_nodes).
The HDD filestore OSDs don't come back online.
I've also...
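For reference, this is the restart step from the guide plus a few checks to see why an OSD stays down (osd.12 below is just a placeholder ID):

systemctl restart ceph-osd.target          # restart all OSDs on this node
ceph osd tree                              # see which OSDs are up/down
systemctl status ceph-osd@12.service       # status of a single down OSD
journalctl -u ceph-osd@12.service -n 50    # recent log lines of that OSD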
I found a solution.
Detach the disk:
virtio1: /dev/sas1/frapelogdb-n4,size=2T
lvrename sas1 frapelogdb-n4 vm-167-disk-1
qm rescan
Then you are able to attach the disk again via the web UI and also resize it.
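The reattach and resize also work from the CLI; a quick sketch, assuming the storage ID is sas1 and the VM ID is 167 as in the config line above:

qm set 167 --virtio1 sas1:vm-167-disk-1    # reattach the renamed volume
qm resize 167 virtio1 +100G                # grow the disk (example size)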
Hi!
Is there another way, except https://fulitvh.ad1.proemion.com:8006/#v1:0:=qemu%2F106:4::::::, to point to a VM?
Something like https://proxmox:8006/vmName would be so much easier in some cases, or at least it shouldn't be necessary to query the API just to find out the ID.
Thanks in advance!
No
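If querying for the ID is acceptable, one way to resolve a VM name to its ID from the shell (myvm is a placeholder name, jq needs to be installed):

pvesh get /cluster/resources --type vm --output-format json | jq -r '.[] | select(.name == "myvm") | .vmid'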
You're right. The corosync.conf still has the node2 entry.
[12:32]root@fullipgvh-n2:~# pvecm status
Quorum information
------------------
Date: Mon Feb 18 12:33:25 2019
Quorum provider: corosync_votequorum
Nodes: 1
Node ID: 0x00000001
Ring ID: 1/32...
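A sketch of how the stale entry can be removed, assuming node2 stands for the actual node name and this is the only node left (so expected votes have to be lowered first to regain quorum):

pvecm expected 1       # let the single remaining node reach quorum
pvecm delnode node2    # remove the stale node from the cluster configuration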
I've set up a new PVE instance, created the cluster configuration using the web UI, and installed a second PVE instance which should join the newly created cluster.
Unfortunately, the host entry of the second instance was wrong and pointed to the wrong IP address, and after the join process the IP of...
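In case it helps others, the join depends on the node name resolving to the right address, so checking that before running pvecm add avoids this; names and addresses below are placeholders:

getent hosts pve-node2         # must return the correct cluster IP of the joining node
# expected /etc/hosts entry, e.g.:
# 192.0.2.12  pve-node2.example.com pve-node2
pvecm add 192.0.2.11           # IP of an existing cluster member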