Search results

  1.

    HDD filestore OSD does not come up after Ceph upgrade from Nautilus to Octopus

    Hi! I've followed the instructions from https://pve.proxmox.com/wiki/Ceph_Nautilus_to_Octopus Everything went well until the restarting OSD step https://pve.proxmox.com/wiki/Ceph_Nautilus_to_Octopus#Restart_the_OSD_daemon_on_all_nodes The HDD filestore OSDs don't come back online. I've also...
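    The restart step from the wiki boils down to restarting the OSD systemd units and checking which OSDs come back. A minimal sketch (assuming the standard Ceph systemd targets; run on each node in turn):

    ```shell
    # Restart all OSD daemons on this node (standard Ceph systemd target)
    systemctl restart ceph-osd.target

    # Check whether the OSDs rejoined the cluster
    ceph osd tree
    ceph -s
    ```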
  2.

    [SOLVED] Unable to parse volume ID

    I found a solution. Detach the disk (virtio1: /dev/sas1/frapelogdb-n4,size=2T), then:

    lvrename sas1 frapelogdb-n4 vm-167-disk-1
    qm rescan

    Then you are able to attach the disk via the web UI and also able to resize it.
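    A sketch of the full sequence under the assumptions from the thread (VM 167, VG sas1; the poster detached the disk via the web UI, but `qm set --delete` does the same from the CLI):

    ```shell
    # 1. Detach virtio1 from the VM config (CLI equivalent of detaching in the web UI)
    qm set 167 --delete virtio1

    # 2. Rename the LV to the naming pattern Proxmox expects (vm-<vmid>-disk-<n>)
    lvrename sas1 frapelogdb-n4 vm-167-disk-1

    # 3. Let Proxmox rediscover the volume as an unused disk of VM 167
    qm rescan --vmid 167
    ```

    After the rescan, the disk shows up as an unused volume on the VM and can be reattached and resized from the web UI.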
  3.

    [SOLVED] Unable to parse volume ID

    Hi! I've wanted to resize the virtio1 disk of this VM:

    [13:37]root@frapevh004:/etc/pve/qemu-server# cat 167.conf
    agent: 1
    boot: order=virtio0;net0
    cores: 1
    memory: 32768
    name: frapelogdb001-n4
    net0: virtio=6E:A5:69:CE:5B:EA,bridge=vmbr300
    numa: 0
    onboot: 1
    ostype: l26
    scsihw: virtio-scsi-pci...
  4.

    Create URL to a specific VM

    @Dominic That makes sense of course. Unfortunately not in my case :) But thanks for responding so fast!
  5.

    Create URL to a specific VM

    Hi! Is there another way to point to a VM, other than https://fulitvh.ad1.proemion.com:8006/#v1:0:=qemu%2F106:4::::::? Something like https://proxmox:8006/vmName would be much easier in some cases. Or at least it would not be required to query the API to find out the ID. Thanks in advance!
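    There is no name-based URL, but the VMID behind a name can be resolved from the CLI with `pvesh` instead of a raw API query. A sketch, using the VM name from this thread as an example:

    ```shell
    # List all VMs in the cluster as JSON and pick the VMID matching a given name
    # ("frapelogdb001-n4" is just the example name from the thread; jq must be installed)
    pvesh get /cluster/resources --type vm --output-format json \
      | jq -r '.[] | select(.name=="frapelogdb001-n4") | .vmid'
    ```

    The resulting VMID can then be substituted into the `#v1:0:=qemu%2F<vmid>` fragment of the web UI URL.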
  6.

    Issues with cluster config

    @dlimbeck Seems to work! Thanks!
  7.

    Issues with cluster config

    Just remove it and that's it?
  8.

    Issues with cluster config

    No, you're right. The corosync.conf still has the node2 entry.

    [12:32]root@fullipgvh-n2:~# pvecm status
    Quorum information
    ------------------
    Date: Mon Feb 18 12:33:25 2019
    Quorum provider: corosync_votequorum
    Nodes: 1
    Node ID: 0x00000001
    Ring ID: 1/32...
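    Rather than editing corosync.conf by hand, the stale entry can be removed with `pvecm`. A sketch, assuming the leftover node is named node2 as in the thread:

    ```shell
    # Remove the stale node entry from the cluster configuration
    pvecm delnode node2

    # Verify the remaining membership and quorum state
    pvecm status
    ```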
  9.

    Issues with cluster config

    I've set up a new PVE instance, created the cluster configuration using the web UI, and installed a second PVE instance which should join the newly created cluster. Unfortunately, the host entry for the second instance was wrong and pointed to an incorrect IP address, and after the join process the IP of...