Unable to destroy VM and DRBD volumes/resources

Kei

Active Member
May 29, 2016
Hello guys,
Due to some testing I've done, I ended up with some "garbage" in my PVE cluster that I would like to get rid of. First of all, the VMs shown in the left column under "Datacenter" in the GUI. If I try to remove them, I get this error: TASK ERROR: storage 'drbd1' does not exists. In fact, I did rename the DRBD pool definition in /etc/pve/storage.cfg, and the error is due to this. That said, can I somehow forcefully remove these VMs from the cluster, knowing that the DRBD disks are very likely no longer present?

Lastly, I would like to remove the volumes and resources bound to DRBD9 for the VMs that I no longer use. Please note that removing a VM from the PVE GUI only caused this string to appear: "pending actions: remove". Even after a reboot, the volume and resource are still there:

# drbdmanage list-resources
+------------------------------------------------------------------------------------------------------------+
| Name | | State |
|------------------------------------------------------------------------------------------------------------|
| vm-199-disk-1 | | pending actions: remove |
| vm-201-disk-1 | | pending actions: remove |
| vm-214-disk-1 | | ok |
+------------------------------------------------------------------------------------------------------------+
# drbdmanage list-volumes
+------------------------------------------------------------------------------------------------------------+
| Name | Vol ID | Size | Minor | | State |
|------------------------------------------------------------------------------------------------------------|
| vm-199-disk-1 | 0 | 10 GiB | 100 | | ok |
| vm-201-disk-1 | 0 | 10 GiB | 101 | | ok |
| vm-214-disk-1 | 0 | 10 GiB | 102 | | ok |
+------------------------------------------------------------------------------------------------------------+

How can I get rid of those?
Thank you
 
The first question is easy: simply remove the VM config file under /etc/pve if you are sure that none of the referenced disks exist any more.
The second question requires manual intervention with drbdsetup/drbdadm in order to get the cluster synced again. I vaguely remember ending up in that state before, but not how I actually removed the disks. Could you provide more information about the volume/resource and cluster state (via drbdsetup/drbdadm, as drbdmanage is not helpful in many situations like this)?
 
Thank you Fabian, the first part was very straightforward to accomplish.
I'm pasting some output that I hope can be useful:


root@vega:~# drbdsetup show
resource .drbdctrl {
  _this_host {
  node-id       0;
  volume 0 {
  device       minor 0;
  disk       "/dev/drbdpool/.drbdctrl_0";
  meta-disk       internal;
  }
  volume 1 {
  device       minor 1;
  disk       "/dev/drbdpool/.drbdctrl_1";
  meta-disk       internal;
  }
  }
  connection {
  _peer_node_id 1;
  path {
  _this_host ipv4 192.168.60.155:6999;
  _remote_host ipv4 192.168.60.156:6999;
  }
  net {
  cram-hmac-alg     "sha256";
  shared-secret     "3pyER5W0yUa8PQdgOhQE";
  _name     "altair";
  }
  }
}

resource vm-214-disk-2 {
  _this_host {
  node-id       1;
  volume 0 {
  device       minor 103;
  disk       "/dev/drbdpool/vm-214-disk-2_00";
  meta-disk       internal;
  disk {
  size     20971520s; # bytes
  }
  }
  }
  connection {
  _peer_node_id 0;
  path {
  _this_host ipv4 192.168.60.155:7003;
  _remote_host ipv4 192.168.60.156:7003;
  }
  net {
  allow-two-primaries   yes;
  cram-hmac-alg     "sha1";
  shared-secret     "pxeI0AIHD9sR5YFJWUJ0";
  _name     "altair";
  }
  }
}

root@vega:~# drbdmanage list-volumes
+------------------------------------------------------------------------------------------------------------+
| Name  | Vol ID |  Size | Minor |  | State |
|------------------------------------------------------------------------------------------------------------|
| vm-199-disk-1 |  0 | 10 GiB |  100 |  |  ok |
| vm-201-disk-1 |  0 | 10 GiB |  101 |  |  ok |
| vm-214-disk-1 |  0 | 10 GiB |  102 |  |  ok |
| vm-214-disk-2 |  0 | 10 GiB |  103 |  |  ok |
+------------------------------------------------------------------------------------------------------------+

root@vega:~# drbdmanage list-resources
+------------------------------------------------------------------------------------------------------------+
| Name  |  |  State |
|------------------------------------------------------------------------------------------------------------|
| vm-199-disk-1 |  | pending actions: remove |
| vm-201-disk-1 |  | pending actions: remove |
| vm-214-disk-1 |  |  ok |
| vm-214-disk-2 |  |  ok |
+------------------------------------------------------------------------------------------------------------+

root@vega:~# lsblk
NAME  MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda  8:0  0 111,8G  0 disk
├─sda1  8:1  0  1007K  0 part
├─sda2  8:2  0  127M  0 part
├─sda3  8:3  0  7,9G  0 part
│ ├─pve-root  251:0  0  5,5G  0 lvm  /
│ └─pve-swap  251:6  0  2G  0 lvm  [SWAP]
└─sda4  8:4  0 103,8G  0 part
  ├─drbdpool-.drbdctrl_0  251:1  0  4M  0 lvm  
  │ └─drbd0  147:0  0  4M  1 disk
  ├─drbdpool-.drbdctrl_1  251:2  0  4M  0 lvm  
  │ └─drbd1  147:1  0  4M  1 disk
  ├─drbdpool-drbdthinpool_tmeta  251:3  0  616M  0 lvm  
  │ └─drbdpool-drbdthinpool-tpool  251:5  0  102G  0 lvm  
  │  ├─drbdpool-drbdthinpool  251:7  0  102G  0 lvm  
  │  └─drbdpool-vm--214--disk--2_00 251:8  0  10G  0 lvm  
  │  └─drbd103  147:103  0  10G  1 disk
  └─drbdpool-drbdthinpool_tdata  251:4  0  102G  0 lvm  
  └─drbdpool-drbdthinpool-tpool  251:5  0  102G  0 lvm  
  ├─drbdpool-drbdthinpool  251:7  0  102G  0 lvm  
  └─drbdpool-vm--214--disk--2_00 251:8  0  10G  0 lvm  
  └─drbd103  147:103  0  10G  1 disk

root@vega:~# drbdadm status
.drbdctrl role:Secondary
  volume:0 disk:UpToDate
  volume:1 disk:UpToDate
  altair role:Secondary
  volume:0 peer-disk:UpToDate
  volume:1 peer-disk:UpToDate

vm-214-disk-2 role:Secondary
  disk:UpToDate
  altair role:Primary
  peer-disk:UpToDate

Thank you in advance, and please let me know if I can provide further information with more commands. Unfortunately, I don't know drbdsetup/drbdadm too well yet.
 
Could you post the same commands from the second node as well?

Additionally, you could include "drbdmanage list-assignments" and "drbd-overview", and maybe try (again) to remove the resources stuck in "pending actions: remove" with drbdmanage on both nodes.

It seems like the state that drbdmanage sees and the actual state have gone out of sync (e.g., drbdmanage says there is a vm-214-disk-1 volume, but lsblk says otherwise).
 
Hello,
I've done further tests and have now deleted VM 214 from the cluster GUI, which resulted in a correct and painless removal from the DRBD9 pool without any further action. I assume I have some sort of "dirt" in the DRBD configuration that causes these VM leftovers. However, I am planning to reinstall the whole cluster anyway (also to add a third node), so I may not need a solution to this problem myself. Still, I'll paste my complete configuration, because other PVE users might benefit if someone can come up with a solution that is more feasible than a complete reinstall.

root@vega:~# drbdsetup show
resource .drbdctrl {
_this_host {
node-id 0;
volume 0 {
device minor 0;
disk "/dev/drbdpool/.drbdctrl_0";
meta-disk internal;
}
volume 1 {
device minor 1;
disk "/dev/drbdpool/.drbdctrl_1";
meta-disk internal;
}
}
connection {
_peer_node_id 1;
path {
_this_host ipv4 192.168.60.155:6999;
_remote_host ipv4 192.168.60.156:6999;
}
net {
cram-hmac-alg "sha256";
shared-secret "3pyER5W0yUa8PQdgOhQE";
_name "altair";
}
}
}

resource vm-215-disk-1 {
_this_host {
node-id 1;
volume 0 {
device minor 104;
disk "/dev/drbdpool/vm-215-disk-1_00";
meta-disk internal;
disk {
size 20971520s; # bytes
}
}
}
connection {
_peer_node_id 0;
path {
_this_host ipv4 192.168.60.155:7004;
_remote_host ipv4 192.168.60.156:7004;
}
net {
allow-two-primaries yes;
cram-hmac-alg "sha1";
shared-secret "ovOuJM/uA/LeZ6Fll9rZ";
_name "altair";
}
}
}

root@vega:~# drbd-overview
0:.drbdctrl/0 Connected(2*) Secondary(2*) UpToDa/UpToDa
1:.drbdctrl/1 Connected(2*) Secondary(2*) UpToDa/UpToDa
104:vm-215-disk-1/0 Connected(2*) Primar/Second UpToDa/UpToDa
root@vega:~# drbdmanage list-assignments
+------------------------------------------------------------------------------------------------------------+
| Node | Resource | Vol ID | | State |
|------------------------------------------------------------------------------------------------------------|
| altair | vm-199-disk-1 | * | | disconnected, FAILED(3), pending actions: decommission |
| altair | vm-201-disk-1 | * | | disconnected, FAILED(3), pending actions: decommission |
| altair | vm-215-disk-1 | * | | ok |
| vega | vm-199-disk-1 | * | | disconnected, FAILED(3), pending actions: decommission |
| vega | vm-201-disk-1 | * | | disconnected, FAILED(3), pending actions: decommission |
| vega | vm-215-disk-1 | * | | ok |
+------------------------------------------------------------------------------------------------------------+
root@vega:~# drbdmanage list-volumes
+------------------------------------------------------------------------------------------------------------+
| Name | Vol ID | Size | Minor | | State |
|------------------------------------------------------------------------------------------------------------|
| vm-199-disk-1 | 0 | 10 GiB | 100 | | ok |
| vm-201-disk-1 | 0 | 10 GiB | 101 | | ok |
| vm-215-disk-1 | 0 | 10 GiB | 104 | | ok |
+------------------------------------------------------------------------------------------------------------+
root@vega:~# drbdmanage list-resources
+------------------------------------------------------------------------------------------------------------+
| Name | | State |
|------------------------------------------------------------------------------------------------------------|
| vm-199-disk-1 | | pending actions: remove |
| vm-201-disk-1 | | pending actions: remove |
| vm-215-disk-1 | | ok |
+------------------------------------------------------------------------------------------------------------+
root@vega:~# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 111,8G 0 disk
├─sda1 8:1 0 1007K 0 part
├─sda2 8:2 0 127M 0 part
├─sda3 8:3 0 7,9G 0 part
│ ├─pve-root 251:0 0 5,5G 0 lvm /
│ └─pve-swap 251:6 0 2G 0 lvm [SWAP]
└─sda4 8:4 0 103,8G 0 part
├─drbdpool-.drbdctrl_0 251:1 0 4M 0 lvm
│ └─drbd0 147:0 0 4M 1 disk
├─drbdpool-.drbdctrl_1 251:2 0 4M 0 lvm
│ └─drbd1 147:1 0 4M 1 disk
├─drbdpool-drbdthinpool_tmeta 251:3 0 616M 0 lvm
│ └─drbdpool-drbdthinpool-tpool 251:5 0 102G 0 lvm
│ ├─drbdpool-drbdthinpool 251:7 0 102G 0 lvm
│ └─drbdpool-vm--215--disk--1_00 251:9 0 10G 0 lvm
│ └─drbd104 147:104 0 10G 0 disk
└─drbdpool-drbdthinpool_tdata 251:4 0 102G 0 lvm
└─drbdpool-drbdthinpool-tpool 251:5 0 102G 0 lvm
├─drbdpool-drbdthinpool 251:7 0 102G 0 lvm
└─drbdpool-vm--215--disk--1_00 251:9 0 10G 0 lvm
└─drbd104 147:104 0 10G 0 disk
root@vega:~# lvscan
ACTIVE '/dev/drbdpool/.drbdctrl_0' [4,00 MiB] inherit
ACTIVE '/dev/drbdpool/.drbdctrl_1' [4,00 MiB] inherit
ACTIVE '/dev/drbdpool/drbdthinpool' [102,00 GiB] inherit
ACTIVE '/dev/drbdpool/vm-215-disk-1_00' [10,00 GiB] inherit
ACTIVE '/dev/pve/swap' [2,00 GiB] inherit
ACTIVE '/dev/pve/root' [5,50 GiB] inherit
root@vega:~# lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
.drbdctrl_0 drbdpool -wi-ao---- 4,00m
.drbdctrl_1 drbdpool -wi-ao---- 4,00m
drbdthinpool drbdpool twi-aotz-- 102,00g 9,81 0,91
vm-215-disk-1_00 drbdpool Vwi-aotz-- 10,00g drbdthinpool 99,98
root pve -wi-ao---- 5,50g
swap pve -wi-ao---- 2,00g

root@altair:~# drbdsetup show
resource .drbdctrl {
_this_host {
node-id 1;
volume 0 {
device minor 0;
disk "/dev/drbdpool/.drbdctrl_0";
meta-disk internal;
}
volume 1 {
device minor 1;
disk "/dev/drbdpool/.drbdctrl_1";
meta-disk internal;
}
}
connection {
_peer_node_id 0;
path {
_this_host ipv4 192.168.60.156:6999;
_remote_host ipv4 192.168.60.155:6999;
}
net {
cram-hmac-alg "sha256";
shared-secret "3pyER5W0yUa8PQdgOhQE";
_name "vega";
}
}
}

resource vm-215-disk-1 {
_this_host {
node-id 0;
volume 0 {
device minor 104;
disk "/dev/drbdpool/vm-215-disk-1_00";
meta-disk internal;
disk {
size 20971520s; # bytes
}
}
}
connection {
_peer_node_id 1;
path {
_this_host ipv4 192.168.60.156:7004;
_remote_host ipv4 192.168.60.155:7004;
}
net {
allow-two-primaries yes;
cram-hmac-alg "sha1";
shared-secret "ovOuJM/uA/LeZ6Fll9rZ";
_name "vega";
}
}
}

root@altair:~# drbd-overview

0:.drbdctrl/0 Connected(2*) Secondary(2*) UpToDa/UpToDa
1:.drbdctrl/1 Connected(2*) Secondary(2*) UpToDa/UpToDa
104:vm-215-disk-1/0 Connected(2*) Second/Primar UpToDa/UpToDa
root@altair:~#
root@altair:~# drbdmanage list-volumes
+------------------------------------------------------------------------------------------------------------+
| Name | Vol ID | Size | Minor | | State |
|------------------------------------------------------------------------------------------------------------|
| vm-199-disk-1 | 0 | 10 GiB | 100 | | ok |
| vm-201-disk-1 | 0 | 10 GiB | 101 | | ok |
| vm-215-disk-1 | 0 | 10 GiB | 104 | | ok |
+------------------------------------------------------------------------------------------------------------+
root@altair:~# drbdmanage list-resources
+------------------------------------------------------------------------------------------------------------+
| Name | | State |
|------------------------------------------------------------------------------------------------------------|
| vm-199-disk-1 | | pending actions: remove |
| vm-201-disk-1 | | pending actions: remove |
| vm-215-disk-1 | | ok |
+------------------------------------------------------------------------------------------------------------+
root@altair:~# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 111,8G 0 disk
├─sda1 8:1 0 1007K 0 part
├─sda2 8:2 0 127M 0 part
├─sda3 8:3 0 7,9G 0 part
│ ├─pve-root 251:0 0 5,5G 0 lvm /
│ └─pve-swap 251:6 0 2G 0 lvm [SWAP]
└─sda4 8:4 0 103,8G 0 part
├─drbdpool-.drbdctrl_0 251:1 0 4M 0 lvm
│ └─drbd0 147:0 0 4M 1 disk
├─drbdpool-.drbdctrl_1 251:2 0 4M 0 lvm
│ └─drbd1 147:1 0 4M 1 disk
├─drbdpool-drbdthinpool_tmeta 251:3 0 104M 0 lvm
│ └─drbdpool-drbdthinpool-tpool 251:5 0 102G 0 lvm
│ ├─drbdpool-drbdthinpool 251:7 0 102G 0 lvm
│ └─drbdpool-vm--215--disk--1_00 251:9 0 10G 0 lvm
│ └─drbd104 147:104 0 10G 1 disk
└─drbdpool-drbdthinpool_tdata 251:4 0 102G 0 lvm
└─drbdpool-drbdthinpool-tpool 251:5 0 102G 0 lvm
├─drbdpool-drbdthinpool 251:7 0 102G 0 lvm
└─drbdpool-vm--215--disk--1_00 251:9 0 10G 0 lvm
└─drbd104 147:104 0 10G 1 disk
root@altair:~# lvscan
ACTIVE '/dev/drbdpool/.drbdctrl_0' [4,00 MiB] inherit
ACTIVE '/dev/drbdpool/.drbdctrl_1' [4,00 MiB] inherit
ACTIVE '/dev/drbdpool/drbdthinpool' [102,00 GiB] inherit
ACTIVE '/dev/drbdpool/vm-215-disk-1_00' [10,00 GiB] inherit
ACTIVE '/dev/pve/swap' [2,00 GiB] inherit
ACTIVE '/dev/pve/root' [5,50 GiB] inherit
root@altair:~# lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
.drbdctrl_0 drbdpool -wi-ao---- 4,00m
.drbdctrl_1 drbdpool -wi-ao---- 4,00m
drbdthinpool drbdpool twi-aotz-- 102,00g 9,81 5,34
vm-215-disk-1_00 drbdpool Vwi-aotz-- 10,00g drbdthinpool 99,98
root pve -wi-ao---- 5,50g
swap pve -wi-ao---- 2,00g
 
