[SOLVED] Having trouble deleting container in 4.2

warinthestars

New Member
Sep 6, 2015
So I downloaded and installed a template. Now I want to remove said container. I'm able to delete OS disk images and templates from storage; however, it won't let me remove the container's VM drive. I'm logged in as root, and I can't do it from the web interface.

Is there something else I should be trying?
 
Is the LXC running?
What does the command lxc-info --name ID show in the State field? RUNNING? If it is running, you should stop the LXC first with lxc-stop --name ID, and then destroy it with lxc-destroy --name ID.
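For example, assuming the CT in question has ID 104 (substitute your own ID), the sequence would look roughly like this:
Code:
# lxc-info --name 104     # check the State field
# lxc-stop --name 104     # only needed if the state is RUNNING
# lxc-destroy --name 104  # remove the container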

Greetings!
 
So I ran the command; the container was stopped. Then I ran the destroy command, and it's still there, even after rebooting.
 
mmm... let's see...
Suppose you are on the pve-node1 host, i.e. the node where your evil and demonic CT lives, and you are logged in via SSH or physically sitting in front of a monitor connected to that node. Run the commands in your bash shell. Say you destroyed the CT with ID 666, but when you run lxc-ls, 666 still shows up in the output. Does it? If so, try:
lxc-destroy -o /tmp/destroyed.log -l INFO -n IDofCT -f

and then paste the output (please use code tags when you paste it).

Greetings!
 
Container 104 is what I'm trying to destroy.

root@agc:~# lxc-ls
101 104
root@agc:~# lxc-destroy -o /tmp/destroyed.log -l INFO -n IDofCT -f
Container is not defined
 
By IDofCT I meant the ID of the container. So your command should be:
root@agc:~# lxc-destroy -o /tmp/destroyed.log -l INFO -n 104 -f

With that you are saying that you want to destroy your LXC 104 with an INFO logging level, saving the log to /tmp/destroyed.log; the -f switch at the end forces the command (if the container is running, it is stopped and then destroyed, for example).

Try again...
 
Oops! That makes more sense. Sorry.


lxc-destroy 1470178777.246 WARN lxc_confile - confile.c:config_pivotdir:1817 - lxc.pivotdir is ignored. It will soon become an error.
lxc-destroy 1470178777.247 WARN lxc_cgmanager - cgmanager.c:cgm_get:994 - do_cgm_get exited with error
lxc-destroy 1470178777.248 WARN lxc_cgmanager - cgmanager.c:cgm_get:994 - do_cgm_get exited with error
lxc-destroy 1470178777.250 INFO lxc_container - lxccontainer.c:container_destroy:2183 - Destroyed rootfs for 104
lxc-destroy 1470178777.250 INFO lxc_container - lxccontainer.c:container_destroy:2199 - Destroyed directory for 104

Now I just have LXC 101 in the lxc-ls output.
 
However, when I log into the web interface, it still shows 104. And when I re-run lxc-ls, it shows 101 and 104.

It's still there.
 
mmm... really, I don't know why you get this error.
But you can try cleaning things up manually. That is, remove the storage volume where the rootfs resides (and any other storage volumes, if there are any), and then erase the config file. For that, you need to know (surely you already know) what type of storage you use. Simply take a look at your /etc/pve/lxc/104.conf.
In my example, with an LXC of ID 681, it would be:
Code:
root@agc:~# cat /etc/pve/lxc/681.conf
(you get something like)

arch: amd64
cpulimit: 1
cpuunits: 1024
hostname: lxctest
memory: 512
mp0: local:681/vm-681-disk-1.raw,mp=/home,size=8G
mp1: zerus-zfs:subvol-681-disk-1,mp=/var,size=8G
net0: name=eth0,bridge=vmbr0,hwaddr=AA:57:9F:FA:C7:1D,ip=dhcp,tag=20,type=veth
ostype: debian
rootfs: zerus-lvm:vm-681-disk-1,size=8G
swap: 512

Look at the lines starting with mpN and the rootfs line. In the example there are three volumes: the rootfs (you always have one ;)) on LVM, one mount point for /home on local storage, and one for /var on ZFS.
Depending on the type of storage(s) you have, you must remove them and then remove the config file. For this example, it would be:

root@agc:~# rm -R /var/lib/vz/images/681 #with this, I remove my mp0 (my LXC /home)
root@agc:~# lvremove my-lvm-vg-name/vm-681-disk-1 #with this, I remove my rootfs (my LXC /)
root@agc:~# zfs destroy zstorage/subvol-681-disk-1 #with this, I remove my mp1, (my LXC /var)
root@agc:~# rm -R /var/lib/lxc/681 #with this, I remove the config of the LXC
root@agc:~# rm /etc/pve/lxc/681.conf #with this, I remove the LXC from the PVE cluster


Greetings!
 
Are you trying to remove the disk via the storage? The correct way would be via the container's Resources tab. Or, in order to delete the entire container, use the Remove button on the top right after selecting the container.

And in response to the other responses here: don't use lxc-destroy, use
Code:
# pct destroy 104

Now if that fails, please post the error output as well as
Code:
# cat /etc/pve/lxc/104.conf


As for deleting disks: use # pvesm free VolumeName
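For example (the volume name below is only an illustration; use the exact volume ID shown in the container's config or in the storage content view):
Code:
# pvesm free local:104/vm-104-disk-0.raw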
 
I'm such an idiot. I couldn't find the Remove button to save my life.
 
Hi @All,

I apologize for re-opening this thread, but I am having an issue here.
I have 4 servers in a cluster; at the moment I ONLY have one VM and one CT.

I ran apt-get update and apt-get dist-upgrade across the cluster and rebooted. No issues.
After the reboot, however, the VM came online with no problem, while the CT remained offline.

I tried to "remove" the CT, but it gives me the same error I get via the CLI:

root@cloud:~# pct destroy 102
can't remove CT 102 - protection mode enabled
root@cloud:~#
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
The config file is as follows:
root@cloud:~# cat /etc/pve/lxc/102.conf
arch: amd64
cores: 1
hostname: DHCPServer1
memory: 512
nameserver: 8.8.8.8 4.4.4.4
net0: name=eth0,bridge=vmbr0,firewall=1,gw=185.191.239.1,hwaddr=92:42:E1:D9:E9:47,ip=185.191.239.2/26,type=veth
ostype: centos
protection: 1
rootfs: CLOUD1:vm-102-disk-1,quota=1,size=12G
searchdomain: dncp.cloud.rackend.com
swap: 512
root@cloud:~#

Please note I used the lxc-destroy --name 102 command and it told me it was deleted.
However, the CT is still in the GUI.

I would like to terminate every single thing having to do with this CT.

Thanks,

Foster Banks
 
You set the "protection" flag (you cannot destroy protected guests). Simply remove/delete that flag if you really want to destroy the guest.
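From the CLI, a minimal sketch of that, using the CT ID 102 from the post above:
Code:
# pct set 102 --protection 0   # clear the protection flag
# pct destroy 102              # the destroy should no longer be blocked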
 
Oh
Thanks, my friend, I have this problem too. I will try that.
 
I have encountered the same kind of problem with Ceph RBD. The problem came from a VM image on the RBD that wasn't destroyed when the VM itself was destroyed.

I found the problem by switching from "rbd -p my-pool list" to "rbd -p my-pool list --long". There was one line more in the short version; that was the faulty image, which I removed with "rbd -p my-pool rm my-faulty-file".
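Laid out as commands (the pool and image names are just the placeholders from the post above):
Code:
# rbd -p my-pool list                # short listing
# rbd -p my-pool list --long         # long listing; compare the two outputs
# rbd -p my-pool rm my-faulty-file   # remove the image that only shows up in the short listing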
 
You can search in /etc/pve/local/lxc for the CTs that you need to delete; in my case that is where I found and deleted the container.
In the first picture, when I tried to delete container 105, Proxmox gave me an error, so I looked up its config in that path and deleted it there (/etc/pve/local/lxc).
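As a rough sketch, using the CT ID 105 from this post (/etc/pve/local points at the config directory of the node you are logged in to):
Code:
# ls /etc/pve/local/lxc            # list the CT config files on this node
# rm /etc/pve/local/lxc/105.conf   # remove the config of the CT that cannot be deleted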

Sorry for my English. If you need more information, don't hesitate to contact me.


Regards from Argentina.
 

Attachments

  • proxmox1.JPG
  • Proxmox2.JPG
I am having this issue:

Code:
root@pve-node-1:~# pct destroy 300
can't activate LV 'pve/vm-300-disk-0' to zero-out its data: Failed to find logical volume "pve/vm-300-disk-0"

I don't know how to resolve this error. Please advise.
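Not an official answer, but a sketch of how one might investigate, reusing the manual-cleanup steps from earlier in this thread (IDs taken from the post above): check whether the LV actually still exists, and only if the volume is already gone, remove the leftover references by hand.
Code:
# lvs | grep vm-300           # is there still an LV for CT 300 in any volume group?
# cat /etc/pve/lxc/300.conf   # which volumes does the CT still reference?
# rm /etc/pve/lxc/300.conf    # last resort: drop the config so the CT disappears from PVE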
 
Sorry to re-open this topic. I can't remove this Ubuntu template [ID: 1061], as its rootfs sits on an external drive (the VM-Drives storage):

root@dev:~# cat /etc/pve/lxc/1061.conf
arch: amd64
cores: 1
features: nesting=1
hostname: UbuntuTemplate
memory: 512
net0: name=eth0,bridge=vmbr0,firewall=1,hwaddr=BC:24:11:94:26:46,ip=dhcp,type=veth
onboot: 1
ostype: ubuntu
rootfs: VM-Drives:1061/vm-1061-disk-0.raw,size=8G
swap: 512
unprivileged: 1
root@dev:~#

Can someone please advise?
 
