So I downloaded and installed a template. Now I want to remove said container. I'm able to delete OS disk images and templates from the storage; however, it won't let me remove the container's VM drive. I'm logged in as root, and I can't remove it from the web interface. Is there something else I should be trying?
Is the LXC running? What does the command lxc-info --name ID report in the State field? RUNNING? If it's running, you should stop the LXC first with lxc-stop --name ID, and then destroy it with lxc-destroy --name ID. Greetings!
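For readers following along, here is a dry-run sketch of that check/stop/destroy sequence. The helper name destroy_ct_commands and the example ID 104 are illustrative only; the function just prints the commands rather than executing them, so it can be read (and run) safely off a real LXC host:

```shell
# Dry-run sketch: prints the check/stop/destroy sequence for a container
# ID instead of executing it (destroy_ct_commands is an illustrative name).
destroy_ct_commands() {
    ctid=$1
    echo "lxc-info --name $ctid"    # 1. check the State field (RUNNING/STOPPED)
    echo "lxc-stop --name $ctid"    # 2. stop it if it is RUNNING
    echo "lxc-destroy --name $ctid" # 3. then destroy it
}

destroy_ct_commands 104
```

On a real host you would run each printed command directly, checking the State field after lxc-stop before destroying.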
So I ran the command, the container was stopped, and then I ran the destroy command, and it's still there, even after rebooting.
Hmm... let's see... Suppose you are on the pve-node1 host, the node where you have your evil and demoniac CT, and you are logged in through SSH or sitting physically in front of a monitor connected to that node, running commands in your bash shell. You destroyed the CT with ID 666 (say), but when you run lxc-ls, 666 still appears in the output? It does? If yes, try:
Code:
lxc-destroy -o /tmp/destroyed.log -l INFO -n IDofCT -f
and then paste the output (please use code tags when you paste it). Greetings!
Container 104 is what I'm trying to destroy.
Code:
root@agc:~# lxc-ls
101 104
root@agc:~# lxc-destroy -o /tmp/destroyed.log -l INFO -n IDofCT -f
Container is not defined
By IDofCT I meant the ID of the container. So your command must be:
Code:
root@agc:~# lxc-destroy -o /tmp/destroyed.log -l INFO -n 104 -f
With that, you are saying that you want to destroy your LXC 104 with an INFO logging level, saving the log to /tmp/destroyed.log; the -f switch at the end forces the command (stop and then destroy if running, for example). Try again...
Oops! That makes more sense. Sorry.
Code:
lxc-destroy 1470178777.246 WARN lxc_confile - confile.c:config_pivotdir:1817 - lxc.pivotdir is ignored. It will soon become an error.
lxc-destroy 1470178777.247 WARN lxc_cgmanager - cgmanager.c:cgm_get:994 - do_cgm_get exited with error
lxc-destroy 1470178777.248 WARN lxc_cgmanager - cgmanager.c:cgm_get:994 - do_cgm_get exited with error
lxc-destroy 1470178777.250 INFO lxc_container - lxccontainer.c:container_destroy:2183 - Destroyed rootfs for 104
lxc-destroy 1470178777.250 INFO lxc_container - lxccontainer.c:container_destroy:2199 - Destroyed directory for 104
Now I just have LXC 101 in the lxc-ls output.
However, when I log into the web interface, it still shows 104. And when I re-run lxc-ls, it shows 101, 104 again. It's still there.
Hmm... honestly, I don't know why you get this error. But you can try cleaning things up manually: remove the storage where the rootfs resides (and any other storage volumes, if any), and then erase the config file. For that, you need to know (surely you do) what type of storage you use. Simply take a look at your /etc/pve/lxc/104.conf. In my example, with an LXC of ID 681, it would be:
Code:
root@agc:~# cat /etc/pve/lxc/681.conf
arch: amd64
cpulimit: 1
cpuunits: 1024
hostname: lxctest
memory: 512
mp0: local:681/vm-681-disk-1.raw,mp=/home,size=8G
mp1: zerus-zfs:subvol-681-disk-1,mp=/var,size=8G
net0: name=eth0,bridge=vmbr0,hwaddr=AA:57:9F:FA:C7:1D,ip=dhcp,tag=20,type=veth
ostype: debian
rootfs: zerus-lvm:vm-681-disk-1,size=8G
Look at the lines starting with mpN, plus the rootfs line. In this example there are three mountpoints: the rootfs (there is always one, here on LVM), one for /home on local storage, and one for /var on ZFS. Depending on the type of storage(s) you have, you must remove them and then remove the config file. For this example, it would be:
Code:
root@agc:~# rm -R /var/lib/vz/images/681           # removes my mp0 (my LXC /home)
root@agc:~# lvremove my-lvm-vg-name/vm-681-disk-1  # removes my rootfs (my LXC /)
root@agc:~# zfs destroy zstorage/subvol-681-disk-1 # removes my mp1 (my LXC /var)
root@agc:~# rm -R /var/lib/lxc/681                 # removes the local config of the LXC
root@agc:~# rm /etc/pve/lxc/681.conf               # removes the LXC from the PVE cluster
Greetings!
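Before removing anything by hand, it helps to list exactly which volumes the container uses. A minimal sketch of extracting the mpN and rootfs volumes with standard tools, using a throwaway copy of the example config (the /tmp path and the trimmed-down file content are for illustration only):

```shell
# Write a sample config mirroring the 681 example above (illustration only).
cat > /tmp/681.conf <<'EOF'
arch: amd64
hostname: lxctest
mp0: local:681/vm-681-disk-1.raw,mp=/home,size=8G
mp1: zerus-zfs:subvol-681-disk-1,mp=/var,size=8G
rootfs: zerus-lvm:vm-681-disk-1,size=8G
EOF

# Each mpN and rootfs line has the form "key: storage:volume,options";
# keep only the storage:volume part so you know what to remove and where.
grep -E '^(mp[0-9]+|rootfs):' /tmp/681.conf |
    sed -E 's/^[^ ]+ ([^,]+).*/\1/'
# prints:
#   local:681/vm-681-disk-1.raw
#   zerus-zfs:subvol-681-disk-1
#   zerus-lvm:vm-681-disk-1
```

The storage name before the colon tells you which removal command applies (rm for local directory storage, lvremove for LVM, zfs destroy for ZFS).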
Are you trying to remove the disk via the storage? The correct way would be via the container's Resources tab. Or, to delete the entire container, use the Remove button at the top right after selecting the container. And in response to the other replies here: don't use lxc-destroy, use
Code:
# pct destroy 104
If that fails, please post the error output as well as
Code:
# cat /etc/pve/lxc/104.conf
As for deleting disks, use
Code:
# pvesm free VolumeName
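For completeness, here is a dry-run sketch of that PVE-native path. The helper name pve_cleanup_commands is illustrative and the function only prints what would be run, since pct and pvesm exist only on a Proxmox host; the volume name in the last line is just an illustrative guess at what a local raw disk for CT 104 might be called, not taken from the thread:

```shell
# Dry-run sketch of the PVE-native cleanup: prints the commands instead
# of running them (pct/pvesm only exist on a Proxmox host).
pve_cleanup_commands() {
    ctid=$1
    echo "pct destroy $ctid"            # removes the CT and its volumes
    echo "cat /etc/pve/lxc/$ctid.conf"  # config to post if destroy fails
    # Hypothetical leftover volume name, for illustration only:
    echo "pvesm free local:$ctid/vm-$ctid-disk-1.raw"
}

pve_cleanup_commands 104
```

The advantage of pct destroy over lxc-destroy is that it goes through the Proxmox layer, so the cluster config and the web interface stay in sync, which is exactly what went wrong in this thread.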