[Newb] Pulled disk, pool, lxc and VM orphaned, how to remove?

jaxjexjox

Test environment, just fiddling.

Pulled test hard disk, replaced with intended final disk.

Ended up with an LXC, a VM, and a ZFS pool "orphaned" on the system: it can't see them, but it also can't remove them.
Googled how to nuke the VM (delete a file on the filesystem).

Unsure how to nuke the LXC and the pool?
Any tips?

"could not activate storage 'VMS', zfs error: cannot import 'VMS': no such pool available (500)"
 
You can do that by editing the config files.

Remove the "/etc/pve/lxc/"<VMID>.conf" files of the orphaned LXCs, the "/etc/pve/qemu-server/"<VMID>.conf" files of the orphaned VMs and remove the lines of your orphaned ZFS pool in the "/etc/pve/storage.conf".

By the way, the correct way before removing a disk would be to destroy the VMs/LXCs via GUI/CLI, then go to Datacenter -> Storage -> YourZFSPool -> Remove, and then export or destroy the pool on the CLI with "zpool export YourPool" or "zpool destroy YourPool".
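
For example, a minimal sketch of those cleanup steps (the VMIDs 100/101 are placeholders, and "VMS" is the pool name from the error above; double-check every path before deleting anything):

# Remove the config file of an orphaned LXC (VMID 100 as a placeholder)
rm /etc/pve/lxc/100.conf
# Remove the config file of an orphaned VM (VMID 101 as a placeholder)
rm /etc/pve/qemu-server/101.conf
# Then delete the 'zfspool' entry for 'VMS' from the storage config
nano /etc/pve/storage.cfg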
 
Thank you very much, obviously it was dumb of me in the first place. I just kinda assumed the UI would offer a 'nuke anyway' option.

Now to solve bigger issues.

(like allocating 180GB of unpartitioned boot NVMe drive to become an LXC or VM store)
 
You probably first want to create backups. If it's a ZFS pool, you would need to manually expand the ZFS partition and make sure the ZFS option "autoexpand" is enabled. If it's an LVM-Thin, you would need to manually expand the LVM partition, extend the VG, and extend the two LVM-Thin LVs.

Might be easier to just install PVE again with the correct size (there are advanced options in the installer where you can specify how LVM/ZFS should be partitioned) and then restore the guests from backup.
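
If you do go the expand route, here is a rough sketch for the LVM-Thin case (it assumes the stock "pve" VG on /dev/nvme0n1p3 as shown later in this thread, with the free space sitting directly behind that partition; take backups first, exact steps depend on your layout):

parted /dev/nvme0n1 resizepart 3 100%     # grow the LVM partition into the free space
pvresize /dev/nvme0n1p3                   # let LVM see the larger partition
lvextend --poolmetadatasize +1G pve/data  # grow the thin pool's metadata LV
lvextend -l +100%FREE pve/data            # grow the thin pool's data LV
# For the ZFS case it would instead be roughly:
# zpool set autoexpand=on YourPool && zpool online -e YourPool <partition>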
 
So did I mess up in the first instance?


root@proxmox:~# lsblk
NAME                 MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda                    8:0    0 931.5G  0 disk
└─sda1                 8:1    0 931.5G  0 part
nvme0n1              259:0    0 238.5G  0 disk
├─nvme0n1p1          259:1    0  1007K  0 part
├─nvme0n1p2          259:2    0   512M  0 part /boot/efi
└─nvme0n1p3          259:3    0  59.5G  0 part
  ├─pve-swap         253:0    0     7G  0 lvm  [SWAP]
  ├─pve-root         253:1    0  14.8G  0 lvm  /
  ├─pve-data_tmeta   253:2    0     1G  0 lvm
  │ └─pve-data       253:4    0  28.4G  0 lvm
  └─pve-data_tdata   253:3    0  28.4G  0 lvm
    └─pve-data       253:4    0  28.4G  0 lvm
root@proxmox:~# fdisk -l
Disk /dev/nvme0n1: 238.47 GiB, 256060514304 bytes, 500118192 sectors
Disk model: BC711 NVMe SK hynix 256GB
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 2D9C51E1-3860-40C0-AF0E-XXXXXXXXXX

Device           Start       End   Sectors  Size Type
/dev/nvme0n1p1      34      2047      2014 1007K BIOS boot
/dev/nvme0n1p2    2048   1050623   1048576  512M EFI System
/dev/nvme0n1p3 1050624 125829120 124778497 59.5G Linux LVM

I specifically made a 60GB partition "for Proxmox", thinking I could allocate the remaining space as usable storage later?

If I had just used the full 256GB in the first place, would my VMs and LXCs be stored inside these two?

├─pve-data_tmeta   253:2    0     1G  0 lvm
│ └─pve-data       253:4    0  28.4G  0 lvm
└─pve-data_tdata   253:3    0  28.4G  0 lvm
  └─pve-data       253:4    0  28.4G  0 lvm
 
Yup. PVE will use your 60G partition for both the root file system and as storage for guests. See the paragraph "Advanced LVM Configuration Options" in the wiki: https://pve.proxmox.com/wiki/Installation

maxroot sets the size of the root filesystem, where you can only store backups/ISOs/templates/snippets (or any other files/folders).
swapsize defines your swap LV.
Everything that's left is used for the LVM-Thin, where you can only store VMs/LXCs, unless you manually limit its size by changing "maxvz", "hdsize" or "minfree".
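
As a hypothetical example of how those options would split this 238.5G disk if the installer had been given the whole thing (rough numbers, not the installer's exact math):

hdsize    = 238.5G  -> whole disk used
swapsize  =   8G    -> pve-swap LV
maxroot   =  60G    -> pve-root LV
remainder ≈ 170G    -> pve-data LVM-Thin for VM/LXC disks (minus minfree and metadata overhead)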
 
