Container storage subvolume disappeared

walking

New Member
Jul 21, 2022
On Sunday I manually shut down my Proxmox VE server for a planned power outage. During this process I did the following:
1: shut down my Samba file server container (#107)
2: deleted an old test VM (VM105)
-there was an option to include purging unreferenced disks; I wanted to recover the storage space, so I enabled it thinking it would ONLY purge the storage referenced by THIS VM (VM105). (CLI equivalent sketched below.)
3: shut down my other VMs and containers (pfSense, etc.)
4: shut down the host after waiting for all tasks to complete
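
For anyone reading along, that GUI checkbox should correspond to the destroy-unreferenced-disks option on the CLI (a sketch, assuming VM 105; flag names per the qm man page):
Code:
# destroy the guest; also remove it from backup/replication/HA jobs (--purge)
# and delete any disks whose name carries this VMID, even if they are not
# referenced in the config (--destroy-unreferenced-disks)
qm destroy 105 --purge 1 --destroy-unreferenced-disks 1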

Upon starting the host after power was restored, the subvolume for the file server was gone (no trace of subvolume 107 on /rpool2 anymore).

I did some searching and found a command to check the snapshots for a copy, and... I have no snapshots. That's something I thought I had configured previously, but here I am.
Code:
root@pve:/# zfs list -t snapshot
no datasets available
root@pve:/# zfs list -t snapshot /rpool2
no datasets available
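
For the "snapshots I thought I had configured" part: one minimal way to get periodic snapshots of the container datasets is a cron entry calling zfs snapshot (a sketch, assuming rpool2 holds the subvolumes; the file name is hypothetical, and tools like zfs-auto-snapshot or sanoid do this more robustly, including pruning old snapshots):
Code:
# /etc/cron.d/zfs-snap (hypothetical file name)
# take a recursive snapshot of rpool2 and its children every night at 02:00,
# named with the current date, e.g. rpool2@auto-2022-07-21
# (% must be escaped as \% inside a crontab entry)
0 2 * * * root /usr/sbin/zfs snapshot -r rpool2@auto-$(date +\%F)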

Question 1:
I think I already know the answer, but is there any possibility of 'undeleting' the missing subvolume short of recreating it and restoring from an external backup?

Question 2:
Was it the purge option from step 2 which may have caused this to happen? I'm trying to understand what happened and learn something useful from it.
 
What if you run zfs list -t all rpool2

In the zfs list command, you should not prepend the pool name with a slash, as the argument is a dataset name, not a filesystem path. Also, listing all dataset types rather than just snapshots might show it, since the subvolume may simply have no snapshots.

Can you post the config of container 107 in [CODE][/CODE] tags?
pct config 107
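
You could also check the pool history, which logs administrative operations on the pool, to see exactly which datasets were destroyed and when:
Code:
zpool history rpool2 | grep -i destroy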
 
Here is what I got:
Code:
root@pve:~# zfs list -t all rpool2
NAME     USED  AVAIL     REFER  MOUNTPOINT
rpool2  39.4G   860G      104K  /rpool2

root@pve:~# zfs list -t all
NAME                       USED  AVAIL     REFER  MOUNTPOINT
rpool                     20.2G  94.1G      104K  /rpool
rpool/ROOT                20.1G  94.1G       96K  /rpool/ROOT
rpool/ROOT/pve-1          20.1G  94.1G     20.1G  /
rpool/data                  96K  94.1G       96K  /rpool/data
rpool2                    39.4G   860G      104K  /rpool2
rpool2/subvol-101-disk-0  5.34G  94.7G     5.34G  /rpool2/subvol-101-disk-0
rpool2/subvol-106-disk-0  1.01G  6.99G     1.01G  /rpool2/subvol-106-disk-0
rpool2/vm-102-disk-0      33.0G   879G     13.7G  -

root@pve:~# pct config 107
arch: amd64
cores: 1
hostname: fileserver
memory: 2048
mp0: local-zfs2:subvol-100-disk-1,mp=/datastore/veeamsvc,size=0T
mp1: local-zfs2:subvol-100-disk-2,mp=/datastore/steve,size=0T
net0: name=eth0,bridge=vmbr0,firewall=1,gw=192.168.100.1,hwaddr=EE:14:42:94:EB:E1,ip=192.168.100.91/24,type=veth
onboot: 1
ostype: debian
rootfs: local-zfs2:subvol-100-disk-0,size=0T
swap: 512
unprivileged: 1

root@pve:~#

(I added a few blank lines above to make it a little easier to read)

History: I will note that the fileserver container was rebuilt once before (after my boot disk died), which is why the container ID is 107 while the storage volumes indicate 100. I never did get around to renaming the volumes to match, though I remember thinking I should at the time. The container is looking for a trio of subvol-100- datasets, which are now gone.

Upon further inspection of the shutdown I performed on Sunday, the VM I destroyed was actually VM100, not VM105. I am thinking the purge option presumed any storage volume named for ID 100 was a candidate for deletion, and I just happened to have some container storage that matched the pattern. Well, at least now I know how that 'purge unreferenced disks' option works.
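
Had I caught the mismatch earlier, the fix would have looked roughly like this (a sketch, assuming the container is stopped; dataset names taken from the zfs list output above, config path per the standard PVE layout):
Code:
# rename each dataset so the VMID in the name matches the container
zfs rename rpool2/subvol-100-disk-0 rpool2/subvol-107-disk-0
zfs rename rpool2/subvol-100-disk-1 rpool2/subvol-107-disk-1
zfs rename rpool2/subvol-100-disk-2 rpool2/subvol-107-disk-2
# if the mountpoint property was set explicitly rather than inherited,
# it may need updating as well
# update the volume references in the container config to match
sed -i 's/subvol-100-/subvol-107-/g' /etc/pve/lxc/107.conf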

Lesson learned: ensure storage volume names match the container or VM ID when rebuilding
Lesson learned: do not allow CT or VM IDs to overlap
Lesson learned: verify snapshots are working as intended after disk changes
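
As for Question 1: with no snapshots and the datasets destroyed, recreating the container from an external backup is the realistic path. For reference, restoring from a vzdump archive looks roughly like this (a sketch; the archive path is a placeholder, and local-zfs2 is the storage from my config above):
Code:
pct restore 107 /mnt/backups/vzdump-lxc-107.tar.zst --storage local-zfs2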
 
I've just experienced a similar issue.
I had a perfectly normal working container with NFS and SMB shares plus folders mounted from the Proxmox host.
I shut down the container.
I got an error when I tried to boot it again.
When I looked at the ZFS storage, this container's storage was gone. It had just disappeared!
 
