strange zfs list output

danboid

Renowned Member
Jul 5, 2012
Here's how my zfs list output looked yesterday:

$ zfs list
NAME USED AVAIL REFER MOUNTPOINT
rpool 17.1G 843G 104K /rpool
rpool/ROOT 16.6G 843G 96K /rpool/ROOT
rpool/ROOT/pve-1 16.6G 843G 16.5G /
rpool/data 380M 843G 104K /rpool/data
rpool/data/subvol-100-disk-0 380M 29.6G 380M /rpool/data/subvol-100-disk-0

Here's how it looks now:

# zfs list
NAME USED AVAIL REFER MOUNTPOINT
contank 384M 16.6T 153K /contank
contank/containers 380M 16.6T 153K /contank/containers
contank/containers/subvol-100-disk-0 380M 29.6G 380M /contank/containers/subvol-100-disk-0
rpool 13.6G 847G 104K /rpool
rpool/ROOT 13.6G 847G 96K /rpool/ROOT
rpool/ROOT/pve-1 13.6G 847G 13.0G /
rpool/ROOT/pve-1/08d97922887f8835afddd81575f1807bc08871123f9af6d300d8daf0814e18cc 96.0M 847G 205M legacy
rpool/ROOT/pve-1/11d5a366035b1203f15a14da312b3ab1fd1a34730ffec1494df324abcd68c270 1.31M 847G 507M legacy
rpool/ROOT/pve-1/11d5a366035b1203f15a14da312b3ab1fd1a34730ffec1494df324abcd68c270-init 152K 847G 506M legacy
rpool/ROOT/pve-1/2da96e738d600e2e5c216fe596caac557f22344d5f9727365d891ebdcad6d3df 130M 847G 333M legacy
rpool/ROOT/pve-1/5d6139e5596c4c26b225e4a570179b2fc5efb43408bef4edd4844f4b834dc785 8.89M 847G 97.3M legacy
rpool/ROOT/pve-1/6a3cddb167234de79dbeb4c9a7750410fb3eed685d4b37ae3743ef1397e8fd07 15.0M 847G 111M legacy
rpool/ROOT/pve-1/815e9b8df3ee78066eef81a3cdd4614fe48d70384c43bbf07d28703372c988a1 173M 847G 506M legacy
rpool/ROOT/pve-1/ca30df1dbabe4b7c65bdd33530dac4a507dc18ef471c94550a29d7ca911aee64 120K 847G 506M legacy
rpool/ROOT/pve-1/cdc798d5f9933fa5a9e9c7860be334663aad45af6d46c9a6f9a43dffb0ee33af 90.2M 847G 90.2M legacy
rpool/ROOT/pve-1/d9453922610edd691bb5cf787668d635b130d02f1c015774832ff388b9862784 1.33M 847G 507M legacy
rpool/ROOT/pve-1/d9453922610edd691bb5cf787668d635b130d02f1c015774832ff388b9862784-init 152K 847G 506M legacy
rpool/data 96K 847G 96K /rpool/data

What could've caused all of these extra "legacy" datasets to appear below rpool/ROOT/pve-1?
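
In case it helps with figuring that out, I guess I could check when these datasets were created and whether they're clones of anything, with something like this (run as root on the host):

# zfs list -r -o name,origin,creation rpool/ROOT/pve-1
# zfs get origin,creation rpool/ROOT/pve-1/08d97922887f8835afddd81575f1807bc08871123f9af6d300d8daf0814e18cc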

Yesterday I created a new ZFS pool, added it to the Proxmox storage configuration, and then deleted the default local-zfs storage. I presumed that because I had already deleted the only test container stored on local-zfs (rpool/data), it would be safe to do that, but maybe that is what caused this? I can't think of anything else I did that might have caused it.
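
For reference, the CLI equivalent of what I did through the web UI would be roughly this (the storage ID here is just a placeholder):

# pvesm add zfspool contank-containers --pool contank/containers --content rootdir,images
# pvesm remove local-zfs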

Is it safe to delete local-zfs from the storage section of the Proxmox web UI if no containers or VMs are stored on it?
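
I assume the way to double-check that a storage really is empty before removing it would be something like:

# pvesm list local-zfs
# pct list
# qm list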
 