zfs error: SMB SID

I'm getting "zfs error: SMB SID" when adding ZFS storage.

These are my pools:
Code:
root@pve3:~# zfs list
NAME           USED  AVAIL  REFER  MOUNTPOINT
poolz          905G  13.4T   204K  /poolz
poolz/vmdata   179K  13.4T   179K  /poolz/vmdata
poolz/vz       179K  1024M   179K  /poolz/vz
poolz/work     905G  7.12T   905G  /work

I add the storage:
Code:
root@pve3:~# pvesm add zfspool vmdata --pool poolz/vmdata
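The add itself returns without error. For reference, pvesm writes storage definitions to /etc/pve/storage.cfg, so the new entry can be double-checked there (the options shown here are a sketch of what I'd expect, not copied from the box):
Code:
root@pve3:~# cat /etc/pve/storage.cfg
...
zfspool: vmdata
        pool poolz/vmdata
        content images,rootdir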

Then I try to list it:
Code:
root@pve3:~# pvesm list vmdata
invalid option 'p'
usage:
        list [-rH][-d max] [-o property[,...]] [-t type[,...]] [-s property] ...
            [-S property] ... [filesystem|volume|snapshot] ...
The following properties are supported:
        PROPERTY       EDIT  INHERIT   VALUES
        available        NO       NO   <size>
        compressratio    NO       NO   <1.00x or higher if compressed>
        creation         NO       NO   <date>
        defer_destroy    NO       NO   yes | no
        mounted          NO       NO   yes | no
        origin           NO       NO   <snapshot>
        referenced       NO       NO   <size>
        type             NO       NO   filesystem | volume | snapshot
        used             NO       NO   <size>
        usedbychildren   NO       NO   <size>
        usedbydataset    NO       NO   <size>
        usedbyrefreservation  NO       NO   <size>
        usedbysnapshots  NO       NO   <size>
        userrefs         NO       NO   <count>
        aclinherit      YES      YES   discard | noallow | restricted | passthrough | passthrough-x
        aclmode         YES      YES   discard | groupmask | passthrough
        atime           YES      YES   on | off
        canmount        YES       NO   on | off | noauto
        casesensitivity  NO      YES   sensitive | insensitive | mixed
        checksum        YES      YES   on | off | fletcher2 | fletcher4 | sha256
        compression     YES      YES   on | off | lzjb | gzip | gzip-[1-9] | zle
        copies          YES      YES   1 | 2 | 3
        dedup           YES      YES   on | off | verify | sha256[,verify]
        devices         YES      YES   on | off
        exec            YES      YES   on | off
        logbias         YES      YES   latency | throughput
        mlslabel        YES      YES   <sensitivity label>
        mountpoint      YES      YES   <path> | legacy | none
        nbmand          YES      YES   on | off
        normalization    NO      YES   none | formC | formD | formKC | formKD
        primarycache    YES      YES   all | none | metadata
        quota           YES       NO   <size> | none
        readonly        YES      YES   on | off
        recordsize      YES      YES   512 to 128k, power of 2
        refquota        YES       NO   <size> | none
        refreservation  YES       NO   <size> | none
        reservation     YES       NO   <size> | none
        secondarycache  YES      YES   all | none | metadata
        setuid          YES      YES   on | off
        sharenfs        YES      YES   on | off | share(1M) options
        sharesmb        YES      YES   on | off | sharemgr(1M) options
        snapdir         YES      YES   hidden | visible
        utf8only         NO      YES   on | off
        version         YES       NO   1 | 2 | 3 | 4 | current
        volblocksize     NO      YES   512 to 128k, power of 2
        volsize         YES       NO   <size>
        vscan           YES      YES   on | off
        xattr           YES      YES   on | off
        zoned           YES      YES   on | off
        userused@...     NO       NO   <size>
        groupused@...    NO       NO   <size>
        userquota@...   YES       NO   <size> | none
        groupquota@...  YES       NO   <size> | none
Sizes are specified in bytes with standard units such as K, M, G, etc.
User-defined properties can be specified by using a name containing a colon (:).
The {user|group}{used|quota}@ properties must be appended with
a user or group specifier of one of these forms:
    POSIX name      (eg: "matt")
    POSIX id        (eg: "126829")
    SMB name@domain (eg: "matt@sun")
zfs error:     SMB SID         (eg: "S-1-234-567-89")

Note the last line: zfs error: SMB SID (eg: "S-1-234-567-89"). I get the same error if I view the storage from the GUI:

[screenshot: the same "zfs error: SMB SID" message in the GUI storage view]
And what's with the odd invalid option 'p' error?
What am I missing?
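My guess is that pvesm shells out to zfs list with flags the installed zfs binary doesn't understand: the usage text above looks like a very old ZFS userland (note aclmode groupmask and version 1 | 2 | 3 | 4 in the property list), and its list syntax has no -p at all. Assuming pvesm runs something along these lines (the exact property list is my guess, not taken from the Proxmox source), the failure should be reproducible by hand:
Code:
root@pve3:~# zfs list -Hrp -o name,volsize,origin,type,refquota -t volume,filesystem poolz
A current zfs prints machine-readable numbers here; an old one bails out with the same invalid option 'p' — and the SMB SID line is then just the tail of its usage text with zfs error: stuck in front. Checking pveversion -v should show which ZFS packages are actually installed.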
 
Here you are :)
Code:
root@pve3:~# pveversion -v
proxmox-ve: 7.1-2 (running kernel: 5.13.19-2-pve)
pve-manager: 7.1-12 (running version: 7.1-12/b3c09de3)
pve-kernel-helper: 7.2-1
pve-kernel-5.13: 7.1-9
pve-kernel-5.13.19-6-pve: 5.13.19-15
pve-kernel-5.13.19-2-pve: 5.13.19-4
ceph-fuse: 15.2.15-pve1
corosync: 3.1.5-pve2
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown2: 3.1.0-1+pmx3
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.22-pve2
libproxmox-acme-perl: 1.4.2
libproxmox-backup-qemu0: 1.2.0-1
libpve-access-control: 7.1-7
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.1-5
libpve-guest-common-perl: 4.1-1
libpve-http-server-perl: 4.1-1
libpve-storage-perl: 7.1-2
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 4.0.12-1
lxcfs: 4.0.12-pve1
novnc-pve: 1.3.0-2
proxmox-backup-client: 2.1.6-1
proxmox-backup-file-restore: 2.1.6-1
proxmox-mini-journalreader: 1.3-1
proxmox-widget-toolkit: 3.4-9
pve-cluster: 7.1-3
pve-container: 4.1-4
pve-docs: 7.1-2
pve-edk2-firmware: 3.20210831-2
pve-firewall: 4.2-5
pve-firmware: 3.4-1
pve-ha-manager: 3.3-3
pve-i18n: 2.6-2
pve-qemu-kvm: 6.2.0-5
pve-xtermjs: 4.16.0-1
qemu-server: 7.1-5
smartmontools: 7.2-pve3
spiceterm: 3.2-2
swtpm: 0.7.1~bpo11+1
vncterm: 1.7-1
zfsutils-linux: residual config
root@pve3:~#

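That zfsutils-linux: residual config line looks like the culprit: in dpkg terms it means the package was removed but its configuration files were left behind, so the zfs on your PATH is not the current Proxmox userland at all (which would explain the Solaris-era property list above). Reinstalling should sort it out — something like:
Code:
root@pve3:~# apt update
root@pve3:~# apt install zfsutils-linux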
 
Thanks, now the zfsutils-linux error has disappeared:
Code:
root@pve3:~# pveversion -v
...
zfsutils-linux: 2.1.4-pve1
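For what it's worth, zfs version should also confirm that the userland and the kernel module now agree — on a healthy install it prints something like this (versions assumed, not copied from this box):
Code:
root@pve3:~# zfs version
zfs-2.1.4-pve1
zfs-kmod-2.1.4-pve1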

On the other hand, I now seem to have lost my zpool. The partitions are still there and active, but I suspect they'll be gone if I reboot the server. That's not so good, as I have data there I'd prefer not to have to restore again (that takes upwards of 24 hours).

Code:
root@pve3:~# zpool status
no pools available

root@pve3:~# zfs list
no datasets available

I tried importing it, but I get this error:
Code:
root@pve3:~# zpool import -f
   pool: poolz
     id: 5053827273944205062
  state: UNAVAIL
status: The pool is currently imported by another system.
 action: The pool must be exported from pve3 (hostid=3c093934)
        before it can be safely imported.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-EY
 config:

        poolz       UNAVAIL  currently in use
          raidz1-0  ONLINE
            sdb1    ONLINE
            sdc1    ONLINE
            sdd1    ONLINE
            sde1    ONLINE
            sdf1    ONLINE

And obviously I can't export a pool that I can't see. What's the best way forward from here?
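If I'm reading zpool(8) right, zpool import -f without a pool name only scans and lists importable pools — it never actually imports anything. And since the hostid in the message (3c093934) is pve3's own, the pool most likely just wasn't cleanly exported, so a forced import by name should take it back over (an educated guess based on the ZFS-8000-EY page linked above, not something I've tested on your exact setup):
Code:
root@pve3:~# zpool import -f poolz
root@pve3:~# zpool status poolz
After that, the datasets should reappear in zfs list and survive a reboot.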