CIFS VFS: Send error in SessSetup = -13

achim22

Hello,
I've recently been seeing this entry in the log and thought the password was wrong.

The CIFS storage I have is working fine, though!

Best regards

Code:
May 5 18:08:12 pve kernel: [11940371.793228] CIFS VFS: Send error in SessSetup = -13
May 5 18:08:12 pve kernel: [11940371.804860] Status code returned 0xc000006d STATUS_LOGON_FAILURE
May 5 18:08:14 pve kernel: [11940373.829401] cifs_vfs_err: 2 callbacks suppressed
May 5 18:08:14 pve kernel: [11940373.829402] CIFS VFS: Free previous auth_key.response = 0000000043e263ec
May 5 18:08:14 pve kernel: [11940373.841404] Status code returned 0xc000006d STATUS_LOGON_FAILURE
May 5 18:08:14 pve kernel: [11940373.841408] CIFS VFS: Send error in SessSetup = -13
May 5 18:08:14 pve kernel: [11940373.841846] CIFS VFS: Free previous auth_key.response = 0000000043e263ec
May 5 18:08:14 pve kernel: [11940373.853649] Status code returned 0xc000006d STATUS_LOGON_FAILURE
May 5 18:08:14 pve kernel: [11940373.853652] CIFS VFS: Send error in SessSetup = -13
May 5 18:08:16 pve kernel: [11940375.877354] CIFS VFS: Free previous auth_key.response = 0000000043e263ec
May 5 18:08:16 pve kernel: [11940375.889407] Status code returned 0xc000006d STATUS_LOGON_FAILURE
May 5 18:08:16 pve kernel: [11940375.889410] CIFS VFS: Send error in SessSetup = -13
May 5 18:08:16 pve kernel: [11940375.889818] CIFS VFS: Free previous auth_key.response = 0000000043e263ec
May 5 18:08:16 pve kernel: [11940375.901479] Status code returned 0xc000006d STATUS_LOGON_FAILURE
May 5 18:08:16 pve kernel: [11940375.901482] CIFS VFS: Send error in SessSetup = -13
May 5 18:08:18 pve kernel: [11940377.925538] CIFS VFS: Free previous auth_key.response = 0000000043e263ec
May 5 18:08:18 pve kernel: [11940377.937413] Status code returned 0xc000006d STATUS_LOGON_FAILURE
May 5 18:08:18 pve kernel: [11940377.937416] CIFS VFS: Send error in SessSetup = -13
May 5 18:08:18 pve kernel: [11940377.949014] Status code returned 0xc000006d STATUS_LOGON_FAILURE
May 5 18:08:20 pve kernel: [11940379.973536] cifs_vfs_err: 2 callbacks suppressed
May 5 18:08:20 pve kernel: [11940379.973537] CIFS VFS: Free previous auth_key.response = 0000000043e263ec
May 5 18:08:20 pve kernel: [11940379.985361] Status code returned 0xc000006d STATUS_LOGON_FAILURE
May 5 18:08:20 pve kernel: [11940379.985375] CIFS VFS: Send error in SessSetup = -13
May 5 18:08:20 pve kernel: [11940379.985822] CIFS VFS: Free previous auth_key.response = 0000000043e263ec
May 5 18:08:20 pve kernel: [11940379.997433] Status code returned 0xc000006d STATUS_LOGON_FAILURE
May 5 18:08:20 pve kernel: [11940379.997436] CIFS VFS: Send error in SessSetup = -13
May 5 18:08:22 pve kernel: [11940382.021663] CIFS VFS: Free previous auth_key.response = 0000000043e263ec
May 5 18:08:22 pve kernel: [11940382.033557] Status code returned 0xc000006d STATUS_LOGON_FAILURE
May 5 18:08:22 pve kernel: [11940382.033560] CIFS VFS: Send error in SessSetup = -13
May 5 18:08:22 pve kernel: [11940382.033939] CIFS VFS: Free previous auth_key.response = 0000000043e263ec
May 5 18:08:22 pve kernel: [11940382.045505] Status code returned 0xc000006d STATUS_LOGON_FAILURE
May 5 18:08:22 pve kernel: [11940382.045509] CIFS VFS: Send error in SessSetup = -13



Code:
# pvesm status
Name             Type       Status        Total        Used   Available        %
BX60_1           cifs       active   7844147317  2219906026  5624241291   28.30%
Backup           dir        active   3843575844  2080541892  1567720856   54.13%
Backup_Server    dir        active   3843575844  2080541892  1567720856   54.13%
Server-A         dir        active   6395775616  2033019520  4362756096   31.79%
V-Server         dir        active   6395775616  2033019520  4362756096   31.79%
test             dir        active   6395775616  2033019520  4362756096   31.79%
local            dir        active   6395775616  2033019520  4362756096   31.79%
local-zfs        zfspool    active   5500419208  1137663056  4362756152   20.68%
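
In kernel terms, -13 is -EACCES, and STATUS_LOGON_FAILURE means the server rejected the login. One way to rule out a bad password is to authenticate against the share directly with smbclient (server, share, and user below are placeholders, not from this setup):

Code:
# test the CIFS login outside of the kernel mount; a wrong password
# fails here with NT_STATUS_LOGON_FAILURE as well
smbclient //server.example.com/share -U username -c 'ls'

If that succeeds while the kernel log keeps showing logon failures, the stored password is likely fine and the session setup itself is failing intermittently.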
 
hi,

what's in your /etc/fstab?

maybe you have another CIFS share configured somewhere and that one is failing?
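
A minimal sketch of how to check for stray CIFS configurations (standard paths, nothing specific to this setup):

Code:
grep -i cifs /etc/fstab               # CIFS entries configured outside of Proxmox
mount -t cifs                         # CIFS mounts currently active
grep -i cifs /etc/pve/storage.cfg     # CIFS storages known to Proxmox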
 
As of today this still seems to be reproducible: if I attach a new Hetzner storage box to a cluster at Hetzner, the error pattern described above shows up after a few hours (on all nodes!).

Code:
umount /mnt/pve/new-storage-box-mount-name
"behebt" das Problem; danach scheint sofort ein re-mount ausgeführt zu werden, der dann wieder funktioniert. Das muss aber auf jedem Proxmox Host manuell ausgeführt werden und ich weiß noch nicht, ob es nur vorübergehend ist.

In the same cluster I have another storage box mounted that has been running flawlessly for years (it must be more than 3) with what looks to me like an identical configuration. So this might be a problem worth a closer look, @oguz / Proxmox staff.
 
hi,

As of today this still seems to be reproducible: if I attach a new Hetzner storage box to a cluster at Hetzner, the error pattern described above shows up after a few hours (on all nodes!).
please post:

* pveversion -v
* cat /etc/pve/storage.cfg
* mount
* /var/log/syslog (relevant lines, e.g. +-10 minutes before/after the errors)
 
@oguz attached, as far as I can post it. "xxxxx" is the "new" one that just had the problem, "yyyy" the old one. I went through the same thing a few months ago, but back then I cancelled the then-new box and upgraded the old one instead...
Code:
pveversion -v

proxmox-ve: 7.0-2 (running kernel: 5.11.22-3-pve)
pve-manager: 7.0-11 (running version: 7.0-11/63d82f4e)
pve-kernel-5.11: 7.0-6
pve-kernel-helper: 7.0-6
pve-kernel-5.4: 6.4-5
pve-kernel-5.11.22-3-pve: 5.11.22-6
pve-kernel-5.4.128-1-pve: 5.4.128-1
pve-kernel-5.4.106-1-pve: 5.4.106-1
pve-kernel-5.4.78-2-pve: 5.4.78-2
pve-kernel-5.4.73-1-pve: 5.4.73-1
ceph-fuse: 14.2.21-1
corosync: 3.1.2-pve2
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown: residual config
ifupdown2: 3.1.0-1+pmx3
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.21-pve1
libproxmox-acme-perl: 1.2.0
libproxmox-backup-qemu0: 1.2.0-1
libpve-access-control: 7.0-4
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.0-5
libpve-guest-common-perl: 4.0-2
libpve-http-server-perl: 4.0-2
libpve-storage-perl: 7.0-10
libqb0: 1.0.5-1
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 4.0.9-4
lxcfs: 4.0.8-pve2
novnc-pve: 1.2.0-3
proxmox-backup-client: 2.0.8-1
proxmox-backup-file-restore: 2.0.8-1
proxmox-mini-journalreader: 1.2-1
proxmox-widget-toolkit: 3.3-6
pve-cluster: 7.0-3
pve-container: 4.0-9
pve-docs: 7.0-5
pve-edk2-firmware: 3.20200531-1
pve-firewall: 4.2-2
pve-firmware: 3.2-4
pve-ha-manager: 3.3-1
pve-i18n: 2.4-1
pve-qemu-kvm: 6.0.0-3
pve-xtermjs: 4.12.0-1
qemu-server: 7.0-13
smartmontools: 7.2-pve2
spiceterm: 3.2-2
vncterm: 1.7-1
zfsutils-linux: 2.0.5-pve1


Code:
cat /etc/pve/storage.cfg

dir: local
        path /var/lib/vz
        content iso,backup,vztmpl

zfspool: local-zfs
        pool rpool/data
        content rootdir,images
        sparse 1

cifs: yyyy
        path //mnt/pve/yyyyyy
        server yyyyyy.your-storagebox.de
        share backup
        content rootdir,iso,images,vztmpl,backup
        prune-backups keep-last=8
        username yyyyyy

cifs: xxxxx
        path /mnt/pve/hetzner-prehapp-sb1
        server xxxxx.your-storagebox.de
        share backup
        content rootdir,images,iso,backup,vztmpl
        prune-backups keep-last=8
        username xxxxx

Code:
mount

//yyyy/yyyy on /mnt/pve/yyyyyy type cifs (rw,relatime,vers=3.0,cache=strict,username=yyyyyy,uid=0,noforceuid,gid=0,noforcegid,addr=0.0.0.0,file_mode=0755,dir_mode=0755,soft,nounix,serverino,mapposix,rsize=4194304,wsize=4194304,bsize=1048576,echo_interval=60,actimeo=1)
//xxxxx/xxxxx on /mnt/pve/xxxxx type cifs (rw,relatime,vers=3.0,cache=strict,username=xxxxx,uid=0,noforceuid,gid=0,noforcegid,addr=0.0.0.0,file_mode=0755,dir_mode=0755,soft,nounix,serverino,mapposix,rsize=4194304,wsize=4194304,bsize=1048576,echo_interval=60,actimeo=1)

Code:
/var/log/syslog (before and after the manual `umount`)

Feb  8 14:51:14 px11 kernel: [13880861.246176] CIFS: Status code returned 0xc000006d STATUS_LOGON_FAILURE
Feb  8 14:51:14 px11 kernel: [13880861.246183] CIFS: VFS: \\xxxxx.your-storagebox.de Send error in SessSetup = -13
Feb  8 14:51:15 px11 kernel: [13880862.550253] CIFS: Status code returned 0xc000006d STATUS_LOGON_FAILURE
Feb  8 14:51:15 px11 kernel: [13880862.550260] CIFS: VFS: \\xxxxx.your-storagebox.de Send error in SessSetup = -13
Feb  8 14:51:15 px11 pvestatd[3484]: unable to activate storage 'xxxxx' - directory '/mnt/pve/xxxxx' does not exist or is unreachable
Feb  8 14:51:16 px11 kernel: [13880863.252492] CIFS: Status code returned 0xc000006d STATUS_LOGON_FAILURE
Feb  8 14:51:16 px11 kernel: [13880863.252502] CIFS: VFS: \\xxxxx.your-storagebox.de Send error in SessSetup = -13
Feb  8 14:51:16 px11 kernel: [13880863.262067] CIFS: Status code returned 0xc000006d STATUS_LOGON_FAILURE
Feb  8 14:51:16 px11 kernel: [13880863.262070] CIFS: VFS: \\xxxxx.your-storagebox.de Send error in SessSetup = -13
Feb  8 14:51:18 px11 kernel: [13880865.268494] CIFS: Status code returned 0xc000006d STATUS_LOGON_FAILURE
Feb  8 14:51:18 px11 kernel: [13880865.268503] CIFS: VFS: \\xxxxx.your-storagebox.de Send error in SessSetup = -13
Feb  8 14:51:18 px11 kernel: [13880865.278057] CIFS: Status code returned 0xc000006d STATUS_LOGON_FAILURE
Feb  8 14:51:18 px11 kernel: [13880865.278060] CIFS: VFS: \\xxxxx.your-storagebox.de Send error in SessSetup = -13
Feb  8 14:51:19 px11 kernel: [13880865.842583] CIFS: Status code returned 0xc000006d STATUS_LOGON_FAILURE
Feb  8 14:51:19 px11 kernel: [13880865.842590] CIFS: VFS: \\xxxxx.your-storagebox.de Send error in SessSetup = -13
Feb  8 14:51:19 px11 systemd[1]: xxxx\xxxx\xxxxx.mount: Succeeded.
Feb  8 14:51:19 px11 systemd[794523]: xxxx\xxxx\xxxxx.mount: Succeeded.
Feb  8 14:51:25 px11 kernel: [13880872.032991] CIFS: Attempting to mount \\xxxxx.your-storagebox.de\backup
Feb  8 14:52:00 px11 systemd[1]: Starting Proxmox VE replication runner...
Feb  8 14:52:00 px11 systemd[1]: pvesr.service: Succeeded.
Feb  8 14:52:00 px11 systemd[1]: Finished Proxmox VE replication runner.
Feb  8 14:53:00 px11 systemd[1]: Starting Proxmox VE replication runner...
Feb  8 14:53:00 px11 systemd[1]: pvesr.service: Succeeded.
Feb  8 14:53:00 px11 systemd[1]: Finished Proxmox VE replication runner.
Feb  8 14:54:00 px11 systemd[1]: Starting Proxmox VE replication runner...
Feb  8 14:54:00 px11 systemd[1]: pvesr.service: Succeeded.
Feb  8 14:54:00 px11 systemd[1]: Finished Proxmox VE replication runner.
Feb  8 14:55:00 px11 systemd[1]: Starting Proxmox VE replication runner...

unable to activate storage 'xxxxx' - directory '/mnt/pve/xxxxx' does not exist or is unreachable is probably the best indicator. The mount directory also looked broken before the `umount` (`ls -l` showed it with `?????` instead of user/group and permissions - the same on all nodes).
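
The `?????` output means stat() on the mountpoint is failing while the mount is still registered. A small check along those lines could detect the broken state early; a minimal sketch, assuming the placeholder path below and that a lazy unmount is acceptable:

Code:
#!/bin/sh
# if the path is still mounted but stat() on it fails, the CIFS session
# is likely dead -- lazily unmount it so pvestatd can re-mount the storage
MP=/mnt/pve/xxxxx
if mountpoint -q "$MP" && ! stat "$MP" >/dev/null 2>&1; then
    umount -l "$MP"
fi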
 
