SMB/CIFS Storage - backup not working? [PVE 9.1.4]

mcendro

New Member
Jan 10, 2026
Dear All,

I am facing this bizarre problem, which is likely down to my lack of understanding of how things are supposed to work. I just can't figure it out and was hoping more experienced colleagues here could assist.

I create an SMB/CIFS storage:
[screenshot: 1768059751468.png]

Everything works fine, and I can see the storage is visible across my hosts:
[screenshot: 1768059786947.png]

However, when I go to the storage and attempt to access Backups, I get the following error:
[screenshot: 1768059835268.png]


I can't figure this out. I can see that the share is already mounted on the host, so no wonder it fails to mount it again.
I cannot do any backups to this storage either, as the backup jobs fail with the same error, attempting to mount the SMB/CIFS target that is already mounted on the host.

What am I doing wrong?
 
You probably saw that there is a question mark there..
If you think it's trying to mount it twice (it probably isn't), try disabling the storage in the UI, then explicitly unmount it (umount /mnt/pve/backups), or reboot if that's an option.
Then, well, it's telling you what to do: use dmesg and check what is happening there and in the logs. Also share the config for the storage, maybe.
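For reference, a few diagnostic commands along those lines (the storage name backups and the path /mnt/pve/backups are assumed here; substitute your own):

```shell
# Is the share actually mounted at the expected path?
mount | grep /mnt/pve/backups

# Kernel messages from the CIFS module usually show the real mount error
dmesg | grep -i cifs | tail -n 20

# The storage definition Proxmox uses when it tries to mount
grep -A 10 'backups' /etc/pve/storage.cfg

# Recent mount-related errors from the Proxmox daemon
journalctl -u pvedaemon --since "10 minutes ago" | grep -i mount
```

The dmesg output in particular tends to distinguish an "already mounted" condition from credential or protocol-version problems that produce similar-looking errors in the UI.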
 
I tried several times and in the end gave up. Instead I relied on a Directory storage and did the CIFS mounts directly on the host. This works very well; it is reliable and stable. I set the mounts to happen at boot on each of my nodes, so it is really no longer a concern.
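A sketch of that workaround (the server address, share name, mount path, and credentials file below are placeholders, not taken from this thread):

```shell
# /etc/fstab entry on each node, so the share mounts at boot (example values):
# //nas.example.lan/backups /mnt/cifs-backups cifs credentials=/root/.smbcred,_netdev,vers=3.0 0 0

# Mount it now without rebooting
mkdir -p /mnt/cifs-backups
mount /mnt/cifs-backups

# Then layer a Proxmox Directory storage on top of the mount
pvesm add dir cifs-backups --path /mnt/cifs-backups --content backup
```

The _netdev option delays the mount until the network is up, which matters for boot-time CIFS mounts.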

I think there is a bug in how Proxmox attempts to handle the CIFS mounts itself, particularly in its ability to detect whether the share is already mounted. Hence those strange errors I shared in my initial post.
This can be reproduced easily. If anyone is keen to troubleshoot further, let me know. I’m happy to help.
 
Don't forget to run
Code:
pvesm set backup --is_mountpoint yes
in a shell on the host (backup being the storage ID), to protect your root filesystem from running out of space when the mount point isn't mounted.
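With that set, the storage entry in /etc/pve/storage.cfg should look roughly like this (the storage ID matches the command above; the path is a placeholder):

```
dir: backup
    path /mnt/cifs-backups
    content backup
    is_mountpoint yes
```

With is_mountpoint set, Proxmox refuses to use the directory unless something is actually mounted there, so a failed CIFS mount cannot silently send backups to the root filesystem.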
 