NAS constantly active after setting up SMB share in Proxmox

tombond

New Member
Jun 28, 2023
Hi, sorry if this is a stupid beginner question, but I've been trying for several days now and just can't solve my problem.

I recently set up my very first Proxmox Server on a ThinkCentre tiny PC. My main goal was for it to host HomeAssistant and RaspberryMatic and that worked fine.

In addition to this, I would now like to run a Jellyfin server that uses my media, which is stored on a Synology NAS. Following some online tutorials, I added the NAS share to Proxmox as SMB/CIFS storage and simply called it "NAS" -> /mnt/pve/NAS

I then mounted the share in my Jellyfin container and was able to use it successfully. All fine so far.

But then I realized that ever since I did this, my NAS has constantly (every ~5 seconds) been making that clicking "something is happening on the HDD" noise. I searched the web and found many people with what seemed like similar issues, but I believe none of them were using SMB, since they all talked about LVM. I still tried some of their advice, but none of it really worked for me. Most hints pointed towards something called vgscan and adding the drive to some global filter (https://forum.proxmox.com/threads/disk-prevent-from-spinning-down-because-of-pvestatd.53237/).

Since I really only use Jellyfin for ~2h each day, I don't want Proxmox to keep my NAS/HDDs active all the time. What can I do to stop this constant access and allow my drives to go to sleep?

Thanks for your help
Thomas
 
If you added it as a Storage, then Proxmox will check it all the time for the graphs. If so, maybe you can enable and disable the storage (using a command-line script) at certain times (using crontab) to minimize the polling of the NAS?
 
Since Jellyfin serves movies and shows, I can't really tell at what time of day it's needed. Wife and kids :-S
I had just hoped that when no one is watching, the drives go to sleep, and when something is requested they spin up again. That's how it was when my Jellyfin server was running on Windows.
 
Since Jellyfin serves movies and shows, I can't really tell at what time of day it's needed. Wife and kids :-S
I had just hoped that when no one is watching, the drives go to sleep, and when something is requested they spin up again. That's how it was when my Jellyfin server was running on Windows.
If you added it as a Storage, then Proxmox will check it all the time for the graphs.
Did you add the NAS as a storage?
If you run your software in a container, you can use lxc.mount to mount the NAS without Proxmox knowing about it (and without adding it as a storage), and maybe you won't have this problem. Alternatively, run your software in a VM; then Proxmox won't know about the share and won't poll it all the time. Please note that this depends on whether you added the NAS as a storage. If not, then I don't know why it does not go to sleep when using Proxmox.
 
With my current setup I followed a guide to add the NAS as storage, yes. I will look into how to mount the drive directly in the container without first adding it to Proxmox as storage.
 
So I tried some things now:

1. I enabled NFS on my Synology and configured the share accordingly (I tried both RW and RO, but I would prefer RO if possible).
2. I added the following entry to /etc/pve/lxc/105.conf:

Code:
lxc.mount.entry: 192.168.178.92/Movies /media/Movies nfs ro,vers=3 0 0

3. I added the following entry to /etc/fstab of my container:

Code:
192.168.178.92:/Movies /media/Movies nfs ro,vers=3 0 0

4. I ran mount -a, but the response was "mount.nfs: access denied by server while mounting 192.168.178.92:/Movies"


Did I do everything right and this is now some weird mismatch / misconfiguration of my NFS in the Synology or do I have to change something?
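In case it helps to narrow this down, these are checks I could still run from the Proxmox host (assuming the nfs-common tools are installed; the /volume1/Movies path is just a guess, since Synology usually exports the full volume path rather than the bare share name):

Code:
# list which NFS exports the Synology actually offers to this client
showmount -e 192.168.178.92
# try the mount manually on the host first, to separate NFS problems from LXC problems
mkdir -p /mnt/nfstest
mount -t nfs -o ro,vers=3 192.168.178.92:/volume1/Movies /mnt/nfstest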
 
So I tried a few more things, and since NFS just somehow didn't work for me, I switched back to CIFS.

It also seems that I didn't need the entry in /etc/pve/lxc/105.conf, so I removed it again. I just added the following to the container's /etc/fstab and now it works:

Code:
//192.168.178.92/Movies /media/Movies cifs username=USERNAME,password=PASSWORD,rw 0 0

I will keep an eye on it, but for now it seems like this solution lets the drive go back to sleep like I was used to from Windows.


EDIT:

Something that still bothers me as someone who is just starting to learn: I have read many times that there are benefits to mounting the share via the /etc/pve/lxc/xxx.conf file, so I undid my fstab change and tried to get the same result using the LXC conf, with no success. This is what the conf looks like at the moment:

Code:
lxc.mount.entry: //192.168.178.92/Movies /media/Movies cifs username=USERNAME,password=PASSWORD,rw 0 0

I don't know what I'm doing wrong or how to get hints as to where my mistake lies. When I configured it via fstab I could run mount -a and get immediate feedback, but with the lxc.mount.entry I don't know how to do that.
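For anyone else trying this: the variant I have seen recommended most often (only a sketch, not verified against my exact setup; /mnt/movies is a placeholder) is to mount the share on the Proxmox host and merely bind-mount it into the container, so the container config needs no CIFS credentials at all:

Code:
# on the Proxmox host, in /etc/fstab (mounts the share on the host itself)
//192.168.178.92/Movies /mnt/movies cifs username=USERNAME,password=PASSWORD,ro 0 0

# in /etc/pve/lxc/105.conf: bind the host directory into the container.
# With lxc.mount.entry the target path is relative to the container rootfs
# (no leading slash), and create=dir creates it if it is missing.
lxc.mount.entry: /mnt/movies media/Movies none bind,create=dir 0 0
# alternatively, Proxmox's own bind-mount syntax:
# mp0: /mnt/movies,mp=/media/Movies

With this variant the feedback comes at container start rather than from mount -a inside the container, e.g. by starting it in the foreground with lxc-start -n 105 -F and watching for mount errors.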
 
Hello,
I'm a new user and this is my first post.

I found this thread while researching online the same kind of issue the OP had. Basically I mounted an SMB share to be used as a backup target, and since then my NAS has stopped going to sleep because Proxmox is accessing it all the time.

With the help of 2 other threads:
https://forum.proxmox.com/threads/hdd-never-spin-down.53522/
https://forum.proxmox.com/threads/pvesm-umount.115269/

I found a first, crude solution. Because my backups are scheduled at 21:00, I enable the share at that time and disable it 1 hour later. I created these 2 crontab entries:
Code:
0 21 * * * /usr/sbin/pvesm set proxmox_backup --disable 0 >/dev/null 2>&1
0 22 * * * /usr/sbin/pvesm set proxmox_backup --disable 1 >/dev/null 2>&1

This is a very crude solution, because if a backup takes more than 60 minutes the share is disabled mid-backup, causing an error. Of course I can extend the window beyond 1 hour, but the underlying problem remains.

I'm now wondering if there is a way to run a "pre-script" and a "post-script" around the backup process, so that the enable/disable actually happens in line with the backup itself.

Thank you
 
This is my latest solution; any additional suggestions would be very much appreciated.
Basically I add the mount when it's needed (backup start) and remove it when the backup has ended.

Note that I never remove/add the Proxmox Storage; I only act on the actual network mount. This is because we confirmed that simply disabling the Proxmox Storage makes no difference: it is the mount that keeps the connection open.

At the beginning of my tests I also tried adding/removing the Proxmox Storage from within the script, but I found out I cannot actually remove it, otherwise the backup does not even start:
Code:
TASK ERROR: could not activate storage 'backup_proxmox': storage 'backup_proxmox' does not exist could not get storage information for 'backup_proxmox': storage 'backup_proxmox' does not exist
So I left the Storage defined; it simply points to a non-existing folder/share, since in the end I only need it active when the backup runs.

STEP 1: Creating the Proxmox Storage
First, I created a new Storage of the directory type:
Code:
pvesm add dir backup_proxmox --path /mnt/backup_proxmox --content backup
This can be done from the UI as well; I think it makes no difference.
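A possible refinement I have read about (untested here, so treat it as an assumption): directory storages support an is_mountpoint option, which tells Proxmox to consider the path usable only while something is actually mounted on it, so a backup cannot silently land on the local disk when the share is not mounted:

Code:
# hypothetical addition: only treat the storage as available while /mnt/backup_proxmox is a real mount
pvesm set backup_proxmox --is_mountpoint yes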

STEP 2: Creating the hook script
I created the folder to put the script in:
Code:
mkdir /var/lib/vz/snippets
cd /var/lib/vz/snippets

I created a new file
Code:
nano hook_backup_storage.sh

With the following content
Code:
#!/bin/bash

echo "hook-script: $1"

if [ "$1" == "job-start" ]
then
        echo "job-start: mounting backup share"
        mkdir /mnt/backup_proxmox
        mount -t cifs -o vers=2.0,username=******,password=****** //192.168.***.***/backup_proxmox /mnt/backup_proxmox
fi

if [ "$1" == "job-end" ]
then
        echo "job-end: unmounting backup share"
        umount /mnt/backup_proxmox
        rm -r /mnt/backup_proxmox
fi

exit 0

I'm using vers=2.0 because my NAS does not support the default version 3.0.

I set the file to be executable
Code:
chmod +x hook_backup_storage.sh

STEP 3: Modifying backup script
It is my understanding that the UI does not support the "script" flag, so I opened the file where the UI stores the backup configuration and manually added the line:
Code:
nano /etc/pve/jobs.cfg

I only added the last line:
Code:
vzdump: backup-38218ccd-fbfe
        schedule 21:00
        all 1
        compress zstd
        enabled 1
        mailnotification failure
        mailto ******
        mode snapshot
        notes-template {{guestname}}
        prune-backups keep-daily=10,keep-weekly=8
        storage backup_proxmox
        script /var/lib/vz/snippets/hook_backup_storage.sh

STEP 4: Testing
I found out where the logs are
Code:
cd /var/log/vzdump/

I opened a random file:
Code:
root@pve:/var/log/vzdump# tail lxc-101.log -n 20

2023-12-06 21:30:19 INFO: including mount point rootfs ('/') in backup
2023-12-06 21:30:19 INFO: backup mode: snapshot
2023-12-06 21:30:19 INFO: ionice priority: 7
2023-12-06 21:30:19 INFO: hook-script: backup-start
>>>> 2023-12-06 21:30:19 INFO: job-start: mounting backup share
2023-12-06 21:30:20 INFO: hook-script: pre-stop
2023-12-06 21:30:20 INFO: create storage snapshot 'vzdump'
2023-12-06 21:30:21 INFO: hook-script: pre-restart
2023-12-06 21:30:21 INFO: hook-script: post-restart
2023-12-06 21:30:21 INFO: creating vzdump archive '/mnt/backup_proxmox/dump/vzdump-lxc-101-2023_12_06-21_30_19.tar.zst'
2023-12-06 21:30:36 INFO: Total bytes written: 1547366400 (1.5GiB, 95MiB/s)
2023-12-06 21:30:41 INFO: archive file size: 562MB
2023-12-06 21:30:41 INFO: adding notes to backup
2023-12-06 21:30:42 INFO: prune older backups with retention: keep-daily=10, keep-weekly=8
2023-12-06 21:30:42 INFO: removing backup 'backup_proxmox:backup/vzdump-lxc-101-2023_12_06-21_14_14.tar.zst'
2023-12-06 21:30:42 INFO: pruned 1 backup(s) not covered by keep-retention policy
2023-12-06 21:30:42 INFO: hook-script: backup-end
>>>> 2023-12-06 21:30:42 INFO: job-end: unmounting backup share
2023-12-06 21:30:42 INFO: cleanup temporary 'vzdump' snapshot
2023-12-06 21:30:43 INFO: Finished Backup of VM 101 (00:00:24)

Now for the problems... The backup did not end properly.
This is the log from the PVE Shell, in the bottom area where all the tasks are logged.
Code:
INFO: removing backup 'backup_proxmox:backup/vzdump-lxc-101-2023_12_06-21_14_14.tar.zst'
INFO: pruned 1 backup(s) not covered by keep-retention policy
INFO: hook-script: backup-end
INFO: job-end: unmounting backup share
INFO: cleanup temporary 'vzdump' snapshot
  Logical volume "snap_vm-101-disk-0_vzdump" successfully removed.
INFO: Finished Backup of VM 101 (00:00:24)
INFO: Backup finished at 2023-12-06 21:30:43
cp: cannot create regular file '/mnt/backup_proxmox/dump/vzdump-lxc-101-2023_12_06-21_30_19.log': No such file or directory
INFO: hook-script: log-end
command 'df -P -T -B 1 /mnt/backup_proxmox/dump' failed: exit code 1
ERROR: Backup of VM 102 failed - unable to create temporary directory '/mnt/backup_proxmox/dump/vzdump-qemu-102-2023_12_06-21_30_43.tmp' at /usr/share/perl5/PVE/VZDump.pm line 1005.
INFO: Failed at 2023-12-06 21:30:43
INFO: hook-script: backup-abort
INFO: hook-script: log-end
command 'df -P -T -B 1 /mnt/backup_proxmox/dump' failed: exit code 1

It seems that after the "job-end" hook, the procedure still needs to write another file, and it cannot, as the share is now unmounted.
Is there any practice to handle this scenario?

Of course, based on the logs, a simple solution would be to move the umount to the log-end hook, but maybe there is something better.

Thank you
 
This is my latest solution; any additional suggestions would be very much appreciated.
Basically I add the mount when it's needed (backup start) and remove it when the backup has ended.
Personally, I don't use backup targets that get turned off somehow. All of my backup targets are reachable at all times: on the one hand, jobs are constantly running anyway, and on the other hand, I never know when I will want to restore something.

Unfortunately you will have to wait for others who might have a different idea.
 
Personally, I don't use backup targets that get turned off somehow. All of my backup targets are reachable at all times: on the one hand, jobs are constantly running anyway, and on the other hand, I never know when I will want to restore something.

Unfortunately you will have to wait for others who might have a different idea.

I understand.

In my case I use the NAS very little, and for most of the day it stays idle/asleep to save energy. I only need it available when the backup starts (once a day, for about 10 minutes), so I went with this first approach. From my research online, I did not find a way to leave the share mounted and let the NAS go idle, waking it only when I actually access the share.
 
Personally, I don't use backup targets that get turned off somehow. All of my backup targets are reachable at all times: on the one hand, jobs are constantly running anyway, and on the other hand, I never know when I will want to restore something.

Unfortunately you will have to wait for others who might have a different idea.
It seems that the solution I found only partially works. At least the backup itself is made properly, which is the most important part.
Even though I moved the umount to the last available hook (log-end), it seems that a final action still fails:
Code:
INFO: hook-script: backup-end
INFO: cleanup temporary 'vzdump' snapshot
  Logical volume "snap_vm-100-disk-0_vzdump" successfully removed.
INFO: Finished Backup of VM 100 (00:00:24)
INFO: Backup finished at 2023-12-08 13:35:28
INFO: hook-script: log-end
INFO: log-end: unmounting backup share
command 'df -P -T -B 1 /mnt/backup_proxmox/dump' failed: exit code 1
Use of uninitialized value in string eq at /usr/share/perl5/PVE/VZDump.pm line 992.
Use of uninitialized value in string eq at /usr/share/perl5/PVE/VZDump.pm line 992.
Use of uninitialized value in string eq at /usr/share/perl5/PVE/VZDump.pm line 992.
Use of uninitialized value in string eq at /usr/share/perl5/PVE/VZDump.pm line 992.
Use of uninitialized value in string eq at /usr/share/perl5/PVE/VZDump.pm line 992.
Use of uninitialized value in string eq at /usr/share/perl5/PVE/VZDump.pm line 992.
Use of uninitialized value in concatenation (.) or string at /usr/share/perl5/PVE/VZDump.pm line 997.
INFO: filesystem type on dumpdir is '' -using /var/tmp/vzdumptmp689088_101 for temporary files

From my understanding, after the backup the procedure tries to display information about total and available space on the file system (the df command), and this fails because the share is now unmounted. It is not a problem per se, as the backup itself is made, but this behaviour makes the overall backup job report a failure because of this error.
I don't see any hook that can be launched after log-end; it seems to be the last one.
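One idea that might sidestep the hook ordering entirely (only a sketch, with assumed options I have not tested against this NAS): let systemd mount the share on demand with an automount unit, so it gets mounted the moment vzdump touches the path and is unmounted again after an idle timeout, with no umount in the hook script at all:

Code:
# /etc/fstab on the Proxmox host: mounted on first access, unmounted after 10 minutes idle
//192.168.***.***/backup_proxmox /mnt/backup_proxmox cifs username=******,password=******,vers=2.0,noauto,x-systemd.automount,x-systemd.idle-timeout=600 0 0

After editing fstab, a systemctl daemon-reload makes systemd pick up the new automount unit. The open question with this approach is whether pvestatd polling the directory storage would keep re-triggering the automount; it probably only helps if the storage itself stays disabled outside the backup window.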
 
To integrate your Synology NAS with Proxmox and set up NAS hibernation so that Proxmox does not wake it until it is actually needed, follow these steps:

1. Configure the Synology NAS for hibernation:
• Open the Synology NAS administration interface.
• Go to Control Panel > Hardware & Power > power on/off and hibernation settings.
• Configure the NAS to enter hibernation after a period of inactivity.
2. Configuration in Proxmox:
• First, make sure the containers (CT) and virtual machines (VM) in Proxmox only access the NAS when necessary.
• If you use NFS or CIFS to mount the NAS in Proxmox, you can set up scripts to mount and unmount these file systems as needed.
3. Scripting in Proxmox:
• Create scripts in Proxmox to mount and unmount the NAS. These scripts should check the state of the NAS and mount it only when needed.
For example, you can create a script to mount the NAS:

Code:
#!/bin/bash
# Check whether the NAS is online
if ping -c 1 192.168.1.100 &> /dev/null
then
    # Mount the NAS
    mount -t nfs 192.168.1.100:/volume1/share /mnt/nas
else
    echo "NAS is not available"
fi

And another one to unmount it:

Code:
#!/bin/bash
# Unmount the NAS
umount /mnt/nas


4. Container and VM configuration:
• Configure your containers and VMs to run these scripts when they need access to the NAS.
• You can use hooks or start/stop scripts in the containers and VMs to run these scripts automatically.
5. Automation with cron or systemd:
• You can use cron or systemd to automate running these scripts at specific times or based on events.

For example, to run the mount script when a VM starts, you could use a hook in Proxmox:

Code:
# In /etc/pve/qemu-server/<vmid>.conf, add:
hookscript: local:snippets/mount-nas.sh


And the content of mount-nas.sh could be:


Code:
#!/bin/bash
if [ "$1" = "pre-start" ]; then
    /path/to/mount-nas.sh
elif [ "$1" = "post-stop" ]; then
    /path/to/umount-nas.sh
fi

This way, you make sure your NAS only wakes up when it is really needed and stays in hibernation the rest of the time, reducing power consumption and wear on the hardware.
 
I am also experiencing this issue.

@nicola_s Did you find a solution that worked for you?

It would be great if we didn't need to resort to workarounds (cron, invoking vzdump on the CLI with manual options, dummy directories, manual mounting).
Besides, these quirks are external to Proxmox and its user interface, might raise compatibility issues, and aren't backed up with the Proxmox configuration.

I am wondering, what would be a good feature request to solve this issue in a simple manner.

Preferred:
- being able to adjust the CIFS share polling interval (default: every 10 seconds), or to disable it completely, via a Datacenter -> Storage -> share option

Other (less preferred) alternatives:
- Option for backup jobs to re-enable a selected storage (if needed) and disable it again after backup job has finished
- Option for vzdump to execute backup jobs by their ID, so you don't need to specify backup parameters outside Proxmox UI (but still, this would require a scheduler like cron)

Hook scripts are not that useful here, as they require the backup job to have already started, which does not happen if the underlying storage is disabled or the directory does not exist. The CIFS storage currently needs to be kept disabled to prevent constant polling. And creating a dummy directory is, imo, quite ugly.
 