Hetzner Storage Box CIFS error after IP change

Oct 2, 2022
Hi, I am using a Hetzner Storage Box in my Proxmox cluster. From time to time the connection fails, and I have to delete and re-add the storage box to make it work again.

Today I had the same problem and tried to find the root cause.

What I can see when calling mount is that an IPv4 address is used to mount the storage box:

Code:
//XXXX.your-storagebox.de/backup on /mnt/pve/storage-proxmox1 type cifs (rw,relatime,vers=3.1.1,cache=strict,username=XXXX,uid=0,noforceuid,gid=0,noforcegid,addr=65.21.90.67,file_mode=0755,dir_mode=0755,soft,nounix,serverino,mapposix,rsize=4194304,wsize=4194304,bsize=1048576,echo_interval=60,actimeo=1)

But if I dig the DNS name of my storage box, I receive a different IPv4 address. It seems that Hetzner changes the IP address of storage boxes from time to time.

Since I use the FQDN, not the IP address, to add the storage to my cluster: what would be a proper solution? I am quite surprised that mounting the storage box via FQDN results in a mount pinned to a static IP.
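To make the mismatch visible, a small helper (just a sketch; the mount point is the one from the mount output above) can pull the addr= option out of /proc/mounts, so it can be compared with a fresh lookup via dig:

```shell
# Sketch: read the addr= option of a live CIFS mount from /proc/mounts,
# to compare with what DNS returns right now.
# The optional second argument substitutes another mounts table (for testing).
cifs_mounted_addr() {
    awk -v mp="$1" '$2 == mp {print $4}' "${2:-/proc/mounts}" |
        tr ',' '\n' | sed -n 's/^addr=//p'
}

# e.g. on an affected node:
#   cifs_mounted_addr /mnt/pve/storage-proxmox1   # IP the kernel is using
#   dig +short XXXX.your-storagebox.de            # IP DNS returns now
```

If the two outputs differ, the mount is still talking to the old address.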
 
Hi, I suppose mount.cifs resolves the hostname to an IP address once when it is called by PVE, and from then on it operates on this IP address. I'm not aware of a way to configure it to resolve again when the connection fails.

One workaround I can think of: You could have a script periodically check whether the IP of the current mount is still up-to-date, and if it isn't, unmount the mount point (via umount -l /path/to/mountpoint). It would then take PVE a few seconds to detect that the storage is missing and run mount again, which should then resolve the hostname to the new IP.
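The periodic check described above could be sketched roughly like this (the host name and mount point are placeholders for your setup, and this is untested against an actual Hetzner IP change):

```shell
#!/bin/bash
# Sketch of the suggested workaround: compare the IP the kernel is using
# for the CIFS mount with the addresses DNS currently returns, and lazily
# unmount on a mismatch so PVE re-mounts the storage with a fresh lookup.
# storagebox_host and mount_point below are placeholders.

storagebox_host="XXXX.your-storagebox.de"
mount_point="/mnt/pve/storage-proxmox1"

# addr= option of the live mount, read from /proc/mounts
# (optional second argument overrides the mounts table, for testing)
mounted_addr() {
    awk -v mp="$1" '$2 == mp {print $4}' "${2:-/proc/mounts}" |
        tr ',' '\n' | sed -n 's/^addr=//p'
}

check_and_remount() {
    local current dns_ips
    current=$(mounted_addr "$mount_point")
    [ -z "$current" ] && return 0   # not mounted; nothing to do
    # all A/AAAA records DNS currently returns for the box
    dns_ips=$(getent ahosts "$storagebox_host" | awk '{print $1}' | sort -u)
    if [ -n "$dns_ips" ] && ! echo "$dns_ips" | grep -qxF "$current"; then
        echo "mounted addr $current is no longer in DNS, unmounting"
        umount -l "$mount_point"
    fi
}

# run the check only when executed directly, not when sourced
[ "${BASH_SOURCE[0]}" = "$0" ] && check_and_remount
```

Comparing against all returned addresses (getent ahosts lists both A and AAAA records) avoids false positives when the box is reachable over either IPv4 or IPv6.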
 
Hi, I unmounted as you suggested, and now it is using the IPv6 address. I recall that the IPv6 address was also used last time, so there seems to be a certain point in time at which the mount switches to IPv4. The server has not been restarted or anything like that, so I can't think of an event that would cause the mount to switch from the IPv6 to the IPv4 address...
 
I created a shell script which I run as a cron job once a day to test and unmount. Here it is in case anybody else needs it:

Code:
#!/bin/bash
# Daily cron job: if the storage box mount is gone from df (stale mount),
# lazily unmount it so PVE re-mounts it with a fresh DNS lookup.

mount_point="/mnt/pve/{your_storage_name}"

get_timestamp() {
    date +"%Y-%m-%d %H:%M:%S"
}

df_output=$(df -h | grep "$mount_point")

if [ -z "$df_output" ]; then
    echo "$(get_timestamp): Host is down. Remounting the volume..."

    if umount -l "$mount_point"; then
        echo "$(get_timestamp): Volume unmounted successfully."
        sleep 30

        df_output=$(df -h | grep "$mount_point")
        if [ -n "$df_output" ]; then
            echo "$(get_timestamp): Volume has been remounted successfully."
        else
            echo "$(get_timestamp): Failed to remount the volume."
        fi
    else
        echo "$(get_timestamp): Failed to unmount the volume."
    fi
else
    echo "$(get_timestamp): Host is up. Volume is already mounted."
fi
 
