Backup to FTP

lince

Apr 10, 2015
Hello,

I want to do some backups to a server on the internet, just in case the HDD I have at home for backups fails.

I'm considering two options. One could be curlftpfs: mount the FTP locally and configure Proxmox to back up there. I'm not sure what would happen if the connection is lost at some point, though.
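Something like this is what I had in mind for the curlftpfs route (the host, credentials and mount point are just placeholders):

Code:
# mount the remote FTP space locally (placeholder host and credentials)
apt-get install curlftpfs
mkdir -p /mnt/ftpbackup
curlftpfs ftpuser:ftppass@ftp.example.com /mnt/ftpbackup -o allow_other
# then add /mnt/ftpbackup as a Directory storage in Proxmox and point the backup job at it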

The other option would be to use a hook script with vzdump. The question is: when is the script executed, before or after the backup?

Regards.
 

vzdump allows you to run a hook script at various points in the backup process, see /usr/share/doc/pve-manager/examples/vzdump-hook-script.pl for an example.
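To wire it up, you point vzdump at your script either per job or globally (the script path below is just an example):

Code:
# per backup job
vzdump 100 --script /usr/local/bin/backup-hook.sh

# or globally, in /etc/vzdump.conf
script: /usr/local/bin/backup-hook.sh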
 
I didn't know it could be run at various points. It sounds like that would solve the problem. I will have a look at the example script :)

Thank you.
 
Hi,
I wanted to do the same thing, since my provider gives me 2 TB of backup space, but only with FTP and CIFS access :/
Previously, I backed up directly to the backup location mounted via curlftpfs and mount.cifs, but that was unreliable and slow; without monitoring it would sometimes hang for days.

I put together a small script; maybe someone finds it useful.
Dependency: curl

Usage:
Back up to the local hard disk into a specific storage (I use "local-backup"). When the backup finishes, this hook script pushes the file to FTP and, if that succeeds (it tries up to 3 times), deletes the local file. Otherwise it fails, which also marks the backup as failed, so you should get a mail. If the upload worked, it also purges old files of the same VM from the FTP (older than 3 days by default).

For me, the control connection sometimes failed after a successful transfer and curl returned 28, so I added a check that compares the local and remote file sizes.

For restores, I added a CIFS mount point to /etc/fstab with the noauto option and then added that mount point as a vzdump storage in Proxmox. When I need to restore, I just mount it and browse my FTP storage.
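For reference, the fstab entry looks roughly like this (server, share and credentials file are placeholders):

Code:
# /etc/fstab - only mounted on demand because of noauto
//backup.example.com/backup  /mnt/backup-restore  cifs  noauto,credentials=/root/.backup-credentials,iocharset=utf8  0  0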

Code:
#!/usr/bin/env bash
### FTP config
FTPHOST="your.ftp.com"
FTPPATH="dump/"
FTPUSER=""
FTPPASS=""
### Purge after x days
PURGE=3
### Storage ID to move
STORAGE="local-backup"
### Local path of that storage (needed for manual-cleanup)
STORAGEDIR=""

##### CONFIG END #####

### FUNCTIONS
function log {
    echo "[HOOK] $*"
}
function purge_ftp {
    VM=$1
    log "PURGE for VM ${VM} started."

    PURGEDATE=$(date --date="${PURGE} days ago" +%s)
    FILES=$(curl -s -u "${FTPUSER}:${FTPPASS}" --list-only "ftp://${FTPHOST}/${FTPPATH}")
    RETURNCODE=$?   # curl's exit code (sorting separately keeps it intact)
    FILES=$(echo "${FILES}" | sort)
    if [[ $RETURNCODE -ne 0 ]]; then
        log "    LIST: ftp://${FTPHOST}/${FTPPATH} - FAILED (curl): $RETURNCODE"
        return $RETURNCODE
    fi
    for FILE in ${FILES[@]}; do
        # split e.g. "vzdump-qemu-100-2015_04_10-12_00_00.vma.gz" into
        # (vzdump qemu 100 2015-04-10 12-00-00 vma.gz)
        SPLITFILE="${FILE/./ }"; SPLITFILE="${SPLITFILE//-/ }"; SPLITFILE=(${SPLITFILE//_/-})

        VMSTR="${SPLITFILE[2]}"
        DATESTR="${SPLITFILE[3]}"
        TIMESTR="${SPLITFILE[4]}"; TIMESTR="${TIMESTR//-/:}"
        if [[ $(date --date="$DATESTR $TIMESTR" +%s) -lt $PURGEDATE && "$VMSTR" == "$VM" ]]; then
            curl -s -u "${FTPUSER}:${FTPPASS}" --head -Q "-DELE ${FTPPATH}${FILE}" "ftp://${FTPHOST}"
            RETURNCODE=$?
            if [[ $RETURNCODE -ne 0 ]]; then
                log "    DELETE: $FILE - FAILED (curl): $RETURNCODE"
                return $RETURNCODE
            else
                log "    DELETE: $FILE - SUCCESS"
            fi
        elif [[ "$VMSTR" == "$VM" ]]; then
            log "    KEEP: $FILE - not older than ${PURGE} days"
        fi
    done
    return 0
}
function upload_ftp {
    FILE=$1
    log "UPLOAD for ${FILE} started."

    RETURNCODE=1
    TRIES=0
    while [[ $RETURNCODE -ne 0 && $TRIES -lt 3 ]]; do
        ((TRIES++))
        curl -s -u "${FTPUSER}:${FTPPASS}" --keepalive-time 30 -T "$FILE" "ftp://${FTPHOST}/${FTPPATH}"
        RETURNCODE=$?
        if [[ $RETURNCODE -ne 0 ]]; then
            # try to mitigate curl (28): Timeout, curl(55)
            # usually file transfers fine, but control channel is killed by the firewall
            FILENAME=$(basename "${FILE}")
            LOCALSIZE=$(stat --printf="%s" "${FILE}" | tr -d '[[:space:]]')
            REMOTESIZE=$(curl -sI -u "${FTPUSER}:${FTPPASS}" "ftp://${FTPHOST}/${FTPPATH}${FILENAME}" | awk '/Content-Length/ { print $2 }' | tr -d '[[:space:]]')
            log "    UPLOAD #${TRIES}: $FILE to ftp://${FTPHOST}/${FTPPATH} - local: ${LOCALSIZE}, remote: ${REMOTESIZE}"
            if [[ "$REMOTESIZE" -eq "$LOCALSIZE" ]]; then
                log "    UPLOAD #${TRIES}: $FILE to ftp://${FTPHOST}/${FTPPATH} - WARN (curl): $RETURNCODE, but seems complete"
                RETURNCODE=0
            else
                log "    UPLOAD #${TRIES}: $FILE to ftp://${FTPHOST}/${FTPPATH} - FAILED (curl): $RETURNCODE"
                if [[ $RETURNCODE -eq 55 ]]; then
                    curl -s -u "${FTPUSER}:${FTPPASS}" --head -Q "-DELE ${FTPPATH}${FILENAME}" "ftp://${FTPHOST}"
                fi
            fi
        fi
    done
    if [[ $RETURNCODE -ne 0 ]]; then
        log "    UPLOAD: $FILE to ftp://${FTPHOST}/${FTPPATH} - FAILED PERMANENTLY"
    else
        log "    UPLOAD: $FILE to ftp://${FTPHOST}/${FTPPATH} - SUCCESS"
    fi
    return $RETURNCODE
}

### MAIN
PHASE=$1
if [[ "$PHASE" == "job-start" || "$PHASE" == "job-end" || "$PHASE" == "job-abort" ]]; then
    #DUMPDIR
    #STOREID

    exit 0
fi
if [[ "$PHASE" == "backup-start" || "$PHASE" == "backup-end" || "$PHASE" == "backup-abort" || "$PHASE" == "log-end" || "$PHASE" == "pre-stop" || "$PHASE" == "pre-restart" || "$PHASE" == "post-restart" ]]; then
    MODE=$2 # stop,suspend,snapshot
    VMID=$3
    # DUMPDIR, STOREID, VMTYPE (openvz/qemu) and HOSTNAME are provided
    # by vzdump as environment variables

    if [[ "$PHASE" == "backup-end" && "$STOREID" == "$STORAGE" ]]; then
        #TARFILE
        log "transfer backup archive of VM ${VMID} to FTP"
        upload_ftp "$TARFILE"
        RETURNCODE=$?  
        if [[ $RETURNCODE -ne 0 ]]; then exit $RETURNCODE; fi
        log "remove local backup archive of VM ${VMID}"
        rm "$TARFILE"

        log "purge old backup and log files of VM ${VMID} from FTP"
        purge_ftp $VMID
        RETURNCODE=$?
        exit $RETURNCODE
    fi

    if [[ "$PHASE" == "log-end" && "$STOREID" == "$STORAGE" ]]; then
        #LOGFILE
        log "transfer log file of VM ${VMID} to FTP"
        upload_ftp "$LOGFILE"
        RETURNCODE=$?
        if [[ $RETURNCODE -ne 0 ]]; then exit $RETURNCODE; fi
        log "remove local log file of VM ${VMID}"
        rm "$LOGFILE"

        exit $RETURNCODE
    fi

    exit 0
fi
# manual tasks

if [[ $PHASE == "manual-cleanup" ]]; then
        VMID=$2
        if [[ -z $VMID ]]; then
                echo "Usage: $(basename $0) manual-cleanup <VMID>"
                exit 1
        fi

        log "purge old backup and log files of VM ${VMID} from FTP"
        purge_ftp $VMID
        RETURNCODE=$?
        log "purge old backup and log files of VM ${VMID} from local storage"
        rm -r ${STORAGEDIR}/dump/vzdump-*-${VMID}-*
        exit $RETURNCODE
fi
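To purge old files of a VM by hand, call the hook directly with the manual-cleanup argument (the path is wherever you saved the script):

Code:
/usr/local/bin/backup-hook.sh manual-cleanup 100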
 
Hey, it's a little late to join the discussion, but I thought this could be interesting as well.

I just updated the little script I use to back up to an untrusted FTP server, like the one you get from your provider. They could have a look at your files ;-)

To do that, I use GPG to encrypt the backups before they get uploaded, and the script renames them (locally as well) so that they contain the hostname, which makes them easier to identify.

It also removes old backups, both remote and local (it has to do that on its own now because of the renaming).
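The encryption step itself boils down to something like this (the recipient key is a placeholder; see the gist linked below for the full script):

Code:
# encrypt the archive for a recipient key before uploading, then drop the plain file
gpg --batch --yes --recipient backup@example.com --encrypt --output "${TARFILE}.gpg" "${TARFILE}"
rm "${TARFILE}"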

If anyone is interested, I made it a gist on GitHub (not allowed to post the full URL as a new user ;-( ):
gist.github.com/bitcloud/4ac52586334a2ddc54ff
 
Hey, this is probably outdated already, but if it's still relevant for somebody using Proxmox 5, consider my script: idct/php-proxmox-backups

It allows you to perform backups, save them to FTP, and get notified via email and/or Telegram just by supplying a proper config file.

Sadly, I cannot post a link, but if you check my GitHub page (user: ideaconnect) you will find it easily.

It also supports keeping a backlog of backups (a fixed number of previous backups).

I am more than happy to receive any contributions to the tool.
 
