[SOLVED] Backup Client Fails with EINVAL: Invalid argument

MeCJay12
Member
Feb 19, 2023
I've moved all my VM backups to PBS, but now that I'm trying to move my file backups to PBS I'm having trouble. My backup client periodically runs successfully, but it also likes to fail with the very generic "EINVAL: Invalid argument" error and a file path. The file that it errors on changes from run to run. Any idea what's going on here?

Code:
Starting backup: [files]:host/Crontab/2024-10-24T13:07:58Z
Client name: Crontab
Starting backup protocol: Thu Oct 24 09:07:58 2024
Downloading previous manifest (Mon Oct 14 20:43:51 2024)
Upload directory '/data/' to 'backup@pbs@pbsLabB:8007:backup' as data.pxar.didx
unclosed encoder dropped
closed encoder dropped with state
unfinished encoder state dropped
catalog upload error - channel closed
Error: error at "TV Shows/Young Sheldon/Season 05/Young Sheldon - S05E01 - One Bad Night and Chaos of Selfish Desires.mkv"

Caused by:
    EINVAL: Invalid argument
 
Hi,
please provide some more information in order to assist.

The output of proxmox-backup-manager version --verbose and of proxmox-backup-client version, as well as the task log for the backup job from the PBS server, are of interest. Also, please share the exact command line used to invoke the backup job in your cron job.

Further, do check the systemd journal for errors by dumping it for the timespan around the failing backup job. You can achieve this with journalctl --since <DATETIME> --until <DATETIME> > journal.txt and attach the resulting file.
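For example, something like this (the timestamps below are placeholders; substitute the window around the failing run):

```shell
# Dump the systemd journal for the window around the failing backup job.
# The timestamps are placeholders -- use the actual start/end of the job.
if command -v journalctl >/dev/null; then
    journalctl --since "2024-10-24 09:00:00" --until "2024-10-24 10:00:00" > journal.txt
fi
```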

Further, what underlying filesystem is used for the target datastore? I suspect this is rather a server-side storage issue: EINVAL errors are typically encountered when the underlying datastore storage is not supported, e.g. [0, 1].

[0] https://forum.proxmox.com/threads/virtualized-pbs-with-shared-mount.100455/
[1] https://forum.proxmox.com/threads/cant-start-backup-process-on-pbs.154623/
 
My PBS is a VM running on a PVE server. The backup storage is an ext4-formatted VirtIO SCSI virtual disk. I recognize that running PBS in a VM isn't recommended, but the footprint of three new servers plus disks to do a full 3-2-1 backup is cost-prohibitive.

Proxmox Backup Server Version
Code:
user@pbsLabB:~$ sudo proxmox-backup-manager version --verbose
[sudo] password for user:
proxmox-backup                    3.2.0        running kernel: 6.8.8-2-pve
proxmox-backup-server             3.2.7-1      running version: 3.2.6
proxmox-kernel-helper             8.1.0
proxmox-kernel-6.8                6.8.12-2
proxmox-kernel-6.8.8-2-pve-signed 6.8.8-2
proxmox-kernel-6.8.4-3-pve-signed 6.8.4-3
proxmox-kernel-6.8.4-2-pve-signed 6.8.4-2
ifupdown2                         3.2.0-1+pmx9
libjs-extjs                       7.0.0-4
proxmox-backup-docs               3.2.7-1
proxmox-backup-client             3.2.7-1
proxmox-mail-forward              0.2.3
proxmox-mini-journalreader        1.4.0
proxmox-offline-mirror-helper     0.6.7
proxmox-widget-toolkit            4.2.3
pve-xtermjs                       5.3.0-3
smartmontools                     7.3-pve1
zfsutils-linux                    2.2.6-pve1

Proxmox Backup Client Version & Script run to backup
Code:
root@Crontab:/# proxmox-backup-client version
client version: 3.2.7
root@Crontab:/# cat /scripts/pbcFilesBackup.sh
#!/bin/bash

export PBS_REPOSITORY="backup@pbs@pbsLabB:backup"
export PBS_PASSWORD='password'
export XDG_RUNTIME_DIR=/run/user/1000

proxmox-backup-client backup data.pxar:/data/ veeam.pxar:/Veeam/ --ns files

Full journalctl is attached, but the only unusual thing I found was that it exited with the following:
Code:
Oct 24 21:51:48 pbsLabB proxmox-backup-proxy[606]: backup ended and finish failed: backup ended but finished flag is not set.
Oct 24 21:51:48 pbsLabB proxmox-backup-proxy[606]: removing unfinished backup
Oct 24 21:51:48 pbsLabB proxmox-backup-proxy[606]: removing backup snapshot "/mnt/disk/ns/files/host/Crontab/2024-10-24T13:07:58Z"
Oct 24 21:51:49 pbsLabB proxmox-backup-proxy[606]: TASK ERROR: backup ended but finished flag is not set.
 
What filesystem is used at the sources, /data and /Veeam? Further, please provide the systemd journal by some other means that does not require me to log in via a Google account, thank you!
 
Sorry, fixed permission on the link. You can open it without a Google account now.

/data and /Veeam are CIFS shares mounted in Ubuntu then bind mounted into the Docker container that runs PBC.
 
Sorry, fixed permission on the link. You can open it without a Google account now.
Thanks!

/data and /Veeam are CIFS shares mounted in Ubuntu then bind mounted into the Docker container that runs PBC.
What Docker image and parameters are you using, and what are the mount parameters for the CIFS shares? The latter you can get from the output of mount executed on Ubuntu. Also check the syslog on your Ubuntu host, the container, and the CIFS share host. I do not expect anything of interest to be there, but it is worth checking nevertheless.
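For reference, the effective CIFS mount options can be listed like this (a sketch; run on the Ubuntu host):

```shell
# List mounted CIFS shares with their effective options.
mount -t cifs

# Alternatively, read them straight from the kernel's mount table
# (prints nothing if no CIFS share is currently mounted).
grep cifs /proc/mounts || true
```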
 
The Docker container is one I created to run scripts; it's nothing major. It's mecjay12/crontab and the Dockerfile is here; PBC is still the last commit.

Mount parameters
Code:
//Files.domain.com/Data/ /mnt/Data/ cifs username=user,password=password,uid=root,gid=root,file_mode=0777,dir_mode=0777,noauto,x-systemd.automount 0 0

Mount command from Docker host attached

Docker inspect to include docker mounts
Code:
user@doklaba:~$ docker inspect Crontab
[
    {
        "Id": "84df512f6f981c62c2178d093e8bdbf963abaf5e4b38b724dd1a3a462ae5bc3d",
        "Created": "2024-09-23T15:52:57.960894411Z",
        "Path": "/bin/sh",
        "Args": [
            "-c",
            "crontab /etc/cron.d/* && ln -fs /usr/share/zoneinfo/$TZ /etc/localtime && dpkg-reconfigure --frontend noninteractive tzdata && cron && tail -f /var/log/cron.log"
        ],
        "State": {
            "Status": "running",
            "Running": true,
            "Paused": false,
            "Restarting": false,
            "OOMKilled": false,
            "Dead": false,
            "Pid": 9150,
            "ExitCode": 0,
            "Error": "",
            "StartedAt": "2024-10-15T21:06:26.249421529Z",
            "FinishedAt": "2024-10-15T21:04:30.537460439Z"
        },
        "Image": "sha256:65e6c39a78833ed71e300dc9f22a5397a043f734170d309c9c75553ac68bb39f",
        "ResolvConfPath": "/var/lib/docker/containers/84df512f6f981c62c2178d093e8bdbf963abaf5e4b38b724dd1a3a462ae5bc3d/resolv.conf",
        "HostnamePath": "/var/lib/docker/containers/84df512f6f981c62c2178d093e8bdbf963abaf5e4b38b724dd1a3a462ae5bc3d/hostname",
        "HostsPath": "/var/lib/docker/containers/84df512f6f981c62c2178d093e8bdbf963abaf5e4b38b724dd1a3a462ae5bc3d/hosts",
        "LogPath": "",
        "Name": "/Crontab",
        "RestartCount": 0,
        "Driver": "overlay2",
        "Platform": "linux",
        "MountLabel": "",
        "ProcessLabel": "",
        "AppArmorProfile": "docker-default",
        "ExecIDs": null,
        "HostConfig": {
            "Binds": [
                "/mnt/Docker/Crontab/conf/:/etc/cron.d:ro",
                "/mnt/Docker/Crontab/ssh/:/root/.ssh:rw"
            ],
            "ContainerIDFile": "",
            "LogConfig": {
                "Type": "local",
                "Config": {
                    "max-size": "10m"
                }
            },
            "NetworkMode": "726e8d42701e22f2b1cd73a46926d21c9406f5afda017a54eca0be5919e330a3",
            "PortBindings": {},
            "RestartPolicy": {
                "Name": "always",
                "MaximumRetryCount": 0
            },
            "AutoRemove": false,
            "VolumeDriver": "",
            "VolumesFrom": null,
            "ConsoleSize": [
                0,
                0
            ],
            "CapAdd": null,
            "CapDrop": null,
            "CgroupnsMode": "private",
            "Dns": [],
            "DnsOptions": [],
            "DnsSearch": [],
            "ExtraHosts": [],
            "GroupAdd": null,
            "IpcMode": "private",
            "Cgroup": "",
            "Links": null,
            "OomScoreAdj": 0,
            "PidMode": "",
            "Privileged": false,
            "PublishAllPorts": false,
            "ReadonlyRootfs": false,
            "SecurityOpt": null,
            "UTSMode": "",
            "UsernsMode": "",
            "ShmSize": 67108864,
            "Runtime": "runc",
            "Isolation": "",
            "CpuShares": 0,
            "Memory": 0,
            "NanoCpus": 0,
            "CgroupParent": "",
            "BlkioWeight": 0,
            "BlkioWeightDevice": null,
            "BlkioDeviceReadBps": null,
            "BlkioDeviceWriteBps": null,
            "BlkioDeviceReadIOps": null,
            "BlkioDeviceWriteIOps": null,
            "CpuPeriod": 0,
            "CpuQuota": 0,
            "CpuRealtimePeriod": 0,
            "CpuRealtimeRuntime": 0,
            "CpusetCpus": "",
            "CpusetMems": "",
            "Devices": null,
            "DeviceCgroupRules": null,
            "DeviceRequests": null,
            "MemoryReservation": 0,
            "MemorySwap": 0,
            "MemorySwappiness": null,
            "OomKillDisable": null,
            "PidsLimit": null,
            "Ulimits": null,
            "CpuCount": 0,
            "CpuPercent": 0,
            "IOMaximumIOps": 0,
            "IOMaximumBandwidth": 0,
            "Mounts": [
                {
                    "Type": "tmpfs",
                    "Target": "/tmp"
                },
                {
                    "Type": "tmpfs",
                    "Target": "/run/user/1000",
                    "TmpfsOptions": {
                        "Mode": 448
                    }
                },
                {
                    "Type": "bind",
                    "Source": "/mnt/Data/Scripts/",
                    "Target": "/scripts",
                    "ReadOnly": true
                },
                {
                    "Type": "bind",
                    "Source": "/mnt/Docker/NPM/certs/",
                    "Target": "/LE",
                    "ReadOnly": true
                },
                {
                    "Type": "bind",
                    "Source": "/mnt/Veeam/",
                    "Target": "/Veeam"
                },
                {
                    "Type": "bind",
                    "Source": "/var/run/docker.sock",
                    "Target": "/var/run/docker.sock"
                },
                {
                    "Type": "bind",
                    "Source": "/mnt/Data/",
                    "Target": "/data",
                    "ReadOnly": true
                }
            ],
            "MaskedPaths": [
                "/proc/asound",
                "/proc/acpi",
                "/proc/kcore",
                "/proc/keys",
                "/proc/latency_stats",
                "/proc/timer_list",
                "/proc/timer_stats",
                "/proc/sched_debug",
                "/proc/scsi",
                "/sys/firmware",
                "/sys/devices/virtual/powercap"
            ],
            "ReadonlyPaths": [
                "/proc/bus",
                "/proc/fs",
                "/proc/irq",
                "/proc/sys",
                "/proc/sysrq-trigger"
            ]
        },
        "GraphDriver": {
            "Data": {
                "LowerDir": "/var/lib/docker/overlay2/015df6511ad43bf0c260a12886e9c503c14b04a5a59af58f3f49c5c8b0075c6a-init/diff:/var/lib/docker/overlay2/2onbqkvlb9abzmq63t698xn84/diff:/var/lib/docker/overlay2/htooxrtw8yo5m2m3daxoxcggw/diff:/var/lib/docker/overlay2/bhulwtqa6uzkiysqr6vyxl586/diff:/var/lib/docker/overlay2/ylkoj5u3ndcr01kyjknds3daa/diff:/var/lib/docker/overlay2/opa03w81jmvs06v3vnsknn6d2/diff:/var/lib/docker/overlay2/sp9201jyrqudrq4v01e0zmbos/diff:/var/lib/docker/overlay2/heis3g6ata8vozncyrd235r7r/diff:/var/lib/docker/overlay2/avx4qjfx217muvdt012sjofwe/diff:/var/lib/docker/overlay2/3tw8kglgvabzu82cx5b2ysy5w/diff:/var/lib/docker/overlay2/8c9d71f9abed105d0eabd7d7746763d4f4323398a51114e4775f5550446daa27/diff",
                "MergedDir": "/var/lib/docker/overlay2/015df6511ad43bf0c260a12886e9c503c14b04a5a59af58f3f49c5c8b0075c6a/merged",
                "UpperDir": "/var/lib/docker/overlay2/015df6511ad43bf0c260a12886e9c503c14b04a5a59af58f3f49c5c8b0075c6a/diff",
                "WorkDir": "/var/lib/docker/overlay2/015df6511ad43bf0c260a12886e9c503c14b04a5a59af58f3f49c5c8b0075c6a/work"
            },
            "Name": "overlay2"
        },
        "Mounts": [
            {
                "Type": "bind",
                "Source": "/mnt/Docker/NPM/certs",
                "Destination": "/LE",
                "Mode": "",
                "RW": false,
                "Propagation": "rprivate"
            },
            {
                "Type": "bind",
                "Source": "/mnt/Veeam",
                "Destination": "/Veeam",
                "Mode": "",
                "RW": true,
                "Propagation": "rprivate"
            },
            {
                "Type": "bind",
                "Source": "/mnt/Data",
                "Destination": "/data",
                "Mode": "",
                "RW": false,
                "Propagation": "rprivate"
            },
            {
                "Type": "bind",
                "Source": "/mnt/Data/Scripts",
                "Destination": "/scripts",
                "Mode": "",
                "RW": false,
                "Propagation": "rprivate"
            },
            {
                "Type": "tmpfs",
                "Source": "",
                "Destination": "/tmp",
                "Mode": "",
                "RW": true,
                "Propagation": ""
            },
            {
                "Type": "bind",
                "Source": "/var/run/docker.sock",
                "Destination": "/var/run/docker.sock",
                "Mode": "",
                "RW": true,
                "Propagation": "rprivate"
            },
            {
                "Type": "bind",
                "Source": "/mnt/Docker/Crontab/conf",
                "Destination": "/etc/cron.d",
                "Mode": "ro",
                "RW": false,
                "Propagation": "rprivate"
            },
            {
                "Type": "bind",
                "Source": "/mnt/Docker/Crontab/ssh",
                "Destination": "/root/.ssh",
                "Mode": "rw",
                "RW": true,
                "Propagation": "rprivate"
            },
            {
                "Type": "tmpfs",
                "Source": "",
                "Destination": "/run/user/1000",
                "Mode": "",
                "RW": true,
                "Propagation": ""
            }
        ],
        "Config": {
            "Hostname": "Crontab",
            "Domainname": "",
            "User": "",
            "AttachStdin": false,
            "AttachStdout": true,
            "AttachStderr": true,
            "Tty": false,
            "OpenStdin": false,
            "StdinOnce": false,
            "Env": [
                "TZ=America/New_York",
                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
            ],
            "Cmd": [
                "/bin/sh",
                "-c",
                "crontab /etc/cron.d/* && ln -fs /usr/share/zoneinfo/$TZ /etc/localtime && dpkg-reconfigure --frontend noninteractive tzdata && cron && tail -f /var/log/cron.log"
            ],
            "Image": "mecjay12/crontab",
            "Volumes": null,
            "WorkingDir": "",
            "Entrypoint": null,
            "OnBuild": null,
            "Labels": {
                "com.docker.compose.config-hash": "084610de1c3096ad732965d51bda7c55d820194c2313514b08383562815f4be8",
                "com.docker.compose.container-number": "1",
                "com.docker.compose.depends_on": "",
                "com.docker.compose.image": "sha256:65e6c39a78833ed71e300dc9f22a5397a043f734170d309c9c75553ac68bb39f",
                "com.docker.compose.oneoff": "False",
                "com.docker.compose.project": "crontab",
                "com.docker.compose.project.config_files": "/data/compose/3/docker-compose.yml",
                "com.docker.compose.project.working_dir": "/data/compose/3",
                "com.docker.compose.replace": "9902effa222b9fb7f77d3fff364e028181f63a68eebaf471ff174fe37903ed21",
                "com.docker.compose.service": "crontab",
                "com.docker.compose.version": "2.29.2"
            }
        },
        "NetworkSettings": {
            "Bridge": "",
            "SandboxID": "8a6601c357687e03f107c4b421d1f54d3f57745a1571c4ad32304eb633f08206",
            "SandboxKey": "/var/run/docker/netns/8a6601c35768",
            "Ports": {},
            "HairpinMode": false,
            "LinkLocalIPv6Address": "",
            "LinkLocalIPv6PrefixLen": 0,
            "SecondaryIPAddresses": null,
            "SecondaryIPv6Addresses": null,
            "EndpointID": "",
            "Gateway": "",
            "GlobalIPv6Address": "",
            "GlobalIPv6PrefixLen": 0,
            "IPAddress": "",
            "IPPrefixLen": 0,
            "IPv6Gateway": "",
            "MacAddress": "",
            "Networks": {
                "better_bridge": {
                    "IPAMConfig": null,
                    "Links": null,
                    "Aliases": [
                        "Crontab",
                        "crontab"
                    ],
                    "MacAddress": "02:42:ac:12:00:23",
                    "DriverOpts": null,
                    "NetworkID": "726e8d42701e22f2b1cd73a46926d21c9406f5afda017a54eca0be5919e330a3",
                    "EndpointID": "46489b00cebf9e0c5b3f3887d6b579a24124228060d64e2a1a25b2d7e188a529",
                    "Gateway": "172.18.0.1",
                    "IPAddress": "172.18.0.35",
                    "IPPrefixLen": 16,
                    "IPv6Gateway": "fd00:0:0:1::1",
                    "GlobalIPv6Address": "fd00:0:0:1::23",
                    "GlobalIPv6PrefixLen": 64,
                    "DNSNames": [
                        "Crontab",
                        "crontab",
                        "84df512f6f98"
                    ]
                }
            }
        }
    }
]

I'll grab syslog info after this week's backup runs. All of last week's syslogs have rotated out at this point. I can tell you that I don't have syslog in the Docker container.
 


Thanks for the provided details. Can you reproduce the issue by backing up just the single folder containing the exact file it failed on? Please also try to stat the file at which the backup run fails. How often does the backup fail, and when it does, does the same backup go through on the next run without changes to the files?
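For example, from inside the container, something like (the path is taken from the error message above; adjust as needed):

```shell
# Path of the file the backup failed on (from the error message above).
FILE='/data/TV Shows/Young Sheldon/Season 05/Young Sheldon - S05E01 - One Bad Night and Chaos of Selfish Desires.mkv'

# stat prints the file metadata, or the exact error (e.g. EINVAL) on failure.
stat -- "$FILE" || echo "stat failed on: $FILE"

# To narrow things down, re-run the backup against just that folder:
# proxmox-backup-client backup test.pxar:"$(dirname "$FILE")" --ns files
```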

These intermittent errors might indicate an underlying connection or hardware issue.
 
Let's put this troubleshooting on hold. The regular daily backups started failing so I rebooted the PVE host and everything is working again. I'll post again if the issue comes back.
 
Let's put this troubleshooting on hold. The regular daily backups started failing so I rebooted the PVE host and everything is working again. I'll post again if the issue comes back.
If that is the case, I would suggest performing a prolonged memory test, as bad RAM can have all sorts of strange manifestations.