Error: parse_rfc_3339 failed - wrong length at line 1 column 60 (500)

When do you get this error? Can you post the content of the file /var/log/proxmox-backup/tasks/active?
 
Hi, sorry. I set up a new VM for PBS to test it again. The backup itself seemed OK, but writing the log had a problem; the cause of the error was this:

INFO: Finished Backup of VM 105 (00:04:14)
INFO: Backup finished at 2020-09-27 12:36:13
ERROR: Backup job failed - upload log failed: Error: parse_rfc_3339 failed - wrong length at line 1 column 60
TASK ERROR: upload log failed: Error: parse_rfc_3339 failed - wrong length at line 1 column 60
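For context, RFC 3339 timestamps like the ones PBS uses ('2020-09-27T12:36:13Z') have a fixed shape, and a strict parser rejects anything of the wrong length. This hypothetical Python sketch (not the actual Rust implementation in PBS) illustrates the kind of length check behind such an error message:

```python
from datetime import datetime

def parse_rfc3339(s: str) -> datetime:
    # "2020-09-27T12:36:13Z" is exactly 20 characters;
    # a strict parser rejects any other length up front
    if len(s) != 20:
        raise ValueError(f"parse_rfc_3339 failed - wrong length ({len(s)})")
    return datetime.strptime(s, "%Y-%m-%dT%H:%M:%SZ")

print(parse_rfc3339("2020-09-27T12:36:13Z"))  # 2020-09-27 12:36:13
```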
 
But this only happens on local storage; on NFS-mounted storage everything was fine. So I set up a new VM without local backup storage, only two mounted NFS shares, one with 32 TB and one with 6 TB. Oddly, both show up as 14.6 GB each, but a test backup runs fine with no errors. The only issue is that the backup is larger now: 300 MB on PVE, but around 800 MB on the PBS server. One question: can we also mount Samba shares, or only NFS shares?
 
OK, now I tried mounting a CIFS share and the result is the same: it shows 14.6 GB, just like the others.
 
An update: when I use these NFS shares directly from PVE, everything is OK. They show the correct volume size, and the backup size is back to the normal 300 MB; on PBS the same backup is 800 MB.
 
What does 'proxmox-backup-client version --repository YOURBACKUPREPO' show?
(where YOURBACKUPREPO is your repo + user, e.g. root@pam@yourbackupserver:yourdatastore)
 
Hi, everything is at the latest version:

Backup Server 0.8-21
BETA

client version: 0.8.21
server version: 0.8.21
 

Hi, I do not fully understand your problem:

Hi, sorry. I set up a new VM for PBS to test it again. The backup itself seemed OK, but writing the log had a problem; the cause of the error was this:
Do you still have this log problem? If yes, please post the complete task log.

But this only happens on local storage; on NFS-mounted storage everything was fine. So I set up a new VM without local backup storage, only two mounted NFS shares, one with 32 TB and one with 6 TB. Oddly, both show up as 14.6 GB each, but a test backup runs fine with no errors. The only issue is that the backup is larger now: 300 MB on PVE, but around 800 MB on the PBS server. One question: can we also mount Samba shares, or only NFS shares?
OK, now I tried mounting a CIFS share and the result is the same: it shows 14.6 GB, just like the others.
An update: when I use these NFS shares directly from PVE, everything is OK. They show the correct volume size, and the backup size is back to the normal 300 MB; on PBS the same backup is 800 MB.


AFAIU, you are trying to mount CIFS/NFS as the base for your datastores, but they all show the same usage?

Can you post the output of 'mount' and 'df -h', and the content of /etc/proxmox-backup/datastore.cfg?
 
Hi mate,
first, many thanks for your quick responses. Yes, exactly: those errors only occur on local storage; on NFS or CIFS shares everything works fine, so I decided to mount those drives to test with. This is just to try out PBS. Those drives are a 32 TB NFS share for my backups.

root@pbs:~# mount
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
proc on /proc type proc (rw,relatime)
udev on /dev type devtmpfs (rw,nosuid,relatime,size=993704k,nr_inodes=248426,mode=755)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)
tmpfs on /run type tmpfs (rw,nosuid,noexec,relatime,size=145556k,nr_inodes=181945,mode=755)
/dev/mapper/pbs-root on / type ext4 (rw,relatime,errors=remount-ro)
securityfs on /sys/kernel/security type securityfs (rw,nosuid,nodev,noexec,relatime)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev,size=493284k,nr_inodes=123321)
tmpfs on /run/lock type tmpfs (rw,nosuid,nodev,noexec,relatime,size=5120k,nr_inodes=123321)
tmpfs on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,size=493284k,nr_inodes=123321,mode=755)
cgroup2 on /sys/fs/cgroup/unified type cgroup2 (rw,nosuid,nodev,noexec,relatime,nsdelegate)
cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,xattr,name=systemd)
pstore on /sys/fs/pstore type pstore (rw,nosuid,nodev,noexec,relatime)
none on /sys/fs/bpf type bpf (rw,nosuid,nodev,noexec,relatime,mode=700)
cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,cpu,cpuacct)
cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,freezer)
cgroup on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,memory)
cgroup on /sys/fs/cgroup/pids type cgroup (rw,nosuid,nodev,noexec,relatime,pids)
cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,devices)
cgroup on /sys/fs/cgroup/rdma type cgroup (rw,nosuid,nodev,noexec,relatime,rdma)
cgroup on /sys/fs/cgroup/hugetlb type cgroup (rw,nosuid,nodev,noexec,relatime,hugetlb)
cgroup on /sys/fs/cgroup/net_cls,net_prio type cgroup (rw,nosuid,nodev,noexec,relatime,net_cls,net_prio)
cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,cpuset)
cgroup on /sys/fs/cgroup/perf_event type cgroup (rw,nosuid,nodev,noexec,relatime,perf_event)
cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,blkio)
mqueue on /dev/mqueue type mqueue (rw,relatime)
debugfs on /sys/kernel/debug type debugfs (rw,relatime)
hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime,pagesize=2M)
systemd-1 on /proc/sys/fs/binfmt_misc type autofs (rw,relatime,fd=45,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=14706)
sunrpc on /run/rpc_pipefs type rpc_pipefs (rw,relatime)
configfs on /sys/kernel/config type configfs (rw,relatime)
fusectl on /sys/fs/fuse/connections type fusectl (rw,relatime)
tmpfs on /run/user/0 type tmpfs (rw,nosuid,nodev,relatime,size=98656k,mode=700)

root@pbs:~# df -h
Filesystem Size Used Avail Use% Mounted on
udev 971M 0 971M 0% /dev
tmpfs 143M 15M 128M 11% /run
/dev/mapper/pbs-root 15G 2.5G 12G 18% /
tmpfs 482M 0 482M 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 482M 0 482M 0% /sys/fs/cgroup
tmpfs 97M 0 97M 0% /run/user/0

root@pbs:~# cat /etc/proxmox-backup/datastore.cfg
datastore: NBS-VM-BACKUPS
comment
path //192.168.10.5/volume1/VM-BACKUPS

datastore: BTN-VM-Backups
comment
path //192.168.2.8/mnt/zfs

datastore: NBS_CIFS_Backup
comment
path //192.168.10.5/VM-BACKUPS

As you can see, VM-BACKUPS appears twice: once for the NFS share and once for the CIFS share, to try whether either works.

Many thanks.
 
On the PVE side everything is OK with the same mounts:

root@pvs:~# df -h
Filesystem Size Used Avail Use% Mounted on
udev 16G 0 16G 0% /dev
tmpfs 3.2G 10M 3.2G 1% /run
/dev/mapper/pve-root 94G 7.0G 83G 8% /
tmpfs 16G 45M 16G 1% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 16G 0 16G 0% /sys/fs/cgroup
/dev/sda2 511M 312K 511M 1% /boot/efi
/dev/fuse 30M 44K 30M 1% /etc/pve
192.168.10.5:/volume1/ISO 28T 3.1T 25T 11% /mnt/pve/NBS_ISO
192.168.10.6:/mnt/daten/VM-STOR2 4.9T 256K 4.9T 1% /mnt/pve/Truenas_VM-STOR2
192.168.10.5:/volume1/VM-BACKUPS 28T 3.1T 25T 11% /mnt/pve/NBS_Backup
192.168.10.5:/volume1/LCX-VM-Templates 28T 3.1T 25T 11% /mnt/pve/NBS-LCX-VM-Templates
//192.168.2.8/VM-Backup 5.3T 1.5G 5.3T 1% /mnt/pve/BTN-VM-Backups
192.168.10.5:/volume1/VM-STOR 28T 3.1T 25T 11% /mnt/pve/NBS_VM-Storage
tmpfs 3.2G 0 3.2G 0% /run/user/0
 
root@pbs:~# cat /etc/proxmox-backup/datastore.cfg
datastore: NBS-VM-BACKUPS
comment
path //192.168.10.5/volume1/VM-BACKUPS

datastore: BTN-VM-Backups
comment
path //192.168.2.8/mnt/zfs

datastore: NBS_CIFS_Backup
comment
path //192.168.10.5/VM-BACKUPS
Actually, those are not CIFS/NFS mounts.

The datastore path is always local, since using local storage is the primary (and recommended) way.
If you want to use NFS/CIFS for a datastore, for now you have to mount it manually (or put it in a mount unit/fstab) and select that path.
PBS will not mount any NFS or CIFS share for you.
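As a sketch of that approach (server address, mount point, and datastore name here are placeholders, not taken from this thread): mount the export yourself via fstab, then point the datastore at the local mount point rather than at a //server/share path.

```
# /etc/fstab -- example entry (hypothetical server and paths)
192.168.10.5:/volume1/VM-BACKUPS  /mnt/nbs-backups  nfs  defaults  0  0

# /etc/proxmox-backup/datastore.cfg -- the path is the LOCAL mount point
datastore: NBS-VM-BACKUPS
        path /mnt/nbs-backups
```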
 
OK, many thanks. Well, local storage produces these errors. Since NFS works on PVE, I tried mounting an NFS share to test it on PBS; it worked on the first installation, but on this second installation it does not. What is weird: the same backup is around 300 MB on PVE but 800 MB on PBS. I think the compression is different; on PVE it was ZSTD, but on PBS you cannot change it (it is greyed out). It is a huge difference, around 2.5 times bigger for the same VM backup.
On the other side, PBS currently offers no way to remove an added datastore. I think this needs more improvement for now; I will give it another try later. Many thanks for the help.
 
I got the same problem with backups of LXC containers (KVM works fine):

INFO: Starting Backup of VM 106 (lxc)
INFO: Backup started at 2020-09-29 11:18:33
INFO: status = running
INFO: CT Name: InfluxDB
INFO: including mount point rootfs ('/') in backup
INFO: backup mode: snapshot
INFO: ionice priority: 7
INFO: create storage snapshot 'vzdump'
/dev/rbd3
INFO: creating Proxmox Backup Server archive 'ct/106/2020-09-29T09:18:33Z'
INFO: run: lxc-usernsexec -m u:0:100000:65536 -m g:0:100000:65536 -- /usr/bin/proxmox-backup-client backup --crypt-mode=encrypt --keyfd=14 pct.conf:/var/tmp/vzdumptmp1005927/etc/vzdump/pct.conf root.pxar:/mnt/vzsnap0 --include-dev /mnt/vzsnap0/./ --skip-lost-and-found --backup-type ct --backup-id 106 --backup-time 1601371113 --repository admin@pbs@10.13.0.164:TEST
INFO: Starting backup: ct/106/2020-09-29T09:18:33Z
INFO: Client name: GS-PVE01
INFO: Starting backup protocol: Tue Sep 29 11:18:34 2020
INFO: Error: parse_rfc_3339 failed - wrong length at line 1 column 60
INFO: remove vzdump snapshot
Removing snap: 100% complete...done.
ERROR: Backup of VM 106 failed - command 'lxc-usernsexec -m u:0:100000:65536 -m g:0:100000:65536 -- /usr/bin/proxmox-backup-client backup '--crypt-mode=encrypt' '--keyfd=14' pct.conf:/var/tmp/vzdumptmp1005927/etc/vzdump/pct.conf root.pxar:/mnt/vzsnap0 --include-dev /mnt/vzsnap0/./ --skip-lost-and-found --backup-type ct --backup-id 106 --backup-time 1601371113 --repository admin@pbs@10.13.0.164:TEST' failed: exit code 255
INFO: Failed at 2020-09-29 11:18:35
INFO: Backup job finished with errors
TASK ERROR: job errors

The LXC container is located on Ceph/RBD.

There is no entry in /var/log/proxmox-backup/tasks/active.
Server: 0.8-21
Client 0.8.21-1
 
I got the same problem with backups of LXC containers (KVM works fine)
The log does not seem to show the same issue, so I doubt it is the same problem.

OK, many thanks. Well, local storage produces these errors. Since NFS works on PVE, I tried mounting an NFS share to test it on PBS; it worked on the first installation, but on this second installation it does not. What is weird: the same backup is around 300 MB on PVE but 800 MB on PBS. I think the compression is different; on PVE it was ZSTD, but on PBS you cannot change it (it is greyed out). It is a huge difference, around 2.5 times bigger for the same VM backup.
On the other side, PBS currently offers no way to remove an added datastore. I think this needs more improvement for now; I will give it another try later. Many thanks for the help.
PBS never mounted any NFS or CIFS share automatically; if it worked before, there must have been some other reason.

PBS backups are compressed with zstd by default, not as one whole archive but per chunk. That can be the reason why a single backup is bigger than the default vzdump archive. The advantage is that only the chunks that differ in subsequent backups use additional storage, so with 10 backups, PBS may well use less space than 'old-style' vzdump backups.
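The space saving from chunk-level deduplication can be sketched in a few lines of Python (a toy model with 4-byte chunks and a dict as the datastore; real PBS uses content-addressed chunks of a few MiB and compresses each one):

```python
import hashlib

CHUNK_SIZE = 4  # toy chunk size in bytes; PBS uses much larger chunks

def backup(store: dict, data: bytes) -> int:
    """Store only chunks not already in the datastore; return bytes newly written."""
    written = 0
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        digest = hashlib.sha256(chunk).hexdigest()
        if digest not in store:          # deduplication: reuse known chunks
            store[digest] = chunk
            written += len(chunk)
    return written

store: dict = {}
first = backup(store, b"AAAABBBBCCCCDDDD")   # all 4 chunks are new
second = backup(store, b"AAAABBBBCCCCEEEE")  # only the last chunk changed
print(first, second)  # 16 4
```

The first backup writes everything, but the second only writes the one changed chunk, which is why a series of PBS backups can end up smaller than independent vzdump archives even if the first snapshot is larger.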
 
Hello,

I'm having the exact same issue as the OP. Backups were working until I updated recently, I want to say 5 days ago.

I have a Synology NAS CIFS share mounted to a vm running proxmox backup server. The share is automounted via fstab. The entry looks like this:
//nas-01/pbs /mnt/nas-01/pbs cifs credentials=/root/.nas-01.auth,uid=34,gid=34 0 0

UID/GID 34 is the backup user on this system, which appears to run the Proxmox Backup Server processes.

The mount looks like this:
//nas-01/pbs on /mnt/nas-01/pbs type cifs (rw,relatime,vers=3.1.1,cache=strict,username=pbs,uid=34,forceuid,gid=34,forcegid,addr=10.10.10.60,file_mode=0755,dir_mode=0755,soft,nounix,serverino,mapposix,rsize=4194304,wsize=4194304,bsize=1048576,echo_interval=60,actimeo=1)

As I mentioned, backups were working fine up until 09/24. When backups run now, they appear to run and complete, and then fail at the upload-log step.
INFO: starting new backup job: vzdump 107 --mode snapshot --storage pbs --remove 0 --node pve-01
INFO: Starting Backup of VM 107 (qemu)
INFO: Backup started at 2020-09-30 01:24:51
INFO: status = running
INFO: VM Name: bwhost
INFO: include disk 'scsi0' 'vmstore:vm-107-disk-0' 32G
INFO: backup mode: snapshot
INFO: ionice priority: 7
INFO: creating Proxmox Backup Server archive 'vm/107/2020-09-30T05:24:51Z'
INFO: issuing guest-agent 'fs-freeze' command
INFO: enabling encryption
INFO: issuing guest-agent 'fs-thaw' command
INFO: started backup task 'c167bdbb-e06d-4069-9eb9-9328ec45659d'
INFO: resuming VM again
INFO: scsi0: dirty-bitmap status: created new
INFO: 1% (496.0 MiB of 32.0 GiB) in 3s, read: 165.3 MiB/s, write: 86.7 MiB/s
INFO: 3% (1.0 GiB of 32.0 GiB) in 6s, read: 176.0 MiB/s, write: 58.7 MiB/s
...
INFO: 86% (27.6 GiB of 32.0 GiB) in 1m 18s, read: 441.3 MiB/s, write: 56.0 MiB/s
INFO: 100% (32.0 GiB of 32.0 GiB) in 1m 24s, read: 744.7 MiB/s, write: 0 B/s
INFO: backup is sparse: 18.19 GiB (56%) total zero data
INFO: backup was done incrementally, reused 28.10 GiB (87%)
INFO: transferred 32.00 GiB in 84 seconds (390.1 MiB/s)
INFO: Finished Backup of VM 107 (00:01:25)
INFO: Backup finished at 2020-09-30 01:26:16
ERROR: Backup job failed - upload log failed: Error: parse_rfc_3339 failed - wrong length at line 1 column 60
TASK ERROR: upload log failed: Error: parse_rfc_3339 failed - wrong length at line 1 column 60

My version info:
|01:38:57|root@pve-01:[~]> proxmox-backup-client version --repository root@pam@pbs:nas-01
client version: 0.8.21
server version: 0.8.21

Thank you,

Al

EDIT: Added the corresponding log from Proxmox Backup Server for the task log above:
2020-09-30T01:24:52-04:00: starting new backup on datastore 'nas-01': "vm/107/2020-09-30T05:24:51Z"
2020-09-30T01:24:52-04:00: download 'index.json.blob' from previous backup.
2020-09-30T01:24:52-04:00: register chunks in 'drive-scsi0.img.fidx' from previous backup.
2020-09-30T01:24:52-04:00: download 'drive-scsi0.img.fidx' from previous backup.
2020-09-30T01:24:52-04:00: created new fixed index 1 ("vm/107/2020-09-30T05:24:51Z/drive-scsi0.img.fidx")
2020-09-30T01:24:52-04:00: add blob "/mnt/nas-01/pbs/vm/107/2020-09-30T05:24:51Z/qemu-server.conf.blob" (353 bytes, comp: 353)
2020-09-30T01:26:11-04:00: Upload statistics for 'drive-scsi0.img.fidx'
2020-09-30T01:26:11-04:00: UUID: 6fc357c6c68041b7bf61397ffefeb7b6
2020-09-30T01:26:11-04:00: Checksum: 815436eae81e2c5714270c39165bcc1dcf71404d40ae4cf0c7f19a7dc143e41e
2020-09-30T01:26:11-04:00: Size: 34359738368
2020-09-30T01:26:11-04:00: Chunk count: 8192
2020-09-30T01:26:11-04:00: Upload size: 4194304000 (12%)
2020-09-30T01:26:11-04:00: Duplicates: 7192+84 (88%)
2020-09-30T01:26:11-04:00: Compression: 27%
2020-09-30T01:26:11-04:00: successfully closed fixed index 1
2020-09-30T01:26:11-04:00: add blob "/mnt/nas-01/pbs/vm/107/2020-09-30T05:24:51Z/index.json.blob" (363 bytes, comp: 363)
2020-09-30T01:26:16-04:00: successfully finished backup
2020-09-30T01:26:16-04:00: backup finished successfully
2020-09-30T01:26:16-04:00: TASK OK
 
Hmm, after looking at the code, I honestly do not know how this error could pop up in such a way...
Do you get an error when looking at the task list in the PBS GUI?
 
I do not get any errors from either the Proxmox task view or the Proxmox Backup Server task view. I'm going to install a bare-metal device today with Proxmox Backup Server on physical drives instead of CIFS and see if I get the error on that install as well.

EDIT: I just noticed something from this attempt. While there's an error on the Proxmox cluster side, from the Proxmox Backup Server side there is no indication of an error, and a backup is listed in the VM backup group section. Unfortunately, what's listed fails to restore.

From the PVE Task:
Error: parse_rfc_3339 failed - wrong length at line 1 column 60
TASK ERROR: command '/usr/bin/proxmox-backup-client restore '--crypt-mode=encrypt' '--keyfd=13' vm/101/2020-09-30T13:22:13Z index.json /var/tmp/vzdumptmp1607236/index.json --repository root@pam@pbs:nas-01' failed: exit code 255

There was nothing on the PBS server to indicate that a restore was even attempted.

On a side note, it would be nice to be able to restore to a different VMID.
 
So, I installed PBS onto a machine by itself with a local ZFS pool. Added it as a storage source to my PVE cluster. I backed up a single VM, it ran without error.
 
