[SOLVED] Huge number in size column

We are testing proxmox-backup-client on some VMs (Rocky 8, Debian 10/11) and see strange behaviour in the Size column of the GUI.

All VMs should be around 5 GiB in size, but some show ~322 GiB.

(screenshot: datastore content view showing the Size column)
In the respective snapshot folder, the pxar file of the VM shown with 322 GiB is ~1.7 MiB, and the pxar file of the VM with ~5 GiB is 62 KiB.


Restoring single files via the catalog shell works for all of them.
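For reference, the interactive restore works along these lines (the snapshot name and repository below are made-up examples):

Code:
# browse a snapshot's catalog and restore single files interactively
proxmox-backup-client catalog shell \
    "host/myvm/2022-03-04T13:00:00Z" root.pxar \
    --repository root@pam@io:vm-backup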

In the syslog we find these messages during backups:

Code:
Mar 04 14:00:42 io proxmox-backup-proxy[2706849]: zfs_dataset_stats("tank/vm-backup") failed - could not parse 'objset-objset-0x305c' stat file
Mar 04 14:00:52 io proxmox-backup-proxy[2706849]: zfs_dataset_stats("tank/fileserver") failed - could not parse 'objset-objset-0x31ec' stat file
Mar 04 14:00:52 io proxmox-backup-proxy[2706849]: zfs_dataset_stats("tank/vm-backup") failed - could not parse 'objset-objset-0x305c' stat file
Mar 04 14:01:02 io proxmox-backup-proxy[2706849]: zfs_dataset_stats("tank/vm-backup") failed - could not parse 'objset-objset-0x305c' stat file
Mar 04 14:01:02 io proxmox-backup-proxy[2706849]: zfs_dataset_stats("tank/fileserver") failed - could not parse 'objset-objset-0x31ec' stat file

Is this only a warning or something serious?

Each PBS datastore has its own dedicated ZFS dataset.

The zpools are clean, no errors.
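E.g. verified with:

Code:
zpool status -x   # prints "all pools are healthy" when nothing is wrong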
 
This is the output of the garbage collection. At the end it reports:

Code:
2022-03-04T14:55:43+01:00: Original data usage: 972.506 GiB
2022-03-04T14:55:43+01:00: On-Disk usage: 7.048 GiB (0.72%)
2022-03-04T14:55:43+01:00: On-Disk chunks: 5710
2022-03-04T14:55:43+01:00: Deduplication factor: 137.98
2022-03-04T14:55:43+01:00: Average chunk size: 1.264 MiB

The four machines which are currently in the datastore are definitely not 972 GiB big. They are all 8 GiB machines... Something is really crooked here!

The full task log:

Code:
Proxmox Backup Server 2.1-5
2022-03-04T14:54:28+01:00: starting garbage collection on store vm-backup
2022-03-04T14:54:28+01:00: Start GC phase1 (mark used chunks)
2022-03-04T14:54:28+01:00: marked 12% (1 of 8 index files)
2022-03-04T14:54:52+01:00: marked 25% (2 of 8 index files)
2022-03-04T14:55:10+01:00: marked 37% (3 of 8 index files)
2022-03-04T14:55:10+01:00: marked 50% (4 of 8 index files)
2022-03-04T14:55:10+01:00: marked 62% (5 of 8 index files)
2022-03-04T14:55:22+01:00: marked 75% (6 of 8 index files)
2022-03-04T14:55:22+01:00: marked 87% (7 of 8 index files)
2022-03-04T14:55:23+01:00: marked 100% (8 of 8 index files)
2022-03-04T14:55:23+01:00: Start GC phase2 (sweep unused chunks)
2022-03-04T14:55:25+01:00: processed 1% (57 chunks)
2022-03-04T14:55:25+01:00: processed 2% (108 chunks)
2022-03-04T14:55:25+01:00: processed 3% (155 chunks)
[...]
2022-03-04T14:55:42+01:00: processed 98% (5700 chunks)
2022-03-04T14:55:42+01:00: processed 99% (5761 chunks)
2022-03-04T14:55:43+01:00: Removed garbage: 0 B
2022-03-04T14:55:43+01:00: Removed chunks: 0
2022-03-04T14:55:43+01:00: Pending removals: 201.701 MiB (in 113 chunks)
2022-03-04T14:55:43+01:00: Original data usage: 972.506 GiB
2022-03-04T14:55:43+01:00: On-Disk usage: 7.048 GiB (0.72%)
2022-03-04T14:55:43+01:00: On-Disk chunks: 5710
2022-03-04T14:55:43+01:00: Deduplication factor: 137.98
2022-03-04T14:55:43+01:00: Average chunk size: 1.264 MiB
2022-03-04T14:55:43+01:00: TASK OK
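For what it's worth, the summary numbers are internally consistent: the deduplication factor is just original data usage divided by on-disk usage (972.506 GiB / 7.048 GiB ≈ 137.98), and the average chunk size is on-disk usage divided by the chunk count (7.048 GiB / 5710 chunks ≈ 1.264 MiB). The open question is only why the original data usage is so large.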
 
The big files are probably sparse files (which appear with their non-sparse, apparent size in the backup).
Nothing to worry about though, since on restore we restore them sparsely again.
E.g. some users reported this for /var/log/lastlog.
Notice the difference in size between these two commands:
Code:
ls -lh /var/log/lastlog
du -h /var/log/lastlog
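
The effect is easy to reproduce with a throwaway sparse file (path and size are arbitrary examples):

Code:
# create a 1 GiB sparse file; no data blocks are actually allocated
truncate -s 1G /tmp/sparse-test
ls -lh /tmp/sparse-test   # apparent size: 1.0G (this is what the backup reports)
du -h /tmp/sparse-test    # actual on-disk usage: 0
rm /tmp/sparse-test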

Mar 04 14:00:42 io proxmox-backup-proxy[2706849]: zfs_dataset_stats("tank/vm-backup") failed - could not parse 'objset-objset-0x305c' stat file
These should not happen, though... (but they are also not fatal errors; it means we could not parse/find the correct file in /proc to read the statistics for that ZFS dataset.)

Could you post the output of
Code:
ls -lh /proc/spl/kstat/zfs/tank
cat /proc/spl/kstat/zfs/tank/objset-0x305c
?
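If you want to double-check which stat file a dataset should map to, something like this works (assuming your OpenZFS version exposes the read-only objsetid dataset property):

Code:
# print the expected stat file name for a dataset
# (objsetid is reported in decimal; the /proc file name uses hex)
printf 'objset-0x%x\n' "$(zfs get -Hp -o value objsetid tank/vm-backup)"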
 
Code:
root@io:~ # ls -lh /proc/spl/kstat/zfs/tank
total 0
-rw-r--r-- 1 root root 0 Mar  4 16:02 dmu_tx_assign
-rw-r--r-- 1 root root 0 Mar  4 16:02 iostats
-rw-r--r-- 1 root root 0 Mar  4 16:02 multihost
-rw-r--r-- 1 root root 0 Mar  4 16:02 objset-0x1a8b
-rw-r--r-- 1 root root 0 Mar  4 16:02 objset-0x202
-rw-r--r-- 1 root root 0 Mar  4 16:02 objset-0x2e66
-rw-r--r-- 1 root root 0 Mar  4 16:02 objset-0x2e9a
-rw-r--r-- 1 root root 0 Mar  4 16:02 objset-0x2f4f
-rw-r--r-- 1 root root 0 Mar  4 16:02 objset-0x2f8a
-rw-r--r-- 1 root root 0 Mar  4 16:02 objset-0x30de
-rw-r--r-- 1 root root 0 Mar  4 16:02 objset-0x36
-rw-r--r-- 1 root root 0 Mar  4 16:02 objset-0x4238
-rw-r--r-- 1 root root 0 Mar  4 16:02 objset-0x6cd3
-rw-r--r-- 1 root root 0 Mar  4 16:02 objset-0x6d58
-rw-r--r-- 1 root root 0 Mar  4 16:02 objset-0x6d70
-rw-r--r-- 1 root root 0 Mar  4 16:02 objset-0x6dd0
-rw-r--r-- 1 root root 0 Mar  4 16:02 objset-0x78
-rw------- 1 root root 0 Mar  4 16:02 reads
-rw-r--r-- 1 root root 0 Mar  4 16:02 state
-rw-r--r-- 1 root root 0 Mar  4 16:02 txgs

root@io:~ # cat /proc/spl/kstat/zfs/tank/objset-0x305c
cat: /proc/spl/kstat/zfs/tank/objset-0x305c: No such file or directory
 
The big files are probably sparse files (which appear with their non-sparse, apparent size in the backup).
Nothing to worry about though, since on restore we restore them sparsely again.
E.g. some users reported this for /var/log/lastlog.
Notice the difference in size between these two commands:
Code:
ls -lh /var/log/lastlog
du -h /var/log/lastlog

Cool, this is precisely the case. Thx.
 
Code:
root@io:~ # cat /proc/spl/kstat/zfs/tank/objset-0x305c
cat: /proc/spl/kstat/zfs/tank/objset-0x305c: No such file or directory
At some point we destroyed and re-created ZFS datasets with the same names, and also removed and re-added them as datastores in PBS. Maybe we have to restart proxmox-backup-proxy to inform PBS about these changes? :-? Currently there is a long-running backup job active...
 
Maybe we have to restart proxmox-backup-proxy to inform PBS about these changes?
This restart seemed to have solved the problem. No zfs error messages anymore.
 
We destroyed and re-created ZFS datasets with the same names... Maybe we have to restart proxmox-backup-proxy to inform PBS about these changes?
Yes, that case could lead to the observed behaviour. The workaround is to reload (no need to restart) the proxmox-backup-proxy; it will then find the correct file.
Nonetheless, I'll send a patch for this behaviour so that a reload/restart is not necessary anymore.
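For completeness, the reload is the standard systemd one (unit name as shipped with PBS):

Code:
systemctl reload proxmox-backup-proxy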
 
