> hi, i have a question:
Currently, we run on a Proxmox VPS with the following specs:
4x KVM vCores (E5-2620 v3 @ 2.4 GHz underneath)
16GB RAM
Zpool 'pbs':
- 2TB Ceph RBD disk on dadup (spinning disks, 25km away)
- 20GB Ceph RBD disk on SSD as 'special' device
- 20GB Ceph RBD disk on SSD as 'log' device (unused due to the way PBS issues writes)
- 100GB Ceph RBD disk on SSD as 'cache' device (mostly useless, because the ~1TB of data present is only read once periodically, so by the next time you need it, it has been pushed out of the cache)
> i agree that zfs is very nice, thats the reason we recommend using it, but with local hardware
I like ZFS for its simplicity in growing, the possibility of putting fast disks in front of a slow disk, and the filesystem per user, including quota and compression (although compression is not very useful in this case, indeed).
> why not?
Also, the way PBS is built, I don't think anyone will be happy if you have a lot of users/backups with all those files in an ext4 filesystem.
> i do not see the point in having checksums on filesystem level. you now have 3 checksums (that all have to be calculated)
Even though Ceph does do checksumming, it does not checksum the filesystem. ZFS does checksum the filesystem, and still PBS runs verifies to check the checksummed chunks.
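The verify step mentioned above can be illustrated with a small sketch. PBS names each chunk after its SHA-256 digest, so verification amounts to re-hashing the file and comparing the result with its own name. All paths here, and the 4-hex-digit fan-out, are assumptions made up for the demo, not PBS's actual code:

```shell
# Sketch of content-addressed verification, loosely modelled on a PBS
# '.chunks' store. Paths and directory scheme are assumptions for the demo.
store=/tmp/pbs-demo-store/.chunks
rm -rf /tmp/pbs-demo-store
printf 'some chunk payload' > /tmp/pbs-demo-chunk

digest=$(sha256sum /tmp/pbs-demo-chunk | awk '{print $1}')
prefix=$(printf '%s' "$digest" | cut -c1-4)
mkdir -p "$store/$prefix"
cp /tmp/pbs-demo-chunk "$store/$prefix/$digest"   # chunk named by its digest

verify() {
    # Re-hash a stored chunk and compare the result with its file name.
    rehash=$(sha256sum "$1" | awk '{print $1}')
    [ "$rehash" = "$(basename "$1")" ] && echo OK || echo CORRUPT
}

verify "$store/$prefix/$digest"            # prints: OK
printf 'x' >> "$store/$prefix/$digest"     # simulate bit rot / a bad sync
verify "$store/$prefix/$digest"            # prints: CORRUPT
```

Ceph and ZFS checksum blocks *underneath* the datastore, but only an application-level check like this ties the stored bytes back to the backup's own index, which is why PBS runs verify jobs on top of either.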
> in such a case i'd use lvm + 1 lv per datastore + ext4 (maybe xfs?)
So ZFS is very easy to run with a lot of different datastores/users; that is the main reason. It also scales better than ext4/xfs.
|              | ~30GiB VM | ~28GiB Directory |
| ZFS on Ceph  | ~60MiB/s  | ~30MiB/s         |
| Ext4 on Ceph | ~220MiB/s | ~120MiB/s        |
> would an rsync be sufficient?
Should be (just make sure you sync the '.chunks' folder). Alternatively, you can add your local IP (or localhost) as a remote and use a sync job.
No.

Hi,
I thought ZFS was mandatory in order to use compression and dedup... Seems not; I misunderstood something.
So, a PBS datastore on ext4 instead of ZFS should be faster, I guess. Then I don't know why ZFS is recommended for local storage. Maybe via sync jobs?
> did it work? how does the performance look?
So, syncing is done. Somehow (I have never seen this in my life) rsync did not sync all files properly. That wasn't intentional, but it might serve as a nice test to see how PBS handles failed verification...
> but might serve as a nice test to see how PBS handles failed verification...
Absolutely horrible. Unreadable errors on the client side:
command 'lxc-usernsexec -m u:0:100000:65536 -m g:0:100000:65536 -- /usr/bin/proxmox-backup-client backup '--crypt-mode=encrypt' '--keyfd=11' pct.conf:/var/tmp/vzdumptmp20837/etc/vzdump/pct.conf fw.conf:/var/tmp/vzdumptmp20837/etc/vzdump/pct.fw root.pxar:/var/tmp/vzdumptmp20837 --include-dev /var/tmp/vzdumptmp20837/. --skip-lost-and-found --backup-type ct --backup-id 112 --backup-time 1602289095 --repository DB0623@pbs@pbs.tuxis.nl:DB0623_proxmox' failed: exit code 255
102: 2020-10-10 02:15:42 INFO: Error: parse_rfc_3339 failed - wrong length at line 1 column 60
2020-10-10T09:57:45+02:00: can't verify chunk, load failed - store 'DB0623_proxmox', unable to load chunk '9c8f3533157797a2b1212118b5fcd53a3ac047bb15769121c49724a4894e76b3' - No such file or directory (os error 2)
proxmox-backup-client=0.8.16-1 (downgraded from 0.9.0-2) fixes the issue with creating new backups. This is absolutely horrible: a single client update has destroyed all backup chains and made it impossible to create new ones. What's the deal with it?

server version: 0.9.0