Dedicated Proxmox Backup Server

impressive - if that's true, it writes a lot!

you can check how fast those logs grow / how much the proxmox-backup-api and proxmox-backup-proxy processes write to disk to verify ;) if no tasks are running, there shouldn't be any other write activity by PBS itself..
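for example (assuming the sysstat package is installed for pidstat, otherwise /proc works as well - the write_bytes field is the interesting one):

Code:
# per-process disk write rate, sampled every 5 seconds
pidstat -d 5 -C proxmox-backup

# or read the kernel's cumulative I/O counters directly
cat /proc/$(pidof proxmox-backup-proxy)/io
cat /proc/$(pidof proxmox-backup-api)/io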
 
I disabled PBS on all nodes and still see writes.
It seems the writes are caused by txg_sync.

Code:
root@th3-pbs-01:/var/log/proxmox-backup# free -h
              total        used        free      shared  buff/cache   available
Mem:           62Gi        40Gi        19Gi        25Mi       2.2Gi        21Gi
Swap:         8.0Gi       0.0Ki       8.0Gi

Code:
root@th3-pbs-01:/var/log/proxmox-backup# awk '/^size/ { print $1 " " $3 / 1048576 }' < /proc/spl/kstat/zfs/arcstats
size 31935.4
 
no, txg_sync is just the ZFS kernel thread responsible for syncing writes out to the disks..
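if you're curious how much each transaction group actually flushes, OpenZFS exposes per-pool txg statistics (the exact columns can vary between ZFS versions):

Code:
cat /proc/spl/kstat/zfs/zfs_datastore/txgs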
 
Writes are mainly hitting the special device (special_small_blocks=4K)

[attached screenshot: zpool iostat output showing the writes going to the special vdev]
 
that would indicate metadata updates.. is a sync or GC task running?
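you can check for currently running tasks in the GUI task log, or via the CLI:

Code:
proxmox-backup-manager task list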
 
ah, it could also be the small blocks if that is still active.. can you try disabling use of the special device(s) for small blocks and see whether the writes shift to the regular disks?
 
thanks - do you know how to do this?

my zpool create command:

Code:
zpool create -f -o ashift=12 zfs_datastore raidz /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg special mirror /dev/sda /dev/sdb
 
you wrote that you did the following as well: zfs set special_small_blocks=4K <pool>

setting that to 0 should disable it again for newly written data..
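so for your pool that would be something like this (it only affects newly written data - blocks already on the special vdev stay there):

Code:
zfs set special_small_blocks=0 zfs_datastore
zfs get special_small_blocks zfs_datastore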
 
Still seeing writes on the special device:

Code:
root@th3-pbs-01:/var/log# zfs get all zfs_datastore | grep special_small_blocks
zfs_datastore  special_small_blocks  0                      local
root@th3-pbs-01:/var/log# zpool iostat 2 -v
                 capacity     operations     bandwidth
pool           alloc   free   read  write   read  write
-------------  -----  -----  -----  -----  -----  -----
zfs_datastore  3.61T  14.9T    428    371  12.2M  11.8M
  raidz1       3.59T  14.6T    400      9  12.0M  3.25M
    sdc            -      -     79      1  2.40M   665K
    sdd            -      -     79      1  2.40M   665K
    sde            -      -     80      1  2.41M   665K
    sdf            -      -     80      1  2.41M   665K
    sdg            -      -     80      1  2.41M   665K
special            -      -      -      -      -      -
  mirror       24.7G   347G     28    362   136K  8.58M
    sda            -      -     14    181  68.3K  4.29M
    sdb            -      -     14    180  68.2K  4.29M
-------------  -----  -----  -----  -----  -----  -----
                 capacity     operations     bandwidth
pool           alloc   free   read  write   read  write
-------------  -----  -----  -----  -----  -----  -----
zfs_datastore  3.61T  14.9T      0      0      0      0
  raidz1       3.59T  14.6T      0      0      0      0
    sdc            -      -      0      0      0      0
    sdd            -      -      0      0      0      0
    sde            -      -      0      0      0      0
    sdf            -      -      0      0      0      0
    sdg            -      -      0      0      0      0
special            -      -      -      -      -      -
  mirror       24.7G   347G      0      0      0      0
    sda            -      -      0      0      0      0
    sdb            -      -      0      0      0      0
-------------  -----  -----  -----  -----  -----  -----
                 capacity     operations     bandwidth
pool           alloc   free   read  write   read  write
-------------  -----  -----  -----  -----  -----  -----
zfs_datastore  3.61T  14.9T      0      0      0      0
  raidz1       3.59T  14.6T      0      0      0      0
    sdc            -      -      0      0      0      0
    sdd            -      -      0      0      0      0
    sde            -      -      0      0      0      0
    sdf            -      -      0      0      0      0
    sdg            -      -      0      0      0      0
special            -      -      -      -      -      -
  mirror       24.7G   347G      0      0      0      0
    sda            -      -      0      0      0      0
    sdb            -      -      0      0      0      0
-------------  -----  -----  -----  -----  -----  -----
                 capacity     operations     bandwidth
pool           alloc   free   read  write   read  write
-------------  -----  -----  -----  -----  -----  -----
zfs_datastore  3.61T  14.9T      0  1.26K      0  22.6M
  raidz1       3.59T  14.6T      0      9      0  40.0K
    sdc            -      -      0      1      0  7.99K
    sdd            -      -      0      1      0  7.99K
    sde            -      -      0      1      0  7.99K
    sdf            -      -      0      1      0  7.99K
    sdg            -      -      0      1      0  7.99K
special            -      -      -      -      -      -
  mirror       24.7G   347G      0  1.25K      0  22.6M
    sda            -      -      0    638      0  11.3M
    sdb            -      -      0    644      0  11.3M
-------------  -----  -----  -----  -----  -----  -----
 
can you confirm that the writes go down almost completely when you temporarily stop proxmox-backup-proxy? I can see around 4-5MB/s writes to the special vdev here as well..

edit: I did some more investigation - the writes go away when I stop client access (including pvestatd and any open GUI tabs), or when I stop proxmox-backup-proxy altogether. So in my case, it's most likely the status calls, which currently still trigger a scan of the datastore to get snapshot counts. That part should get better with https://lists.proxmox.com/pipermail/pbs-devel/2020-November/001518.html , as does enabling 'relatime' on the datastore datasets.
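if you want to reproduce the test, something along these lines should do it (note that PBS is unreachable via API/GUI while the proxy is stopped):

Code:
systemctl stop proxmox-backup-proxy
zpool iostat -v zfs_datastore 2    # watch whether the special vdev writes drop off
systemctl start proxmox-backup-proxy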
 
you could also try enabling relatime on your datastore dataset(s). the only thing that cares about atime is garbage collection, and that is written with relatime in mind (hence the 24h delay when collecting unreferenced chunks).
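for example (adjust the dataset name if your datastore lives on a child dataset):

Code:
zfs set relatime=on zfs_datastore
zfs get atime,relatime zfs_datastore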
 
This was an interesting read - thanks for the discussion. Not to hijack the thread, but I'm looking to build a dedicated PBS using ZFS as well. @aaron, do you have a recommendation for high-endurance SSDs? Something that's a tiny bit budget friendly? I've already lined up an Optane (the less expensive 32GB M.2 one) for my SLOG. I may kick myself later, but I'd rather not drop gobs of cash on SSDs for the special device.

@TwiX, could you post performance stats when you're finished with your build? Do you have any recommendations for SSDs?
I had the 64GB Optane in a FreeNAS box with about 24 10TB disks. These consumer devices have a maximum TBW limit and will not write more than that - mine reached it in less than a year. On the other hand, they are cheap and work great!
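if you want to keep an eye on how much of the rated endurance is already used up, smartctl shows the relevant counters (the attribute names differ between NVMe and SATA devices):

Code:
# NVMe: check "Percentage Used" and "Data Units Written"
smartctl -a /dev/nvme0

# SATA SSDs usually expose a wear levelling / endurance attribute instead
smartctl -a /dev/sda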
 
