For testing I added a special device (a mirror of two 256 GB NVMe drives) to the pool on the PBS side and set the small-file limit (special_small_blocks) to 32k on the dataset.
It helped a bit: I got an overall backup speed of 100 MiB/s.
I will leave it added for now until a better solution is available.
Warning: A special...
A special device is not a write cache but a "normal" vdev, meaning the data stays there and doesn't get flushed to the spinning disks.
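For reference, a minimal sketch of the setup described above, assuming a pool named `backup`, a datastore dataset `backup/pbs`, and placeholder NVMe device paths:

```shell
# Assumptions: pool "backup", dataset "backup/pbs", placeholder devices.
# NOTE: a special vdev holds metadata (and small blocks) permanently;
# it is NOT a cache, and losing it loses the pool -- hence the mirror.
zpool add backup special mirror /dev/nvme0n1 /dev/nvme1n1

# Store all blocks up to 32k on the special vdev instead of the HDDs.
zfs set special_small_blocks=32K backup/pbs
```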
We need something more like bcache in front of the ZFS pool, but that introduces a complex setup, and I personally think it would make way more sense to implement...
Thanks but I suspect this will slow down backups and sounds more like a workaround.
Why not implement a SSD/NVMe Write-Cache in PBS?
Add a zpool of mirrored fast devices with enough capacity, write there, then flush to the spinning disks in the background.
That way one can have fast and safe backups...
All of your points are valid, but in the end the cluster has to be fast enough to fit the needs, and with cache "None" it's not.
I generally favor safety over performance, but in this case the numbers are far too bad and users have to do their work without going to have a coffee until the...
True, but I would appreciate it if, instead of just saying "Not true", you would provide usable information. :)
I wrongly interpreted the Discard option! It is about thin provisioning, not about flash storage.
"Disk images in Proxmox are sparse regardless of the image type, meaning the disk image...
Sure, that's what writeback does. :)
Ok, so the usual writeback drawbacks. Thanks.
Yes, but this only applies to flash storage. I just wanted to point out that discard impacts performance. In my case I need discard, but this does not automatically apply to others. I've tested on flash...
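For context, discard in Proxmox VE is enabled per virtual disk; a hedged sketch, where the VMID and volume name are placeholders:

```shell
# Placeholder VMID/volume. discard=on lets the guest's fstrim release
# unused blocks on thin-provisioned storage; ssd=1 additionally presents
# the disk to the guest as an SSD.
qm set 100 --scsi0 local-zfs:vm-100-disk-0,discard=on,ssd=1
```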
So far, a few suggestions have been provided by the community, like:
- manually increase the memory buffer size
- add a fast and large enough buffering device like a PCI NVMe or a ZFS mirror of them
- use storage snapshots if available (ZFS/CEPH)
- use backup fleecing
Is Proxmox working on any...
I know this is an old thread, but I'm currently investigating IOPS on our Ceph cluster.
Test VM: Debian 10 (Kernel 4.19)
Test suite: fio with a 4k randrw test
Every test was repeated 3 times.
First tests went to raw RBD block devices from within the VM, to identify the best bus (SCSI vs VirtIO) with no...
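A fio invocation along those lines might look like this (a sketch; `/dev/vdb` is a placeholder for the raw test device, and the run destroys its contents):

```shell
# WARNING: writes directly to the target device -- use a scratch disk.
# 4k random read/write, direct I/O, 60 s time-based run.
fio --name=4k-randrw --filename=/dev/vdb \
    --rw=randrw --rwmixread=75 --bs=4k \
    --ioengine=libaio --iodepth=32 --direct=1 \
    --runtime=60 --time_based --group_reporting
```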
And in case someone needs a script to list OSDs with their corresponding devices (incl. multipath and their member disks) as well as unused multipath devices:
data=$(ceph-volume lvm list --format json)
ar=($(echo $data | jq -r 'to_entries | sort_by(.key | tonumber) | map(.key + ":" +...
Btw, I changed my script a little bit. My multipath devices are named "mpathXX"; if yours are named differently, you need to adjust the script.
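A minimal, self-contained sketch of that adaptation: parametrize the name prefix instead of hard-coding "mpath". The "dmmp" prefix and the sample lines below are made up for the demo; in the real script the input comes from `ceph-volume lvm list`:

```shell
# Made-up prefix and sample data, for illustration only.
MP_PREFIX="dmmp"

sample_output='/dev/mapper/dmmp01
/dev/mapper/sdb'

# Same extraction as in the script, with the prefix swapped in.
readarray -t cdevs < <(printf '%s\n' "$sample_output" | grep -o -E "/dev/mapper/${MP_PREFIX}.*")
printf '%s\n' "${cdevs[@]}"
```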
if [[ "$1" == "--dryrun" ]]
readarray -t cdevs < <(ceph-volume lvm list | grep -o -E "/dev/mapper/mpath.*"...
We are having a hard time too. The combination of dirty maps in conjunction with a mid-fast PBS is giving us nightmares. We have > 40 VMs and also have a few VMs which are quite big database servers (>3TB). Sometimes they need updates and a reboot is required, which in turn invalidates the dirty...
Thank you, but that script is intended to be run over all snapshots of all repositories at a defined interval.
We have a lot of snapshots and as we snapshot every 15 minutes, that script would become a bottleneck.
We definitely need a way to do this, either via hook scripts or as a general option...
If I could, I would run everything on Linux, but sadly that's not possible. Mixing Windows installations with different languages in the same environment is a pain and will bite you hard in many ways. That's why I'm using a clone of a working VM right now, until this gets (hopefully) sorted.
Two things here:
1. You confirm that there are issues with the German version.
2. If this is a Windows issue then the issue should show up on any PVE version. But it doesn't.
Btw: I agree, MS is doing a lot of weird stuff ;)