Well, I meant if it only affects the benchmark, or if it also does anything in terms of the ZFS pool. A fast benchmark is useless if the VMs are still slow.
But I don't think I'll put any more effort into it, since I don't get any benefit from ZFS anyway.
@Falk R. No, would this only affect the benchmark, or could it also be used to tune the datastore itself? Because the VMs run very slowly on a ZFS pool built from PM893 SSDs.
I ended up using mdadm software RAID instead; now the VMs run very fast.
I have now removed the zpool to be able to test different things.
When I run the fio test directly on the SSD, the speed is actually quite good.
Or at least much better than with ZFS.
But what I still don't quite understand: on writes, ZFS is just as fast on a single disk as it is in a ZFS RAID...
I've been dragging the same problem around for a few months; today I open this thread and then see yours :) https://forum.proxmox.com/threads/zfs-slow-writes-on-samsung-pm893.131949/
ZFS works with 8K blocks, so you should run your benchmark with 8K as well.
Hello,
I have a Dell R630 server (without a HW storage controller) with 4x Samsung PM893 480GB, on which I run ZFS.
Unfortunately I have very poor write performance:
fio --ioengine=libaio --filename=/ZFS-2TB_RAID0_SSD/fiofile --direct=1 --sync=1 --rw=write --bs=8K --numjobs=1 --iodepth=1...
Seems like exclusion is not supported...
wanted to use "vm/(?!762\b)\d+" to exclude vm/762
parameter verification errors
group-filter: regex parse error:
    vm/(?!762\b)\d+
       ^^^
error: look-around, including look-ahead and look-behind, is not supported
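Since the underlying regex engine (the Rust regex crate) supports no look-around, the exclusion has to be spelled out positively. A hedged sketch of one lookahead-free equivalent, assuming the IDs are plain digits and the pattern may be anchored; `grep -E` is only used here to illustrate the matching, the filter value itself would be just the pattern:

```shell
# Lookahead-free approximation of vm/(?!762\b)\d+ :
# match vm/<digits> for any number except exactly 762.
#   1-2 digits or 4+ digits -> cannot be 762
#   exactly 3 digits        -> exclude the sequence 7-6-2 digit by digit
pattern='^vm/([0-9]{1,2}|[0-9]{4,}|[0-689][0-9]{2}|7[0-57-9][0-9]|76[013-9])$'

echo 'vm/761'  | grep -Eq "$pattern" && echo 'vm/761 matches'
echo 'vm/762'  | grep -Eq "$pattern" || echo 'vm/762 is excluded'
echo 'vm/7620' | grep -Eq "$pattern" && echo 'vm/7620 matches'
```

The trade-off of enumerating digit positions is that the pattern must be adjusted if a different ID is to be excluded.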
Okay, then I'll have to find a time to go to the server, and I should build a PiKVM as soon as possible.
I think posting the container config won't help here, as this affects all unprotected containers, and especially newly created containers with default settings, so they just...
I ran apt update && apt dist-upgrade to see if it would resolve the problem, but it didn't. Maybe that's the cause of the different kernels?
The last reboot was approx. 1 month ago, I have not run the server for a year without rebooting, there have been a few reboots and upgrades in that time :D
I...
Hi,
My setup has been running for almost a year without any problems, but all of a sudden lxc goes crazy.
I know, I still use version 6, an upgrade is still pending. But in this state I would not like to upgrade it.
How do these problems manifest themselves:
I cannot connect to a...
Unfortunately that didn't work either :(
root@pve-lab:~# zfs get all NVMe/vm-901-disk-0 | grep used
NVMe/vm-901-disk-0 used 82.5G -
NVMe/vm-901-disk-0 usedbysnapshots 0B -
NVMe/vm-901-disk-0 usedbydataset 12.9G...
Ok looks like migrating to another storage and back does not do the trick
before:
root@pve-lab:~# zfs get all NVMe/vm-580-disk-0 | grep used
NVMe/vm-580-disk-0 used 10.3G -
NVMe/vm-580-disk-0 usedbysnapshots 0B -
NVMe/vm-580-disk-0...
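One common reason for `used` being far larger than `usedbydataset` on a zvol (while `usedbysnapshots` is 0B) is a `refreservation` set by thick provisioning, which migration does not clear. A hedged sketch of how to check, using the dataset name from the post; dropping the reservation is only an option if thin provisioning is acceptable for this disk:

```shell
# Break down which component accounts for the space:
zfs get used,usedbydataset,usedbysnapshots,usedbyrefreservation,refreservation NVMe/vm-580-disk-0

# If usedbyrefreservation dominates and thin provisioning is acceptable,
# removing the reservation releases the accounted space:
zfs set refreservation=none NVMe/vm-580-disk-0
```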