Search results

  1. ZFS slow writes on Samsung PM893

    Well, I meant if it only affects the benchmark, or if it also does anything in terms of the ZFS pool. A fast benchmark is useless if the VMs are still slow. But I don't think I'll put any more effort into it, since I don't get any benefit from ZFS anyway.
  2. ZFS slow writes on Samsung PM893

    @Falk R. No, would this affect the benchmark, or could this also be used to tweak the datastore itself? Because the VMs are running very slow on a ZFS pool based on PM893. I ended up using mdadm software raid instead, now the VMs are running very fast
  3. ZFS slow writes on Samsung PM893

    I have now removed the zpool to be able to test different things. When I run the fio test directly on the SSD, the speed is actually quite good, or at least much better than with ZFS. But what I still don't quite understand: ZFS is just as fast at writing on a single disk as it is in ZFS RAID...
  4. ZFS RAID 10 with 6 Samsung PM893 1.92TB SATA: terrible write rates

    I've been dragging the same problem around with me for a few months; today I finally open a thread about it, and then I see yours :) https://forum.proxmox.com/threads/zfs-slow-writes-on-samsung-pm893.131949/ ZFS works with 8k blocks, so you should run your benchmark with 8k as well
  5. ZFS slow writes on Samsung PM893

    Hello, I have a Dell R630 server (without HW storage controller) with 4x Samsung PM893 480GB, on which I run ZFS. Unfortunately I have very poor write performance: fio --ioengine=libaio --filename=/ZFS-2TB_RAID0_SSD/fiofile --direct=1 --sync=1 --rw=write --bs=8K --numjobs=1 --iodepth=1...
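The fio flags quoted above can also be written as a job file, which is easier to rerun consistently across pools. A minimal sketch using the path from the post; the size, runtime, and time_based settings are assumptions added so the run is bounded, not taken from the original command:

```
; 8K synchronous sequential-write benchmark, matching the flags above
[sync-write-8k]
ioengine=libaio
filename=/ZFS-2TB_RAID0_SSD/fiofile
direct=1
sync=1
rw=write
bs=8k
numjobs=1
iodepth=1
size=4G
runtime=60
time_based
```

Run it with `fio jobfile.fio`. The bs=8k choice matters here because, as noted in the related thread, ZFS zvols default to 8K blocks, so a benchmark at a different block size measures read-modify-write overhead rather than raw write throughput.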
  6. Exclude groups in sync

    Seems like exclusion is not supported... I wanted to use "vm/(?!762\b)\d+" to exclude vm/762, but got: parameter verification errors: group-filter: regex parse error: vm/(?!762\b)\d+ ^^^ error: look-around, including look-ahead and look-behind, is not supported
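Since the regex engine behind the group-filter rejects look-around, one workaround (a sketch, not an official feature) is to spell out the complement explicitly: match 1-2 digit IDs, 4+ digit IDs, and every 3-digit ID whose digits differ from 7-6-2. A quick Python check of such a pattern, assuming IDs are plain digit strings:

```python
import re

# Look-around-free equivalent of "vm/<id> for any id except exactly 762":
# - \d{1,2}       : ids shorter than 3 digits
# - \d{4,}        : ids longer than 3 digits
# - [0-689]\d\d   : 3-digit ids whose first digit is not 7
# - 7[0-57-9]\d   : 7xx ids whose second digit is not 6
# - 76[013-9]     : 76x ids whose third digit is not 2
pattern = re.compile(r"^vm/(\d{1,2}|\d{4,}|[0-689]\d\d|7[0-57-9]\d|76[013-9])$")

print(bool(pattern.match("vm/762")))  # False: the one group we exclude
print(bool(pattern.match("vm/761")))  # True
print(bool(pattern.match("vm/100")))  # True
```

The same alternation should work in a look-around-free engine; the trade-off is that the exclusion is hard-coded per ID rather than expressed once with a negative look-ahead.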
  7. [SOLVED] Various (interrelated?) LXC errors

    I did the restart today, now it's working again. I could not upgrade to version 7 due to lack of time, maybe next week.
  8. [SOLVED] Various (interrelated?) LXC errors

    Okay, then I have to find a date to go to the server, and then I should build a PiKVM as soon as possible. I think posting the container config is not helping here, as this affects all unprotected containers, and especially newly created containers with default settings, so they just...
  9. [SOLVED] Various (interrelated?) LXC errors

    Usually I do, but in this case it's not the cause of the problem, because I upgraded it after the error occurred.
  10. [SOLVED] Various (interrelated?) LXC errors

    I did apt update && apt dist-upgrade to see if it resolves the problem, but it didn't; maybe that's the cause of the different kernels? The last reboot was approx. 1 month ago. I have not run the server for a year without rebooting; there have been a few reboots and upgrades in that time :D I...
  11. [SOLVED] Various (interrelated?) LXC errors

    Hi, My setup has been running for almost a year without any problems, but all of a sudden lxc goes crazy. I know, I still use version 6, an upgrade is still pending. But in this state I would not like to upgrade it. How do these problems manifest themselves: I cannot connect to a...
  12. [SOLVED] Enable sparse on existing ZFS storage

    The correct command is "zfs set refreservation=0G NVMe/vm-901-disk-0" my bad *facepalm*
  13. [SOLVED] Enable sparse on existing ZFS storage

    Unfortunately that didn't work either :( root@pve-lab:~# zfs get all NVMe/vm-901-disk-0 | grep used NVMe/vm-901-disk-0 used 82.5G - NVMe/vm-901-disk-0 usedbysnapshots 0B - NVMe/vm-901-disk-0 usedbydataset 12.9G...
  14. [SOLVED] Enable sparse on existing ZFS storage

    Ok looks like migrating to another storage and back does not do the trick before: root@pve-lab:~# zfs get all NVMe/vm-580-disk-0 | grep used NVMe/vm-580-disk-0 used 10.3G - NVMe/vm-580-disk-0 usedbysnapshots 0B - NVMe/vm-580-disk-0...
  15. [SOLVED] Enable sparse on existing ZFS storage

    I've set it to sparse 1 now and migrated the disks from that datastore to another and then back, so it's a new disk, right? (zfs set reservation=0 NVMe/vm-540-disk-1 doesn't change anything)
  16. [SOLVED] Enable sparse on existing ZFS storage

    Hi, I noticed my manually created ZFS pool is not sparse (thin). Is there an easy way to activate it afterwards? I could easily add "sparse 1" to my storage config, but would that work and, most importantly, not destroy the data?
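As this thread works out, setting the flag is safe but only affects newly allocated disks; already-existing zvols keep their reservation until it is cleared per dataset (the `zfs set refreservation=0G NVMe/vm-901-disk-0` from the accepted answer above). A sketch of the relevant storage.cfg stanza, assuming a zfspool storage backed by the `NVMe` pool from the thread; the storage ID and content line are placeholders:

```
zfspool: NVMe
        pool NVMe
        sparse 1
        content images,rootdir
```

With `sparse 1` set, new zvols are created without a refreservation, so space is only consumed as data is written.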
  17. No network after upgrading to latest kernel (pve 5.4)

    Okay, the upgrade went fine, but for some reason all my post-up rules don't work anymore, e.g.
    auto vmbr0
    iface vmbr0 inet static
        address 10.10.10.1
        netmask 255.255.255.0
        bridge-ports none
        bridge-stp off
        bridge-fd 0
        post-up iptables -t nat -A POSTROUTING -s...
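For context, a complete NAT stanza of this shape in /etc/network/interfaces would typically look like the following sketch; the source subnet continues the 10.10.10.0/24 addressing above, but the outgoing interface name (eno1) and the matching post-down cleanup rule are assumptions, not taken from the post:

```
auto vmbr0
iface vmbr0 inet static
        address 10.10.10.1
        netmask 255.255.255.0
        bridge-ports none
        bridge-stp off
        bridge-fd 0
        post-up   iptables -t nat -A POSTROUTING -s 10.10.10.0/24 -o eno1 -j MASQUERADE
        post-down iptables -t nat -D POSTROUTING -s 10.10.10.0/24 -o eno1 -j MASQUERADE
```

Note that post-up rules referencing an interface by name silently stop working when the kernel or systemd renames the interface (the eth0 vs ens3 issue discussed later in this thread), which is one reason such rules can break across an upgrade.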
  18. No network after upgrading to latest kernel (pve 5.4)

    Hmm, I didn't change anything; the content is
    [Link]
    NamePolicy=kernel database onboard slot path
    MACAddressPolicy=persistent
    I'll just see what happens after upgrading to 6 :)
  19. No network after upgrading to latest kernel (pve 5.4)

    Thanks, it changed to eth0. I don't know if there was a BIOS update, as it's just a KVM guest which I run Proxmox on; maybe the host did the updates. I saw in dmesg (I think) that eth0 was renamed to ens3, so I thought everything was as it's supposed to be. ip a didn't show ens3 or eth0, probably...