Search results

  1. Help fix my ZFS Pool

    Once I had a problem / unknown error with ZFS. It started when I wanted to clean out old data and old snapshots. ZFS began to hang. Nothing helped. I tried to import the pool with -N - still the same. I did not know what ZFS was trying to do. Only importing the pool in read-only mode allowed me to see... (see the read-only import sketch after this list)
  2. Zfs has encountered an uncorrectable i/o failure and has been suspended

    ZFS is not well suited to running on a single disk when problems occur, especially for the OS/system partition. If you can, put two disks into a ZFS mirror to handle this type of error (see the mirror sketch after this list).
  3. Proxmox Rebooting constantly on BOXNUC7I5BNK i5 NUC

    My server did the same (or at least I think so). I use a Corsair Commander Pro to control the fan speeds. After speeding the fans up, the CPU no longer gets hot and the hardware no longer triggers an automatic reboot. Try limiting the CPU speed if you cannot cool your NUC (see the CPU frequency sketch after this list).
  4. Yet another "ZFS on HW-RAID" Thread (with benchmarks)

    If you want to know, then yes, keep it on. In the same situation other file systems will continue to operate, but keep in mind that your VM may end up broken, or your movie may only be half watchable.
  5. Sanity Check on volblocksize and recordsize settings for VMs and supporting datasets on ZFS 2.2.0 Mirror VDEV Pool?

    I will ask a question for thought. If recordsize or volblocksize is set to 1M, what is the minimum read I/O for that block, even when only part of it is needed? For example, MariaDB with 16K database pages on a recordsize/volblocksize of 1M: will ZFS read only the requested 16K, or all 1M of the block, before... (see the recordsize sketch after this list)
  6. Yet another "ZFS on HW-RAID" Thread (with benchmarks)

    This thread started as a performance comparison between native ZFS RAID and ZFS on top of hardware RAID, and half of it is about whether it is a good idea to run ZFS on hardware RAID at all. I can tell you this - however you set it up, that is how it will work. If you want the hardware RAID to monitor the HDDs, I can suggest setting ZFS checksum=off, otherwise ZFS...
  7. Yet another "ZFS on HW-RAID" Thread (with benchmarks)

    How does HW RAID protect against data corruption or bit rot?
  8. RaidZ1 performance ZFS on host vs VM

    I start this software: #top -f -F 1 Then with "L" I switch the view to show all disks and the CPU. While the benchmark runs you can watch the disk activity and compare whether all disks are equally busy and what the CPU activity is.
  9. Howto remove disk from ZFS

    ZFS was not designed to allow removing disks. It is a very complex system. Yes, you can remove a disk from a mirror (see the detach sketch after this list). Maybe one day the general case will be possible.
  10. Howto remove disk from ZFS

    You cannot remove a disk in your situation right now. Plan how to back up your data and recreate the ZFS pool.
  11. RaidZ1 performance ZFS on host vs VM

    ZFS is a very complex system. It is a copy-on-write (COW) system. If you don't care about compression, encryption, snapshots, or data integrity, then use the older file systems. Use #atop to see CPU and disk usage. Maybe it will show something interesting.
  12. Howto remove disk from ZFS

    Back up your data and recreate the ZFS pool.
  13. RaidZ1 performance ZFS on host vs VM

    ZFS flushes data synchronously. It will wait until the slowest disk finishes its write and then push the next batch of writes - and of course all the metadata / transaction updates too (see the zpool iostat sketch after this list).
  14. VPS Hosting providers - why no zfs ?

    If RAM is too expensive, then don't use ZFS.
  15. VPS Hosting providers - why no zfs ?

    I could not find my IRC conversation about the ARC problem, but I found the same situation mentioned here - https://github.com/openzfs/zfs/discussions/11676 It would be good if they have fixed it.
  16. VPS Hosting providers - why no zfs ?

    ARC min and max should not be equal; max should be min+1 (or min = max-1). Otherwise the limit will not take effect (see the ARC sketch after this list).
  17. RaidZ1 performance ZFS on host vs VM

    If you want to speed up writes you can set sync=disabled (see the sync sketch after this list). But keep in mind that you can lose some data in a power outage. Regarding VMs, look here - https://kb.blockbridge.com/technote/proxmox-optimizing-windows-server/part-2.html
  18. ghostly reboot at midnight

    The server has been running for 19 days without a random reboot. I changed the network card to an Intel 10G model and sped up the CPU cooler. One suspected cause in my mind was a CPU temperature spike.
  19. USB3 passthrough to vm with 10G

    I'm curious, have you tried a speed test?
  20. HBA card borked or am I an idiot?

    Try adding mpt3sas.max_queue_depth=10000 to your kernel boot line in /etc/default/grub or /etc/kernel/cmdline (see the boot-line sketch after this list).
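
Command sketches

For the read-only import described in result 1, a minimal sketch, assuming the pool is named "tank" (the name is a placeholder):

    # -N skips mounting datasets; readonly=on lets you inspect the pool without ZFS resuming pending work
    zpool import -o readonly=on -N tank
    # check pool health once it is imported
    zpool status tank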
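
For the two-disk mirror suggested in result 2, a minimal sketch (pool name and device paths are placeholders):

    # a mirrored pool keeps running, and can self-heal, when one disk starts returning errors
    zpool create rpool2 mirror /dev/disk/by-id/ata-DISK_A /dev/disk/by-id/ata-DISK_B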
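
One way to limit CPU speed, as suggested in result 3 (the linux-cpupower tool and the 2 GHz ceiling are assumptions, not part of the original post):

    # cap the maximum CPU frequency to keep temperatures down
    cpupower frequency-set -u 2.0GHz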
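
Related to the read-amplification question in result 5, a sketch of matching block size to a database page size (dataset and zvol names are placeholders; 16K matches the default InnoDB page size):

    # ZFS reads and checksums whole records, so a smaller recordsize avoids pulling in 1M to serve a 16K page
    zfs set recordsize=16K tank/mariadb    # only affects newly written files
    # volblocksize can only be chosen when a zvol is created
    zfs create -V 32G -o volblocksize=16K tank/vm-disk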
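
For the mirror case mentioned in result 9, a sketch of detaching one side of a mirror (pool and device names are placeholders):

    # the pool keeps running on the remaining disk of the mirror
    zpool detach tank /dev/disk/by-id/ata-DISK_B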
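
To see whether one slow disk is holding back the sync writes described in result 13, per-disk statistics can be watched during a benchmark (pool name is a placeholder):

    # -v breaks the numbers down per disk, -l adds latency columns, refreshed every second
    zpool iostat -vl tank 1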
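
A sketch of the ARC limits discussed in result 16, set as ZFS module options on a Debian/Proxmox system (the 8 GiB figure is an example value only):

    # /etc/modprobe.d/zfs.conf - max is min+1 so the limit actually applies
    options zfs zfs_arc_min=8589934592
    options zfs zfs_arc_max=8589934593
    # rebuild the initramfs afterwards so the options are picked up at boot
    update-initramfs -u -k all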
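
A sketch of the sync=disabled setting from result 17 (the dataset name is a placeholder; as the post warns, this trades safety for speed and recent writes can be lost on power failure):

    # acknowledge sync writes immediately instead of waiting for stable storage
    zfs set sync=disabled tank/vmdata
    # revert to the default behaviour later
    zfs set sync=standard tank/vmdata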
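
A sketch of applying the boot parameter from result 20 (the existing "quiet" option is an assumption; append the parameter to whatever is already on the line):

    # /etc/default/grub - for systems booted via GRUB
    GRUB_CMDLINE_LINUX_DEFAULT="quiet mpt3sas.max_queue_depth=10000"
    update-grub
    # for systemd-boot installs, add the parameter to /etc/kernel/cmdline instead, then:
    proxmox-boot-tool refresh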
