Recent content by chrisu

  1. Increase RAIDZ2 pool by replacing all disks

    Hi, finally got the issue solved. The guides found for pool expansion are in most cases wrong, as they activate autoexpand after the last drive replacement. But autoexpand fires only when you online a device. So after turning autoexpand on, I had to offline one disk and online it again (zpool...
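    The cycle described above can be sketched as follows (a minimal sketch; the pool name "data" and the disk id are placeholders for your own pool and device):

    ```shell
    # Enable autoexpand first, then cycle one device so the expansion fires.
    zpool set autoexpand=on data
    zpool offline data ata-WDC_WD4003FFBX-68MU3N0_VBG08PSR
    zpool online data ata-WDC_WD4003FFBX-68MU3N0_VBG08PSR
    zpool list data   # SIZE should now reflect the larger disks
    ```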
  2. Increase RAIDZ2 pool by replacing all disks

    Hello, well, that's the difference between the net and gross values of the storage. In raidz2 you are losing two devices' worth of the gross capacity for parity. In my case I have a four-disk setup with 4 TB per disk. This results in 8 TB net (aka usable space) and 16 TB gross capacity. It seems that...
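    A minimal sketch of that arithmetic in plain shell, using the numbers from the post above:

    ```shell
    ndisks=4
    disk_tb=4
    gross=$(( ndisks * disk_tb ))       # raw capacity of all disks
    net=$(( (ndisks - 2) * disk_tb ))   # raidz2: two disks' worth goes to parity
    echo "gross=${gross}TB net=${net}TB"
    ```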
  3. Increase RAIDZ2 pool by replacing all disks

    Hi, thank you, but I thought the capacity should be 4x3.6 TB, about 14 TB, as the gross values are displayed. I have another system here that was initially set up with 4x4TB disks in raidz2 and it shows 14.5 TB: NAME PROPERTY VALUE SOURCE data size...
  4. Increase RAIDZ2 pool by replacing all disks

    Hi, zpool list -v:
    NAME                                    SIZE   ALLOC  FREE   EXPANDSZ  FRAG  CAP  DEDUP  HEALTH  ALTROOT
    data                                    7.25T  5.07T  2.18T  -         47%   69%  1.00x  ONLINE  -
      raidz2                                7.25T  5.07T  2.18T  -         47%   69%
        ata-WDC_WD4003FFBX-68MU3N0_VBG08PSR -      -      -      -         -...
  5. Increase RAIDZ2 pool by replacing all disks

    Hi, thanks for the feedback. Contrary to adding partitions, this pool was set up by adding whole disks, which leads to the behavior that part1 will always cover nearly the whole disk capacity. Thank you for the hint with gdisk. It shows 4 TB assigned to part1: Number Start (sector) End...
  6. Increase RAIDZ2 pool by replacing all disks

    Hello, I have been searching for a reason and a solution for autoexpand not working on my PVE 5.3-5 host. I intended to increase the local ZFS pool by replacing all 2 TB disks in the pool with 4 TB ones. So I replaced every single disk and resilvered the RAIDZ2 pool after each disk swap. Finally a...
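    The per-disk swap cycle can be sketched like this (a hypothetical outline; pool and device names are placeholders):

    ```shell
    # Swap one disk at a time and let the pool resilver onto the new 4 TB disk.
    zpool replace data <old-disk> <new-disk>
    zpool status data            # wait until the resilver has completed
    # ...repeat for each disk, then:
    zpool set autoexpand=on data
    zpool online -e data <any-disk>   # -e expands the device to its full size
    ```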
  7. Change NIC for VM Replication

    Thanks a lot. With your input I was able to find some additional threads on this, and thank you for the remark regarding sharing the replication and corosync link. I had these concerns too, but finally decided that the dedicated back-to-back link ensures lower latency because there are no additional...
  8. Change NIC for VM Replication

    Hello, I have been searching for this topic but was not able to find any answer; forgive me if this question has already been answered. I have set up a two-node cluster with a dedicated cluster link network that I want to use for replication and for corosync. The cluster was created in the GUI...
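    For reference, and if I understand the Proxmox docs correctly, replication traffic follows the migration network defined in /etc/pve/datacenter.cfg, so the dedicated link can be selected with something like the following (the subnet is a placeholder for the back-to-back network):

    ```
    migration: secure,network=10.10.10.0/24
    ```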
  9. Slow io and high io waits

    Hello, shouldn't a VM move/clone between different pools do the same thing? Greetings Chris
  10. Slow io and high io waits

    Thanks for the answer. I was thinking about the fact that ZFS is COW; shouldn't that be related to the fragmentation issue? It would be interesting to get some methods to analyze whether fragmentation could be the cause. Greetings Chris
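    As a starting point for such an analysis, ZFS does report pool-level fragmentation as a property (note this measures free-space fragmentation, not file fragmentation; the pool name "data" is a placeholder):

    ```shell
    zpool list -o name,fragmentation,capacity data
    zpool get fragmentation data
    ```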
  11. Slow io and high io waits

    Hi, smartctl doesn't show any errors, and the firmware should be up to date (using the onboard 8x SATA controller). I stopped using the virtio drivers as they cause time drifts. I checked the cabling and could not find any issues. Strange thing. I am thinking of moving some VMs to an external storage to compare...
  12. Slow io and high io waits

    Hi, thank you for your tips. I am going to replace the cables as a first step and will report back. But this will take some days as I am not on site. Memory should not be an issue, as the system is built with ECC RAM. The PSU, I guess, would be an issue if it is not able to deliver constant power to...
  13. Slow io and high io waits

    Hello, I set up this host with PVE 4.3 and it was running smoothly. The problems started with the update to 4.4. First I was facing really bad time drifts, and the VMs got extremely slow (especially during backup). I did some tuning, but it didn't solve the time problem. Finally the VMs are...
  14. Slow io and high io waits

    Hello, I got some updates. arcstats shows IO errors and bad checksums for the L2ARC. cat /proc/spl/kstat/zfs/arcstats: 6 1 0x01 91 4368 2512048036 3392252486704591 name type data hits 4 6567751182 misses 4...
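    A minimal way to pull just the L2ARC error counters out of that output. The field names l2_cksum_bad and l2_io_error are the real ZFS-on-Linux counter names; the sample file and its values below are made up for illustration, and on a live system you would point awk at /proc/spl/kstat/zfs/arcstats instead:

    ```shell
    # Write a small sample in arcstats layout (name / type / data columns).
    cat > /tmp/arcstats.sample <<'EOF'
    name                            type data
    hits                            4    6567751182
    l2_cksum_bad                    4    1742
    l2_io_error                     4    388
    EOF
    # Print only the L2ARC checksum/IO error counters.
    awk '$1 ~ /^l2_(cksum_bad|io_error)$/ { print $1 "=" $3 }' /tmp/arcstats.sample
    ```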
  15. Slow io and high io waits

    Hello, thanks for the feedback. Sure, I know that raidz (like normal parity RAID) is always bound to the slowest device when doing write IOs. From the read-IO point of view, RAID could give some kind of performance increase, as you do not need to read all drives to complete the read...
