Search results

  1. C

    Increase RAIDZ2 pool by replacing all disks

    Hi, finally got the issue solved. The guides I found for pool expansion are in most cases wrong, as they activate autoexpand after the last drive replacement. But autoexpand fires only when you online a device. So after turning autoexpand on I had to offline one disk and online it again (zpool...
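
    A minimal sketch of the offline/online sequence described above, assuming the pool is named data and using one of the disk IDs quoted later in this thread; the exact names come from zpool status:

      # enable automatic expansion on the pool (pool name "data" assumed)
      zpool set autoexpand=on data
      # autoexpand only fires on an online event, so cycle one member disk
      zpool offline data ata-WDC_WD4003FFBX-68MU3N0_VBG08PSR
      zpool online data ata-WDC_WD4003FFBX-68MU3N0_VBG08PSR
      # alternatively, "zpool online -e" expands a device explicitly
      zpool list data
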
  2. C

    Increase RAIDZ2 pool by replacing all disks

    Hello, well, that's the difference between net and gross values of the storage. In raidz2 you are losing two devices from the gross capacity for parity. In my case I have a four-disk setup with 4 TB per disk. This results in 8 TB net (aka usable space) and 16 TB gross capacity. It seems that...
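
    A worked version of the numbers above, assuming 4 x 4 TB disks in RAIDZ2 (TB vs TiB rounding ignored):

      gross capacity : 4 disks x 4 TB       = 16 TB   (roughly what zpool list reports as SIZE)
      parity overhead: 2 disks x 4 TB       =  8 TB   (raidz2 can lose any two disks)
      net capacity   : (4 - 2) disks x 4 TB =  8 TB   (roughly what zfs list shows as usable space)
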
  3. C

    Increase RAIDZ2 pool by replacing all disks

    Hi, thank you, but I thought the capacity should be 4 x 3.6 TB, about 14 TB, as the gross values are displayed. I have another system here that was initially set up with 4 x 4 TB disks in raidz2 and it shows 14.5 TB: NAME PROPERTY VALUE SOURCE data size...
  4. C

    Increase RAIDZ2 pool by replacing all disks

    Hi, zpool list -v:
    NAME                                     SIZE  ALLOC   FREE  EXPANDSZ  FRAG  CAP  DEDUP  HEALTH  ALTROOT
    data                                    7.25T  5.07T  2.18T         -   47%  69%  1.00x  ONLINE  -
      raidz2                                7.25T  5.07T  2.18T         -   47%  69%
        ata-WDC_WD4003FFBX-68MU3N0_VBG08PSR     -      -      -         -     -...
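
    For reference, the EXPANDSZ column shown above can also be queried directly; a dash means ZFS currently sees no expandable space on the vdevs (pool name taken from the output above):

      zpool list -o name,size,expandsize,free data
      zpool get autoexpand data
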
  5. C

    Increase RAIDZ2 pool by replacing all disks

    Hi, thanks for the feedback. In contrast to adding partitions, this pool was set up by adding whole disks, leading to the behaviour that part1 will always cover nearly the whole disk capacity. Thank you for the hint with gdisk. This shows 4 TB assigned to part1: Number Start (sector) End...
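
    A short sketch of how the partition layout can be checked after a disk swap, assuming a hypothetical device /dev/sdb; gdisk is the tool mentioned above:

      # print the GPT partition table of one replaced disk (hypothetical device)
      gdisk -l /dev/sdb
      # quick overview of partition sizes for the same disk
      lsblk -o NAME,SIZE,TYPE /dev/sdb
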
  6. C

    Increase RAIDZ2 pool by replacing all disks

    Hello, I have been searching for a reason and a solution for autoexpand not working on my PVE 5.3-5 host. I intended to increase the local ZFS pool by replacing all 2 TB disks in the pool with 4 TB ones. So I replaced every single disk and resilvered the RAIDZ2 pool after each disk swap. Finally a...
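
    A rough sketch of the per-disk replacement loop described above, assuming a pool named data and hypothetical old/new disk IDs:

      # replace one member disk and wait for the resilver to finish (hypothetical IDs)
      zpool replace data ata-OLD_DISK_ID ata-NEW_DISK_ID
      # repeat for the next disk only once zpool status reports the resilver as done
      zpool status data
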
  7. C

    Change NIC for VM Replication

    Thanks a lot. Was able to find some additional threads for this with your input, and thank you for the remark regarding sharing the replication and corosync link. I had these concerns too, but finally decided that the dedicated back-to-back link ensures lower latency because there are no additional...
  8. C

    Change NIC for VM Replication

    Hello, I have been searching for this topic but was not able to find any answer, forgive me if this question has already been answered. I have set up a two-node cluster with a dedicated cluster link network that I want to use for replication and for corosync. The cluster was created in the GUI...
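
    A minimal sketch of how a dedicated network can be pinned for migration traffic in /etc/pve/datacenter.cfg; the subnet is hypothetical, and whether replication traffic also follows this setting should be verified for the PVE version in use:

      # /etc/pve/datacenter.cfg (hypothetical back-to-back subnet)
      migration: secure,network=10.10.10.0/24
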
  9. C

    Slow io and high io waits

    Hello, shouldn't a VM move/clone between different pools do the same thing? Greetings Chris
  10. C

    Slow io and high io waits

    Thanks for the answer. I was thinking of the fact that ZFS is COW; shouldn't that address the fragmentation issue? Would be interesting to get some methods to analyze whether fragmentation could be the cause. Greetings Chris
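
    One way to get at least a rough number for this is the pool-level FRAG counter, e.g. (assuming the pool is named data):

      # FRAG reports fragmentation of the pool's free space, not file fragmentation
      zpool list -o name,size,allocated,capacity,fragmentation data
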
  11. C

    Slow io and high io waits

    Hi, smartctl doesn't show any errors, firmware should be up to date (using the onboard 8x SATA controller). I stopped using the virtio drivers as they cause time drifts. I checked cabling and could not find any issues. Strange thing. I am thinking of moving some VMs to an external storage to compare...
  12. C

    Slow io and high io waits

    Hi, thank you for your tips. I am going to replace the cables as a first step and will report back. But this will take some days as I am not on site. Memory should not be an issue as the system is built with ECC RAM. The PSU, I guess, would be an issue if it is not able to deliver constant power to...
  13. C

    Slow io and high io waits

    Hello, I set up this host with PVE 4.3 and it was running smoothly. The problems started with the update to 4.4. First I was facing really bad time drifts and the VMs got extremely slow (especially during backup). Did some tuning but it didn't solve the time problem. Finally the VMs are...
  14. C

    Slow io and high io waits

    Hello, I got some updates. arcstats shows IO errors and bad checksums for L2ARC.
    cat /proc/spl/kstat/zfs/arcstats
    6 1 0x01 91 4368 2512048036 3392252486704591
    name                            type  data
    hits                            4     6567751182
    misses                          4...
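
    The L2ARC error counters mentioned above can be pulled out of arcstats directly, e.g.:

      # non-zero values indicate L2ARC read errors / checksum failures
      grep -E 'l2_io_error|l2_cksum_bad|l2_writes_error' /proc/spl/kstat/zfs/arcstats
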
  15. C

    Slow io and high io waits

    Hello, thanks for the feedback. Sure, I know that raidz (like conventional parity RAID, too) is always bound to the slowest device when doing write IOs. From the read IO point of view, RAID could give some kind of performance increase as you do not need to read all drives to complete the read...
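
    A hedged sketch of how the write-IOPS limit of the raidz vdev could be measured, assuming fio is installed and a hypothetical dataset is mounted at /data/fiotest:

      # sync random 4k writes give a rough per-vdev write-IOPS ceiling (hypothetical path)
      fio --name=rw-test --directory=/data/fiotest --rw=randwrite --bs=4k \
          --size=2G --ioengine=libaio --iodepth=16 --numjobs=1 --sync=1 \
          --runtime=60 --time_based
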
  16. C

    Slow io and high io waits

    Thanks for the feedback Manu, I have taken that iostat snapshot to see if only one disk is suffering IO problems. But what I saw is that all pool member disks are suddenly facing high wait times with extremely low throughput. zpool iostat is showing similar results (two of the 4 drives sum up to...
  17. C

    Slow io and high io waits

    Hello, I am still struggling with some weird IO performance issues for which I could not find any likely matching issue in this forum. Since Proxmox 4.4 or so (now running Proxmox 5) our disk IO seems to be incredibly bad. iostat shows that the disks with the ZFS pools for our VMs drop to...
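
    For this kind of symptom the per-disk wait times can be watched live; a minimal sketch (pool name hypothetical):

      # extended per-device stats every 5 s: watch the await columns and %util
      iostat -x 5
      # the same view per vdev/disk from the ZFS side
      zpool iostat -v data 5
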
  18. C

    Time drift during backup

    Hi folks, I am back on this topic again. I have upgraded the host to PVE 5 but the time drift problem still occurs. So far what I can see is: with PVE 4.3 there were no problems; since PVE 4.4 high IO delays and time drifts occur; PVE 5 is not better, same behaviour; the backup gets slower...
  19. C

    Time drift during backup

    Hi all, thanks for all the feedback. This indicates that the storage delay causes the time drift, not the local IO load. I believe that the overall performance increased with the switch to RAID10. Interesting result. In my scenario the time drift is 3-4 hours during a 6-hour backup cycle...
  20. C

    Time drift during backup

    Hi Rhinox, thanks for the reply. But all my investigations showed that it is somewhat normal in KVM that the time in a Windows VM drifts when the host is under heavy IO load. There is no specification of heavy IO mentioned anywhere. All sources are handling the problem only by offering...
