Hi,
I finally got the issue solved. Most of the guides I found for pool expansion are misleading, as they only activate autoexpand after the last drive replacement. But autoexpand only takes effect when a device is brought online. So after turning autoexpand on I had to offline one disk and online it again (zpool...
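In case it helps, the rough sequence that worked for me looks like this (pool name "data" and the device name are placeholders for your own setup):

# enable autoexpand on the pool
zpool set autoexpand=on data
# cycle one member offline/online so the expansion fires
zpool offline data ata-XYZ-part1
zpool online data ata-XYZ-part1     # 'zpool online -e' can also expand a device explicitly
# verify the new raw size
zpool list data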
Hello,
well, that's the difference between the net and gross values of the storage. In raidz2 you lose two devices' worth of the gross capacity to parity. In my case I have a four-disk setup with 4TB per disk. This results in 8TB net (aka usable space) out of 16TB gross capacity.
It seems that...
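As a quick sanity check (pool name "data" is a placeholder): zpool list reports the gross/raw size including parity, while zfs list reports the net space the datasets can actually use.

# raw pool size, parity included (roughly 4 x 4TB here)
zpool list data
# usable space as seen by the datasets (roughly (4 - 2) x 4TB for raidz2)
zfs list data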
Hi, thank you, but I thought the capacity should be 4x3.6 TB, about 14 TB, since the gross values are displayed. I have another system here that was initially set up with 4x4TB disks in raidz2 and it shows 14.5 TB:
NAME PROPERTY VALUE SOURCE
data size...
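If it helps to compare the two hosts, these are the properties I would look at side by side (pool name is a placeholder):

# raw size, how much the pool could still expand, and whether autoexpand is set
zpool get size,expandsize,autoexpand data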
Hi,
thanks for the feedback. In contrast to adding partitions, this pool was set up by adding whole disks, which leads to the behaviour that part1 always covers nearly the whole disk capacity.
Thanks for the hint about gdisk. It shows 4TB assigned to part1:
Number Start (sector) End...
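For reference, this is the kind of check I ran on each pool member (the device name is a placeholder):

# print the GPT of one pool member; part1 should cover nearly the whole disk
gdisk -l /dev/sdb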
Hello,
I have been searching for a reason and a solution for autoexpand not working on my PVE 5.3-5 host.
I intended to increase the local ZFS pool by replacing all 2 TB disks in the pool with 4 TB ones. So I replaced every single disk and resilvered the RAIDZ2 pool after each disk swap. Finally a...
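For context, the per-disk swap looked roughly like this (pool and device names are placeholders):

# replace one 2TB member with the new 4TB disk and wait for the resilver
zpool replace data ata-OLD2TB ata-NEW4TB
zpool status data    # start the next swap only once the resilver has finished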
Thanks a lot. With your input I was able to find some additional threads on this, and thank you for the remark regarding sharing the replication and corosync link. I had these concerns too, but finally decided that the dedicated back-to-back link ensures lower latency because there are no additional...
Hello,
I have searched for this topic but was not able to find any answer; forgive me if this question has already been answered.
I have set up a two-node cluster with a dedicated cluster link network that I want to use for replication and for corosync. The cluster was created in the GUI...
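In case it points someone in the right direction: as far as I understand it, the migration network can be pinned in /etc/pve/datacenter.cfg, and I assume (please verify) that storage replication follows the same setting. The subnet below is only an example for the back-to-back link:

# /etc/pve/datacenter.cfg -- pin migration traffic (and, if my assumption holds, replication)
# to the dedicated back-to-back subnet
migration: secure,network=10.10.10.0/24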
Thanks for the answer. I was thinking that, since ZFS is COW, shouldn't that address the fragmentation issue? It would be interesting to get some methods to analyze whether fragmentation could be the cause.
Greetings
Chris
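Regarding methods to check this: the pool-level fragmentation (of the free space) can at least be read directly from ZFS; the pool name is a placeholder:

# fragmentation of the pool's free space, plus how full the pool is
zpool get fragmentation,capacity data
zpool list -v data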
Hi,
smartctl doesn't show any errors and the firmware should be up to date (using the onboard 8x SATA controller). I stopped using the virtio drivers as they caused time drifts.
I checked the cabling and could not find any issues. Strange thing. I am thinking of moving some VMs to an external storage to compare...
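For completeness, this is the kind of SMART check I ran per disk (device names are placeholders):

# overall health plus the attributes that usually hint at cabling or media problems
smartctl -H /dev/sdb
smartctl -A /dev/sdb | grep -i -E 'reallocated|pending|crc'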
Hi,
thank you for your tips. As a first step I am going to replace the cables and will report back, but this will take some days as I am not on site. Memory should not be an issue as the system is built with ECC RAM. The PSU, I guess, would only be an issue if it is not able to deliver constant power to...
Hello,
I set up this host with PVE 4.3 and it was running smoothly. Since the update to 4.4 the problems started. First I was facing really bad time drifts and the VMs got extremely slow (especially during backup). I did some tuning but it didn't solve the time problem. Finally the VMs are...
Hello,
I have some updates. arcstats shows IO errors and bad checksums for the L2ARC.
cat /proc/spl/kstat/zfs/arcstats
6 1 0x01 91 4368 2512048036 3392252486704591
name type data
hits 4 6567751182
misses 4...
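The relevant counters can be pulled out of arcstats directly, e.g.:

# show only the L2ARC counters, including l2_io_error and l2_cksum_bad
awk '/^l2_/ {print $1, $3}' /proc/spl/kstat/zfs/arcstats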
Hello,
thanks for the feedback. Sure, I know that raidz (like normal parity RAID) is always bound to the slowest device when doing write IOs. From the read IO point of view, RAID could give some kind of performance increase as you do not need to read all drives to complete the read...
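If someone wants to reproduce the numbers, a rough read benchmark sketch against a test directory on the pool could look like this (path, sizes and job count are placeholders; no O_DIRECT since this runs on a ZFS dataset):

# 4k random reads with four workers for 60 seconds; create the test directory on the pool first
fio --name=zfs-randread --directory=/data/fio-test --size=2G --bs=4k \
    --rw=randread --ioengine=psync --numjobs=4 --runtime=60 --time_based --group_reporting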
Thanks for the feedback Manu,
I took that iostat snapshot to see if only one disk is suffering IO problems. But what I saw is that all pool member disks suddenly face high wait times with extremely low throughput. zpool iostat shows similar results (two drives of 4 sum up to...
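These are the two views I keep side by side while the problem occurs (pool name is a placeholder):

# per-device wait times and utilisation, refreshed every 2 seconds
iostat -x 2
# the same interval from the ZFS side, broken down per vdev/disk
zpool iostat -v data 2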
Hello,
I am still struggling with some weird IO performance issues and could not find a matching issue in this forum.
Since Proxmox 4.4 or so (now running Proxmox 5), the disk IO of our host seems to be incredibly bad. iostat shows that the disks with the ZFS pools for our VMs drop to...
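One thing I am experimenting with (not claiming it is the fix) is capping the ARC so it does not compete with the VMs for memory; the 8 GiB value is just an example:

# /etc/modprobe.d/zfs.conf -- example: limit the ARC to 8 GiB
options zfs zfs_arc_max=8589934592
# then rebuild the initramfs and reboot for the limit to apply
update-initramfs -u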
Hi folks,
I am back on this topic again. I have upgraded the host to PVE 5, but the time drift problem still occurs. So far what I can see is that:
with PVE 4.3 there were no problems
since PVE 4.4 high IO delays and time drifts occur
PVE 5 is not better, same behaviour
the backups get slower...
Hi all,
thanks for all the feedback.
This indicates that the storage delay causes the time drift, not the local IO load. I believe the overall performance increased with the switch to RAID10. Interesting result.
In my scenario the time drift is 3-4 hours during a 6-hour backup cycle...
Hi Rhinox,
thanks for the reply. But all my investigations showed that it is somewhat normal in KVM for the time in a Windows VM to drift when the host is under heavy IO load. There is no specification of what counts as heavy IO mentioned anywhere. All sources handle the problem only by offering...
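What I am going to try next (treat this as an experiment, not a confirmed fix) is forcing the slew-based drift correction on the emulated RTC. PVE should already set this for Windows ostypes, so it only matters if the ostype is not set accordingly; the VMID is a placeholder:

# pass an explicit RTC drift fix to KVM for one VM (root only, example VMID 101)
qm set 101 --args '-rtc base=localtime,driftfix=slew'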