Hi,
I'm having some issues with gaps appearing in the graphs on our system status pages.
Our nodes each use 8x 960GB Samsung PM863a SSDs in a ZFS RAIDZ2 configuration, with dual E5-2630 v4 CPUs.
I'm moving some VMs from internal ZFS storage on node 4, using a VM I set up on node 6 that shares out a virtual disk via NFS to act as the hop between nodes.
I'm using live storage migration to move the virtual disks from node 4 to the NFS share, then migrating the VM to node 5 and moving the storage back onto the ZFS target on node 5.
I can see this is creating quite a bit of load on node 6 (I/O wait), but I'm not sure why it seems to cause gaps in the graphs across all of the nodes at the same time.
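
For reference, per VM the process is roughly equivalent to the following (VM ID, disk slot, and storage IDs are placeholders, so treat this as a sketch rather than the exact commands):

# live storage migration: local ZFS on node 4 -> NFS share exported by the node 6 VM
qm move_disk <vmid> scsi0 <nfs-storage-id>

# live-migrate the VM itself from node 4 to node 5 (disk now sits on the shared NFS storage)
qm migrate <vmid> node5 --online

# live storage migration again: NFS share -> local ZFS target on node 5
qm move_disk <vmid> scsi0 <zfs-storage-id>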
Thanks,
Quenten
