That's weird. That's gotta be a problem with the Live CD. It must be performance-limited somehow, because those numbers are terrible.
I'm starting to think this is a much deeper problem. ZFS is such a mess.
For the record, the same configuration in a Solaris zpool grants MUCH...
I've been struggling with getting good performance out of ZFS in Proxmox. I've noticed a huge deficit in random read and write performance.
8 Samsung 970 PRO 1TB drives in a stripe of mirrors. Basically RAID 10.
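For reference, a stripe of mirrors like this is built by listing several `mirror` vdevs in one `zpool create`. A minimal sketch, assuming a pool name of `tank` and hypothetical NVMe device paths (the poster's actual names are unknown):

```shell
# 4 two-way mirror vdevs, striped together = RAID 10 layout.
# ashift=12 forces 4K sector alignment, which NVMe drives generally want.
zpool create -o ashift=12 tank \
  mirror /dev/nvme0n1 /dev/nvme1n1 \
  mirror /dev/nvme2n1 /dev/nvme3n1 \
  mirror /dev/nvme4n1 /dev/nvme5n1 \
  mirror /dev/nvme6n1 /dev/nvme7n1

# Verify the vdev layout
zpool status tank
```

Reads can be served from either side of each mirror, and writes are striped across the four vdevs, which is why this layout is usually compared to RAID 10.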
I have a cluster node that exists in the UI only. The node's hardware died. pvecm delnode removed it from the config, but the UI still shows it, and the corosync config doesn't list the dead node. Any way to remove it? It's not that annoying, but still somewhat annoying.
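The UI builds its node list from the cluster filesystem, so a stale entry usually means the dead node's directory is still present under /etc/pve/nodes. A sketch of the cleanup, where "deadnode" stands in for the removed node's actual name:

```shell
# List node directories in the cluster filesystem; the dead node
# will still have an entry here even after pvecm delnode.
ls /etc/pve/nodes/

# Remove the stale directory (make sure any configs inside it,
# e.g. leftover VM definitions, are not needed first).
rm -r /etc/pve/nodes/deadnode
```

The UI entry should disappear once the directory is gone; no service restart is normally needed since /etc/pve is the live cluster filesystem.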
It appears I did an upgrade rather than a dist-upgrade.
Start-Date: 2018-02-27 18:54:09
Commandline: apt-get upgrade
Upgrade: libdns-export162:amd64 (1:9.10.3.dfsg.P4-12.3+deb9u3, 1:9.10.3.dfsg.P4-12.3+deb9u4), libpve-access-control:amd64 (5.0-7, 5.0-8), libisccfg140:amd64...
When I did an "apt-get dist-upgrade", it attempted to update the following 2 packages:
zfs-initramfs & zfsutils-linux. It failed on the first machine and passed on the second; the version numbering changed on the second. I'm not sure how to repeat the procedure, as it seems to think that it...
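The distinction matters here: plain `upgrade` refuses to install or remove packages, so anything whose dependencies changed (kernel packages, the ZFS module packages above) gets held back. A sketch of the usual recovery sequence on Proxmox:

```shell
apt-get update

# dist-upgrade, unlike plain upgrade, is allowed to install new packages
# and remove obsolete ones, so held-back items get pulled in.
apt-get dist-upgrade

# The apt history log shows what each past run actually did,
# which is where the "Commandline: apt-get upgrade" entry above came from.
less /var/log/apt/history.log
```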
Found this thread after encountering this problem.
I have 2 Proxmox installs. I updated both of them hoping to resolve this issue.
1. I had a /etc/modprobe.d/zfs.conf file configured as documented here https://pve.proxmox.com/wiki/ZFS_on_Linux. After running apt-get...
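For context, the zfs.conf documented on that wiki page is typically used to cap the ARC size via a module option. A minimal sketch; the 8 GiB value is an arbitrary example, not the poster's setting:

```shell
# /etc/modprobe.d/zfs.conf
# Limit the ZFS ARC to 8 GiB (8 * 1024^3 bytes). Pick a value that
# leaves enough RAM for VMs and the host itself.
options zfs zfs_arc_max=8589934592
```

Because zfs-initramfs bakes this file into the initramfs, the limit only takes effect at boot after running `update-initramfs -u` and rebooting, which is also why a failed zfs-initramfs upgrade can interact badly with a customized zfs.conf.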