Search results

  1. mir

    ZFS over ISCSI + Omnios problem

    I only run LTS releases, so I haven't tried any version later than 151022. You could try asking for help here: https://illumos.topicbox.com/groups/omnios-discuss I think you are facing a network/COMSTAR-related problem.
  2. mir

    ZFS over ISCSI + Omnios problem

    What version of OmniOS do you use?
  3. mir

    ZFS over ISCSI + Omnios problem

    If you run this command on OmniOS: stmfadm list-lu -v Do all your LUNs show this? Writeback Cache : Enabled
  4. mir

    ZFS over ISCSI + Omnios problem

    Your storage box seems to be working perfectly. The only thing I can think of is something network-related, since iSCSI performance is very picky about network configuration, especially MTU. What MTU are you using on the network handling iSCSI traffic? I personally use InfiniBand so on my...
  5. mir

    ZFS over ISCSI + Omnios problem

    Could you show the output of zpool get all pool1 and zfs get all pool1 on OmniOS?
  6. mir

    ZFS over ISCSI + Omnios problem

    Your Intel Optane 900p should not be where your worries are: https://www.servethehome.com/exploring-best-zfs-zil-slog-ssd-intel-optane-nand/
  7. mir

    ZFS over ISCSI + Omnios problem

    Try using fio for a disk benchmark with this job file (copy the text below into a file and pass it to fio): # This job file tries to mimic the Intel IOMeter File Server Access Pattern [global] description=Emulation of Intel IOmeter File Server Access Pattern [iometer]...
  8. mir

    ZFS over ISCSI + Omnios problem

    Try this command on OmniOS while running your benchmarks on the client: zpool iostat -v pool1 3
  9. mir

    Ceph/Hardware - Looking to build out a sizable Proxmox Hosting Cluster

    Simply configure HA groups. See pve-docs/chapter-ha-manager.html#ha_manager_groups
  10. mir

    RTL8111E NIC Support

    There is no need to use anything other than the in-tree r8169 driver, provided you have installed this package: firmware-realtek This package contains all the binary firmware for any Realtek NIC, which the driver loads on demand. The in-tree r8169 driver is co-maintained by Realtek, which also makes all...
  11. mir

    ZFS over ISCSI + Omnios problem

    What hardware (CPU, RAM, NICs, disks) is in the OmniOS server? And how is the ZFS pool configured?
  12. mir

    [SOLVED] Searching for real HBA for Dellserver

    Dell PERC H310 can be flashed to LSI 9211-8i: https://forums.servethehome.com/index.php?threads/bit-the-bullet-on-a-h310.5775/#post-50068
  13. mir

    What open source solutions are available to use "ZFS over iSCSI with Proxmox"?

    What RAID level and how many disks are in your FreeNAS? What disk type in FreeNAS: HDD, SSD, etc.?
  14. mir

    Proof of concept at my workplace

    Debian has tools to create your own local copy of the Proxmox enterprise repo: https://wiki.debian.org/DebianRepository#Set_up_and_maintain_a_repository
  15. mir

    Has anybody setup VMware ESXi with Proxmox 5.3?

    What would be the point of doing so?
  16. mir

    Debian router on proxmox, network performance issues

    Are you sure it is Debian 7? Debian 7 has been EOL for several years!
  17. mir

    Whole cluster fenced when 1 node fails

    What does sudo corosync-quorumtool show on your Proxmox nodes?
  18. mir

    Shared storage recommendations.

    If you have cash to spend (RSF-1): https://www.youtube.com/watch?v=JZ2PK-AXZ0A Otherwise, use a UPS to guard your storage server. Cheap: multiple NICs using LACP. Less cheap: configure multipath from Proxmox to the storage server, preferably using LACP on each path.
  19. mir

    Incredibly slow I/O on installation drive of Proxmox across multiple servers

    From a single SSD: Jobs: 1 (f=1): [m(1)] [97.8% done] [78518KB/26377KB/0KB /s] [19.7K/6594/0 iops] [eta 00m:01s] test: (groupid=0, jobs=1): err= 0: pid=30053: Wed Dec 5 07:43:33 2018 read : io=3070.4MB, bw=68841KB/s, iops=17210, runt= 45670msec write: io=1025.8MB, bw=22998KB/s, iops=5749...
  20. mir

    Incredibly slow I/O on installation drive of Proxmox across multiple servers

    This simply means that LVM is used (dm == DeviceMapper)
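
The fio job file quoted in result 7 is truncated. A complete version, modeled on the IOMeter file-server example that ships with fio (the size, ioengine, and iodepth values here are assumptions, not what the poster necessarily used), might look like:

```ini
# This job file tries to mimic the Intel IOMeter File Server Access Pattern
[global]
description=Emulation of Intel IOmeter File Server Access Pattern

[iometer]
# Mixed block sizes weighted toward 4k and 64k, 80% reads, random access
bssplit=512/10:1k/5:2k/5:4k/60:8k/2:16k/4:32k/4:64k/10
rw=randrw
rwmixread=80
direct=1
size=4g
ioengine=libaio
# IOMeter defines the server loads roughly as:
# iodepth=1 Linear, 4 Very Light, 8 Light, 64 Moderate, 256 Heavy
iodepth=64
```

Save it as, e.g., iometer.fio and run it with: fio iometer.fio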
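
The writeback-cache check from result 3 can be scripted. Since stmfadm only exists on OmniOS/illumos, this sketch filters a captured sample of its output; the LU names and exact output layout are hypothetical, and on a real OmniOS box you would pipe stmfadm list-lu -v directly into the grep instead:

```shell
# Hypothetical sample of 'stmfadm list-lu -v' output.
# On OmniOS, replace the variable with the real command's output.
sample='LU Name: 600144F000000000000000000000000001
    Writeback Cache    : Enabled
LU Name: 600144F000000000000000000000000002
    Writeback Cache    : Disabled'

# Count LUs whose writeback cache is disabled;
# a nonzero count flags LUs likely to show poor write performance.
printf '%s\n' "$sample" | grep -c ': Disabled'
```

Any line reported here identifies an LU worth re-enabling writeback cache on (or investigating) before blaming the network.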