Search results

  1. 6uellerbpanda

    About high load and hard drives

    zpool status and arc_summary output, please. Was the problem always there, or did it occur after day/change x?
  2. 6uellerbpanda

    [SOLVED] ZFS Raid 10 with 4 SSD and cache...SLOW.

    How did you observe the speed, what speed do you expect, and with what workload? I'm sorry, but can you actually post a real scenario? Copy one file from A to B and observe with zpool iostat. Also, please tell us your current config - do you have an HBA? arc_summary output? The usual ZFS stuff :)
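    For example (the pool name rpool is just a placeholder), watching the pool once per second while the copy runs would look roughly like:

        # show per-vdev bandwidth and IOPS every second during the copy
        zpool iostat -v rpool 1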
  3. 6uellerbpanda

    [SOLVED] ZFS Raid 10 with 4 SSD and cache...SLOW.

    When you have a zpool with SSDs, your L2ARC is useless. Remove it. You're only capping the ARC. I don't really see the actual problem, to be honest. Do you experience latency on some VMs? Slow write IO? You're also referring to zpool iostat, but what were the numbers with the HDD pool and...
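    Removing a cache device is non-destructive; a rough sketch, with pool and device names as placeholders:

        # identify the cache vdev, then drop it from the pool
        zpool status rpool
        zpool remove rpool sdX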
  4. 6uellerbpanda

    Help interprete ZFS stats (Grafana / Telegraf metrics)

    If you have a lot of sync writes, a SLOG can of course help. When the SLOG fails it will use the ZIL on the disks, but you won't lose any data - except if in that time frame you also lose the whole storage and the txg hasn't flushed the data to the ZIL, but this is very unlikely, I guess ;)
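    If sync-heavy workloads do justify a SLOG, it is usually added as a mirror so a single device failure doesn't hurt; a sketch with placeholder device names:

        # add a mirrored log (SLOG) vdev to the pool
        zpool add rpool log mirror /dev/sdY /dev/sdZ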
  5. 6uellerbpanda

    Help interprete ZFS stats (Grafana / Telegraf metrics)

    As already explained by the whitestareof, for random IO you need a mirrored-vdev zpool.
  6. 6uellerbpanda

    Help interprete ZFS stats (Grafana / Telegraf metrics)

    OK, but is/was there an actual problem? Except for the false positive?
  7. 6uellerbpanda

    Help interprete ZFS stats (Grafana / Telegraf metrics)

    What's the output of zpool status and arc_summary.py? I'm not sure if I understand it correctly, but is there any problem at all, or is it just about what the metrics mean?
  8. 6uellerbpanda

    ZIL / L2ARC Question

    Why do you need a SLOG? Do you have a lot of sync writes on your data pool? Do you even need an L2ARC? Check https://forum.proxmox.com/threads/zfs-worth-it-tuning-tips.45262/page-2#post-217209
  9. 6uellerbpanda

    VLAN aware example

    I have this: iface enp9s0 inet manual auto vmbr0...
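    The quoted snippet is cut off; a typical VLAN-aware bridge in /etc/network/interfaces looks roughly like this (interface name and addresses are placeholders):

        iface enp9s0 inet manual

        auto vmbr0
        iface vmbr0 inet static
                address 192.0.2.10/24
                gateway 192.0.2.1
                bridge-ports enp9s0
                bridge-stp off
                bridge-fd 0
                bridge-vlan-aware yes
                bridge-vids 2-4094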
  10. 6uellerbpanda

    ZFS worth it? Tuning tips?

    ZFS needs some time to make your caching "optimal" - a few weeks, I suggest, depending on your workloads. ARC Total accesses: 182.53M, Cache Hit Ratio: 93.06% (169.85M), Cache Miss Ratio: 6.94% (12.67M), Actual Hit Ratio: 92.62% (169.05M). As you...
  11. 6uellerbpanda

    ZFS worth it? Tuning tips?

    What's your arc_summary output?
  12. 6uellerbpanda

    ZFS worth it? Tuning tips?

    If the read request doesn't get cached in the ARC, it also won't get into the L2ARC. If your ARC hit ratio is low in general, the L2ARC is useless anyway. The index is a map of what is in the L2ARC, and it is stored in the ARC itself for performance reasons. The bigger the L2ARC, the bigger the index. On FreeBSD I've...
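    The memory that index eats inside the ARC can be checked directly; a quick sketch:

        # ARC memory consumed by the L2ARC headers (the index) on Linux
        grep l2_hdr_size /proc/spl/kstat/zfs/arcstats
        # the same counter on FreeBSD
        sysctl kstat.zfs.misc.arcstats.l2_hdr_size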
  13. 6uellerbpanda

    ZFS worth it? Tuning tips?

    Don't use any HW RAID for ZFS - only HBAs. Good / not good. The L2ARC index will be taken from the ARC. Only add L2ARC when you really need it, not before.
  14. 6uellerbpanda

    NFS Shares: storage is not online (500). Why?

    Putting the DNS server virtualized on a FreeNAS share can lead to a "chicken and egg" problem. FreeNAS can be picky about it, especially if you have all this Active Directory/Samba crap enabled. If you often get these "storage <STORAGE> is not online" messages, also check on the PVE host: nfsstat -r and...
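    A few quick checks from the PVE host, with a placeholder NAS address:

        # RPC retransmission statistics for the NFS client
        nfsstat -r
        # can the host still reach the export list? (192.0.2.20 is a placeholder)
        showmount -e 192.0.2.20
        # does Proxmox itself consider the storage active?
        pvesm status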
  15. 6uellerbpanda

    [SOLVED] api endpoint rrddata is the same for lxc and qemu

    True. Any plans to change this in the future? I'm only asking because if the endpoints are the same for both, it will be easier to adapt this in my script, but if you intend to change it, then I will already write it the "correct" way (/lxc/.., /qemu/...) and won't need to fix it later.
  16. 6uellerbpanda

    [SOLVED] api endpoint rrddata is the same for lxc and qemu

    I just noticed that when I use the following API paths (also no error or null data): qemu/121/rrddata?timeframe=hour&cf=AVERAGE and lxc/121/rrddata?timeframe=hour&cf=AVERAGE, they show the same data, although vmid 121 is an LXC container. But this seems only true for 'rrddata', that both qemu and lxc...
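    The same behaviour can be reproduced with pvesh on a node (the node name is a placeholder):

        # both calls return data for vmid 121 even though it is an LXC container
        pvesh get /nodes/<node>/qemu/121/rrddata --timeframe hour --cf AVERAGE
        pvesh get /nodes/<node>/lxc/121/rrddata --timeframe hour --cf AVERAGE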
  17. 6uellerbpanda

    ZFS write (High IO and High Delay)

    But you have different SSDs in the servers, don't you?
  18. 6uellerbpanda

    ZFS write (High IO and High Delay)

    # zpool config
    zpool status
    # server 2: is the sector size really correct? Did you check with smartctl? 512b and ashift 9 shouldn't make any impact, but the other way around would.
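    Checking the reported sector sizes takes a second; /dev/sda is a placeholder:

        # look for the "Sector Sizes:" line (e.g. 512 bytes logical, 4096 bytes physical)
        smartctl -i /dev/sda | grep -i 'sector size'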
  19. 6uellerbpanda

    ZFS write (High IO and High Delay)

    Your zpool config is? How much memory do you have in general? ashift 9 = 512 bytes, ashift 12 = 4096 bytes. And what is the diff between the 2 setups?
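    The ashift a vdev was actually created with can be read back; a sketch, with the pool name as a placeholder:

        # each vdev reports its ashift (9 = 512-byte sectors, 12 = 4K sectors)
        zdb -C rpool | grep ashift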