Search results

  1. SSD for log and cache how BIG?

    If I have 2x4TB in a ZFS RAID1 and want to put the log/cache on a separate SSD to speed up the system, how big an SSD should I buy? And which one do you prefer?
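    A rough sketch of what attaching a log and a cache device looks like, in case it helps frame the sizing question; the pool name and the SSD partition paths here are placeholders, not taken from the thread. A SLOG only has to absorb a few seconds of synchronous writes, so it usually stays small, and the remainder of the SSD can go to L2ARC.

      # Hypothetical devices: one SSD split into two partitions.
      zpool add rpool log /dev/disk/by-id/ata-EXAMPLE-SSD-part1     # small partition for the SLOG (ZIL)
      zpool add rpool cache /dev/disk/by-id/ata-EXAMPLE-SSD-part2   # rest of the SSD as L2ARC
      zpool status rpool                                            # confirm the new log and cache vdevs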
  2. Poor ZFS performance On Supermicro vs random ASUS board

      root@pve-klenova:~# pveperf
      CPU BOGOMIPS:      38401.52
      REGEX/SECOND:      456470
      HD SIZE:           680.38 GB (rpool/ROOT/pve-1)
      FSYNCS/SECOND:     74.37
      DNS EXT:           72.99 ms
      DNS INT:           20.93 ms (elson.sk)
      root@pve-klenova:~# fio testdisk
      iometer: (g=0): rw=randrw...
  3. Poor ZFS performance On Supermicro vs random ASUS board

    OK, I ran it on the VM:
      root@merkur:~# fio testdisk
      iometer: (g=0): rw=randrw, bs=512-64K/512-64K/512-64K, ioengine=libaio, iodepth=64
      fio-2.2.10
      Starting 1 process
      iometer: Laying out IO file(s) (1 file(s) / 4096MB)
      Jobs: 1 (f=1): [m(1)] [100.0% done] [1140KB/268KB/0KB /s] [279/76/0 iops] [eta...
  4. Poor ZFS performance On Supermicro vs random ASUS board

    OK, I made a file named testdisk, filled it with your code, and after that I ran fio:
      root@pve-klenova:~# fio testdisk
      iometer: (g=0): rw=randrw, bs=512-64K/512-64K/512-64K, ioengine=libaio, iodepth=64
      fio-2.16
      Starting 1 process
      iometer: Laying out IO file(s) (1 file(s) / 4096MB)
      fio: looks like your...
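    A guess at the job file behind these runs, pieced together only from the parameters fio echoes back above (randrw, 512B-64K blocks, libaio, iodepth 64, 4 GiB file); the actual 'testdisk' file posted in the thread is not shown in the snippet.

      # hypothetical reconstruction of the 'testdisk' fio job file, written from the shell
      printf '%s\n' \
          '[iometer]' \
          'rw=randrw' \
          'bsrange=512-64k' \
          'ioengine=libaio' \
          'iodepth=64' \
          'size=4096m' \
          'direct=1' > testdisk
      fio testdisk
      # note: ZFS of that era rejected O_DIRECT (direct=1), which is probably what the
      # truncated "fio: looks like your..." warning above is about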
  5. Poor ZFS performance On Supermicro vs random ASUS board

    integral, have you solved the problem? I have the same problem: Supermicro MB, 4x1TB WD RED NAS 5400-7200 rpm drives in ZFS RAID10, 32GB ECC RAM with 16GB dedicated to ARC, and no disk for log or cache... the performance is TOTALLY POOR and I am helpless.
      root@pve-klenova:~# pveperf
      CPU BOGOMIPS...
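    For reference, "16GB dedicated to ARC" is normally set as a ZFS module option; a generic sketch of how that is usually done on Proxmox/Debian, not the poster's actual config.

      # 16 GiB = 17179869184 bytes
      echo "options zfs zfs_arc_max=17179869184" > /etc/modprobe.d/zfs.conf
      update-initramfs -u                                           # so the limit applies at boot
      echo 17179869184 > /sys/module/zfs/parameters/zfs_arc_max     # apply right away without rebooting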
  6. please advise new hdds for my zfs raid1 array for storage

    My old disks are gone and I need to replace them. I have 4x WD RED IntelliPower 5400-7200 SATA II drives in ZFS RAID10 and need to add another two for a storage ZFS RAID1. Can somebody advise me on fast and reliable SATA3 HDDs? 4TB is enough...
  7. [SOLVED] zfs Raid 10 with different hard drives layout

    maxprox, can you please post pveperf? I also have 4 disks connected directly to the MB SATA2 ports but get TOTALLY bad performance. I have 4x WD RED 5400-7200 1TB disks and this is my performance, so I am curious about yours...
      root@pve-klenova:~# pveperf
      CPU BOGOMIPS: 38401.52
      REGEX/SECOND...
  8. please help raid0 no files

    I need to buy two new HDDs as replacements. I would mirror them. Can somebody please advise me on fast and reliable disks? 2-4TB... Normally I would buy WD RED drives, but I have those disks in my servers and the performance is poor. I don't know if it's a disk problem, but both servers have the same...
  9. please help raid0 no files

    OK, so this means that one of the disks is BAD, the HW is gone, am I right? What will happen if I have RAID0 with new, 100% working disks and a power failure occurs? Will I also lose my data?
  10. please help raid0 no files

    After a power failure I got a problem with one of my pools...
      root@pve-klenova:~# zpool status -v
        pool: public
       state: ONLINE
      status: One or more devices are faulted in response to IO failures.
      action: Make sure the affected devices are connected, then run 'zpool clear'.
         see...
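    The 'action:' line above already describes the usual recovery; a minimal sketch, assuming the pool really is named 'public' and the cabling and disks check out first:

      zpool clear public       # clear the recorded IO failures
      zpool scrub public       # re-read everything and repair from redundancy where there is any
      zpool status -v public   # then check whether permanent errors remain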
  11. proxmox 5.0-30 free RAM SWAP full why?

      root@pve-klenova:~# grep -i 'buff\|cach' /proc/meminfo
      Buffers:      13645320 kB
      Cached:          66188 kB
      SwapCached:     113680 kB
    What is this code you posted? I don't understand it:
      root@pve-klenova:~# rados bench -p rbd 120 write --no-cleanup # MBps throughput: 337/824/0 latency...
  12. proxmox 5.0-30 free RAM SWAP full why?

    Thank you for trying to help me... I have NO LXC containers, only 4 VMs. I really don't know how to find out what's eating the swap...
      root@pve-klenova:~# pidof memcached
      root@pve-klenova:~# pgrep memcached
    The command "for file in /proc/*/status ; do awk '/VmSwap|Name/{printf $2 " " $3}END{ print...
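    The quoted one-liner is cut off above; a complete version of that kind of loop (a common way to list per-process swap usage) could look like the following, though the exact command from the thread may differ.

      # print "Name VmSwap" for every process and sort by swap usage, largest first
      for file in /proc/*/status ; do
          awk '/VmSwap|Name/{printf $2 " " $3 " "} END{print ""}' "$file"
      done | sort -k 2 -n -r | head -n 20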
  13. proxmox 5.0-30 free RAM SWAP full why?

    Edit: a few days earlier I upgraded the RAM from 16GB to 32GB; is there something I have to change regarding swap after adding RAM?
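    Nothing has to be changed just because RAM was added, but the knob people usually look at in this situation is vm.swappiness; a sketch only, the values below are examples.

      cat /proc/sys/vm/swappiness                     # default is typically 60
      sysctl vm.swappiness=10                         # prefer RAM, swap only under real memory pressure
      echo "vm.swappiness = 10" >> /etc/sysctl.conf   # keep the setting across reboots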
  14. proxmox 5.0-30 free RAM SWAP full why?

    Suddenly a totally slow system: IO delay ~30%, RAM is free, but SWAP is full. Why? How can I fix it without restarting the server?
      CPU usage        13.86% of 8 CPU(s)
      IO delay         29.92%
      Load average     4.91, 4.75, 4.48
      RAM usage        51.51% (16.18 GiB of 31.41 GiB)
      KSM sharing      1.56 GiB
      HD space (root)  15.03% (100.95 GiB of...
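    One way to empty swap without a reboot is to cycle it off and on; a sketch only, and note that it needs enough free RAM to take back the swapped pages and can itself generate heavy IO while it runs.

      swapon --show   # confirm how much swap is actually in use
      swapoff -a      # push swapped pages back into RAM (can take a long time)
      swapon -a       # re-enable swap afterwards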
  15. Poor performance with ZFS

    @Nemesiz, is this normal in your opinion? The old 2x 500GB drives in 'public' with sync=standard are much faster than the 4x 1TB relatively NEW WD RED NAS drives? There must be a problem; I don't believe that this performance for rpool is normal and that I need to buy another SSD ZIL 2,5TB drive...
      root@pve-klenova:~#...
  16. Poor performance with ZFS

    1.) I have 4x WD RED 1TB drives in RAID10; do I really need another SSD drive? If so, how BIG?
    2.) Is the problem in my server LOW RAM, or is THIS standard performance?
    3.) When I disable ZFS SYNC, can I expect an unstable system or data loss in case of a power failure and so on?
    4.) If I...
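    On question 3.): sync is a per-dataset ZFS property; a sketch with a placeholder dataset name. With sync=disabled the pool itself stays consistent, but the last few seconds of acknowledged synchronous writes can be lost on a power failure.

      zfs get sync rpool                  # show the current value (standard by default)
      zfs set sync=disabled rpool/data    # 'rpool/data' is a placeholder dataset, not from the thread
      zfs set sync=standard rpool/data    # revert to the safe default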
  17. Poor performance with ZFS

    So why do I have such slow performance?
      root@pve-klenova:~# pveperf
      CPU BOGOMIPS:      38401.52
      REGEX/SECOND:      430906
      HD SIZE:           654.48 GB (rpool/ROOT/pve-1)
      FSYNCS/SECOND:     53.92
      DNS EXT:           196.53 ms
      DNS INT:           18.91 ms (elson.sk)
  18. Poor performance with ZFS

    sda, sdb, sdf, sdg are the drives in the RAID10; they seem to be similar.
      Device:  rrqm/s  wrqm/s  r/s   w/s   rkB/s  wkB/s  avgrq-sz  avgqu-sz  await  r_await  w_await  svctm  %util
      sda      0,00    0,00    0,00  0,00  0,00   0,00   0,00      0,00      0,00   0,00     0,00...
  19. vms lags during vm cloning

    /public is local storage... 2x SATA2 500GB drives... I don't remember how it was before I added /public, but I don't think it was better...
  20. Poor performance with ZFS

    iostat during VM cloning:
      Every 1.0s: iostat                    pve-klenova: Sun Oct 15 00:15:46 2017
      Linux 4.10.17-2-pve (pve-klenova)     10/15/2017     _x86_64_     (8 CPU)
      avg-cpu:  %user  %nice  %system  %iowait  %steal  %idle
                15.32   0.00     3.69     1.68    0.00...
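    For watching the disks during a clone, the extended per-device view tends to say more than the plain summary shown above; a generic sketch.

      iostat -x 1              # extended stats (await, %util per device), refreshed every second
      watch -n 1 'iostat -x'   # or keep the watch-style output from the snippet, with device detail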