root@pve-klenova:~# pveperf
CPU BOGOMIPS: 38401.52
REGEX/SECOND: 456470
HD SIZE: 680.38 GB (rpool/ROOT/pve-1)
FSYNCS/SECOND: 74.37
DNS EXT: 72.99 ms
DNS INT: 20.93 ms (elson.sk)
root@pve-klenova:~# fio testdisk
iometer: (g=0): rw=randrw...
OK, I ran it on the VM:
root@merkur:~# fio testdisk
iometer: (g=0): rw=randrw, bs=512-64K/512-64K/512-64K, ioengine=libaio, iodepth=64
fio-2.2.10
Starting 1 process
iometer: Laying out IO file(s) (1 file(s) / 4096MB)
Jobs: 1 (f=1): [m(1)] [100.0% done] [1140KB/268KB/0KB /s] [279/76/0 iops] [eta...
OK, I made a file named testdisk, filled it with your code, and after that I ran fio:
root@pve-klenova:~# fio testdisk
iometer: (g=0): rw=randrw, bs=512-64K/512-64K/512-64K, ioengine=libaio, iodepth=64
fio-2.16
Starting 1 process
iometer: Laying out IO file(s) (1 file(s) / 4096MB)
fio: looks like your...
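For completeness: I don't have the exact job file in front of me any more, but judging from the output (randrw, 512-64K blocks, libaio, iodepth 64, 4 GB file) it was basically the iometer-file-access-server example that ships with fio, something like this:

[global]
description=Emulation of Intel IOmeter File Server Access Pattern

[iometer]
# mix of block sizes from 512B to 64K, mostly 4K
bssplit=512/10:1k/5:2k/5:4k/60:8k/2:16k/4:32k/4:64k/10
rw=randrw
rwmixread=80
direct=1
size=4g
ioengine=libaio
# iodepth=64 corresponds to the IOMeter "Moderate" server load
iodepth=64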
integral, have you solved the problem? I have the same problem: Supermicro MB, 4x 1TB WD RED NAS 5400-7200 rpm drives in ZFS RAID10, 32GB ECC RAM with 16GB dedicated to ARC, and no disk for log or cache... The performance is TOTALLY POOR and I am helpless.
root@pve-klenova:~# pveperf
CPU BOGOMIPS...
My old disks are gone and I need to replace them. I have 4x WD RED IntelliPower 5400-7200 SATA II drives in ZFS RAID10 and I need to add another two for storage in ZFS RAID1. Can somebody advise me on fast and reliable SATA3 HDDs? 4TB is enough...
maxprox, can you please post pveperf? I also have 4 disks connected directly to the MB's SATA2 ports but have TOTALLY bad performance. I have 4x WD RED 5400-7200 1TB disks and this is my performance, so I am curious about yours...
root@pve-klenova:~# pveperf
CPU BOGOMIPS: 38401.52
REGEX/SECOND...
I need to buy two new HDDs as replacements. I would mirror them. Can somebody please advise me on fast and reliable disks? 2-4TB... Normally I would buy WD RED drives, but I have those disks in my servers and the performance is poor. I don't know if it's a disk problem, but both servers have the same...
OK, so this means that one of the disks is BAD, the HW is gone, am I right? What will happen if I have RAID0 with new, 100% working disks and a power failure occurs? Will I also lose my data?
After a power failure I got a problem with one of my pools...
root@pve-klenova:~# zpool status -v
pool: public
state: ONLINE
status: One or more devices are faulted in response to IO failures.
action: Make sure the affected devices are connected, then run 'zpool clear'.
see...
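If I read the action line right, the recovery steps would be roughly this (assuming the disks themselves check out OK):

zpool clear public      # clear the recorded IO failures on the pool
zpool scrub public      # then scrub to verify everything is still readable
zpool status -v public  # and keep an eye on the scrub progress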
Thank you for trying to help me... I have NO LXC containers, only 4 VMs. I really don't know how to find out what's eating the swap...
root@pve-klenova:~# pidof memcached
root@pve-klenova:~# pgrep memcached
command "for file in /proc/*/status ; do awk '/VmSwap|Name/{printf $2 " " $3}END{ print...
Suddenly a totally slow system: IO delay 30%, RAM is free, SWAP is full. Why? And how do I fix it without restarting the server?
CPU usage: 13.86% of 8 CPU(s)
IO delay: 29.92%
Load average: 4.91, 4.75, 4.48
RAM usage: 51.51% (16.18 GiB of 31.41 GiB)
KSM sharing: 1.56 GiB
HD space (root): 15.03% (100.95 GiB of...
@Nemesiz, is this normal in your opinion? Old 2x 500GB drives in the public pool with sync=standard are much faster than 4x 1TB relatively NEW WD RED NAS drives? There must be a problem; I don't believe that the performance of rpool is normal and that I need to buy another SSD ZIL 2,5TB drive...
root@pve-klenova:~#...
1.) I have 4x WD RED 1TB drives in RAID10; do I really need another SSD drive? If so, how BIG?
2.) Is the problem in my server LOW RAM, or is THIS standard performance?
3.) When I disable ZFS SYNC, can I expect an unstable system or data loss in case of power failure and so on? (see the command sketch below)
4.) if i...
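Just so point 3.) is clear, by disabling sync I mean something like this on the pool (pool name taken from my setup above, adjust as needed):

zfs get sync rpool            # show the current sync setting
zfs set sync=disabled rpool   # disable synchronous writes (inherited by child datasets)
zfs set sync=standard rpool   # revert to the default behaviour later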
So why do I have such slow performance?
root@pve-klenova:~# pveperf
CPU BOGOMIPS: 38401.52
REGEX/SECOND: 430906
HD SIZE: 654.48 GB (rpool/ROOT/pve-1)
FSYNCS/SECOND: 53.92
DNS EXT: 196.53 ms
DNS INT: 18.91 ms (elson.sk)