Search results

  1. ZFS Performance on SATA Disks

    I want to say that native ZFS on Oracle Solaris works much faster than ZFS on Linux. That's why I say that ZFS on Linux is not yet ready for use as production storage. Also, BTRFS is more likely to be equal in speed to ext4+RAW. And you can disable a whole bunch of those features - ZoL will be...
  2. ZFS Performance on SATA Disks

    I'm talking about LOCAL target/storage. Just compare benchmarks of RAW+ext4 with ZFS on the same hardware. ZFS will be much slower and there's nothing you can do about that - there are tons of complaints on the official ZoL tracker (github.com/zfsonlinux) and the devs have not yet focused on those problems. +...
  3. ZFS Performance on SATA Disks

    It won't help; ZFS is not ready for production VM storage.
  4. LVM, LVM Thin vs qcow2

    LVM Thin + ext4 + QCOW2 with cache=writeback: is it a bad idea?
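
    For context, a minimal sketch of how writeback caching is set on a qcow2 disk in PVE, assuming VM ID 100 and a directory storage named "local" on the LVM-Thin-backed ext4 filesystem (names are illustrative):

      # enable writeback cache on an existing qcow2-backed virtio disk of VM 100
      qm set 100 --virtio0 local:100/vm-100-disk-1.qcow2,cache=writeback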
  5. LVM, LVM Thin vs qcow2

    In terms of performance?
  6. Average "Load Average" with ZFS (ZVOL)

    @mir Thanks. @LnxBil yeah, I mean discard=on on PVE. @melanch0lia "VM 10+" means more than 10 VMs, regardless of the load. I don't have any big iowait either (0-8% depending on load) and a load average of 2-5, but I see some peaks of 7-10 while I'm running dd alongside something else, or just booting up...
  7. Real pve-zsync examples

    Thanks. This should be in the official wiki. Small question - does pve-zsync create snapshots itself? I currently use zfs-auto-snapshot (https://github.com/zfsonlinux/zfs-auto-snapshot/blob/master/src/zfs-auto-snapshot.sh). Will pve-zsync still create a snapshot if it cannot reach the other host/fails to replicate...
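
    For reference, pve-zsync makes its own snapshots as part of each sync job; a minimal sketch, assuming VM ID 100 and a target pool tank/backup on host 192.168.15.95 (address and pool name are illustrative):

      # one-off sync; pve-zsync snapshots the source dataset itself and
      # keeps at most --maxsnap snapshots per job
      pve-zsync sync --source 100 --dest 192.168.15.95:tank/backup --verbose --maxsnap 7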
  8. pve 4.2 and iothread

    Plain VirtIO # fio --description="Emulation of Intel IOmeter File Server Access Pattern" --name=iometer --bssplit=512/10:1k/5:2k/5:4k/60:8k/2:16k/4:32k/4:64k/10 --rw=randrw --rwmixread=80 --direct=1 --size=4g --ioengine=libaio --iodepth=8 iometer: (g=0): rw=randrw, bs=512-64K/512-64K/512-64K...
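
    For readability, the same fio invocation from the excerpt, reflowed with one option group per line:

      # Emulation of Intel IOmeter File Server Access Pattern:
      # mixed block sizes (60% 4k), 80/20 random read/write, queue depth 8
      fio --description="Emulation of Intel IOmeter File Server Access Pattern" \
          --name=iometer \
          --bssplit=512/10:1k/5:2k/5:4k/60:8k/2:16k/4:32k/4:64k/10 \
          --rw=randrw --rwmixread=80 \
          --direct=1 --size=4g --ioengine=libaio --iodepth=8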
  9. Average "Load Average" with ZFS (ZVOL)

    Also, should cache=writeback and discard=on be used with ZVOL?
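
    A minimal sketch of how both options are applied to a ZVOL-backed disk, assuming VM ID 100 and a ZFS storage named "local-zfs" (names are illustrative; whether the settings are advisable is what the thread is asking):

      # writeback caching plus TRIM pass-through on a virtio-scsi disk
      qm set 100 --scsi0 local-zfs:vm-100-disk-1,cache=writeback,discard=on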
  10. Real pve-zsync examples

    Hello, what does pve-zsync usage look like in the real world? Can somebody show a real scheme of usage between two hosts, including the crontab?
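
    As a sketch of what such a scheme might look like, assuming VM 100 is pushed from this host to a second host named pve2 every 15 minutes (hostnames, pool, and schedule are illustrative):

      # /etc/cron.d/pve-zsync
      */15 * * * * root pve-zsync sync --source 100 --dest pve2:tank/backup --name vm100 --maxsnap 7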
  11. Average "Load Average" with ZFS (ZVOL)

    Hello guys, can you share your average "Load Average" or iowait on a production host with ZFS (ZVOL) as storage for 10+ VMs? Also, what "Load Average" should be considered critical or problematic?..
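
    For anyone gathering these numbers, a minimal sketch of how they are usually sampled:

      # current 1/5/15-minute load averages
      uptime
      # CPU utilisation including %iowait, refreshed every 5 seconds
      iostat -c 5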
  12. Stopped working OS-configured VLAN tagged network inside Linux guests

    Reverting from 2.6.32-41-pve to 2.6.32-39-pve fixes the problem. I couldn't even imagine that this was related. Repeatedly reproduced on two separate Proxmox VEs. The only changes I see: pve-kernel-2.6.32 (2.6.32-164) unstable; urgency=low * update to latest stable...
  13. Stopped working OS-configured VLAN tagged network inside Linux guests

    Remark: the VLAN is configured INSIDE the Linux guests, not in the Debian host. Hello, Linux host 2.6.32-41-pve #1 SMP Sat Sep 12 13:10:00 CEST 2015 x86_64 GNU/Linux, pve-manager/3.4-11/6502936f (running kernel: 2.6.32-41-pve). After the latest apt-get upgrade, the VLAN-tagged network inside KVM stopped working...
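
    For context, a minimal sketch of the kind of guest-side configuration involved, for a Debian-style guest tagging VLAN 100 on eth0 (VLAN ID and addresses are illustrative):

      # /etc/network/interfaces (inside the guest, not on the Proxmox host)
      auto eth0.100
      iface eth0.100 inet static
          address 192.168.100.10
          netmask 255.255.255.0
          vlan-raw-device eth0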
  14. High Load Average on host while guest is copying files

    Hm, you're probably right. What is the recommended average CAP to keep? :)
  15. High Load Average on host while guest is copying files

    # zpool list
    NAME  SIZE   ALLOC  FREE  EXPANDSZ  FRAG  CAP  DEDUP  HEALTH  ALTROOT
    POOL  3.62T  3.03T  607G  -         57%   83%  1.00x  ONLINE  -
  16. High Load Average on host while guest is copying files

    Here's iostat -kxz 1 during the file copy. It seems to be a problem with disk load, but it shouldn't be that... Also, should the write cache on the controller (LSI Logic / Symbios Logic SAS1064ET PCI-Express Fusion-MPT SAS) be turned ON or OFF? NCQ? Device: rrqm/s wrqm/s r/s w/s rkB/s...
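
    A sketch of how the drive-level write cache behind such an HBA can be inspected and toggled with sdparm (the device name is illustrative):

      # query the write-cache-enable (WCE) bit on the drive
      sdparm --get=WCE /dev/sda
      # turn the drive write cache off, or back on
      sdparm --clear=WCE /dev/sda
      sdparm --set=WCE /dev/sda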
  17. High Load Average on host while guest is copying files

    No, copying really started at the 1st-2nd line. You can see the jump to 31% I/O wait earlier: 2 2 0 10998664 137136 222376 0 0 18507 76895 35992 73596 6 9 54 31. I have lowered the ARC cache on the fly to 24 GB and will check iostat. What could make the drives slow to write? scsi...
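
    For reference, a minimal sketch of lowering the ZoL ARC limit at runtime, as described above (the 24 GiB value matches the post; the persistence step is an assumption about the setup):

      # cap the ARC at 24 GiB without a reboot
      echo $((24 * 1024 * 1024 * 1024)) > /sys/module/zfs/parameters/zfs_arc_max
      # make the cap persistent across reboots
      echo "options zfs zfs_arc_max=25769803776" >> /etc/modprobe.d/zfs.conf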
  18. High Load Average on host while guest is copying files

    The problem reproduced during a file copy. Here is my "vmstat 1" from when I started copying. Before starting: load average ~2. After starting: load average ~8-9. As I can see, there are some peaks: # vmstat 1 procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu---- r b swpd free...
  19. High Load Average on host while guest is copying files

    If I set drop_caches to 2 (to free reclaimable slab objects, including dentries and inodes), my RAM frees up to 52 GB; after a while, usage climbs back to ~82-84 GB. Here is my vmstat 1 at load average 1.82, 1.90, 1.98: procs -----------memory---------- ---swap-- -----io---- -system--...
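
    A minimal sketch of the drop_caches invocation described above:

      # flush dirty pages first, then free reclaimable slab objects
      # (dentries and inodes); 1 = page cache, 2 = slab, 3 = both
      sync
      echo 2 > /proc/sys/vm/drop_caches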
