Search results

  1. zfs raid mirror 1 to raid 10

    gulez thanks, but it's too complicated... I just want to use as much space from the two 500 GB drives in striped mode as possible... I've made the public pool; after that I added a new HDD to the VM and the maximum was 871 GB... after formatting I have only 858 GB, which is poor I think...
  2. Poor performance with ZFS

    the drives are 4x WD RED 1 TB NAS... how can I check every single HDD in the ZFS pool to find out which one is slow? (a per-disk check is sketched after this list)
  3. Poor performance with ZFS

    root@pve-klenova:~# smartctl --all /dev/sda | grep Short
    Short self-test routine
    # 1  Short offline  Completed without error  00%  26667  -
    root@pve-klenova:~# smartctl --all /dev/sdb | grep Short
    Short self-test routine
    # 1  Short offline  Completed without error...
  4. vms lags during vm cloning

    hello
    root@pve-klenova:~# zdb -C rpool | grep ashift
    ashift: 12
    ashift: 12
    root@pve-klenova:~# zdb -C public | grep ashift
    ashift: 12
    ashift: 12
  5. vms lags during vm cloning

    here are the results, please see; it's a shame I think:
    root@pve-klenova:~# zpool status
      pool: public
     state: ONLINE
      scan: scrub repaired 0 in 0h0m with 0 errors on Sun Oct 8 00:24:02 2017
    config:
        NAME    STATE   READ WRITE CKSUM
        public  ONLINE...
  6. vms lags during vm cloning

    well, I don't know, but it seems totally bad:
    root@pve-klenova:~# pveperf
    CPU BOGOMIPS:      38401.52
    REGEX/SECOND:      450140
    HD SIZE:           745.21 GB (rpool/ROOT/pve-1)
    FSYNCS/SECOND:     58.78
    DNS EXT:           29.48 ms
    DNS INT:           18.56 ms (elson.sk)
  7. vms lags during vm cloning

    ok, I did:
    root@pve-klenova:~# cat /sys/module/zfs/parameters/zfs_arc_max
    7516192768
    root@pve-klenova:~# cat /sys/module/zfs/parameters/zfs_arc_min
    4294967296
    root@pve-klenova:~# free -h
                  total  used  free  shared  buff/cache  available
    Mem:            15G...
  8. vms lags during vm cloning

    I don't understand this formula. So the formula is: total_ram - 1 GB - expected_GB_for_vm/ct = zfs_arc_max, with zfs_arc_max >= 4 GB. I have 16 GB, so 16 GB - 1 GB - 8 GB = 7 GB. So how do I set the ZFS ARC? (an ARC sizing sketch follows this list)
  9. vms lags during vm cloning

    edit: when copying files inside the VM, 2 MB/s :(
  10. vms lags during vm cloning

    during VM migration all VMs are totally lagging... SSH is very slow, some of the VMs don't work well... CPU usage during the clone shows about 10 percent, but IO delay is 28 percent... is that normal on a RAID 10 ZFS Proxmox Virtual Environment 5.0-30 with 16 GB RAM? I've had Proxmox v3 with RAID 1 also...
  11. zfs raid mirror 1 to raid 10

    I just want to use the whole space of the ZFS pool public in vm200... do I need to create a ZFS storage with container and disk image content, then add an HDD to the VM as SCSI and calculate the disk size in GB? Or how should I do it, please? (a storage sketch follows this list)
  12. zfs raid mirror 1 to raid 10

    ok, I have done:
    zpool create -f -o ashift=12 public /dev/disk/by-id/ata-MB0500EBZQA_Z1M0EHYH /dev/disk/by-id/ata-MB0500EBZQA_Z1M0EGEJ
    now I have:
    root@pve-klenova:~# zpool status
      pool: public
     state: ONLINE
      scan: none requested
    config:
        NAME  STATE  READ WRITE...
  13. Poor performance with ZFS

    yes, Proxmox 5 fresh install with 4x 1 TB WD RED SATA2 disks, and the performance is very poor. Copying from one HP SATA2 drive to rpool: 20 MB/s :( directly in PVE, not in a VM!!!
      pool: rpool
     state: ONLINE
      scan: none requested
    config:
        NAME  STATE...
  14. Poor performance with ZFS

    if I have poor r/w with ZFS RAID 10 on 4x 1 TB SATA2 disks, should I buy another 1 TB SSD and add it for the ZIL? :( (a SLOG sketch follows this list)
  15. zfs raid mirror 1 to raid 10

    yes, I need the capacity... following this wiki https://pve.proxmox.com/wiki/ZFS_on_Linux I will do: zpool create -f -o ashift=12 tank <device1> <device2>. Is that correct? Why ashift, and why 12? (a sector-size check is sketched after this list)
  16. zfs raid mirror 1 to raid 10

    ok, now I have 4x 1 TB HDD in the pool
    root@pve-klenova:~# zpool status
      pool: rpool
     state: ONLINE
      scan: none requested
    config:
        NAME   STATE   READ WRITE CKSUM
        rpool  ONLINE  0 0 0...
  17. zfs raid mirror 1 to raid 10

    ok, can somebody help me with this? Isn't it possible to use qcow2 images in the new Proxmox 5? I have made a backup of /var/lib/vz/images from the old Proxmox 3.x and now I want to use it in the new 5... do I have to create new VMs and then somehow import/convert the qcow2 images to ZFS? Can you provide me step... (an import sketch follows this list)
  18. zfs raid mirror 1 to raid 10

    ok, after the reboot it seems to have the disks imported by IDs
    root@pve-klenova:~# zpool status
      pool: rpool
     state: ONLINE
      scan: none requested
    config:
        NAME  STATE  READ WRITE CKSUM
        rpool...
  19. zfs raid mirror 1 to raid 10

    I was following your post for the pool with /dev/disk/by-id/ but ended up with an error, see the screenshot please
  20. zfs raid mirror 1 to raid 10

    my bad, sorry... isn't it possible to use qcow2 images in the new Proxmox 5? I have made a backup of /var/lib/vz/images from the old Proxmox 3.x and now I want to use it in the new 5... do I have to create new VMs and then somehow import/convert the qcow2 images to ZFS? Can you provide me step by step how to make... (see the import sketch after this list)
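
For result 2 (finding the slow disk): a minimal per-disk check, assuming the pool is named rpool and its members are sda through sdd (adjust to your layout), could be:

    # watch per-device throughput while the pool is under load
    zpool iostat -v rpool 5

    # run a long SMART self-test on each member, then read the results later
    for d in /dev/sd[a-d]; do smartctl -t long "$d"; done
    smartctl -a /dev/sda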
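
For result 8 (how to set the ZFS ARC): on Proxmox VE the usual place is a module option in /etc/modprobe.d/zfs.conf; the value is in bytes, so a 7 GB cap is 7 * 1024^3 = 7516192768 (the same number shown in result 7). A sketch:

    # /etc/modprobe.d/zfs.conf
    options zfs zfs_arc_max=7516192768

    # rebuild the initramfs so the limit is applied at boot, then reboot
    update-initramfs -u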
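
For result 11 (using the pool "public" for vm200's disk): one way, sketched here with assumed names and sizes, is to register the pool as a zfspool storage and then add a disk of the desired size to the VM, which makes Proxmox carve a zvol out of the pool for it:

    # register the pool as VM disk storage (storage ID "public-zfs" is an example)
    pvesm add zfspool public-zfs --pool public --content images

    # add an 800 GB SCSI disk on that storage to VM 200 (size is an example)
    qm set 200 --scsi1 public-zfs:800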
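
For result 14 (SSD for the ZIL): a separate log device (SLOG) only speeds up synchronous writes and does not need to be large; a few GB of a fast SSD is typically enough. A hedged sketch with a placeholder device ID:

    # add a dedicated log device to the pool (the device ID is a placeholder)
    zpool add rpool log /dev/disk/by-id/ata-EXAMPLE_SSD

    # it can be removed again if it does not help
    zpool remove rpool /dev/disk/by-id/ata-EXAMPLE_SSD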
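
For result 15 (why ashift=12): ashift is the pool's sector size as a power of two, so ashift=12 means 2^12 = 4096-byte sectors, which matches modern 4K drives; too small a value hurts performance and cannot be changed after pool creation. The disks' reported sector sizes can be checked beforehand:

    # physical vs. logical sector size of every block device
    lsblk -o NAME,PHY-SEC,LOG-SEC

    # or per disk
    smartctl -i /dev/sda | grep -i 'sector size'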
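
For results 17 and 20 (reusing old qcow2 images): on Proxmox VE 5 an existing qcow2 file can be imported into a ZFS-backed storage with qm importdisk; a sketch, assuming the new VM already exists as ID 100, the storage is called local-zfs, and the path is only an example:

    # import the old image as a new disk of VM 100
    qm importdisk 100 /var/lib/vz/images/100/vm-100-disk-1.qcow2 local-zfs

    # the imported disk then appears as "unused" in the VM's hardware list;
    # attach it (e.g. as SCSI) and adjust the boot order in the GUI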
