gulez, thanks, but it's too complicated... I just want to use as much space from the two 500GB drives in striped mode as possible... I've made the public pool with
After that I added a new HDD to the VM, and the max was 871GB... after formatting I have only 858GB, which is poor, I think...
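Part of the gap is just units: a 500 GB drive is 500 × 10^9 bytes ≈ 465.7 GiB, so two of them striped give roughly 931 GiB raw before ZFS metadata, reservations, and filesystem overhead are subtracted. A quick way to compare raw versus usable space (a generic sketch, nothing here is specific to this setup beyond the pool name):

# raw pool size, including overhead
zpool list public
# space actually available to datasets
zfs list public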
root@pve-klenova:~# smartctl --all /dev/sda | grep Short
Short self-test routine
# 1 Short offline Completed without error 00% 26667 -
root@pve-klenova:~# smartctl --all /dev/sdb | grep Short
Short self-test routine
# 1 Short offline Completed without error...
Here are the results, please see; it's a shame, I think:
root@pve-klenova:~# zpool status
pool: public
state: ONLINE
scan: scrub repaired 0 in 0h0m with 0 errors on Sun Oct 8 00:24:02 2017
config:
NAME STATE READ WRITE CKSUM
public ONLINE...
Well, I don't know, but it seems totally bad:
root@pve-klenova:~# pveperf
CPU BOGOMIPS: 38401.52
REGEX/SECOND: 450140
HD SIZE: 745.21 GB (rpool/ROOT/pve-1)
FSYNCS/SECOND: 58.78
DNS EXT: 29.48 ms
DNS INT: 18.56 ms (elson.sk)
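Note that pveperf tests the filesystem holding the given path and defaults to the root filesystem (rpool/ROOT/pve-1 here); to compare, it can be pointed at the striped pool's mountpoint instead (assuming the default /public mountpoint):

# measure FSYNCS/SECOND on the striped pool instead of rpool
pveperf /public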
OK, I did:
root@pve-klenova:~# cat /sys/module/zfs/parameters/zfs_arc_max
7516192768
root@pve-klenova:~# cat /sys/module/zfs/parameters/zfs_arc_min
4294967296
root@pve-klenova:~# free -h
total used free shared buff/cache available
Mem: 15G...
I don't understand this formula:
So the formula is: total_ram - 1 GB - expected_GB_for_vm/ct = zfs_arc_max; zfs_arc_max >= 4 GB.
I have 16GB, so 16GB - 1GB - 8GB = 7GB... so how do I set the ZFS ARC?
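A minimal sketch of setting it persistently, assuming the 7 GiB result from the formula above (7 × 1024³ = 7516192768 bytes, which matches the zfs_arc_max value shown earlier); /etc/modprobe.d/zfs.conf is the usual location for ZFS module options:

# 7 GiB in bytes: 7 * 1024 * 1024 * 1024 = 7516192768
echo "options zfs zfs_arc_max=7516192768" > /etc/modprobe.d/zfs.conf
# rebuild the initramfs so the value is applied at boot
update-initramfs -u
# or apply it immediately, without a reboot
echo 7516192768 > /sys/module/zfs/parameters/zfs_arc_max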
During VM migration all VMs are totally lagging... SSH is very slow, and some of the VMs don't work well... CPU usage during the clone shows about 10 percent, but IO delay is 28 percent... is that normal on a RAID 10 ZFS Proxmox Virtual Environment 5.0-30 with 16GB RAM?
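One way to confirm that the disks are the bottleneck during a clone is to watch per-device utilisation while it runs (iostat comes from the sysstat package, which is not installed by default):

apt-get install sysstat
# extended stats in MB, refreshed every 2 seconds; %util near 100 means the disks are saturated
iostat -xm 2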
I've had Proxmox v3 with RAID 1 also...
I just want to use the whole space of the ZFS pool public in vm200... do I need to create a ZFS storage with container and disk image content, then add an HDD to the VM as SCSI and calculate the disk size in GB? Or how should I do it, please?
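A rough sketch of those two steps, assuming the pool is named public and the VM is 200 as above; the storage ID and the 800 GB size are placeholders to show the syntax, not values from this thread:

# register the pool as a Proxmox storage for disk images and containers
pvesm add zfspool public -pool public -content images,rootdir
# attach a new 800 GB SCSI disk from that storage to VM 200
qm set 200 -scsi1 public:800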
OK, I have done:
zpool create -f -o ashift=12 public /dev/disk/by-id/ata-MB0500EBZQA_Z1M0EHYH /dev/disk/by-id/ata-MB0500EBZQA_Z1M0EGEJ
Now I have:
root@pve-klenova:~# zpool status
pool: public
state: ONLINE
scan: none requested
config:
NAME STATE READ WRITE...
Yes, Proxmox 5 fresh install with 4x 1TB WD RED SATA2 disks, and the performance is very poor: copying from one HP SATA2 drive to rpool runs at 20MB/s :( directly in PVE, not in a VM! (A quick throughput test is sketched after the status output below.)
pool: rpool
state: ONLINE
scan: none requested
config:
NAME STATE...
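A crude sequential-write check directly on the pool (a sketch; if compression is enabled, zeroes compress away and the number will be optimistic, so copying a real file is the better test):

# sequential write with a final flush to disk
dd if=/dev/zero of=/rpool/ddtest bs=1M count=2048 conv=fdatasync
rm /rpool/ddtest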
Yes, I need the capacity... following this wiki https://pve.proxmox.com/wiki/ZFS_on_Linux I will do:
zpool create -f -o ashift=12 tank <device1> <device2>
Is that correct? Why ashift, and why 12?
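For context: ashift fixes the pool's sector size as a power of two at creation time, so ashift=12 means 2^12 = 4096-byte sectors, matching modern 4K-sector drives; a value that is too small hurts write performance on 4K disks and cannot be changed later. Two ways to cross-check (the grep patterns are guesses at the exact output format):

# physical sector size as reported by the drive
smartctl -i /dev/sda | grep -i 'sector size'
# ashift actually in use on an existing pool
zdb -C rpool | grep ashift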
OK, now I have 4x 1TB HDDs in the pool:
root@pve-klenova:~# zpool status
pool: rpool
state: ONLINE
scan: none requested
config:
NAME STATE READ WRITE CKSUM
rpool ONLINE 0 0 0...
OK, can somebody help me with this?
Isn't it possible to use qcow2 images in the new Proxmox 5? I have made a backup of /var/lib/vz/images from the old Proxmox 3.x, and now I want to use it in the new 5... do I have to create new VMs and then somehow import/convert the qcow2 images to ZFS? Can you provide me step...
OK, after a reboot it seems to have the disks imported by IDs:
root@pve-klenova:~# zpool status
pool: rpool
state: ONLINE
scan: none requested
config:
NAME STATE READ WRITE CKSUM
rpool...
My bad, sorry... isn't it possible to use qcow2 images in the new Proxmox 5? I have made a backup of /var/lib/vz/images from the old Proxmox 3.x, and now I want to use it in the new 5... do I have to create new VMs and then somehow import/convert the qcow2 images to ZFS? Can you provide me a step-by-step guide on how to make...
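A sketch of one import path, assuming a hypothetical backed-up image at /var/lib/vz/images/100/vm-100-disk-1.qcow2 and a ZFS-backed storage called local-zfs; the VM ID, name, and the volume name qm importdisk assigns are all placeholders:

# create an empty VM shell (adjust memory/network to taste)
qm create 100 --name restored-vm --memory 2048 --net0 virtio,bridge=vmbr0
# convert and import the qcow2 into the ZFS storage; it appears as an unused disk
qm importdisk 100 /var/lib/vz/images/100/vm-100-disk-1.qcow2 local-zfs
# attach the imported volume (use the name importdisk printed) and make it bootable
qm set 100 --scsi0 local-zfs:vm-100-disk-1 --boot c --bootdisk scsi0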