Hello, I'm confused :) I put 4 WD Red 1 TB SATA disks in my server, installed Proxmox 5 with ZFS RAID 10, and I have
root@pve-klenova:~# zfs list
NAME         USED   AVAIL  REFER  MOUNTPOINT
rpool        9.25G  1.75T  96K    /rpool
rpool/ROOT   764M   1.75T  96K    /rpool/ROOT...
So ZFS RAID 10 is not working like normal RAID 10? Normal RAID 10, IMHO, creates a mirror that is striped, am I right? Does ZFS RAID 10 create two mirrored vdevs with no striping, so no performance boost?
My last question: will the array end up the same if I first install Proxmox with only 2 disks in RAID 1 (mirror) and later add two more disks to make RAID 10, as it would be if I put in all 4 disks at the beginning and did the Proxmox install with RAID 10?
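For what it's worth, a ZFS pool built from two mirror vdevs is ZFS's equivalent of RAID 10: ZFS automatically stripes writes across all top-level vdevs, so you do get the striping. A sketch of how such a pool could be created by hand (the pool name and `/dev/sd*` device names are placeholders, not your actual devices):

```shell
# RAID 10-style pool: two mirror vdevs, striped at the pool level.
# ZFS distributes writes across all top-level vdevs automatically.
zpool create tank mirror /dev/sda /dev/sdb mirror /dev/sdc /dev/sdd

# Verify the layout: two mirror-N vdevs under the pool name.
zpool status tank
```

Starting with one mirror and adding a second mirror vdev later yields the same vdev layout; only data written before the expansion stays concentrated on the first mirror until it is rewritten.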
Thank you very much. Now I have done
sudo zfs set mountpoint=/media/chalan/proxmox/pve-1 rpool/ROOT/pve-1
and my data from PVE are there...
After I make the backup, how can I remove the ZFS pool from my desktop?
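Assuming the goal is just to detach the pool cleanly once the backup is done, the usual sequence would be to put the mountpoint back and export the pool (the mountpoint below is the Proxmox installer default and is an assumption about your setup):

```shell
# Restore the mountpoint the Proxmox installer uses (assumption), so the
# pool boots normally if the disk ever goes back into the server.
zfs set mountpoint=/ rpool/ROOT/pve-1

# Cleanly detach the pool from this machine; the disks can then be pulled.
zpool export rpool
```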
I did something wrong :( At the beginning I did
sudo zpool import rpool -f
and now it looks like
chalan@chalan-Desktop:~$ sudo zfs list
NAME         USED  AVAIL  REFER  MOUNTPOINT
rpool        893G  6,31G  96K    /rpool
rpool/ROOT   885G  6,31G  96K    /rpool/ROOT...
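If the worry is that the forced import has mounted (or will mount) rpool's datasets over the desktop's own directories, re-importing under an alternate root keeps everything below a scratch path. A sketch (`/mnt/rescue` is a placeholder path):

```shell
# Detach the pool first, then re-import it under an alternate root so all
# of rpool's mountpoints land below /mnt/rescue instead of over /.
zpool export rpool
zpool import -f -R /mnt/rescue rpool

# Alternatively, `zpool import -f -N rpool` imports without mounting at all.
```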
My HW crashed... I had to pull out one HDD and put it in a USB docking station connected to my desktop PC to save the data from the ZFS pool... but
chalan@chalan-Desktop:~$ sudo zpool status
pool: rpool
state: DEGRADED
status: One or more devices could not be used because the label is missing or...
First, please, what would the config look like if I have two subnets on the same network and default GW 192.168.1.1? Like this?
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet static
        address 172.17.200.254
        netmask 255.255.255.0
        gateway 172.17.200.1

auto vmbr0...
But I need to have the VMs directly on the local subnet 192.168.1.0/24; there is a Samba file server running as a VM... I don't want the traffic from 192.168.1.32 to 192.168.1.251 to go through the router... The subnet 172.17.200.0/24 should be just for the cluster with another PVE...
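One way this is commonly laid out: the physical NIC carries no address, the bridge `vmbr0` sits on the local LAN (so VMs attached to it are directly on 192.168.1.0/24 and Samba traffic never touches the router), and the cluster subnet rides as a second address on the same bridge. A sketch, with all addresses taken from the posts above but otherwise an assumption about your plan:

```
auto lo
iface lo inet loopback

# The NIC is enslaved to the bridge and gets no address of its own.
auto eth0
iface eth0 inet manual

# vmbr0 carries the local LAN; VMs bridged here are on 192.168.1.0/24
# directly, so LAN-to-LAN traffic stays off the router.
auto vmbr0
iface vmbr0 inet static
        address 192.168.1.251
        netmask 255.255.255.0
        gateway 192.168.1.1
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0

# Second address on the same bridge for cluster traffic (172.17.200.254
# is an assumed host address in the cluster subnet).
auto vmbr0:0
iface vmbr0:0 inet static
        address 172.17.200.254
        netmask 255.255.255.0
```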
Hello, I have a "server" with only one LAN port. I need to have PVE on subnet 172.17.200.0/24 and the VMs on another subnet 192.168.1.0/24 (local LAN) with GW 192.168.1.1. My config:
auto lo
iface lo inet loopback

auto vmbr0
iface vmbr0 inet static
        address 192.168.1.251
        netmask...
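With a single NIC, another layout sometimes used (assuming the switch can carry tagged VLANs) is to keep the untagged LAN on the bridge and put the cluster subnet on a VLAN subinterface; the VLAN ID 200 and the 172.17.200.254 address below are assumptions for illustration:

```
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet manual

# Untagged LAN for the VMs on 192.168.1.0/24.
auto vmbr0
iface vmbr0 inet static
        address 192.168.1.251
        netmask 255.255.255.0
        gateway 192.168.1.1
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0

# Cluster subnet on tagged VLAN 200 (VLAN ID and address are assumptions).
auto eth0.200
iface eth0.200 inet static
        address 172.17.200.254
        netmask 255.255.255.0
```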
I need to migrate an old server which is running v4 to a new one with v5. I want to create a cluster and add a node to it. Is it safer to create the cluster on the old one and add the new one as a node, or vice versa? If I create the cluster on the old one and the node on the new one, is it possible to remove the node from the old...
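Whichever direction you choose, the commands involved look roughly like this (the cluster name is a placeholder, and the IP is assumed to be the first node's cluster address; note that mixing v4 and v5 nodes in one cluster is only meant as a temporary migration state):

```shell
# On the node that creates the cluster ("mycluster" is a placeholder name):
pvecm create mycluster

# On the node joining the cluster, pointing at the first node's IP:
pvecm add 172.17.200.254

# Check quorum and membership from either node:
pvecm status
```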
OK, I installed Proxmox 5 on the new server with ZFS RAID 1 on 2 disks. Now I have this:
root@pve-klenova:~# zpool status
pool: rpool
state: ONLINE
scan: none requested
config:
        NAME        STATE   READ WRITE CKSUM
        rpool       ONLINE     0     0     0
          mirror-0  ONLINE     0     0...
What I need to do is set up a new Proxmox server with just 2 disks in RAID 1 or RAID 0, then join the old and new together (as nodes) and migrate all VMs from old to new. After that I want to take the two disks out of the old server, put them in the new server, and add them to the existing RAID 1 (or RAID...
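Growing the 2-disk mirror into a RAID 10-style pool would mean adding a second mirror vdev. A sketch, with `/dev/sdc` and `/dev/sdd` as placeholders for the two disks from the old box (it's worth double-checking the names via `ls -l /dev/disk/by-id` first, because `zpool add` is hard to undo):

```shell
# Add a second mirror vdev; the pool then stripes across both mirrors.
# sdc/sdd are placeholders -- verify the device names before running this.
zpool add rpool mirror /dev/sdc /dev/sdd

# The pool should now show mirror-0 and mirror-1 as top-level vdevs.
zpool status rpool
```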
Thank you, I will try... Is it necessary to write GRUB to the new drives so they will be bootable, or will the resilver copy it from the original mirrored drives?
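Resilvering only copies data inside the pool; the partition table and the boot code live outside it, so they have to be put on the new disks by hand. On a BIOS-boot Proxmox 5 install this is roughly the following (`sda` as an existing bootable disk and `sdc` as the new one are placeholders; verify before running):

```shell
# Replicate the partition table from the existing disk sda onto the new
# disk sdc, then give the copy fresh random GUIDs.
sgdisk -R /dev/sdc /dev/sda
sgdisk -G /dev/sdc

# Install GRUB onto the new disk so the machine can boot from it alone.
grub-install /dev/sdc
```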
Hello, I have two identical disks in a RAID 1 mirror ZFS pool:
root@pve:~# zpool status
pool: rpool
state: ONLINE
scan: resilvered 2,21M in 0h0m with 0 errors on Sat Jul 16 20:32:11 2016
config:
NAME STATE READ WRITE CKSUM
rpool...
Please help. I created a dir storage for backups in /mnt/backup. I didn't realize that there was not enough disk space... I ran the backup and it ended with this error:
INFO: status: 36% (61855105024/171798691840), sparse 4% (8253263872), duration 4884, 10/10 MB/s
gzip: stdout: No space left on device...
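To avoid running into this again, it can help to compare free space at the backup target against the VM's disk size before starting vzdump; a quick check might be:

```shell
# Free space where the backups are written (path from the post above):
df -h /mnt/backup

# Sizes of the ZFS datasets/zvols, to estimate the backup size:
zfs list -o name,used,avail,refer
```

Note the compressed backup can be smaller than the raw disk, but with 10 MB/s throughput and a 160 GB source, leaving generous headroom is safer.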