I need to install Proxmox with RAID1 (mirror) but use only the first disk for now... I will add the second HDD later, after I solve some problems with backups... but the installer won't let me choose only the first disk (with the second set to "do not use")...
The weird thing is that the installer does allow a ZFS RAID0 (striped) install...
OK, so I try zpool import, but it complains that there is a damaged file... I don't care about the file, I just need to import that pool again, install GRUB and boot so I can rescue as much data as possible... how can I do this???
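A rough sketch of what could be tried from a rescue shell, assuming the pool is still named rpool and /dev/sda is the disk to boot from (device names below are placeholders, not confirmed for this system):

zpool import -f -o readonly=on -R /mnt rpool   # read-only first, so nothing is written while rescuing
zfs list                                       # confirm the datasets are visible
# copy the important data somewhere safe, then re-import read-write for the bootloader repair
zpool export rpool
zpool import -f -R /mnt rpool
for d in dev proc sys; do mount --rbind /$d /mnt/$d; done
chroot /mnt grub-install /dev/sda              # repeat for the second disk once it is back
chroot /mnt update-grub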
I have attached the new HDD, so now I have both disks, and after boot I got this in GRUB...
Should I try changing only the prefix and then running insmod normal?
But I only have 2 HDDs on SATA3, so why are there hd2 and hd3?
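A sketch of what could be tried at the grub rescue> prompt, assuming the boot files live on the second GPT partition of the first disk; the actual (hdX,gptY) pair has to be found with ls first:

ls                        # list the drives/partitions GRUB can see
ls (hd0,gpt2)/            # placeholder partition, look for the one that contains /boot
set root=(hd0,gpt2)
set prefix=(hd0,gpt2)/boot/grub
insmod normal
normal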
Is there ANYBODY who can help me with this? I really don't have a clue where to start... I NEED to rescue the data... I don't understand how this can happen on a normal file system...
It started with a disk failure... no problem with the data itself, only the fuses on both HDDs had blown (mirror pool)...
one HDD...
OK, after Ctrl+D and aborting the installation I was able to get to a prompt with the zpool command available, BUT
zpool import -a
cannot import 'rpool': no such pool or dataset
        Destroy and re-create the pool from
        a backup source.
Is this serious? How could I have damaged the rpool????
what can i...
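Before giving up on the pool, a sketch of what could be tried next, assuming the pool metadata is still intact on the disks (paths are examples):

zpool import                           # with no arguments, just scan and list importable pools
zpool import -d /dev/disk/by-id        # point the scan explicitly at the by-id device nodes
zpool import -f -o readonly=on rpool   # force a read-only import if rpool shows up in the scan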
Things got more complicated... I hadn't made the backups (I ordered a new 4TB USB drive, but still don't have it), I needed to reboot the server and ended up with GRUB's "unknown filesystem" error... I tried to boot from both disks but no luck... I removed the new one and tried to boot from the old one, which was...
I don't understand the error, I have checked the qcow2 file:
root@pve-klenova:~# qemu-img check /var/lib/vz/images/200/vm-200-disk-2.qcow2
No errors were found on the image.
16777216/16777216 = 100.00% allocated, 0.04% fragmented, 0.00% compressed clusters
Image end offset: 1102665351168
no...
After I ran a scrub:
root@pve-klenova:~# zpool status -v
  pool: rpool
 state: DEGRADED
status: One or more devices has experienced an error resulting in data
        corruption. Applications may be affected.
action: Restore the file in question if possible. Otherwise restore the
        entire...
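From what I understand of that "action" line, the usual follow-up is roughly this (a sketch; the list at the bottom of the status output shows which files are actually damaged):

zpool status -v rpool    # the "Permanent errors" section lists the affected files
# restore or delete the listed files, then reset the counters and verify with another scrub
zpool clear rpool
zpool scrub rpool
zpool status -v rpool    # the errors should be gone once a scrub finds no remaining damage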
After the disk failure I did these steps (a sketch of the full sequence follows the list)...
1.) zpool offline rpool /dev/disk/by-id/wwn-0x5000cca269c4bd82-part2
2.) From the WebUI, Servername -> Disks -> Initialize Disk with GPT (/dev/sdb)
3.) sgdisk --replicate=/dev/sdb /dev/sda
4.) sgdisk --randomize-guids /dev/sdb
5.) grub-install...
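For reference, a sketch of the full replacement sequence as I pieced it together, assuming /dev/sda is the healthy disk, /dev/sdb is the new one, and the wwn-... names below are only placeholders:

zpool offline rpool /dev/disk/by-id/wwn-0xOLDDISK-part2   # take the failed device out of the mirror
sgdisk --replicate=/dev/sdb /dev/sda                      # copy the partition table from the healthy disk
sgdisk --randomize-guids /dev/sdb                         # give the copy its own GUIDs
zpool replace rpool /dev/disk/by-id/wwn-0xOLDDISK-part2 /dev/disk/by-id/wwn-0xNEWDISK-part2
grub-install /dev/sdb                                     # make the new disk bootable as well
zpool status -v rpool                                     # watch the resilver progress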
Can somebody please help me with the degraded pool? See above... I have done these steps and still have a degraded pool...
1.) zpool offline rpool /dev/disk/by-id/wwn-0x5000cca269c4bd82-part2
2.) From the WebUI, Servername -> Disks -> Initialize Disk with GPT (/dev/sdb)
3.) sgdisk...
What do the permanent errors mean? And how do I get rid of this:
            replacing-1               DEGRADED  1008     0     0
              12706416511818272176    OFFLINE      0     0     0  was /dev/disk/by-id/wwn-0x5000cca269c4bd82-part2
root@pve-klenova:~# zpool status -v
pool...
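As far as I can tell, the long number is ZFS's GUID for the old device that is no longer present. A sketch of pointing the mirror at the new disk instead, assuming its second partition is the one meant for the pool (the by-id name is a placeholder):

zpool replace rpool 12706416511818272176 /dev/disk/by-id/wwn-0xNEWDISK-part2
zpool status -v rpool    # a resilver onto the new partition should start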
OK, I have done:
zpool replace rpool 12706416511818272176 /dev/disk/by-id/wwn-0x5000cca269e871c7-part2
Now I have:
root@pve-klenova:~# zpool status -v
  pool: rpool
 state: DEGRADED
status: One or more devices is currently being resilvered. The pool will
        continue to function, possibly in a...
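A sketch of how to finish up, assuming the resilver completes with 0 errors (the detach is only needed if the old GUID entry is still listed afterwards):

zpool status -v rpool                      # shows resilver progress and, at the end, "resilvered ... with 0 errors"
zpool detach rpool 12706416511818272176    # remove the stale half of the "replacing" vdev if it remains
zpool clear rpool                          # reset the error counters
zpool scrub rpool                          # optional: re-verify the whole pool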
Hello, I have 2x HDD in ZFS RAID1 (mirror) in Proxmox. One HDD died and I replaced it with a new one; now I have:
root@pve-klenova:~# fdisk -l
Disk /dev/sda: 3,7 TiB, 4000787030016 bytes, 7814037168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O...
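Before replicating the partition table onto it, a sketch of how to confirm which /dev/disk/by-id name belongs to the new disk (the later zpool commands should use that stable name rather than /dev/sdX):

ls -l /dev/disk/by-id/ | grep -E 'sd[ab]$'   # map the wwn-/ata- names to the kernel devices
lsblk -o NAME,SIZE,SERIAL,WWN                # cross-check sizes and serial numbers
zpool status -v rpool                        # confirm which device path the pool still expects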
I have done some tests inside my VM (FTP/Samba/file server):
root@pve-klenova:~# arcstat
    time  read  miss  miss%  dmis  dm%  pmis  pm%  mmis  mm%  arcsz     c
21:42:44     0     0      0     0    0     0    0     0    0   2.0G  2.0G
HDD drive test inside this VM:
root@merkur:~# uname -a
Linux merkur 4.4.0-101-generic #124-Ubuntu SMP Fri Nov 10...
I don't plan to buy such a big SSD, I was just asking... So I have to buy 2x SSD and put them in a ZFS RAID1 for the log (SLOG) for safety? But you said that L2ARC cannot be mirrored, so I am confused :( Or should I use the SSDs only for L2ARC?
And how can I find out my ARC size?
My goal is to improve performance...
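A sketch of checking the current ARC size and its limit on ZFS on Linux (arcstat's arcsz/c columns are the current size and the target; values and paths below are examples):

arcstat 1 1                                             # one sample: arcsz = current ARC size, c = target size
arc_summary | head -n 40                                # if the arc_summary tool is installed
grep -E '^(size|c_max) ' /proc/spl/kstat/zfs/arcstats   # raw counters in bytes
# the maximum can be capped with the zfs_arc_max module parameter, e.g. in /etc/modprobe.d/zfs.conf:
# options zfs zfs_arc_max=4294967296                    # example value: 4 GiB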