[SOLVED] Help needed in recovering node with ZFS mirror

Vasilij Lebedinskij

Renowned Member
Jan 30, 2016
Hello! I have a problem with one node. After a reboot it failed to start pve-manager, and the services reported there was no free space left on /. I managed to free about 300 MB by removing unnecessary caches and packages, but df still showed root as 100% full. Then I took another drive, installed a clean PVE on it, and imported my ZFS pool as a second storage. I can see the disk image of my VM now, so I created a VM and added this image as a second drive. When I try to move it to local storage I get the error "dataset does not exist", even though the ZFS mirror is online. Please help....
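For reference: on a ZFS root, df can keep reporting 100% full even after files are deleted, because snapshots and reservations still hold the space. A quick way to see where the space actually went, assuming the default Proxmox pool name rpool:

Code:
# break down usage into dataset data, snapshots and reservations
zfs list -o space -r rpool

# snapshots can pin space from files that were already deleted
zfs list -t snapshot -r rpool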
 

Attachments

  • 2016-05-11 15-31-16 Create Thread | Proxmox Support Forum.png
  • 2016-05-11 15-33-07 ivoryblade — ssh 192.168.100.168 -l root — 80×24.png
  • 2016-05-11 15-33-57 Banners and Alerts.png
Yes. I successfully added my pool to the new node, but I can't move the disk image to another storage to free space and recover the old node. What does the error "dataset does not exist" mean?
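That error usually means the volume name in the VM or storage configuration does not match any dataset actually present in the pool. A quick way to compare the two sides, assuming the pool is named rpool and the VM has ID 100 (adjust both to your setup):

Code:
# datasets and snapshots that actually exist in the pool
zfs list -t all

# the disk the VM config expects, and the dataset the storage entry points at
cat /etc/pve/qemu-server/100.conf
cat /etc/pve/storage.cfg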
 
Today I'm trying to expand the ZFS pool so I can recover the old node.
Right now I have a degraded pool with only /dev/sda2 online and /dev/sdc2 missing.
I detached the /dev/sdc drive and plugged in a higher-capacity 320 GB drive (/dev/sdb), cloned the partition layout, and the pool is now resilvering. After that I want to add another drive with the same capacity as the new one (/dev/sdc), clone the partition layout onto it, detach the first drive (/dev/sda), resize the ZFS partition on the last one (/dev/sdc), and after the resilver detach the second drive (/dev/sdb). So in the end I should have a degraded pool with a lot of free space.... I hope. Is this the right way to go? A sketch of the per-disk steps is below.
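A rough sketch of one round of the clone-and-attach step described above, assuming the pool is named rpool and the device names from this post; double-check the devices with lsblk before running anything:

Code:
# copy the partition table from the old disk onto the new, larger one
sgdisk --replicate=/dev/sdb /dev/sda
sgdisk --randomize-guids /dev/sdb      # the clone needs its own GUIDs

# attach the new partition as a mirror member and watch the resilver
zpool attach rpool /dev/sda2 /dev/sdb2
zpool status rpool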
 
Please post the output of "zfs list -t all", "zpool status" and "pveversion -v", and the content of "/etc/pve/storage.cfg" and the VM config, as text inside [ code][ /code] tags.
 

This procedure should work. I did it a while ago, but I did not replace the small drive in the mirror and just left it as a single drive.

You will need to issue this command to tell ZFS to use the new full size of your drives after the small drive is removed:

Code:
zpool online -e rpool /dev/sda2 /dev/sdb2

assuming you are left with those two devices as your pool mirror members.
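To confirm the pool actually picked up the extra capacity afterwards (same assumption of a pool named rpool):

Code:
# EXPANDSZ shows capacity that is present but not yet claimed
zpool list -o name,size,expandsize,free rpool

# optional: let the pool grow automatically when devices get bigger
zpool set autoexpand=on rpool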
 

Thanks for the advice, but I went another way to solve my problem! I just added another mirror and upgraded my pool to RAID 10. My pool is up again.
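For anyone finding this later: adding a second two-disk mirror vdev to an existing mirror is what turns the pool into the striped-mirror (RAID 10) layout mentioned above. Roughly, with /dev/sdc2 and /dev/sdd2 as placeholder partitions:

Code:
# stripe a new mirror alongside the existing one
zpool add rpool mirror /dev/sdc2 /dev/sdd2

# the pool should now list two mirror vdevs
zpool status rpool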