[TL;DR]
- Proxmox running on a separate disk (USB stick)
- Upgraded from 7 to 8 by reinstalling that disk and taking pointers from this document: bypassing backup.
- Used zpool import -f NAME for my storages (see the sketch right after this list)
- All ZFS pools are available again except the mirror that I had built from 2 x 4TB disks: Safe-Data
- The data for Safe-Data is still there, but the mirror pool does not exist
- How can I get this mirror pool back online?
- Can I recreate the mirror from the 2 partitions (sdb1 & sdd1) without losing the data? (secondary: in the most optimal way, i.e. without needing a complete rebuild) If yes: how?
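For the storages that did come back, the import was just a forced import by pool name, along these lines (pool names taken from the zfs_member labels in the lsblk output further down; this is only a sketch of the shape of the command, not a verbatim transcript):
Bash:
zpool import -f OneTB
zpool import -f FastData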
[The whole story ]
I have (or better: had) 2 home servers running in a cluster, both on version 7. Waited for my summer holiday to upgrade these.
For both systems (HP Proliant Gen8 (gen8) and HP Proliant Gen10 Plus (pve)) Proxmox is running off a USB stick. To me the one with hostname gen8 is for smaller VMs and containers (small web server, LDAP server etc.). The one with hostname pve is more important and is running my Nextcloud server (among others). Imported data is stored on 2 disks that I mirrored using a ZFS mirror.
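For context, a mirror like that is created roughly like this (disk ids taken from the zpool import output further down; this is only a sketch of the general shape, not necessarily the exact command I used back then):
Bash:
zpool create Safe-Data mirror \
  /dev/disk/by-id/ata-ST4000VN008-2DR166_ZGY2TS29 \
  /dev/disk/by-id/ata-ST4000VN008-2DR166_ZGY310WR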
I upgraded gen8 without any issues (it is the least important one, the one that I use to "try things first"). The smaller VMs on the gen8 node are backed up to my PBS server (an old laptop with a large USB disk), so there was little to no risk for me in doing that upgrade. Because that worked fine I started preparing pve (making sure it is updated to the latest version, connecting a keyboard and monitor for local access, etc.).
On the pve local console, repeated error messages were shown informing me that one sector on the USB stick is bad and cannot be read:
Code:
blk_update_request: critical medium error, dev sde, sector 1050624 op 0x0:(READ) flags 0x0 phys_seg 26 prio class 0
Checked a couple of things, did not make any changes and performed a reboot.... This reboot ended in a Grub rescue prompt. So on my desktop system I performed a ddrescue of the USB stick. It appears the first sector of the USB stick has errors. Bought a new USB stick of the same size and tried a number of things to revive the data, but nothing worked.
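The ddrescue run was roughly of this shape (device name as in the error above; the image and map file names here are just placeholders, not the exact ones I used):
Bash:
# clone the failing stick to an image, keeping a map of the unreadable sectors
ddrescue -d -r3 /dev/sde usbstick.img usbstick.map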
So then I decided to follow the part of the 7 to 8 upgrade documentation that says: "New installation/Bypassing backup". Took the storage and virtual machine information from the gen8 node that is running just fine. Adding the pve node to the cluster by removing the "old" pve first also worked fine. After editing storage.cfg, all storages are defined but the actual data is not available; googled a bit and found I needed to do a zpool import first. Now all pools are back online except for the one that I mirror using 2 x 4TB disks. This mirror contains the important data (more importantly: a lot of data that will take a long time to reproduce), so I am super cautious not to lose any of it.
I do see the partitions are still there:
Code:
NAME     FSTYPE              FSVER  LABEL     UUID                                  FSAVAIL  FSUSE%  MOUNTPOINTS
sda
├─sda1   zfs_member          5000   OneTB     1098022942506576788
└─sda9
sdb
├─sdb1   VMFS_volume_member  5                5bcd8030-bb860790-d611-3ca82aa06328
└─sdb9
sdc
├─sdc1   zfs_member          5000   FastData  9715153901773720425
└─sdc9
sdd
├─sdd1   VMFS_volume_member  5                5bcd8030-bb860790-d611-3ca82aa06328
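To make it easier to map the sdX names to the physical disks, the stable ids that show up in the zpool output below can be cross-checked with something like:
Bash:
ls -l /dev/disk/by-id/ | grep -E 'sdb|sdd'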
Reading up on ZFS I found the -d option, and I see this:
Code:
root@pve:~# zpool import -d /dev/sdb1
   pool: Safe-Data
     id: 13987065176003032201
  state: ONLINE
 status: The pool was last accessed by another system.
 action: The pool can be imported using its name or numeric identifier and
         the '-f' flag.
    see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-EY
 config:

        Safe-Data                            ONLINE
          mirror-0                           ONLINE
            sdb                              ONLINE
            ata-ST4000VN008-2DR166_ZGY310WR  ONLINE

root@pve:~# zpool import -d /dev/sdd1
   pool: Safe-Data
     id: 13987065176003032201
  state: ONLINE
 status: The pool was last accessed by another system.
 action: The pool can be imported using its name or numeric identifier and
         the '-f' flag.
    see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-EY
 config:

        Safe-Data                            ONLINE
          mirror-0                           ONLINE
            ata-ST4000VN008-2DR166_ZGY2TS29  ONLINE
            sdd                              ONLINE
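If I read the action line correctly, a plain forced import (by name or by the numeric id from the output above) should look something like this:
Bash:
zpool import -f Safe-Data
# or, using the numeric identifier
zpool import -f 13987065176003032201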
I have the idea that this:
Bash:
zpool import -d /dev/sdb1 -f Safe-Data
zpool attach Safe-Data /dev/sdb1 /dev/sdd1
might work, but I would like to make sure that this is indeed the case, and also make sure that it does not cause an (avoidable) rebuild of the mirror onto /dev/sdd1.
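To be clear about what I would check afterwards, the plan is to verify the result with:
Bash:
zpool status -v Safe-Data   # expect mirror-0 with both disks ONLINE and no resilver running
zpool list Safe-Data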
Hope someone can shed some light on this and help me get my servers online after 2 days of downtime...