Dataset merging after failed recovery

societus

Member
Feb 19, 2023
I'd like to preface this by saying it isn't a mission-critical issue, since I restored from a backup to fix my immediate problem, but it got me thinking about some potential scenarios, and I'd like to know what people here think.

I ran PBS in a VM on my cluster with the primary disk on Ceph. Then I had a major stroke of bad luck: the power surged enough that my UPS couldn't maintain power to everything without a flicker. As one would expect, a power dip just long enough to cause a reboot coincided perfectly with PBS finishing a GRUB update, so the VM failed to boot.

I took several steps to try and restore it, since recovering a 4 TB drive would be a PITA time-wise. As an aside, the PBS install image's rescue mode really should be able to restore non-ZFS installs (especially since the default install uses LVM), but I never got past GRUB failing with "unable to read or write device 'hd0'", despite being able to fdisk the disk and read data off it from a live CD with no errors.

So I now have the "old disk", holding about two extra weeks of backups, that cannot boot, plus my restored backup. Here's what I'm curious about. Since both disks have the same PV and LV names, I can't mount both as datastores, but is there an effective way to move data from one drive to the other without breaking PBS? For example, you presumably can't just copy all the chunks and expect them to show up correctly, right? I thought about spinning up a new instance with a ZFS datastore so I could import the LVM datastore and sync it as a remote, but that seems like an unnecessary number of steps, and I'm not sure it would work.
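For reference, the "sync it as a remote" idea can be sketched with the standard PBS CLI. The remote name, host address, credentials, fingerprint, and datastore names below are all placeholders for this illustration; adjust them to your actual setup.

```shell
# On the new PBS instance: register the old instance as a remote.
# "old-pbs", the address, and credentials are placeholders.
proxmox-backup-manager remote create old-pbs \
    --host 192.0.2.10 \
    --auth-id 'root@pam' \
    --password 'secret' \
    --fingerprint 'aa:bb:cc:dd:ee:ff:00:11:22:33:44:55:66:77:88:99:aa:bb:cc:dd:ee:ff:00:11:22:33:44:55:66:77:88:99'

# One-off pull of all snapshots from the remote datastore ("old-store")
# into the local one ("local-store"), no sync-job schedule needed.
proxmox-backup-manager pull old-pbs old-store local-store
```

The pull deduplicates on the chunk level, so re-running it after an interruption only transfers what is missing.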
 
Hi,
you could rename the volume group, activate it, and then mount the filesystem the datastore is located on. From there, you can either add it to the new PBS instance as a pre-existing datastore and sync your snapshots to the other datastore, or rsync its contents into a new datastore (for the latter, make sure that no concurrent operations touch the datastore and that you start with an empty store).
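A minimal sketch of the rename-and-mount step described above. The new VG name, LV name, mount point, and datastore paths are placeholders, so verify the real names with `vgs`/`lvs` on your system first.

```shell
# Both disks carry identically named VGs, so rename the old disk's VG
# by UUID. "pbs-old" is a placeholder name.
vgs -o vg_name,vg_uuid            # find the UUID of the VG on the old disk
vgrename <VG-UUID> pbs-old
vgchange -ay pbs-old              # activate the renamed VG

# Mount the LV holding the datastore (LV name is an assumption;
# check with `lvs pbs-old`).
mkdir -p /mnt/old-datastore
mount /dev/pbs-old/root /mnt/old-datastore

# Option 1: register the existing directory as a datastore and sync
# snapshots locally ("old-store" and the path are placeholders).
proxmox-backup-manager datastore create old-store /mnt/old-datastore/path/to/datastore

# Option 2: rsync the contents into a fresh, empty datastore while
# nothing else touches either store (-a preserves ownership/permissions).
rsync -a /mnt/old-datastore/path/to/datastore/ /path/to/new/datastore/
```

Renaming by UUID avoids having to detach the conflicting disk; after the rename, both VGs can be active at the same time.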
 
I had thought of that but hesitated, because I didn't know whether the datastore contained all of its own metadata, or whether part of it lived on the PBS root disk. I'll give that a test run later on and see how it goes. Thanks!