I'd like to start by saying this isn't a mission-critical issue, since I restored from a backup to fix my immediate problem, but it got me thinking about some potential scenarios and I'd like to know what people here think.
I ran PBS in a VM on my cluster with the primary disk on Ceph. I had a major stroke of bad luck: a power surge was enough that my UPS couldn't keep everything running without a flicker. As one would expect, that brief power loss, just long enough to cause a reboot, coincided perfectly with PBS finishing a GRUB update, so it failed to boot.
I took a number of steps to try to restore it, since recovering a 4 TB drive would be a PITA time-wise. As an aside, the PBS install image's rescue mode really should be able to repair non-ZFS installs (especially since the default install uses LVM), but I never got past GRUB failing with "unable to read or write device 'hd0'", despite being able to fdisk the disk and read data off it from a live CD with no errors.
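For context, this is the kind of live-CD chroot repair I was attempting; the device name and the "pbs" VG/LV names are assumptions for a default LVM install, so adjust to the real layout:

```bash
# Rough sketch of a live-CD GRUB repair; /dev/sdX and the "pbs" VG/LV
# names are assumptions for a default LVM install.
vgchange -ay                          # activate the LVM volumes
mount /dev/pbs/root /mnt              # root LV of the PBS install
mount /dev/sdX2 /mnt/boot/efi         # EFI partition, if booting UEFI
for d in dev proc sys; do mount --bind /$d /mnt/$d; done
chroot /mnt
grub-install /dev/sdX                 # reinstall the bootloader
update-grub                           # regenerate the GRUB config
```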
So now I have the "old disk", which won't boot but holds roughly two extra weeks of backups, plus my restored backup. Here's where I'm curious: since both disks have the same VG and LV names, I can't mount both as datastores at once. Is there a sane way to move data from one drive to the other without breaking PBS? For example, you presumably can't just copy all the chunks over and expect everything to show up correctly, right? I thought about spinning up a new PBS instance with a ZFS-backed datastore so I could import the LVM datastore and sync it as a remote, but that seems like an unnecessary number of steps, and I'm not sure it would even work.
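To make the idea concrete, here's roughly what I'm picturing instead: rename the old disk's VG so both can be active at once, register it as a second datastore, and pull from it with a sync job. Every name below (pbs_old, /mnt/old-store, the "old" and "main" datastores, /dev/sdb) is a placeholder, and I'm not even sure the local sync-job part is supported on my version:

```bash
# Sketch only -- VG/LV/datastore names are assumptions, adjust to the real layout.
vgs -o vg_name,vg_uuid                        # spot the duplicate VG from the old disk
vgimportclone --basevgname pbs_old /dev/sdb   # rename the old disk's VG (and its UUIDs)
vgchange -ay pbs_old                          # activate the renamed VG

mkdir -p /mnt/old-store
mount /dev/pbs_old/root /mnt/old-store        # whichever LV actually holds the datastore
# register the old datastore directory (the one containing .chunks) as a second store
proxmox-backup-manager datastore create old /mnt/old-store

# then pull the extra ~2 weeks into the restored datastore
# (assumes this PBS version supports local sync jobs; otherwise add the
#  host to itself as a remote and sync that way)
proxmox-backup-manager sync-job create import-old --store main --remote-store old
```

Would something like that actually work, or does it break in ways I'm not seeing?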