Message: /sbin/zpool: symbol lookup error: /sbin/zpool: undefined symbol: thread_init (Error 127)

ryantboldt

Hello all, I'm back again because I've managed to brick my Proxmox again. I was attempting to finally update my Proxmox 5 setup and didn't realize how outdated I was. I ran into numerous issues resolving the repositories, even just updating 5 to its latest version before attempting the jump to 6. I kept trying various things and, after two hours, got it to update... something...

After a reboot I am presented with the error message in the title of this post. I found this thread...

https://forum.proxmox.com/threads/zfs-problem-sbin-zpool-undefined-symbol-thread_init.38500/

which describes a similar situation. I was trying to follow these steps after booting into an Ubuntu 16.04 live USB:

zpool import -N rpool
zfs set mountpoint=/mnt rpool/ROOT/pve-1
zfs mount rpool/ROOT/pve-1
mount -t proc /proc /mnt/proc
mount --rbind /dev /mnt/dev


# Enter the chroot
chroot /mnt /bin/bash
source /etc/profile
apt update && apt upgrade && apt dist-upgrade   <-- this installed a new kernel
zpool status   <-- test that zpool can be run OK
zfs set mountpoint=/ rpool/ROOT/pve-1
exit

After installing zfsutils-linux I noticed that the rpool was already imported; I noticed this after trying zpool import -N rpool. In fact, most of the rpool, including pve-1 and ROOT, was already mounted.

I created directories for /mnt/proc and /mnt/dev and was able to rbind /proc and /dev to them.

My issue now is when I try chroot /mnt /bin/bash I am getting: chroot: failed to run command '/bin/bash': No such file or directory

I wasn't sure where bash should be located... I see it on the live USB at /bin/bash, so I copied it to /mnt/bin/bash to see if it would help. It did not.

I apologize in advance for the embarrassment of my failures. I am just hoping to repair my rpool so that it can boot again. Thank you in advance for the help.
 
I rebooted the live USB to try again. I was able to mount the pve-1 dataset to /mnt but still get stuck at chroot; no matter what I do I keep getting "No such file or directory".
 
I made it a bit further. I found this:

mount -o bind /lib /mnt/lib
mount -o bind /lib64 /mnt/lib64

which allowed me to successfully chroot into /mnt.

However, now I am getting: error while loading shared libraries: libapt-pkg.so.5.0: cannot open shared object file: No such file or directory
 
I've made more progress. I decided to switch to a Debian 9 live USB,
following the steps listed here: https://openzfs.github.io/openzfs-d...tch Root on ZFS.html#rescuing-using-a-live-cd

# zpool export -a
# zpool import -N -R /mnt rpool
# zpool import -N -R /mnt bpool
# zfs mount rpool/ROOT/debian
# zfs mount -a

# mount --rbind /dev /mnt/dev
# mount --rbind /proc /mnt/proc
# mount --rbind /sys /mnt/sys
# chroot /mnt /bin/bash --login

I've been able to chroot into the rpool. I seem to be back at my old issue of troubleshooting updates/repositories. I am unsure what condition my install is in, or whether parts of it made it to Buster. I am trying to update the Stretch install and getting the attached image.
 

Attachments: i_have_no_idea_what_im_doing_meme_640_07.jpg, 20230508_104124.jpg
I managed to get something to update while in the chroot. Now when I try to boot Proxmox it makes it past importing the pool, I think; at least I'm not getting that error anymore. Now it's just a kernel panic.
 

Attachments: 20230508_121702(1).jpg
I went back into the Debian live USB and got a pveversion. I assume it's not good to be showing a proxmox-ve version of 7.4.1 while the manager shows 5.4.3, but what do I know. I also see that zfsutils-linux is not correctly installed. I'm going to wait and see if anyone here has any intelligent ideas instead of me just jabbing at the problem and making it worse.
 

Attachments: 20230508_130950.jpg
I decided to keep jabbing at it... at least I didn't succeed in making it worse. I went back into the Debian live USB and updated from 5 to 6, and then 6 to 7, using the chroot. My pveversion -v now at least shows everything updated, and even zfsutils-linux is installed properly now. However, when I try to boot I am still getting a kernel panic. I am officially out of ideas now. Thanks to anyone who offers any support.
 

Attachments: 20230508_152159.jpg, 20230508_152153.jpg, 20230508_152506(1).jpg
My entire house has been down for three days with this. Security cameras, television, and home automation. I wish I could figure this out.
 
however when i try to boot i am still getting a kernel panic.

Do you have an empty disk lying around that you could (to be safe) swap in for your current one(s) and test with a fresh PVE 7.4 installation on it?
With this, you could at least find out whether it is a problem between the newer kernel and your hardware, or your messed-up upgrade.

My entire house has been down for three days with this. Security cameras, television, and home automation. I wish I could figure this out.

My humble guess is that, at this point, no one is really able to follow everything you did and exactly what went wrong (at least not without investing a large amount of time, and maybe even needing remote access and, of course, the knowledge).

So, the most reasonable path might be a fresh PVE 7.4 installation and a restore from backups.
To be honest, this might also have been the best way three days ago, shortly after the upgrade mess started, instead of fiddling around for so long with a home-production system that you depend on...

PS.: The reason for your Debian 9/Stretch repository problem is this fact:
https://forum.proxmox.com/threads/a...longer-have-a-release-file.126447/post-552229
 
Thank you for taking the time to reply. I will gladly try a separate drive to ensure it is not a hardware issue. I feel like I'm so close to resolving this and that it is something to do with GRUB (I could be wrong).

I was looking into https://pve.proxmox.com/wiki/ZFS:_Switch_Legacy-Boot_to_Proxmox_Boot_Tool

I don't know if the fact that I had legacy GRUB with ZFS as root could cause this issue. When I run the boot tool status I'm seeing no UUIDs; not sure if this is an issue or not.
 

Attachments: 20230508_213257.jpg
however when i try to boot i am still getting a kernel panic. i am officially out of ideas now.
Sadly, it's really hard to see and compare the versions from the pictures you posted (although text copies are probably not easily possible in your situation). In general, text copies (in code tags) are much preferred for such issues.

A general recommendation (as @Neobin also suggested): make a backup (dd image) of the current disk state. That way you at least have something to go back to if things break even more!
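For reference, the dd-image backup can be taken from the live USB. This is a hedged sketch: /dev/sdX and the target path are placeholders for your actual disk and backup location, and the pool should be exported first. The copy-and-verify mechanics are demonstrated below on a scratch file so the snippet is safe to run as-is:

```shell
# On the live USB, with the pool exported (adjust device and target path):
#   dd if=/dev/sdX of=/mnt/backup/pve-disk.img bs=1M status=progress conv=noerror,sync
# Same mechanics, demonstrated on a harmless scratch file:
dd if=/dev/urandom of=/tmp/fake-disk.img bs=1M count=4 2>/dev/null
dd if=/tmp/fake-disk.img of=/tmp/fake-disk.backup.img bs=1M 2>/dev/null
# Verify the image is a byte-for-byte copy before trusting it:
cmp -s /tmp/fake-disk.img /tmp/fake-disk.backup.img && echo "backup verified"
```

`conv=noerror,sync` keeps dd going over read errors (relevant here, since a failing drive was later found to be involved), padding unreadable blocks instead of aborting.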

From what I see, the issue could really just be an incompatibility between your hardware and the newer kernel version (a lot has happened in the kernel since the days of pve-5.4).

Usually PVE keeps older kernel versions installed, and you should be able to select them in the GRUB screen (just enter the Advanced Options):
* Try booting an older kernel and see if things improve

If this does not work (the question would be why the old kernels are not available anymore), you can try installing the other kernel series we've had over the course of PVE 7:
* pve-kernel-5.11 (EOL and not supported anymore, but still better than nothing if it boots)
* pve-kernel-5.13
* pve-kernel-5.19 (this one was an opt-in kernel, which was also dropped)
* pve-kernel-6.1
* pve-kernel-6.2

Just install the meta-packages (they are named as written above, e.g. pve-kernel-5.11) in the chroot environment (since you now seem able to get into that), and again select them in the GRUB menu.
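The kernel-install step inside the chroot boils down to a few commands. A sketch, not to be run blindly: the meta-package names are the ones from the list above, and which series actually boots is hardware-dependent.

```shell
# Inside the chroot on the broken install (sketch):
#   apt update
#   apt install pve-kernel-6.1     # or another series from the list above
#   update-grub                    # regenerate the GRUB menu entries
# The candidate meta-package names from the list above, as install commands:
candidates="5.11 5.13 5.19 6.1 6.2"
for series in $candidates; do
    echo "apt install pve-kernel-$series"
done
```

Running `update-grub` afterwards ensures the freshly installed kernels show up as selectable entries in the GRUB menu.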

If this still does not work, installing a fresh 7.4 ISO on a fresh disk is a valid option (you can then rename your current rpool and import it, to use it as another storage in the new install).

Additionally, upgrading the BIOS quite often helps to fix such kernel panics.

Any chance to get some more context on when and during which phase the kernel-panic happens?

I hope this helps!
 
I didn't realize the photos were hard to read, as I had reduced their file size to fit as attachments on this forum. I know the GRUB menu shows a pve-kernel of 4.x-something (from memory; I'm not in front of it now). I am happy to copy the output of pveversion -v if you think it would help; I was just taking pictures since this was in the Debian live instance.

If this still does not work - The option of installing a fresh 7.4 ISO on a fresh disk is a valid option (you can then rename your current rpool and import it, to use it as another storage in the new install)
Since my current rpool seems to be intact, would I theoretically be able to access the guests as they were, once the pool is imported into the new 7.4 install?

Any chance to get some more context on when and during which phase the kernel-panic happens?
I can get more specific later today when I am in front of the system, but it happens very early in the boot process. I am able to see the GRUB menu / Proxmox splash screen, and then it says "loading linux pve version" (from memory)... the screen resolution changes briefly and then the text dumps with the kernel panic. It only takes 10-15 seconds after the initial GRUB screen for the panic to happen.
 
And thank you for the ideas!
 
didn't realize the photos were hard to read
They are readable well enough - but it's quite painful to manually scan and compare version strings (vs. copying the text into a file and running `diff` on it ;))

Since my current rpool seems to be intact, would I theoretically be able to access the guests as they were, once the pool is imported into the new 7.4 install?
Yes and no:
* The guests' disk data should be available in the zpool (if it is still OK - check the `zpool status` and `zfs list` outputs for more details; also consider running a scrub once the system boots up by itself)
** Depending on whether your guests are QEMU VMs or LXC containers, they are present either as disk images (zvols, named vm-<VMID>-disk-<diskid>) or as subvolumes (directory trees, named subvol-<VMID>-disk-<diskid>) - these you can copy, rename, and plug into another VM's config
* The guest configs themselves are stored in the pmxcfs (see the reference documentation for details - https://pve.proxmox.com/pve-docs/chapter-pmxcfs.html), which itself is backed by a sqlite DB. The easiest way to access that from a broken system might be to copy the sqlite database into a VM (running PVE), replace the live version there with it (after stopping pmxcfs), and start pmxcfs again. But as said: work with copies in a VM, as this has the potential to break the running system (thus, a VM) and the database (thus, a copy)
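That config.db shuffle, as a hedged sketch: the paths below are the standard PVE locations (verify on your system), and per the advice above everything should be done on copies inside a throwaway VM. The copy step itself is demonstrated on scratch files so the snippet is safe to run:

```shell
# In a rescue VM running PVE (sketch; work on copies only!):
#   systemctl stop pve-cluster                       # stop pmxcfs
#   cp /path/to/copied/config.db /var/lib/pve-cluster/config.db
#   systemctl start pve-cluster                      # /etc/pve repopulates
# The copy step itself, demonstrated on scratch files:
src=$(mktemp -d); dst=$(mktemp -d)
printf 'pmxcfs sqlite payload\n' > "$src/config.db"
cp "$src/config.db" "$dst/config.db"
cmp -s "$src/config.db" "$dst/config.db" && echo "config.db copied intact"
```

Once pmxcfs starts with the transplanted database, the guest `.conf` files should reappear under /etc/pve/qemu-server/ and /etc/pve/lxc/.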

it happens very early in the boot process... It only takes 10-15 seconds after the initial grub screen for the panic to happen.
Hm - that can (sadly) mean many things. Which hardware are you running? Check/search the forums for similar hardware; maybe someone has already run into this issue and found a workaround.

Good luck!
 
hm - can (sadly) mean many things - which hardware are you running?

Sadly, this hardware is extremely old. I was given a 24-thread system by a friend years ago, and it has been running 24/7 for some time. I obviously regret ever typing anything into the terminal a few days ago. If I make it out of this, I plan to upgrade to more modern hardware and refresh my backup strategy. This has already cost me sleep, and it's a mistake I don't want to make again.

I currently plan to investigate trying different kernels as you suggested. Thank you for your support.
 
Sadly this hardware is extremely old...
Hmm - rather dated hardware does sometimes exhibit issues with newer kernels (mostly because most kernel developers do not run such machines and thus don't run into the issues; additionally, older hardware is less likely to get BIOS updates that fix issues). In any case, checking whether any update is available is always worth it, and typing the CPU model (or server model, if applicable) into the forum search also should not hurt.

I currently plan to investigate trying different kernels as you suggested. Thank you for your support.
Sounds good - let us know how it goes!
 
I successfully added all of the suggested kernels in the chroot and was able to select them in the Proxmox boot menu. Each kernel panicked at the same spot. I am now going to leave the server running memtest overnight. I have noticed there are at least two newer versions of the motherboard BIOS, which I plan to upgrade to tomorrow to see if that helps.

I will also attempt a fresh Proxmox 7 install on a spare HDD to see if even a new install kernel panics on this hardware.

After that, I think I may want to try building an entirely new server, renaming my old rpool, and importing it into the new server. I am interested in locating my guest datasets. I know I've seen three subvolumes showing up on the existing rpool (while in the Debian live USB). I cannot remember if I had any containers; I've had this setup for so long, and I add and remove so many guests, that it's hard to remember (and I didn't properly document what I built). Off the top of my head I can think of no fewer than four guests I was actively using; one of them may have been a container.

If I am able to build a new server, import a renamed old rpool, and copy/move over the subvolumes and any container files, I would like to try the pmxcfs cloning as described earlier. My plan at that point would be to re-run backups of the guests to an external backup server, if everything moves over properly with the subvolumes and the pmxcfs cloning. Once I have successfully backed up the guests on this frankenserver with the cloned pmxcfs, I would start with a fresh install of Proxmox and restore from the newly created backups.

Would it matter whether I physically connect the drives of the old rpool to the new server to move the subvolumes and pmxcfs over? Or could I leave the drives in the existing hardware, boot into the Debian live USB as I have been doing, import the existing rpool there, and move the files to an intermediary file server, to then be copied down to the new server / new rpool?

thank you
 
Just back up your VMs / containers in PVE 5, set up PVE 7.4, and import your machines.
 
I might type a more polished response later, as I'm mentally drained right now, but somehow, against all odds, I SUCCESSFULLY RECOVERED MY 9 GUESTS FROM MY OLD SERVER!!! I couldn't / wouldn't give up hope. I've spent so much time over the last week trying to recover my guests, which had hours and hours of work in them. I tried so many things: I updated my BIOS, I installed every kernel known to man, I ran memtest. No matter what I did, the old Proxmox install would not boot.

I purchased hardware for a new server (much overdue, as I'll now admit my old server hardware came out in 2010 and drew nearly 300 watts across its dual CPUs).

I spent some time last night building the new server without thinking I'd be able to recover my guests on it. It was more of a "let's move on with life", with maybe a restore of a few guests from months-old backups.

Then I had a thought: maybe the old Proxmox pool would boot on new hardware. Worth a shot, right? Well, it too kernel panicked on every kernel I had installed. Interestingly, my original pool was striped and mirrored, so I decided to pull one drive from each mirror and essentially have two copies of the same pool: one left for safekeeping, and one to mess with on the new hardware.

I ended up installing Proxmox 7.4 on the new hardware on new drives, and began trying to import my old rpool under a new name. I stumbled about doing that for a while and somehow managed it; somehow managed to clone over my old config.db; somehow managed to manually edit the .conf files for each guest to point to the newly mounted old rpool... THEY BOOTED! Then I used "Move disk" to migrate them to the new drives, and immediately backed up the guests!

Now I just need to decide if I want to start over on the new server: reinstall Proxmox and restore from these backups. I still have some work to do configuring my quad NIC and cleaning up the hardware in the new box. I also have to figure out if I can convert my new Proxmox install from RAID 1 to RAID 10, as I ultimately want striping and mirroring and have the new drives to do it (I configured RAID 1 just to get started on the new install). If I can't convert, then I guess I'm reinstalling Proxmox and choosing RAID 10 from the start.

The bottom line is that if you mess up your Proxmox install but your zpool is in good shape, it IS POSSIBLE to recover from this. You can move your config.db and edit your .conf files to point to the imported old rpool and save your precious guests. There is hope if you put the time in and get very lucky.
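The rename-and-rewire steps described here can be sketched roughly as follows. The pool and storage names (`oldrpool`) and the VMID are illustrative, not the ones actually used; the real storage name to rewrite comes from /etc/pve/storage.cfg. The config rewrite itself is demonstrated on a scratch copy of a config line, so the runnable part is safe:

```shell
# After importing the old pool under a new name and registering it as storage
# (sketch; names are examples):
#   zpool import -f rpool oldrpool
#   pvesm add zfspool oldrpool --pool oldrpool
# Each guest config under /etc/pve/qemu-server/ then needs its storage
# references rewritten - shown here on a scratch copy of a config line:
conf=$(mktemp)
printf 'scsi0: local-zfs:vm-100-disk-0,size=32G\n' > "$conf"
sed -i 's/local-zfs:/oldrpool:/g' "$conf"
grep 'oldrpool:vm-100-disk-0' "$conf" && echo "config updated"
```

With the storage reference pointing at the imported pool, the guest should boot from its old zvol, after which "Move disk" can migrate it onto the new storage.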

In the meantime, I'll let this be a lesson to never sleep on your backup strategy. Ultimately, what started this for me was that one of the drives in the old server started showing sector errors (I didn't realize this until last night, when working with the old drives), which caused Proxmox to freeze on me last Saturday morning and started this whole mess.


I want to thank Stoiko for responding to my email and this thread, and the others who offered advice. What a great feeling!
 
