could not activate storage after a dist-upgrade [SOLVED]

novafreak69

Member
Virtual Environment 6.4-13
After rebooting following the dist-upgrade, none of my VMs start.

On the web console, under the pool's VM Disks view, I see this error: "could not activate storage 'Pool_1', zfs error: cannot import 'Pool_1': I/O error (500)"

JOURNAL OUTPUT:
-- Logs begin at Sat 2021-10-23 21:42:02 CDT, end at Sat 2021-10-23 21:57:22 CDT. --
Oct 23 21:57:22 novafreakVM zed[183450]: eid=638 class=zpool pool='Pool_1'
Oct 23 21:57:22 novafreakVM zed[183447]: eid=637 class=data pool='Pool_1' priority=2 err=52 flags=0x808881 bookmark=271:1:2:8
Oct 23 21:57:22 novafreakVM systemd[1]: Started Session 5 of user root.
Oct 23 21:57:22 novafreakVM systemd[1]: Started User Manager for UID 0.
Oct 23 21:57:22 novafreakVM systemd[183425]: Startup finished in 50ms.
Oct 23 21:57:22 novafreakVM systemd[183425]: Reached target Default.
Oct 23 21:57:22 novafreakVM systemd[183425]: Reached target Basic System.
Oct 23 21:57:22 novafreakVM systemd[183425]: Reached target Paths.
Oct 23 21:57:22 novafreakVM systemd[183425]: Reached target Sockets.
Oct 23 21:57:22 novafreakVM systemd[183425]: Listening on GnuPG network certificate management daemon.
Oct 23 21:57:22 novafreakVM systemd[183425]: Listening on GnuPG cryptographic agent and passphrase cache.
Oct 23 21:57:22 novafreakVM systemd[183425]: Listening on GnuPG cryptographic agent and passphrase cache (access for web browsers).
Oct 23 21:57:22 novafreakVM systemd[183425]: Reached target Timers.
Oct 23 21:57:22 novafreakVM systemd[183425]: Listening on GnuPG cryptographic agent and passphrase cache (restricted).
Oct 23 21:57:22 novafreakVM systemd[183425]: Listening on GnuPG cryptographic agent (ssh-agent emulation).
Oct 23 21:57:21 novafreakVM systemd[183425]: pam_unix(systemd-user:session): session opened for user root by (uid=0)
Oct 23 21:57:21 novafreakVM systemd[1]: Starting User Manager for UID 0...
Oct 23 21:57:21 novafreakVM systemd[1]: Started User Runtime Directory /run/user/0.
Oct 23 21:57:21 novafreakVM systemd-logind[4582]: New session 5 of user root.
Oct 23 21:57:21 novafreakVM systemd[1]: Starting User Runtime Directory /run/user/0...
Oct 23 21:57:21 novafreakVM systemd[1]: Created slice User Slice of UID 0.
Oct 23 21:57:21 novafreakVM sshd[181632]: pam_unix(sshd:session): session opened for user root by (uid=0)
Oct 23 21:57:21 novafreakVM sshd[181632]: Accepted password for root from 192.168.1.73 port 62177 ssh2
Oct 23 21:57:21 novafreakVM zed[183416]: eid=636 class=zpool pool='Pool_1'
Oct 23 21:57:21 novafreakVM zed[183409]: eid=635 class=data pool='Pool_1' priority=2 err=52 flags=0x808881 bookmark=271:1:2:8
Oct 23 21:57:21 novafreakVM pvestatd[5210]: could not activate storage 'Pool_1', zfs error: cannot import 'Pool_1': I/O error
Oct 23 21:57:21 novafreakVM pvestatd[5210]: zfs error: cannot open 'Pool_1': no such pool
Oct 23 21:57:21 novafreakVM zed[183234]: eid=634 class=zpool pool='Pool_1'
Oct 23 21:57:20 novafreakVM zed[182461]: eid=633 class=data pool='Pool_1' priority=2 err=52 flags=0x808881 bookmark=271:1:2:8
Oct 23 21:57:19 novafreakVM pvestatd[5210]: zfs error: cannot open 'Pool_1': no such pool
Oct 23 21:57:18 novafreakVM ceph-crash[4580]: [errno 2] error connecting to the cluster
Oct 23 21:57:18 novafreakVM ceph-crash[4580]: 2021-10-23 21:57:18.681 7fc90cd5b700 -1 AuthRegistry(0x7fc90cd59f38) no keyring found at /etc/pve/priv/ceph.client.crash.keyring, disabling cephx
Oct 23 21:57:18 novafreakVM ceph-crash[4580]: 2021-10-23 21:57:18.681 7fc90cd5b700 -1 auth: unable to find a keyring on /etc/pve/priv/ceph.client.crash.keyring: (2) No such file or directory
Oct 23 21:57:18 novafreakVM ceph-crash[4580]: 2021-10-23 21:57:18.681 7fc90cd5b700 -1 AuthRegistry(0x7fc9080dd8c0) no keyring found at /etc/pve/priv/ceph.client.crash.keyring, disabling cephx
Oct 23 21:57:18 novafreakVM ceph-crash[4580]: 2021-10-23 21:57:18.681 7fc90cd5b700 -1 auth: unable to find a keyring on /etc/pve/priv/ceph.client.crash.keyring: (2) No such file or directory
Oct 23 21:57:18 novafreakVM ceph-crash[4580]: 2021-10-23 21:57:18.677 7fc90cd5b700 -1 AuthRegistry(0x7fc9080410d8) no keyring found at /etc/pve/priv/ceph.client.crash.keyring, disabling cephx
Oct 23 21:57:18 novafreakVM ceph-crash[4580]: WARNING:__main__:post /var/lib/ceph/crash/2021-10-24_02:42:37.304114Z_edb81a57-3fa7-4ed7-89e8-e3c18919bf6c as client.crash failed: 2021-10-23 21:57:18.677 7fc90cd5b700 -1 auth: unable to find
Oct 23 21:57:18 novafreakVM ceph-crash[4580]: [errno 2] error connecting to the cluster
Oct 23 21:57:18 novafreakVM ceph-crash[4580]: 2021-10-23 21:57:18.585 7f0fe25ea700 -1 AuthRegistry(0x7f0fe25e8f38) no keyring found at /etc/pve/priv/ceph.client.crash.novafreakVM.keyring, disabling cephx
Oct 23 21:57:18 novafreakVM ceph-crash[4580]: 2021-10-23 21:57:18.585 7f0fe25ea700 -1 auth: unable to find a keyring on /etc/pve/priv/ceph.client.crash.novafreakVM.keyring: (2) No such file or directory
Oct 23 21:57:18 novafreakVM ceph-crash[4580]: 2021-10-23 21:57:18.585 7f0fe25ea700 -1 AuthRegistry(0x7f0fdc0dd980) no keyring found at /etc/pve/priv/ceph.client.crash.novafreakVM.keyring, disabling cephx
Oct 23 21:57:18 novafreakVM ceph-crash[4580]: 2021-10-23 21:57:18.585 7f0fe25ea700 -1 auth: unable to find a keyring on /etc/pve/priv/ceph.client.crash.novafreakVM.keyring: (2) No such file or directory
Oct 23 21:57:18 novafreakVM ceph-crash[4580]: 2021-10-23 21:57:18.569 7f0fe25ea700 -1 AuthRegistry(0x7f0fdc083518) no keyring found at /etc/pve/priv/ceph.client.crash.novafreakVM.keyring, disabling cephx
Oct 23 21:57:18 novafreakVM ceph-crash[4580]: WARNING:__main__:post /var/lib/ceph/crash/2021-10-24_02:42:37.304114Z_edb81a57-3fa7-4ed7-89e8-e3c18919bf6c as client.crash.novafreakVM failed: 2021-10-23 21:57:18.569 7f0fe25ea700 -1 auth: un
Oct 23 21:57:18 novafreakVM ceph-crash[4580]: WARNING:__main__:post /var/lib/ceph/crash/2021-10-24_02:42:47.550725Z_d28908de-7ce0-46fc-9441-5e68d2519434 as client.admin failed:
Oct 23 21:57:13 novafreakVM zed[181631]: eid=632 class=zpool pool='Pool_1'
Oct 23 21:57:13 novafreakVM zed[181628]: eid=631 class=data pool='Pool_1' priority=2 err=52 flags=0x808881 bookmark=271:1:2:8
Oct 23 21:57:12 novafreakVM zed[181619]: eid=630 class=zpool pool='Pool_1'
Oct 23 21:57:12 novafreakVM zed[181616]: eid=629 class=data pool='Pool_1' priority=2 err=52 flags=0x808881 bookmark=271:1:2:8
Oct 23 21:57:12 novafreakVM pvestatd[5210]: could not activate storage 'Pool_1', zfs error: cannot import 'Pool_1': I/O error
Oct 23 21:57:12 novafreakVM pvestatd[5210]: zfs error: cannot open 'Pool_1': no such pool
Oct 23 21:57:11 novafreakVM zed[181493]: eid=628 class=zpool pool='Pool_1'
Oct 23 21:57:11 novafreakVM zed[180749]: eid=627 class=data pool='Pool_1' priority=2 err=52 flags=0x808881 bookmark=271:1:2:8
Oct 23 21:57:10 novafreakVM pvestatd[5210]: zfs error: cannot open 'Pool_1': no such pool
Oct 23 21:57:03 novafreakVM zed[179942]: eid=626 class=zpool pool='Pool_1'
Oct 23 21:57:02 novafreakVM zed[179939]: eid=625 class=data pool='Pool_1' priority=2 err=52 flags=0x808881 bookmark=271:1:2:8
Oct 23 21:57:02 novafreakVM zed[179936]: eid=624 class=zpool pool='Pool_1'
Oct 23 21:57:01 novafreakVM zed[179929]: eid=623 class=data pool='Pool_1' priority=2 err=52 flags=0x808881 bookmark=271:1:2:8
Oct 23 21:57:01 novafreakVM systemd[1]: Failed to start Proxmox VE replication runner.
Oct 23 21:57:01 novafreakVM systemd[1]: pvesr.service: Failed with result 'exit-code'.
Oct 23 21:57:01 novafreakVM systemd[1]: pvesr.service: Main process exited, code=exited, status=255/EXCEPTION
Oct 23 21:57:01 novafreakVM pvesr[179002]: invalid json data in '/var/lib/pve-manager/pve-replication-state.json'
Oct 23 21:57:01 novafreakVM pvestatd[5210]: could not activate storage 'Pool_1', zfs error: cannot import 'Pool_1': I/O error
Oct 23 21:57:01 novafreakVM pvestatd[5210]: zfs error: cannot open 'Pool_1': no such pool
Oct 23 21:57:01 novafreakVM zed[179803]: eid=622 class=zpool pool='Pool_1'
Oct 23 21:57:00 novafreakVM systemd[1]: Started Cleanup of Temporary Directories.
Oct 23 21:57:00 novafreakVM systemd[1]: systemd-tmpfiles-clean.service: Succeeded.
Oct 23 21:57:00 novafreakVM systemd[1]: Starting Proxmox VE replication runner...
Oct 23 21:57:00 novafreakVM systemd[1]: Starting Cleanup of Temporary Directories...

root@novafreakVM:~# zpool status
no pools available

root@novafreakVM:~# zpool list
no pools available
root@novafreakVM:~# zpool import
   pool: Pool_1
     id: 5884478477829533997
  state: ONLINE
 status: Some supported features are not enabled on the pool.
 action: The pool can be imported using its name or numeric identifier, though
         some features will not be available without an explicit 'zpool upgrade'.
 config:

        Pool_1      ONLINE
          sdb       ONLINE
root@novafreakVM:~# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 136.1G 0 disk
├─sda1 8:1 0 1007K 0 part
├─sda2 8:2 0 512M 0 part
└─sda3 8:3 0 135.6G 0 part
├─pve-swap 253:0 0 8G 0 lvm [SWAP]
├─pve-root 253:1 0 33.8G 0 lvm /
├─pve-data_tmeta 253:2 0 1G 0 lvm
│ └─pve-data 253:4 0 75.9G 0 lvm
└─pve-data_tdata 253:3 0 75.9G 0 lvm
└─pve-data 253:4 0 75.9G 0 lvm
sdb 8:16 0 5.5T 0 disk
├─sdb1 8:17 0 5.5T 0 part
└─sdb9 8:25 0 8M 0 part
sr0 11:0 1 3.7G 0 rom
sr1 11:1 1 1024M 0 rom
root@novafreakVM:~#
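
For reference: zpool import with no arguments only lists importable pools, it does not import anything. A minimal sketch of the usual follow-up, assuming the pool name Pool_1 from the output above; a forced import of a damaged pool can make matters worse, so check the controller and disks first:

zpool import Pool_1                     # normal import by name
zpool import -f Pool_1                  # force the import if the pool was not cleanly exported
zpool import -o readonly=on -f Pool_1   # read-only import if the goal is only to copy data off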
 
Hi,

Can you post the output of zfs status -v?


Your log file also seems to show a failed Ceph storage... what kind of storage are you using?
 
I have since gotten the zpool to import with -f. I am not sure what happened, but it said my pool was fragmented and would have to go back to yesterday morning's image or something... I don't recall exactly... The zpool is now mounted and my VM disks are there... but they seem to be corrupt, as my two Ubuntu VMs and one Windows VM will not start up properly. My file server VM (Ubuntu) is in "Recovering journal" status now...


Also, status does not seem to be a valid subcommand for zfs.
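
For reference, pool health is reported by the zpool command rather than zfs, so the requested output would come from something along these lines (a sketch, using the pool from this thread):

zpool status -v      # pool/vdev state, scrub results, and per-device error counters
zfs list             # the datasets and zvols inside the imported pool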
 
root@novafreakVM:~# zfs list
NAME USED AVAIL REFER MOUNTPOINT
Pool_1 4.83T 505G 96K /Pool_1
Pool_1/vm-100-disk-0 103G 518G 90.7G -
Pool_1/vm-110-disk-0 103G 518G 90.6G -
Pool_1/vm-110-disk-1 4.53T 692G 4.35T -
Pool_1/vm-120-disk-0 103G 564G 44.2G -
root@novafreakVM:~#



I am running SAS RAID 5 on a Dell PowerEdge 820.
 
Every time I try to scrub the zpool, get history on the zpool, or export the zpool, the server seems to hang... and I have to reboot.
 
Sorry, I meant to say zpool status :s

Is your zpool in the exported state? Did you try zpool import -a -f (I think...)?
 
Have you seen this thread:
https://forum.proxmox.com/threads/howto-defrag-an-z

It says fragmentation isn't "real": the fragmentation value only affects new data writes.

What did you try for launching the scrub?

zpool scrub Pool_1
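
A short sketch of starting and watching the scrub, assuming the pool name from this thread; -s and -p are the documented switches for stopping or pausing it:

zpool scrub Pool_1       # start the scrub
zpool status Pool_1      # shows scrub progress and any errors it finds
zpool scrub -s Pool_1    # stop a running scrub
zpool scrub -p Pool_1    # pause it (newer OpenZFS releases); rerun zpool scrub Pool_1 to resume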

I could not find the definitions of the switches for the scrub command. And no, I have not seen that thread... I will go read it now. Thank you.

Yes, I did zpool import Pool_1 -f and it imported...
 
That did not really help me...

BUT I did find this...


root@novafreakVM:~# zpool get capacity,size,health,fragmentation
NAME    PROPERTY       VALUE     SOURCE
Pool_1  capacity       83%       -
Pool_1  size           5.45T     -
Pool_1  health         DEGRADED  -
Pool_1  fragmentation  42%       -

Pool health is degraded... Looking into that now...
 
root@novafreakVM:~# zpool status -v
  pool: Pool_1
 state: DEGRADED
status: One or more devices has experienced an error resulting in data
        corruption. Applications may be affected.
action: Restore the file in question if possible. Otherwise restore the
        entire pool from backup.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-8A
  scan: scrub repaired 380K in 06:08:38 with 1 errors on Sun Oct 24 18:06:30 2021
config:

        NAME        STATE     READ WRITE CKSUM
        Pool_1      DEGRADED     0     0     0
          sdb       DEGRADED     0     0    48  too many errors

errors: Permanent errors have been detected in the following files:

        Pool_1/vm-110-disk-0:<0x1>


I have cleared the errors for sdb on Pool_1, but I would like to scan and repair the disk... is that possible?
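
The usual clear-and-scrub cycle looks roughly like the following (a sketch using the names above). With a single-device vdev like this one, where the hardware RAID presents itself as one disk (sdb), ZFS can detect corruption during a scrub but has no ZFS-level redundancy to repair it from:

zpool clear Pool_1 sdb    # reset the read/write/checksum error counters on the device
zpool scrub Pool_1        # re-read and verify every block; repairs only where redundancy exists
zpool status -v Pool_1    # check whether the permanent errors are still listed afterwards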
 
UPDATE... My file server VM crashed...

Then I rebooted the Proxmox server... and now this...

The pool will not import.....
 

Attachments

  • pool no import.JPG
I can boot under kernel version 5.0.21-5 and import the pool read-only...

am I just not waiting long enough?
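
The read-only import being used here is roughly the following; -F, which discards the last few transactions to bring back an otherwise unimportable pool, is a more aggressive last resort and can be previewed with -n first (a sketch, not a recommendation):

zpool import -o readonly=on -f Pool_1   # read-only import for data recovery
zpool import -Fn Pool_1                 # dry run: would discarding recent transactions allow an import?
zpool import -F Pool_1                  # actually roll back the last transactions (loses the most recent writes)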
 
At this point I would just like to be able to recover the data from my VMs... and rebuild... is that possible?
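
Recovering the raw disk data is generally possible once the pool is imported read-only: the zvols backing the VM disks appear as block devices and can be copied off wholesale. A hypothetical sketch; the VM ID comes from the earlier zfs list output and /mnt/backup is an assumed destination:

zpool import -o readonly=on -f Pool_1
ls /dev/zvol/Pool_1/                     # one block device per vm-XXX-disk-Y zvol
dd if=/dev/zvol/Pool_1/vm-100-disk-0 of=/mnt/backup/vm-100-disk-0.raw bs=1M status=progress

(qemu-img convert -O qcow2 works against the same zvol device if a qcow2 image is preferred on the destination.)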
 
Any time I try to do anything with a zpool or zfs command, the session stops responding... I can start another SSH session...
 
Hi,

your last zpool status -v seems to say that sdb is dead...
If you're using a mirror or raidz, you're OK: just replace the disk and wait for resilvering...
But if you're using a stripe, it's definitely lost...
 
It was on a hardware RAID 5... I ended up mounting the pool read-only and copying the VM disks off to another server. Then I rebuilt my VE server, blew away the ZFS pool, and mounted the disk as local storage... Then I recreated the VMs from the VM conf files that I had also copied off, copied the recovered disks back in, changed each conf file to point to the copied-over disk files, and started them on the new host.

Recovered all the data... :D I also now have a NEWLY created Proxmox Backup Server... =P

This post can be marked resolved.
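
For anyone reproducing this recovery: the VM definitions live in /etc/pve/qemu-server/<vmid>.conf, and pointing a recreated VM at a copied-in disk image is a one-line change in that file. An illustrative before/after, not the poster's exact configuration; the storage name "local" and the size are assumptions:

Before (disk on the ZFS pool):
  scsi0: Pool_1:vm-110-disk-0,size=100G
After (disk copied onto directory storage):
  scsi0: local:110/vm-110-disk-0.raw,size=100G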
 
@novafreak69 please mark your post as solved
 
