Accidentally broke zroot

Bobbbb

Hi everyone, I made a terrible mistake... I thought I was fixing a pool on my TrueNAS server but was accidentally on my Proxmox server, and I forced an import of a pool (both drives are 1TB, same brand, so I didn't notice).

Anyway, this is the current status of my Proxmox server; I think I messed up the boot ZFS pool:

root@pve01:/mnt# zpool status -v
pool: rpool
state: ONLINE
scan: scrub repaired 0B in 01:10:32 with 0 errors on Sun May 14 01:34:33 2023
config:

NAME STATE READ WRITE CKSUM
rpool ONLINE 0 0 0
nvme-eui.ace42e817000ba82-part3 ONLINE 0 0 0

errors: No known data errors

pool: zroot
state: ONLINE
status: One or more devices has experienced an error resulting in data
corruption. Applications may be affected.
action: Restore the file in question if possible. Otherwise restore the
entire pool from backup.
see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-8A
config:

NAME STATE READ WRITE CKSUM
zroot ONLINE 0 0 0
zd240 ONLINE 0 0 0 block size: 4096B configured, 8192B native

errors: Permanent errors have been detected in the following files:

<metadata>:<0x0>
<metadata>:<0x1>
<metadata>:<0x104>
zroot/var/crash:<0x0>
zroot/var/log:<0x0>
zroot/var/tmp:<0x0>
zroot:<0x0>
zroot/tmp:<0x1>
zroot/usr/home:<0x0>
zroot/usr/ports:<0x0>
zroot/usr/src:<0x0>
zroot/var/audit:<0x0>
zroot/var/mail:<0x0>
root@pve01:/mnt#


Not looking great!!!

However, my Proxmox is still working fine and all the VMs and LXCs are up, but I assume the system won't boot up again?

Any advice will be appreciated, and maybe some off-topic positive vibes, as I am feeling very stupid right now... lol
 
Does Proxmox maybe take automatic snapshots of root if I installed it on ZFS?
Will a simple reinstall of the OS fix my problem?
 
Does Proxmox maybe take automatic snapshots of root if I installed it on ZFS?
No.

Will a simple reinstall of the OS fix my problem?
Proxmox is using rpool, and that is healthy according to zpool status. What's damaged is your zroot pool; is that part of your TrueNAS VM?
In that case I would restore a backup of that TrueNAS VM, or install TrueNAS again and restore your backed-up TrueNAS config file.
 
Maybe I didn't explain properly.
TrueNAS is fine... let's not worry about that :-)

This is my concern: the permanent errors on zroot shown in the zpool status -v output in my first post.

Also, as of today my web GUI doesn't come up.
The VMs seem OK.
 
What is that "zroot" pool then used for?
If you don't want it to be mounted on your PVE host you could export it again with zpool export zroot, so PVE won't complain at boot about the degraded pool.
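A minimal sketch of that export (assuming nothing on the host still uses the pool; the dataset names come from your status output above):

# see what the stray pool has mounted on the PVE host
zfs list -r -o name,mountpoint,mounted zroot
# release the pool from the host; this fails if a dataset is busy
zpool export zroot
# confirm it is gone from the host's view
zpool list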
 
What is that "zroot" pool then used for?
If you don't want it to be mounted on your PVE host you could export it again with zpool export zroot, so PVE won't complain at boot about the degraded pool.

I assumed it's part of Proxmox?
But if not, then I don't need it...
 
I assumed it's part of Proxmox?
But if not, then I don't need it...
No. And it is using "zd240" as its disk, which is not a physical disk but a zvol, i.e. a VM's virtual disk. So that pool is probably used by one of your VMs.
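If you want to confirm which VM that is, one way (just a sketch; the rpool/data path is the PVE default and may differ on your system) is to follow the /dev/zvol symlinks back from zd240. The matching zvol name (e.g. a name like vm-100-disk-0) contains the VM ID:

# the /dev/zvol tree maps zvol names to zdN devices via symlinks
ls -l /dev/zvol/rpool/data/
# or search directly for the symlink that points at zd240
find /dev/zvol -lname '*zd240'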
 
Export zroot

zpool export zroot


Please do TWO scrubs on your pool; chances are the errors will be cleared (ZFS keeps error records from the last two scrubs, so it takes two clean passes for stale entries to drop out of zpool status).

zpool scrub rpool
and again
zpool scrub rpool
zpool status -v
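A scrub runs in the background; to watch it finish (a generic sketch, nothing here is specific to this setup):

# poll the scan line until it reports 'scrub repaired ... with 0 errors'
watch -n 10 'zpool status rpool | grep scan'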

Good Luck
 
By the way... do you maybe run an OPNsense VM? Because the pool name and folders would fit ("zroot" with /usr/ports and /var/audit is the FreeBSD default layout, and OPNsense is FreeBSD-based). Then your OPNsense VM got corrupted data.
 
Always import an additional pool under an alternate root, so that your existing filesystems are not overmounted:

zpool import

pool: testpool-mirror
id: 17179772134263701461
state: ONLINE
status: The pool was last accessed by another system.
action: The pool can be imported using its name or numeric identifier and
the '-f' flag.
see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-EY
config:

testpool-mirror ONLINE
mirror-0 ONLINE


zpool import -f 17179772134263701461 -R /ALTROOT
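After inspecting the data under the alternate root, you can verify the setting and release the pool again (a sketch reusing the example pool name above):

# altroot is a pool property, so it can be checked after import
zpool get altroot testpool-mirror
# browse the datasets, then hand the pool back
zfs list -r testpool-mirror
zpool export testpool-mirror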
 
