Constant notifications from VM disk

Aluveitie

I started getting notifications every hour or so from my Proxmox machine:

Code:
ZFS has finished a scrub:

  eid: 35
class: scrub_finish
 host: server
 time: 2022-11-13 00:24:01+0100
 pool: data
state: ONLINE
status: One or more devices has experienced an error resulting in data
    corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
    entire pool from backup.
  see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-8A
 scan: scrub repaired 52K in 00:00:00 with 9 errors on Sun Nov 13 00:24:01 2022
config:

    NAME                    STATE     READ WRITE CKSUM
    data                    ONLINE       0     0     0
      pve-vm--102--disk--2  ONLINE       0     0    56

errors: 3 data errors, use '-v' for a list

That disk is assigned to a VM which uses ZFS on it. Proxmox itself has not mounted the ZFS volume, and the VM itself reports no issues.
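To double-check whether the host really has that pool imported (the pool name "data" is taken from the notification above; these are just the standard ZFS status commands), something like this should show it and the device backing it:

Code:
zpool list
zpool status -v data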
 
Looks like the Proxmox host mistakenly found the virtual disk (which is used by ZFS inside the VM) and checked it. The errors it detected are probably false, because it mistakenly assumes that nothing else is using the virtual disk. I would not have expected Proxmox's ZFS to scrub that virtual disk unless it was imported (zpool import data) on the Proxmox host, manually or automatically. Sorry, but I don't know how to fix this.
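If the pool does show up as imported on the host, exporting it there should stop the host from scrubbing and monitoring it. Only a sketch (pool name again taken from the notification), and only do it while the VM is shut down or no longer using that disk:

Code:
zpool export data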
 
I already disabled the cronjob for the bi-weekly scrub, but annoyingly the notification is still sent.
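For anyone looking for it: on my Proxmox/Debian install the scheduled scrub comes from the zfsutils-linux cron entry (the path may differ on other setups), so that is the file to check and comment out:

Code:
cat /etc/cron.d/zfsutils-linux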
 
I stopped and disabled zfs-zed, but I still get the notifications, though not as often anymore...
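In case it helps someone else, these are the commands I mean (assuming the standard systemd unit name zfs-zed.service):

Code:
systemctl stop zfs-zed.service
systemctl disable zfs-zed.service
systemctl status zfs-zed.service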
 
I found the disk was not used/mounted by that VM anymore, which is probably the reason it got picked up by the host.
I removed the disk and restarted the VM. But now I get this notification repeatedly:

Code:
ZFS has detected that a device was removed.

impact: Fault tolerance of the pool may be compromised.
   eid: 10
 class: statechange
 state: UNAVAIL
  host: server
  time: 2022-11-16 13:30:09+0100
 vpath: /dev/mapper/pve-vm--102--disk--2
 vguid: 0x38B5227D0F112D3D
  pool: 0x68BDA0B5C86550CD

But there is still no pool listed by zpool list...
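At that point the events presumably come from a stale entry in the cachefile rather than from an imported pool. If I remember correctly, zdb with no pool name reads /etc/zfs/zpool.cache, so this should show whether the removed disk is still listed there:

Code:
zdb -C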
 
This worked for me:
Code:
rm /etc/zfs/zpool.cache

Reboot (though restarting zed may be enough). The reboot may take a minute longer than usual. If your pool(s) fail to import, this will import them and recreate the cache using the updated values:
Code:
zpool import -a
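If you want to avoid the full reboot, the equivalent sequence without rebooting should be roughly this (just a sketch, assuming the standard zfs-zed.service unit; zpool import -a only re-imports pools that actually exist and rewrites the cache for them):

Code:
systemctl stop zfs-zed.service
rm /etc/zfs/zpool.cache
zpool import -a
systemctl start zfs-zed.service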
 
