Proxmox imports the VM's RAID as if it were its own!

Hi all, a quick question about Proxmox and hard drive passthrough, following this guide: https://pve.proxmox.com/wiki/Passthrough_Physical_Disk_to_Virtual_Machine_(VM)

I have a VM that builds a software RAID (RAIDZ1) out of the disks that are passed through to it. However, when I boot Proxmox, it reports a failed service:
Code:
root@Osiris:~# systemctl | grep pool
● zfs-import-scan.service                                                                                                             loaded failed     failed    Import ZFS pools by device scanning
  zfs-import.target                                                                                                                   loaded active     active    ZFS pool import target

So I followed the systemd logs and ran zpool import -f, which reports:

Code:
root@Osiris:~# zpool import -f
   pool: MassPool
     id: 9405729839500056935
  state: UNAVAIL
status: The pool was last accessed by another system.
 action: The pool cannot be imported due to damaged devices or data.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-EY
 config:

        MassPool                                  UNAVAIL  unsupported feature(s)
          raidz1-0                                ONLINE
            4f487b84-73b2-4d12-8ea2-30c9684cdec7  ONLINE
            7dcae52f-bbbc-415a-866e-8f8695b05d63  ONLINE
            752d05df-ff4d-4c26-af3c-eacdda1abec1  ONLINE
            b8888e29-1547-42fb-8987-56f7abb5340f  ONLINE
            dd407d59-0599-48a9-bb8b-4ddf3d843c62  ONLINE
        logs
          e726ccc1-dd53-4479-b4d0-1e53920687dd    ONLINE

   pool: NVME
     id: 9763989788774294682
  state: UNAVAIL
status: The pool was last accessed by another system.
 action: The pool cannot be imported due to damaged devices or data.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-EY
 config:

        NVME                                    UNAVAIL  unsupported feature(s)
          1e1b365e-4213-4dce-9ef5-afd686738c1f  ONLINE

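If I read the ZFS-8000-EY message correctly, the "last accessed by another system" status comes from a hostid mismatch: the VM wrote its own hostid into the pool labels, so the Proxmox host (which has a different hostid) refuses a plain import. This could presumably be verified by comparing the host's hostid with the one recorded in a member disk's ZFS label (device path taken from the zpool output above):

Code:
```shell
# Untested sketch: compare the Proxmox host's hostid with the hostid
# stored in the pool label of one of the passed-through member disks.
hostid
zdb -l /dev/disk/by-partuuid/4f487b84-73b2-4d12-8ea2-30c9684cdec7 | grep hostid
```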
The thing is, these pools belong to the VM, not to Proxmox... The host should not be touching those drives at all. How can I disable this?
Inside the VM, the pools are exported fine over NFS and are usable.
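For what it's worth, the host-side change I'm considering (untested; the service name is taken from the systemctl output above) would be to stop the host from scanning for importable pools at boot, so it leaves the passed-through disks alone:

Code:
```shell
# Untested sketch: stop the Proxmox host from scanning devices for ZFS
# pools at boot. zfs-import-scan.service is the unit shown failing above.
systemctl disable --now zfs-import-scan.service  # don't start it at boot
systemctl mask zfs-import-scan.service           # prevent other units from pulling it in
```

I would expect this to be safe as long as the host's own pools (if any) are imported via the cachefile rather than by device scanning, but I'd appreciate confirmation.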

Code:
root@Osiris:~# pveversion -v
proxmox-ve: 8.3.0 (running kernel: 6.2.16-5-pve)
pve-manager: 8.3.1 (running version: 8.3.1/fb48e850ef9dde27)
proxmox-kernel-helper: 8.1.0
pve-kernel-6.1: 7.3-4
proxmox-kernel-6.8: 6.8.12-4
proxmox-kernel-6.8.12-4-pve-signed: 6.8.12-4
proxmox-kernel-6.8.12-3-pve-signed: 6.8.12-3
pve-kernel-6.2.16-5-pve: 6.2.16-6
pve-kernel-6.1.10-1-pve: 6.1.10-1
amd64-microcode: 3.20240820.1~deb12u1
ceph-fuse: 16.2.11+ds-2
corosync: 3.1.7-pve3
criu: 3.17.1-2
glusterfs-client: 10.3-5
ifupdown2: 3.2.0-1+pmx11
ksm-control-daemon: 1.5-1
libjs-extjs: 7.0.0-5
libknet1: 1.28-pve1
libproxmox-acme-perl: 1.5.1
libproxmox-backup-qemu0: 1.4.1
libproxmox-rs-perl: 0.3.4
libpve-access-control: 8.2.0
libpve-apiclient-perl: 3.3.2
libpve-cluster-api-perl: 8.0.10
libpve-cluster-perl: 8.0.10
libpve-common-perl: 8.2.9
libpve-guest-common-perl: 5.1.6
libpve-http-server-perl: 5.1.2
libpve-network-perl: 0.10.0
libpve-rs-perl: 0.9.1
libpve-storage-perl: 8.2.9
libspice-server1: 0.15.1-1
lvm2: 2.03.16-2
lxc-pve: 6.0.0-1
lxcfs: 6.0.0-pve2
novnc-pve: 1.5.0-1
proxmox-backup-client: 3.3.0-1
proxmox-backup-file-restore: 3.3.0-1
proxmox-firewall: 0.6.0
proxmox-kernel-helper: 8.1.0
proxmox-mail-forward: 0.3.1
proxmox-mini-journalreader: 1.4.0
proxmox-widget-toolkit: 4.3.3
pve-cluster: 8.0.10
pve-container: 5.2.2
pve-docs: 8.3.1
pve-edk2-firmware: 4.2023.08-4
pve-esxi-import-tools: 0.7.2
pve-firewall: 5.1.0
pve-firmware: 3.14-1
pve-ha-manager: 4.0.6
pve-i18n: 3.3.2
pve-qemu-kvm: 9.0.2-4
pve-xtermjs: 5.3.0-3
qemu-server: 8.3.0
smartmontools: 7.3-pve1
spiceterm: 3.3.0
swtpm: 0.8.0+pve1
vncterm: 1.8.0
zfsutils-linux: 2.2.6-pve1