[SOLVED] VM cannot be deleted

Wendy2702

Hi,

I just wanted to quickly create a VM with Debian 10 for testing. This VM cannot be started and cannot be deleted either.

When deleting via the GUI I get this error:

Code:
TASK ERROR: timeout: no zvol device link for 'vm-105-disk-0' found after 300 sec found.


Code:
root@pve:~# cat /etc/pve/qemu-server/105.conf
boot: order=scsi0;ide2;net0
cores: 1
cpu: x86-64-v2-AES
ide2: local:iso/debian-10.3.0-amd64-netinst.iso,media=cdrom,size=335M
memory: 2048
meta: creation-qemu=8.1.2,ctime=1706553067
name: test
net0: virtio=BC:24:11:0D:A7:CD,bridge=vmbr0,firewall=1
numa: 0
ostype: l26
scsi0: Motioneye:vm-105-disk-0,iothread=1,size=4G
scsihw: virtio-scsi-single
smbios1: uuid=afa5d787-5943-4eeb-bcfe-6fda58076ff9
sockets: 1
vmgenid: b82acd02-7c72-4583-af33-9b6b085890d6
root@pve:~#

How do I get this VM deleted now? Can I simply delete the conf file?

Thanks and regards
 
Hi,
what do zpool status -v, zfs list, and cat /etc/pve/storage.cfg say? Are there any messages regarding ZFS or the volume in journalctl -b?
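For reference, here are the requested commands in one place; the grep filter on journalctl is only one possible way to narrow the output to ZFS/zvol messages, not required:

Code:
# pool health and layout
zpool status -v
# datasets and zvols that actually exist
zfs list
# storages configured in Proxmox VE
cat /etc/pve/storage.cfg
# messages from the current boot, filtered for ZFS/zvol (filter optional)
journalctl -b | grep -iE 'zfs|zvol'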
 
Hello, and thanks for your reply.

Here are the outputs of the commands:

Code:
root@pve:~# zpool status -v
  pool: Motioneye
 state: ONLINE
status: Some supported and requested features are not enabled on the pool.
        The pool can still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
        the pool may no longer be accessible by software that does not support
        the features. See zpool-features(7) for details.
  scan: scrub repaired 0B in 05:06:53 with 0 errors on Sun Jan 14 05:30:54 2024
config:

        NAME                                        STATE     READ WRITE CKSUM
        Motioneye                                   ONLINE       0     0     0
          ata-WDC_WD40PURZ-85TTDY0_WD-WCC7K6NZ1XLY  ONLINE       0     0     0

errors: No known data errors

  pool: Nextcloud
 state: ONLINE
status: Some supported and requested features are not enabled on the pool.
        The pool can still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
        the pool may no longer be accessible by software that does not support
        the features. See zpool-features(7) for details.
  scan: scrub repaired 0B in 00:12:11 with 0 errors on Sun Jan 14 00:36:14 2024
config:

        NAME                                STATE     READ WRITE CKSUM
        Nextcloud                           ONLINE       0     0     0
          ata-CT2000BX500SSD1_2245E682AADB  ONLINE       0     0     0

errors: No known data errors

Code:
root@pve:~# zfs list
NAME                          USED  AVAIL  REFER  MOUNTPOINT
Motioneye                    2.01T  1.51T   104K  /Motioneye
Motioneye/subvol-100-disk-0  2.00T  1.51T  2.00T  /Motioneye/subvol-100-disk-0
Motioneye/subvol-109-disk-0  4.66G  3.34G  4.66G  /Motioneye/subvol-109-disk-0
Nextcloud                     265G  1.50T   104K  /Nextcloud
Nextcloud/subvol-111-disk-0   265G  1.50T   265G  /Nextcloud/subvol-111-disk-0

Code:
root@pve:~# cat /etc/pve/storage.cfg
dir: local
        path /var/lib/vz
        content vztmpl,iso,backup

lvmthin: local-lvm
        thinpool data
        vgname pve
        content images,rootdir

zfspool: Nextcloud
        pool Nextcloud
        content rootdir,images
        mountpoint /Nextcloud
        nodes pve

zfspool: Motioneye
        pool Motioneye
        content images,rootdir
        mountpoint /Motioneye
        nodes pve

zfspool: omv
        pool omv
        content images,rootdir
        mountpoint /omv
        sparse 1

nfs: NAS-6TB
        export /export/NAS-6TB/proxmox_backups_56/
        path /mnt/pve/NAS-6TB
        server 192.168.178.21
        content backup
        prune-backups keep-all=1

root@pve:~#

Code:
Jan 30 11:06:59 pve pvestatd[1403]: zfs error: cannot open 'omv': no such pool
Jan 30 11:07:00 pve pvestatd[1403]: zfs error: cannot open 'omv': no such pool
Jan 30 11:07:00 pve pvestatd[1403]: could not activate storage 'omv', zfs error: cannot import 'omv': no such pool available
Jan 30 11:07:09 pve pvestatd[1403]: zfs error: cannot open 'omv': no such pool
Jan 30 11:07:09 pve pvestatd[1403]: zfs error: cannot open 'omv': no such pool
Jan 30 11:07:09 pve pvestatd[1403]: could not activate storage 'omv', zfs error: cannot import 'omv': no such pool available
Jan 30 11:07:19 pve pvestatd[1403]: zfs error: cannot open 'omv': no such pool
Jan 30 11:07:19 pve pvestatd[1403]: zfs error: cannot open 'omv': no such pool
Jan 30 11:07:19 pve pvestatd[1403]: could not activate storage 'omv', zfs error: cannot import 'omv': no such pool available
Jan 30 11:07:30 pve pvestatd[1403]: zfs error: cannot open 'omv': no such pool
Jan 30 11:07:30 pve pvestatd[1403]: zfs error: cannot open 'omv': no such pool
Jan 30 11:07:30 pve pvestatd[1403]: could not activate storage 'omv', zfs error: cannot import 'omv': no such pool available
Jan 30 11:07:40 pve pvestatd[1403]: zfs error: cannot open 'omv': no such pool
Jan 30 11:07:40 pve pvestatd[1403]: zfs error: cannot open 'omv': no such pool
Jan 30 11:07:40 pve pvestatd[1403]: could not activate storage 'omv', zfs error: cannot import 'omv': no such pool available
Jan 30 11:07:49 pve pvestatd[1403]: zfs error: cannot open 'omv': no such pool
Jan 30 11:07:49 pve pvestatd[1403]: zfs error: cannot open 'omv': no such pool
Jan 30 11:07:49 pve pvestatd[1403]: could not activate storage 'omv', zfs error: cannot import 'omv': no such pool available

The disks for that ZFS pool are currently not installed.

However, deleting a VM failed for the first time yesterday, even though the disks have not been connected for days.

It is not clear to me where the hard disk entry pointing to Motioneye comes from.
 
Code:
root@pve:~# zfs list
NAME                          USED  AVAIL  REFER  MOUNTPOINT
Motioneye                    2.01T  1.51T   104K  /Motioneye
Motioneye/subvol-100-disk-0  2.00T  1.51T  2.00T  /Motioneye/subvol-100-disk-0
Motioneye/subvol-109-disk-0  4.66G  3.34G  4.66G  /Motioneye/subvol-109-disk-0
Nextcloud                     265G  1.50T   104K  /Nextcloud
Nextcloud/subvol-111-disk-0   265G  1.50T   265G  /Nextcloud/subvol-111-disk-0
So the disk vm-105-disk-0 apparently no longer exists. Does it work if you detach the disk in the UI under the VM's Hardware tab (Detach) and then delete the VM afterwards?
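If the Detach in the UI runs into the same zvol timeout, roughly the following CLI steps should achieve the same thing (a sketch only, assuming VMID 105 and the scsi0 entry from the config above; double-check with qm config 105 before destroying anything):

Code:
# drop the dangling disk reference from the VM config
qm set 105 --delete scsi0
# verify nothing else references the missing zvol (look for unusedX entries)
qm config 105
# remove the VM and its config
qm destroy 105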
Code:
Jan 30 11:06:59 pve pvestatd[1403]: zfs error: cannot open 'omv': no such pool
Jan 30 11:07:00 pve pvestatd[1403]: could not activate storage 'omv', zfs error: cannot import 'omv': no such pool available
[...]

The disks for that ZFS pool are currently not installed.
In the UI under Datacenter > Storage > omv > Edit, or on the CLI with pvesm set omv --disable 1, you could disable the storage if you want to get rid of these errors. As soon as the storage should be active again, undo the change in the UI or with pvesm set omv --delete disable.
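For copy-paste, the CLI variant with the storage ID 'omv' as defined in storage.cfg above:

Code:
# stop pvestatd from trying to activate the missing pool
pvesm set omv --disable 1
# once the disks are installed again, remove the flag to re-enable the storage
pvesm set omv --delete disable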
 
Hi,

I was able to detach the disk, remove the unused disk entry, and then delete the VM.

Thanks for the help!!!

I also followed your hint and disabled the pool.

Thanks for that as well!
 
