[SOLVED] VM not created (not shown) but disks are


New Member
Oct 21, 2023
Recently I installed PVE (on an HP DL380, no hardware RAID) and tried to create a VM, but it was not created (ZFS with RAID1).

Here is the zfs get output for one of the disks:

root@vm:~# zfs get all rpool/data/vm-100-disk-0
NAME                      PROPERTY              VALUE                  SOURCE
rpool/data/vm-100-disk-0  type                  volume                 -
rpool/data/vm-100-disk-0  creation              Tue Nov 14 21:19 2023  -
rpool/data/vm-100-disk-0  used                  56K                    -
rpool/data/vm-100-disk-0  available             1.67T                  -
rpool/data/vm-100-disk-0  referenced            56K                    -
rpool/data/vm-100-disk-0  compressratio         1.00x                  -
rpool/data/vm-100-disk-0  reservation           none                   default
rpool/data/vm-100-disk-0  volsize               50G                    local
rpool/data/vm-100-disk-0  volblocksize          8K                     default
rpool/data/vm-100-disk-0  checksum              on                     default
rpool/data/vm-100-disk-0  compression           on                     inherited from rpool
rpool/data/vm-100-disk-0  readonly              off                    default
rpool/data/vm-100-disk-0  createtxg             33917                  -
rpool/data/vm-100-disk-0  copies                1                      default
rpool/data/vm-100-disk-0  refreservation        none                   default
rpool/data/vm-100-disk-0  guid                  9201050528404420326    -
rpool/data/vm-100-disk-0  primarycache          all                    default
rpool/data/vm-100-disk-0  secondarycache        all                    default
rpool/data/vm-100-disk-0  usedbysnapshots       0B                     -
rpool/data/vm-100-disk-0  usedbydataset         56K                    -
rpool/data/vm-100-disk-0  usedbychildren        0B                     -
rpool/data/vm-100-disk-0  usedbyrefreservation  0B                     -
rpool/data/vm-100-disk-0  logbias               latency                default
rpool/data/vm-100-disk-0  objsetid              1306                   -
rpool/data/vm-100-disk-0  dedup                 off                    default
rpool/data/vm-100-disk-0  mlslabel              none                   default
rpool/data/vm-100-disk-0  sync                  standard               inherited from rpool
rpool/data/vm-100-disk-0  refcompressratio      1.00x                  -
rpool/data/vm-100-disk-0  written               56K                    -
rpool/data/vm-100-disk-0  logicalused           28K                    -
rpool/data/vm-100-disk-0  logicalreferenced     28K                    -
rpool/data/vm-100-disk-0  volmode               default                default
rpool/data/vm-100-disk-0  snapshot_limit        none                   default
rpool/data/vm-100-disk-0  snapshot_count        none                   default
rpool/data/vm-100-disk-0  snapdev               hidden                 default
rpool/data/vm-100-disk-0  context               none                   default
rpool/data/vm-100-disk-0  fscontext             none                   default
rpool/data/vm-100-disk-0  defcontext            none                   default
rpool/data/vm-100-disk-0  rootcontext           none                   default
rpool/data/vm-100-disk-0  redundant_metadata    all                    default
rpool/data/vm-100-disk-0  encryption            off                    default
rpool/data/vm-100-disk-0  keylocation           none                   default
rpool/data/vm-100-disk-0  keyformat             none                   default
rpool/data/vm-100-disk-0  pbkdf2iters           0                      default

I tried to unmount and delete it:

root@vm:~# zfs umount rpool/data/vm-100-disk-0
cannot open 'rpool/data/vm-100-disk-0': operation not applicable to datasets of this type

root@vm:~# zfs destroy rpool/data/vm-100-disk-0
cannot destroy 'rpool/data/vm-100-disk-0': dataset is busy
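A note on the two errors above, inferred from the zfs output rather than stated in the thread: a zvol ("type: volume" in the zfs get output) is a block device with no mountpoint, so zfs umount does not apply to it, and "dataset is busy" typically means some process still holds the device open, most often the VM itself. A minimal sketch of the checks (dataset name and VMID are the ones from this thread; the qm and fuser calls are guarded so the sketch also runs off-host):

```shell
# Hedged troubleshooting sketch: why the two commands above fail, and how
# to check what is holding the zvol. Names are taken from this thread.
vol="rpool/data/vm-100-disk-0"
dev="/dev/zvol/${vol}"

# A zvol is a block device, not a filesystem, so 'zfs umount' does not
# apply -- hence "operation not applicable to datasets of this type".
echo "zvol block device: ${dev}"

# "dataset is busy" usually means a process still holds that device open,
# most often the VM itself:
if command -v qm >/dev/null 2>&1; then
    qm status 100                  # stop the VM first if it is running
else
    echo "qm unavailable here; run 'qm status 100' on the PVE node"
fi
if command -v fuser >/dev/null 2>&1 && [ -e "${dev}" ]; then
    fuser -v "${dev}"              # lists processes holding the device
else
    echo "run 'fuser -v ${dev}' on the PVE node"
fi

# Only once nothing holds the device should this succeed:
echo "then: zfs destroy ${vol}"
```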

root@vm:~# pvesm status
Name             Type     Status           Total            Used       Available        %
local             dir     active      1804593024         7915904      1796677120    0.44%
local-zfs     zfspool     active      1796677484             264      1796677220    0.00%
root@vm:~# cat /etc/pve/storage.cfg
dir: local
        path /var/lib/vz
        content iso,vztmpl,backup

zfspool: local-zfs
        pool rpool/data
        content images,rootdir

VM Config

root@vm:~# qm config 100
boot: order=scsi0;ide2;net0
cores: 4
cpu: x86-64-v2-AES
ide2: local:iso/ubuntu-22.04.2-live-server-amd64.iso,media=cdrom,size=1929660K
machine: q35
memory: 4048
meta: creation-qemu=8.0.2,ctime=1699985974
name: email
net0: virtio=F6:D2:C5:84:82:DB,bridge=vmbr0,firewall=1
numa: 0
onboot: 1
ostype: l26
scsi0: local-zfs:vm-100-disk-0,iothread=1,size=50G
scsihw: virtio-scsi-single
smbios1: uuid=b4929bdf-6204-4b56-b7c4-94b1320a85a1
sockets: 1
vmgenid: 7a030199-920b-4e71-940c-0375c18959a6

root@vm:~# pveversion -v
proxmox-ve: 8.0.1 (running kernel: 6.2.16-3-pve)
pve-manager: 8.0.3 (running version: 8.0.3/bbf3993334bfa916)
pve-kernel-6.2: 8.0.2
pve-kernel-6.2.16-3-pve: 6.2.16-3
ceph-fuse: 17.2.6-pve1+3
corosync: 3.1.7-pve3
criu: 3.17.1-2
glusterfs-client: 10.3-5
ifupdown2: 3.2.0-1+pmx2
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-3
libknet1: 1.25-pve1
libproxmox-acme-perl: 1.4.6
libproxmox-backup-qemu0: 1.4.0
libproxmox-rs-perl: 0.3.0
libpve-access-control: 8.0.3
libpve-apiclient-perl: 3.3.1
libpve-common-perl: 8.0.5
libpve-guest-common-perl: 5.0.3
libpve-http-server-perl: 5.0.3
libpve-rs-perl: 0.8.3
libpve-storage-perl: 8.0.1
libspice-server1: 0.15.1-1
lvm2: 2.03.16-2
lxc-pve: 5.0.2-4
lxcfs: 5.0.3-pve3
novnc-pve: 1.4.0-2
proxmox-backup-client: 2.99.0-1
proxmox-backup-file-restore: 2.99.0-1
proxmox-kernel-helper: 8.0.2
proxmox-mail-forward: 0.1.1-1
proxmox-mini-journalreader: 1.4.0
proxmox-widget-toolkit: 4.0.5
pve-cluster: 8.0.1
pve-container: 5.0.3
pve-docs: 8.0.3
pve-edk2-firmware: 3.20230228-4
pve-firewall: 5.0.2
pve-firmware: 3.7-1
pve-ha-manager: 4.0.2
pve-i18n: 3.0.4
pve-qemu-kvm: 8.0.2-3
pve-xtermjs: 4.16.0-3
qemu-server: 8.0.6
smartmontools: 7.3-pve1
spiceterm: 3.3.0
swtpm: 0.8.0+pve1
vncterm: 1.8.0
zfsutils-linux: 2.1.12-pve1

Thank you


It looks like the VM was created but does not show up in the UI?
What is the output of qm status 100?
If the VM is running, that would explain why you cannot delete the disk.

Please also provide the status of your ZFS pool: zpool status

As the UI is not listing VMs, let's verify whether the API lists them.
Please post the output of the following, replacing <<node-name>> with the name of your node:
pvesh get nodes/<<node-name>>/qemu
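As a small convenience, the node name is normally just the host's short hostname, so the command can be assembled directly; a sketch assuming a standard single-node setup (the pvesh call itself must run on the PVE node):

```shell
# Sketch: in PVE the node name is normally the short hostname, so the
# API call can be assembled without knowing the name in advance.
node="$(hostname -s 2>/dev/null || uname -n)"
cmd="pvesh get /nodes/${node}/qemu"
echo "run on the PVE node: ${cmd}"
# On the node this lists every QEMU VM the API knows about, even when
# the web UI fails to render them.
```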

To check whether anything went wrong when creating the VM, please check the system journal and, if anything looks relevant, upload the journal.txt file:
journalctl -x --since "2023-11-14" --until "2023-11-15" >| journal.txt
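If the full journal turns out to be large, it can help to narrow it to pvedaemon, the daemon that executes API tasks such as VM creation. A hedged sketch (the unit name is the standard one on PVE, but verify with systemctl list-units if unsure; the journalctl call is guarded so the sketch also runs where it is unavailable):

```shell
# Sketch: capture only pvedaemon's journal entries for the relevant days
# instead of the full system journal.
since="2023-11-14"; until_d="2023-11-15"
if command -v journalctl >/dev/null 2>&1; then
    journalctl -u pvedaemon --since "${since}" --until "${until_d}" > journal.txt
    echo "wrote journal.txt ($(wc -l < journal.txt) lines)"
else
    echo "journalctl unavailable; run this on the PVE node"
fi
```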
Thank you for your reply. My post had been waiting for approval for some days, I think.
Anyway, I restarted the VE two or three times, then shut the server down. The next day I started the server and saw the VMs in the UI.
I don't know what the issue was. If you still want me to run some checks, I'll probably do that later today.

Thank you so much for your reply.
Maybe this was just a UI issue where the page didn't reload properly.
If it works now, please mark the thread as solved. Thanks!
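Should the UI get stuck like this again, a low-risk first step besides a hard browser reload is restarting the UI/API services. A sketch assuming the standard PVE 8 service names, pveproxy (web UI) and pvedaemon (API); restarting these does not affect running VMs (the systemctl calls are guarded so the sketch also runs off-host):

```shell
# Sketch: restart the PVE web UI and API daemon if the UI stops
# reflecting reality. Running guests are untouched by this.
msgs=""
for svc in pveproxy pvedaemon; do
    if command -v systemctl >/dev/null 2>&1 \
       && systemctl cat "${svc}" >/dev/null 2>&1; then
        systemctl restart "${svc}" && msgs="${msgs}restarted ${svc}
"
    else
        msgs="${msgs}skipped ${svc} (not present here)
"
    fi
done
printf '%s' "${msgs}"
```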

