VM No bootable device after reboot

I have been running VMs for a while with no issues, but recently I started having problems with VMs not restarting properly.
I now have one specific VM that is completely stuck.

After a reboot I get:

Boot failed: not a bootable device ...

I have restored the last 3 backups, to no avail; it's always the same issue.

Scrolling through the forum, this seems to be a recurring issue with the partition table being dropped, but I couldn't find a way to rebuild it. Any pointers?

thx !
 
Hi,
the issue you mentioned affected virtual SATA disks and has been fixed in pve-qemu-kvm >= 8.0.2-7. If the VM was already booted with a newer version or does not use SATA, please post the VM configuration (qm config <ID>) and the output of pveversion -v.
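For reference, both can be run in the host shell (replace <ID> with the VM's numeric ID):

Code:
qm config <ID>
pveversion -v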

To recover the partition table, you might want to try TestDisk: https://www.cgsecurity.org/wiki/TestDisk
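Before attempting any repair, it is worth protecting the current state so the attempt can be undone. A minimal sketch, assuming the disk is a zvol on a ZFS pool (<pool> and <ID> are placeholders for your pool and VM ID):

Code:
# safety snapshot of the zvol before any repair attempt
zfs snapshot <pool>/vm-<ID>-disk-0@before-testdisk
# point TestDisk at the raw block device
testdisk /dev/zvol/<pool>/vm-<ID>-disk-0
# roll back if the attempt makes things worse
zfs rollback <pool>/vm-<ID>-disk-0@before-testdisk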
 
In my research I did see the SATA-related issue, but I am not running SATA, and I have an up-to-date version.
Here is the output of the commands.

thx


qm config

Code:
balloon: 4096
boot: c
bootdisk: scsi0
cipassword: **********
ciuser: **********
cores: 2
description:
ide2: FileZilla2:vm-1003-cloudinit,media=cdrom,size=4M
ipconfig0: ip=10.10.0.30/24,gw=10.10.0.1
memory: 8192
meta: creation-qemu=8.1.2,ctime=1701100366
name: Shun
nameserver: 10.10.10.10
net0: virtio=BC:24:11:03:C8:48,bridge=vmbr0
numa: 1
onboot: 1
scsi0: FileZilla2:vm-1003-disk-0,size=44544M
scsihw: virtio-scsi-pci
serial0: socket
smbios1: uuid=a15e6d39-fec4-466c-ae2b-c60e04e8a360
sockets: 2
sshkeys:
vga: serial0
vmgenid: 9ed939a9-38ac-46b5-9f67-4e22ada761f3

pveversion -v
Code:
proxmox-ve: 8.1.0 (running kernel: 6.5.11-6-pve)
pve-manager: 8.1.3 (running version: 8.1.3/b46aac3b42da5d15)
proxmox-kernel-helper: 8.1.0
pve-kernel-5.15: 7.4-7
proxmox-kernel-6.5: 6.5.11-6
proxmox-kernel-6.5.11-6-pve-signed: 6.5.11-6
proxmox-kernel-6.5.11-4-pve-signed: 6.5.11-4
proxmox-kernel-6.2.16-19-pve: 6.2.16-19
proxmox-kernel-6.2: 6.2.16-19
pve-kernel-5.15.126-1-pve: 5.15.126-1
pve-kernel-5.15.74-1-pve: 5.15.74-1
ceph-fuse: 17.2.7-pve1
corosync: 3.1.7-pve3
criu: 3.17.1-2
glusterfs-client: 10.3-5
ifupdown2: 3.2.0-1+pmx7
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-4
libknet1: 1.28-pve1
libproxmox-acme-perl: 1.5.0
libproxmox-backup-qemu0: 1.4.1
libproxmox-rs-perl: 0.3.1
libpve-access-control: 8.0.7
libpve-apiclient-perl: 3.3.1
libpve-common-perl: 8.1.0
libpve-guest-common-perl: 5.0.6
libpve-http-server-perl: 5.0.5
libpve-network-perl: 0.9.5
libpve-rs-perl: 0.8.7
libpve-storage-perl: 8.0.5
libspice-server1: 0.15.1-1
lvm2: 2.03.16-2
lxc-pve: 5.0.2-4
lxcfs: 5.0.3-pve3
novnc-pve: 1.4.0-3
proxmox-backup-client: 3.1.2-1
proxmox-backup-file-restore: 3.1.2-1
proxmox-kernel-helper: 8.1.0
proxmox-mail-forward: 0.2.2
proxmox-mini-journalreader: 1.4.0
proxmox-widget-toolkit: 4.1.3
pve-cluster: 8.0.5
pve-container: 5.0.8
pve-docs: 8.1.3
pve-edk2-firmware: 4.2023.08-2
pve-firewall: 5.0.3
pve-firmware: 3.9-1
pve-ha-manager: 4.0.3
pve-i18n: 3.1.4
pve-qemu-kvm: 8.1.2-4
pve-xtermjs: 5.3.0-2
qemu-server: 8.0.10
smartmontools: 7.3-pve1
spiceterm: 3.3.0
swtpm: 0.8.0+pve1
vncterm: 1.8.0
zfsutils-linux: 2.2.0-pve4
 
In my research I did see the SATA-related issue, but I am not running SATA, and I have an up-to-date version.
Here is the output of the commands.
Okay, then it's not the same issue.

Code:
scsi0: FileZilla2:vm-1003-disk-0,size=44544M
What kind of storage is FileZilla2?

What is the output of the following?
Code:
pvesm path FileZilla2:vm-1003-disk-0
fdisk -l /path/output/by/previous/command
 
Okay, then it's not the same issue.


What kind of storage is FileZilla2?

It is a ZFS pool: 2 SSDs mirrored.

What is the output of the following?
Code:
pvesm path FileZilla2:vm-1003-disk-0
fdisk -l /path/output/by/previous/command



root@saori:~# pvesm path FileZilla2:vm-1003-disk-0
/dev/zvol/FileZilla2/vm-1003-disk-0

root@saori:~# fdisk -l /dev/zvol/FileZilla2/vm-1003-disk-0
Disk /dev/zvol/FileZilla2/vm-1003-disk-0: 913 MiB, 957350400 bytes, 1869825 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 16384 bytes
I/O size (minimum/optimal): 16384 bytes / 16384 bytes
 
root@saori:~# fdisk -l /dev/zvol/FileZilla2/vm-1003-disk-0
Disk /dev/zvol/FileZilla2/vm-1003-disk-0: 913 MiB, 957350400 bytes, 1869825 sectors
That's a completely different size from what is recorded in the configuration: scsi0: FileZilla2:vm-1003-disk-0,size=44544M

Can you check zpool history | grep vm-1003-disk-0?
 
That's a completely different size from what is recorded in the configuration: scsi0: FileZilla2:vm-1003-disk-0,size=44544M

Can you check zpool history | grep vm-1003-disk-0?
root@saori:~# zpool history | grep vm-1003-disk-0
2023-11-27.17:07:50 zfs create -V 3670016k FileZilla2/vm-1003-disk-0
2023-11-27.17:12:59 zfs set volsize=45613056k FileZilla2/vm-1003-disk-0
2023-12-05.18:42:27 zfs destroy -r FileZilla2/vm-1003-disk-0
2023-12-05.18:42:27 zfs create -V 45613056k FileZilla2/vm-1003-disk-0
2023-12-05.18:44:06 zfs destroy -r FileZilla2/vm-1003-disk-0
2023-12-05.18:44:06 zfs create -V 45613056k FileZilla2/vm-1003-disk-0
2023-12-05.18:53:08 zfs destroy -r FileZilla2/vm-1003-disk-0
2023-12-05.18:53:08 zfs create -V 45613056k FileZilla2/vm-1003-disk-0
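
Side note, purely as an illustration: 45613056k is exactly the 44544M recorded in the VM config (44544 × 1024). If only the zvol's size had drifted, it could in principle be set back as below, but resizing alone would not recover a lost partition table or overwritten data.

Code:
# 44544 MiB × 1024 = 45613056 KiB, the size recorded in the VM config
zfs set volsize=45613056k FileZilla2/vm-1003-disk-0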
 
I am back on this one. I have been seeing the issue quite a bit now.
A few VMs just stop booting after a shutdown.
 
What do you get with zfs get all FileZilla2/vm-1003-disk-0? What does zpool status -v say? Is there anything in the system logs/journal?
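
Concretely, something like this (the journal window is only an example; adjust it to when the VM last failed to boot):

Code:
zfs get all FileZilla2/vm-1003-disk-0
zpool status -v
journalctl --since "2024-01-14" --until "2024-01-15"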
 
Here is the zpool status:

Code:
root@saori:~# zpool status -v
  pool: FileZilla
 state: ONLINE
status: Some supported and requested features are not enabled on the pool.
        The pool can still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
        the pool may no longer be accessible by software that does not support
        the features. See zpool-features(7) for details.
  scan: scrub repaired 0B in 00:02:31 with 0 errors on Sun Jan 14 00:26:32 2024
config:


        NAME                                            STATE     READ WRITE CKSUM
        FileZilla                                       ONLINE       0     0     0
          raidz1-0                                      ONLINE       0     0     0
            ata-INTEL_SSDSC2KG960G8_PHYG04810018960CGN  ONLINE       0     0     0
            ata-INTEL_SSDSC2KG960G8_PHYG1031065P960CGN  ONLINE       0     0     0
            ata-INTEL_SSDSC2KG960G8_PHYG1031065F960CGN  ONLINE       0     0     0
            ata-INTEL_SSDSC2KG960G8_PHYG108502KQ960CGN  ONLINE       0     0     0


errors: No known data errors


  pool: FileZilla2
 state: ONLINE
status: Some supported and requested features are not enabled on the pool.
        The pool can still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
        the pool may no longer be accessible by software that does not support
        the features. See zpool-features(7) for details.
  scan: scrub repaired 0B in 00:49:33 with 0 errors on Sun Jan 14 01:13:35 2024
config:


        NAME                                             STATE     READ WRITE CKSUM
        FileZilla2                                       ONLINE       0     0     0
          mirror-0                                       ONLINE       0     0     0
            ata-KINGSTON_SEDC500M1920G_50026B76839BEC3A  ONLINE       0     0     0
            ata-KINGSTON_SEDC500M1920G_50026B76839BEC2C  ONLINE       0     0     0


errors: No known data errors

I have removed vm-1003, but I am having the same issue on several VMs. Even cloning a working VM gives the same no-boot issue.


zfs get from another non-working VM:

Code:
root@saori:~# zfs get all FileZilla/vm-4051-disk-0
NAME                      PROPERTY              VALUE                  SOURCE
FileZilla/vm-4051-disk-0  type                  volume                 -
FileZilla/vm-4051-disk-0  creation              Sat Jan 13 17:27 2024  -
FileZilla/vm-4051-disk-0  used                  3.87G                  -
FileZilla/vm-4051-disk-0  available             2.37T                  -
FileZilla/vm-4051-disk-0  referenced            67.6M                  -
FileZilla/vm-4051-disk-0  compressratio         1.35x                  -
FileZilla/vm-4051-disk-0  reservation           none                   default
FileZilla/vm-4051-disk-0  volsize               3.50G                  local
FileZilla/vm-4051-disk-0  volblocksize          16K                    default
FileZilla/vm-4051-disk-0  checksum              on                     default
FileZilla/vm-4051-disk-0  compression           on                     inherited from FileZilla
FileZilla/vm-4051-disk-0  readonly              off                    default
FileZilla/vm-4051-disk-0  createtxg             7969587                -
FileZilla/vm-4051-disk-0  copies                1                      default
FileZilla/vm-4051-disk-0  refreservation        3.87G                  local
FileZilla/vm-4051-disk-0  guid                  10549505195974360930   -
FileZilla/vm-4051-disk-0  primarycache          all                    default
FileZilla/vm-4051-disk-0  secondarycache        all                    default
FileZilla/vm-4051-disk-0  usedbysnapshots       0B                     -
FileZilla/vm-4051-disk-0  usedbydataset         67.6M                  -
FileZilla/vm-4051-disk-0  usedbychildren        0B                     -
FileZilla/vm-4051-disk-0  usedbyrefreservation  3.81G                  -
FileZilla/vm-4051-disk-0  logbias               latency                default
FileZilla/vm-4051-disk-0  objsetid              96121                  -
FileZilla/vm-4051-disk-0  dedup                 off                    default
FileZilla/vm-4051-disk-0  mlslabel              none                   default
FileZilla/vm-4051-disk-0  sync                  standard               default
FileZilla/vm-4051-disk-0  refcompressratio      1.35x                  -
FileZilla/vm-4051-disk-0  written               67.6M                  -
FileZilla/vm-4051-disk-0  logicalused           79.0M                  -
FileZilla/vm-4051-disk-0  logicalreferenced     79.0M                  -
FileZilla/vm-4051-disk-0  volmode               default                default
FileZilla/vm-4051-disk-0  snapshot_limit        none                   default
FileZilla/vm-4051-disk-0  snapshot_count        none                   default
FileZilla/vm-4051-disk-0  snapdev               hidden                 default
FileZilla/vm-4051-disk-0  context               none                   default
FileZilla/vm-4051-disk-0  fscontext             none                   default
FileZilla/vm-4051-disk-0  defcontext            none                   default
FileZilla/vm-4051-disk-0  rootcontext           none                   default
FileZilla/vm-4051-disk-0  redundant_metadata    all                    default
FileZilla/vm-4051-disk-0  encryption            off                    default
FileZilla/vm-4051-disk-0  keylocation           none                   default
FileZilla/vm-4051-disk-0  keyformat             none                   default
FileZilla/vm-4051-disk-0  pbkdf2iters           0                      default
 
zfs get from another non-working VM:

Code:
root@saori:~# zfs get all FileZilla/vm-4051-disk-0
NAME                      PROPERTY              VALUE                  SOURCE
FileZilla/vm-4051-disk-0  referenced            67.6M                  -
Seems like it references less than 100 MiB of data. I assume that is not what you expect? What does this value look like for the VM you made the clone from?

Please share the full task log of the clone operation and check the syslog from around that time.
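
To pull the syslog around the clone, something along these lines works (the timestamps are placeholders for the task's start and end):

Code:
journalctl --since "YYYY-MM-DD HH:MM" --until "YYYY-MM-DD HH:MM"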
 
Correct! The value on the original is:
Code:
FileZilla2/vm-9000-disk-0  referenced            1.24G                  -
 
So I cloned a new VM:

Code:
root@saori:~# zfs get all FileZilla/vm-4052-disk-0
NAME                      PROPERTY              VALUE                  SOURCE
FileZilla/vm-4052-disk-0  type                  volume                 -
FileZilla/vm-4052-disk-0  creation              Mon Jan 15 12:48 2024  -
FileZilla/vm-4052-disk-0  used                  3.87G                  -
FileZilla/vm-4052-disk-0  available             2.37T                  -
FileZilla/vm-4052-disk-0  referenced            67.6M                  -
FileZilla/vm-4052-disk-0  compressratio         1.35x                  -
FileZilla/vm-4052-disk-0  reservation           none                   default
FileZilla/vm-4052-disk-0  volsize               3.50G                  local
FileZilla/vm-4052-disk-0  volblocksize          16K                    default
FileZilla/vm-4052-disk-0  checksum              on                     default
FileZilla/vm-4052-disk-0  compression           on                     inherited from FileZilla
FileZilla/vm-4052-disk-0  readonly              off                    default
FileZilla/vm-4052-disk-0  createtxg             8000146                -
FileZilla/vm-4052-disk-0  copies                1                      default
FileZilla/vm-4052-disk-0  refreservation        3.87G                  local
FileZilla/vm-4052-disk-0  guid                  1413114857490278010    -
FileZilla/vm-4052-disk-0  primarycache          all                    default
FileZilla/vm-4052-disk-0  secondarycache        all                    default
FileZilla/vm-4052-disk-0  usedbysnapshots       0B                     -
FileZilla/vm-4052-disk-0  usedbydataset         67.6M                  -
FileZilla/vm-4052-disk-0  usedbychildren        0B                     -
FileZilla/vm-4052-disk-0  usedbyrefreservation  3.81G                  -
FileZilla/vm-4052-disk-0  logbias               latency                default
FileZilla/vm-4052-disk-0  objsetid              123391                 -
FileZilla/vm-4052-disk-0  dedup                 off                    default
FileZilla/vm-4052-disk-0  mlslabel              none                   default
FileZilla/vm-4052-disk-0  sync                  standard               default
FileZilla/vm-4052-disk-0  refcompressratio      1.35x                  -
FileZilla/vm-4052-disk-0  written               67.6M                  -
FileZilla/vm-4052-disk-0  logicalused           79.0M                  -
FileZilla/vm-4052-disk-0  logicalreferenced     79.0M                  -
FileZilla/vm-4052-disk-0  volmode               default                default
FileZilla/vm-4052-disk-0  snapshot_limit        none                   default
FileZilla/vm-4052-disk-0  snapshot_count        none                   default
FileZilla/vm-4052-disk-0  snapdev               hidden                 default
FileZilla/vm-4052-disk-0  context               none                   default
FileZilla/vm-4052-disk-0  fscontext             none                   default
FileZilla/vm-4052-disk-0  defcontext            none                   default
FileZilla/vm-4052-disk-0  rootcontext           none                   default
FileZilla/vm-4052-disk-0  redundant_metadata    all                    default
FileZilla/vm-4052-disk-0  encryption            off                    default
FileZilla/vm-4052-disk-0  keylocation           none                   default
FileZilla/vm-4052-disk-0  keyformat             none                   default
FileZilla/vm-4052-disk-0  pbkdf2iters           0                      default

Same size issue:

Code:
create full clone of drive ide2 (FileZilla2:vm-9000-cloudinit)
create full clone of drive scsi0 (FileZilla2:vm-9000-disk-0)
transferred 0.0 B of 3.5 GiB (0.00%)
transferred 39.4 MiB of 3.5 GiB (1.10%)
transferred 78.5 MiB of 3.5 GiB (2.19%)
transferred 117.9 MiB of 3.5 GiB (3.29%)
transferred 157.0 MiB of 3.5 GiB (4.38%)
transferred 196.4 MiB of 3.5 GiB (5.48%)
transferred 235.5 MiB of 3.5 GiB (6.57%)
transferred 274.9 MiB of 3.5 GiB (7.67%)
transferred 314.0 MiB of 3.5 GiB (8.76%)
transferred 353.4 MiB of 3.5 GiB (9.86%)
transferred 392.4 MiB of 3.5 GiB (10.95%)
transferred 431.9 MiB of 3.5 GiB (12.05%)
transferred 470.9 MiB of 3.5 GiB (13.14%)
transferred 510.4 MiB of 3.5 GiB (14.24%)
transferred 549.4 MiB of 3.5 GiB (15.33%)
transferred 588.9 MiB of 3.5 GiB (16.43%)
transferred 627.9 MiB of 3.5 GiB (17.52%)
transferred 667.3 MiB of 3.5 GiB (18.62%)
transferred 706.8 MiB of 3.5 GiB (19.72%)
transferred 745.8 MiB of 3.5 GiB (20.81%)
transferred 785.3 MiB of 3.5 GiB (21.91%)
transferred 824.3 MiB of 3.5 GiB (23.00%)
transferred 863.7 MiB of 3.5 GiB (24.10%)
transferred 902.8 MiB of 3.5 GiB (25.19%)
transferred 942.2 MiB of 3.5 GiB (26.29%)
transferred 981.3 MiB of 3.5 GiB (27.38%)
transferred 1020.7 MiB of 3.5 GiB (28.48%)
transferred 1.0 GiB of 3.5 GiB (29.57%)
transferred 1.1 GiB of 3.5 GiB (30.67%)
transferred 1.1 GiB of 3.5 GiB (31.76%)
transferred 1.2 GiB of 3.5 GiB (32.86%)
transferred 1.2 GiB of 3.5 GiB (33.95%)
transferred 1.2 GiB of 3.5 GiB (35.05%)
transferred 1.3 GiB of 3.5 GiB (36.14%)
transferred 1.3 GiB of 3.5 GiB (37.24%)
transferred 1.3 GiB of 3.5 GiB (38.34%)
transferred 1.4 GiB of 3.5 GiB (39.43%)
transferred 1.4 GiB of 3.5 GiB (40.53%)
transferred 1.5 GiB of 3.5 GiB (41.62%)
transferred 1.5 GiB of 3.5 GiB (42.72%)
transferred 1.5 GiB of 3.5 GiB (43.81%)
transferred 1.6 GiB of 3.5 GiB (44.91%)
transferred 1.6 GiB of 3.5 GiB (46.00%)
transferred 1.6 GiB of 3.5 GiB (47.10%)
transferred 1.7 GiB of 3.5 GiB (48.19%)
transferred 1.7 GiB of 3.5 GiB (49.29%)
transferred 1.8 GiB of 3.5 GiB (50.38%)
transferred 1.8 GiB of 3.5 GiB (51.48%)
transferred 1.8 GiB of 3.5 GiB (52.57%)
transferred 1.9 GiB of 3.5 GiB (53.67%)
transferred 1.9 GiB of 3.5 GiB (54.76%)
transferred 2.0 GiB of 3.5 GiB (55.86%)
transferred 2.0 GiB of 3.5 GiB (56.96%)
transferred 2.0 GiB of 3.5 GiB (58.05%)
transferred 2.1 GiB of 3.5 GiB (59.15%)
transferred 2.1 GiB of 3.5 GiB (60.24%)
transferred 2.1 GiB of 3.5 GiB (61.34%)
transferred 2.2 GiB of 3.5 GiB (62.43%)
transferred 2.2 GiB of 3.5 GiB (63.53%)
transferred 2.3 GiB of 3.5 GiB (64.62%)
transferred 2.3 GiB of 3.5 GiB (65.72%)
transferred 2.3 GiB of 3.5 GiB (66.81%)
transferred 2.4 GiB of 3.5 GiB (67.91%)
transferred 2.4 GiB of 3.5 GiB (69.00%)
transferred 2.5 GiB of 3.5 GiB (70.10%)
transferred 2.5 GiB of 3.5 GiB (71.19%)
transferred 2.5 GiB of 3.5 GiB (72.29%)
transferred 2.6 GiB of 3.5 GiB (73.38%)
transferred 2.6 GiB of 3.5 GiB (74.48%)
transferred 2.6 GiB of 3.5 GiB (75.57%)
transferred 2.7 GiB of 3.5 GiB (76.67%)
transferred 2.7 GiB of 3.5 GiB (77.77%)
transferred 2.8 GiB of 3.5 GiB (78.86%)
transferred 2.8 GiB of 3.5 GiB (79.96%)
transferred 2.8 GiB of 3.5 GiB (81.05%)
transferred 2.9 GiB of 3.5 GiB (82.15%)
transferred 2.9 GiB of 3.5 GiB (83.24%)
transferred 3.0 GiB of 3.5 GiB (84.34%)
transferred 3.0 GiB of 3.5 GiB (85.43%)
transferred 3.0 GiB of 3.5 GiB (86.53%)
transferred 3.1 GiB of 3.5 GiB (87.62%)
transferred 3.1 GiB of 3.5 GiB (88.72%)
transferred 3.1 GiB of 3.5 GiB (89.81%)
transferred 3.2 GiB of 3.5 GiB (90.91%)
transferred 3.2 GiB of 3.5 GiB (92.00%)
transferred 3.3 GiB of 3.5 GiB (93.10%)
transferred 3.3 GiB of 3.5 GiB (94.19%)
transferred 3.3 GiB of 3.5 GiB (95.29%)
transferred 3.4 GiB of 3.5 GiB (96.39%)
transferred 3.4 GiB of 3.5 GiB (97.48%)
transferred 3.5 GiB of 3.5 GiB (98.58%)
transferred 3.5 GiB of 3.5 GiB (99.67%)
transferred 3.5 GiB of 3.5 GiB (100.00%)
transferred 3.5 GiB of 3.5 GiB (100.00%)
TASK OK

The task log shows nothing.
 
And from journalctl:

Code:
Jan 15 12:48:43 saori pvedaemon[3981730]: <root@pam> starting task UPID:saori:000607CD:06114E8F:65A51B9B:qmclone:9000:root@pam:
Jan 15 12:48:46 saori pvedaemon[3981730]: <root@pam> end task UPID:saori:000607CD:06114E8F:65A51B9B:qmclone:9000:root@pam: OK
 
It's unfortunate that there is no error. Can you share the output of
Code:
fdisk -l /dev/zvol/FileZilla2/vm-9000-disk-0
fdisk -l /dev/zvol/FileZilla/vm-4052-disk-0
qemu-img compare /dev/zvol/FileZilla/vm-4052-disk-0 /dev/zvol/FileZilla2/vm-9000-disk-0
cat /etc/pve/storage.cfg
The qemu-img compare command might take a bit.
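
As a quick supplementary check (not part of the requested commands), the raw device sizes can also be compared directly; blockdev is part of util-linux:

Code:
blockdev --getsize64 /dev/zvol/FileZilla2/vm-9000-disk-0
blockdev --getsize64 /dev/zvol/FileZilla/vm-4052-disk-0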
 
Code:
root@saori:~# fdisk -l /dev/zvol/FileZilla2/vm-9000-disk-0
Disk /dev/zvol/FileZilla2/vm-9000-disk-0: 913 MiB, 957350400 bytes, 1869825 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 8192 bytes
I/O size (minimum/optimal): 8192 bytes / 8192 bytes


root@saori:~# fdisk -l /dev/zvol/FileZilla/vm-4052-disk-0
Disk /dev/zvol/FileZilla/vm-4052-disk-0: 3.5 GiB, 3758096384 bytes, 7340032 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 16384 bytes
I/O size (minimum/optimal): 16384 bytes / 16384 bytes


Code:
root@saori:~# qemu-img compare /dev/zvol/FileZilla/vm-4052-disk-0 /dev/zvol/FileZilla2/vm-9000-disk-0
Warning: Image size mismatch!
Images are identical.


Code:
root@saori:~# cat /etc/pve/storage.cfg
dir: local
        path /var/lib/vz
        content iso,vztmpl,backup

lvmthin: local-lvm
        thinpool data
        vgname pve
        content images,rootdir

zfspool: FileZilla
        pool FileZilla
        content rootdir,images
        mountpoint /FileZilla
        sparse 0

zfspool: FileZilla2
        pool FileZilla2
        content rootdir,images
        mountpoint /FileZilla2
        sparse 0
 
Code:
root@saori:~# fdisk -l /dev/zvol/FileZilla2/vm-9000-disk-0
Disk /dev/zvol/FileZilla2/vm-9000-disk-0: 913 MiB, 957350400 bytes, 1869825 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 8192 bytes
I/O size (minimum/optimal): 8192 bytes / 8192 bytes
Hmm, the original disk seems smaller than it should be according to the clone operation. Is it supposed to contain partitions (fdisk would show those) or just a file system? You can additionally use
Code:
lsblk -o NAME,FSTYPE /dev/zvol/FileZilla2/vm-9000-disk-0
wipefs /dev/zvol/FileZilla2/vm-9000-disk-0
to check for that. Note that wipefs just prints labels and doesn't wipe anything if used without additional flags.
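
To make the distinction explicit (the second line is shown only for illustration, do not run it on these disks):

Code:
wipefs /dev/zvol/FileZilla2/vm-9000-disk-0       # read-only: lists signatures
wipefs -a /dev/zvol/FileZilla2/vm-9000-disk-0    # DESTRUCTIVE: erases all signatures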
 
Code:
root@saori:~# lsblk -o NAME,FSTYPE /dev/zvol/FileZilla2/vm-9000-disk-0
NAME     FSTYPE
zd400p16 ext4

Code:
root@saori:~# wipefs /dev/zvol/FileZilla2/vm-9000-disk-0
DEVICE         OFFSET TYPE UUID                                 LABEL
vm-9000-disk-0 0x438  ext4 43cf0ceb-1f9d-4018-89da-4f1585f40449 BOOT
 
Code:
root@saori:~# lsblk -o NAME,FSTYPE /dev/zvol/FileZilla2/vm-9000-disk-0
NAME     FSTYPE
zd400p16 ext4

Code:
root@saori:~# wipefs /dev/zvol/FileZilla2/vm-9000-disk-0
DEVICE         OFFSET TYPE UUID                                 LABEL
vm-9000-disk-0 0x438  ext4 43cf0ceb-1f9d-4018-89da-4f1585f40449 BOOT
What if you run these for the cloned image?
 
And for completeness:

Code:
root@saori:~# wipefs /dev/zvol/FileZilla/vm-4052-disk-0
DEVICE         OFFSET TYPE UUID                                 LABEL
vm-4052-disk-0 0x438  ext4 43cf0ceb-1f9d-4018-89da-4f1585f40449 BOOT
root@saori:~# wipefs /dev/zvol/FileZilla2/vm-9000-disk-0
DEVICE         OFFSET TYPE UUID                                 LABEL
vm-9000-disk-0 0x438  ext4 43cf0ceb-1f9d-4018-89da-4f1585f40449 BOOT

Code:
root@saori:~# lsblk -o NAME,FSTYPE /dev/zvol/FileZilla/vm-4052-disk-0
NAME FSTYPE
zd96 ext4
root@saori:~# lsblk -o NAME,FSTYPE /dev/zvol/FileZilla2/vm-9000-disk-0
NAME     FSTYPE
zd400p16 ext4
 
