[SOLVED] After update/reboot can't start VM from 2nd drive

DerGärtner

Member
Hello!

pveversion -v:
proxmox-ve: 6.4-1 (running kernel: 5.4.157-1-pve)
pve-manager: 6.4-13 (running version: 6.4-13/9f411e79)
pve-kernel-5.4: 6.4-11
pve-kernel-helper: 6.4-11
pve-kernel-5.4.157-1-pve: 5.4.157-1
pve-kernel-5.4.140-1-pve: 5.4.140-1
pve-kernel-5.4.124-1-pve: 5.4.124-2
pve-kernel-5.4.73-1-pve: 5.4.73-1
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.1.5-pve2~bpo10+1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: 0.8.35+pve1
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.22-pve2~bpo10+1
libproxmox-acme-perl: 1.1.0
libproxmox-backup-qemu0: 1.1.0-1
libpve-access-control: 6.4-3
libpve-apiclient-perl: 3.1-3
libpve-common-perl: 6.4-4
libpve-guest-common-perl: 3.1-5
libpve-http-server-perl: 3.2-3
libpve-storage-perl: 6.4-1
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.6-2
lxcfs: 4.0.6-pve1
novnc-pve: 1.1.0-1
proxmox-backup-client: 1.1.13-2
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.6-1
pve-cluster: 6.4-1
pve-container: 3.3-6
pve-docs: 6.4-2
pve-edk2-firmware: 2.20200531-1
pve-firewall: 4.1-4
pve-firmware: 3.3-2
pve-ha-manager: 3.1-1
pve-i18n: 2.3-1
pve-qemu-kvm: 5.2.0-6
pve-xtermjs: 4.7.0-3
qemu-server: 6.4-2
smartmontools: 7.2-pve2
spiceterm: 3.1-1
vncterm: 1.6-2
zfsutils-linux: 2.0.6-pve1~bpo10+1

fdisk -l:

Disk /dev/sda: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors
Disk model: WDC WDS100T1R0B
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: ****

Device Start End Sectors Size Type
/dev/sda1 34 2047 2014 1007K BIOS boot
/dev/sda2 2048 1050623 1048576 512M EFI System
/dev/sda3 1050624 1953525134 1952474511 931G Linux LVM


Disk /dev/sdb: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
Disk model: Samsung SSD 870
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes



Disk /dev/mapper/pve-swap: 8 GiB, 8589934592 bytes, 16777216 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/pve-root: 96 GiB, 103079215104 bytes, 201326592 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/pve-vm--201--disk--0: 8 GiB, 8589934592 bytes, 16777216 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 65536 bytes / 65536 bytes


Disk /dev/mapper/pve-vm--202--disk--0: 8 GiB, 8589934592 bytes, 16777216 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 65536 bytes / 65536 bytes

[... ...]

Disk /dev/mapper/pve-vm--100--disk--0: 32 GiB, 34359738368 bytes, 67108864 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 65536 bytes / 65536 bytes
Disklabel type: dos
Disk identifier: ****

Device Boot Start End Sectors Size Id Type
/dev/mapper/pve-vm--100--disk--0-part1 * 2048 999423 997376 487M 83 Linux
/dev/mapper/pve-vm--100--disk--0-part2 1001470 67106815 66105346 31.5G 5 Extended
/dev/mapper/pve-vm--100--disk--0-part5 1001472 67106815 66105344 31.5G 8e Linux LVM

Partition 2 does not start on physical sector boundary.


Disk /dev/mapper/pve-vm--300--disk--0: 13 GiB, 13958643712 bytes, 27262976 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 65536 bytes / 65536 bytes
Disklabel type: dos
Disk identifier: ****

Device Boot Start End Sectors Size Id Type
/dev/mapper/pve-vm--300--disk--0-part1 * 2048 999423 997376 487M 83 Linux
/dev/mapper/pve-vm--300--disk--0-part2 1001470 27260927 26259458 12.5G 5 Extended
/dev/mapper/pve-vm--300--disk--0-part5 1001472 27260927 26259456 12.5G 8e Linux LVM

Partition 2 does not start on physical sector boundary.


Disk /dev/mapper/pve-vm--203--disk--0: 30 GiB, 32212254720 bytes, 62914560 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 65536 bytes / 65536 bytes


Disk /dev/mapper/pve-vm--210--disk--0: 30 GiB, 32212254720 bytes, 62914560 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 65536 bytes / 65536 bytes


Disk /dev/mapper/pve-vm--211--disk--0: 16 GiB, 17179869184 bytes, 33554432 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 65536 bytes / 65536 bytes


Disk /dev/mapper/pve-vm--101--disk--0: 40 GiB, 42949672960 bytes, 83886080 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 65536 bytes / 65536 bytes


Disk /dev/mapper/pve-vm--102--disk--0: 20 GiB, 21474836480 bytes, 41943040 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 65536 bytes / 65536 bytes
Disklabel type: dos
Disk identifier: ****

Device Boot Start End Sectors Size Id Type
/dev/mapper/pve-vm--102--disk--0-part1 * 2048 14614527 14612480 7G 83 Linux
/dev/mapper/pve-vm--102--disk--0-part2 14616574 41940991 27324418 13G 5 Extended
/dev/mapper/pve-vm--102--disk--0-part5 14616576 16615423 1998848 976M 82 Linux swap / Solaris
/dev/mapper/pve-vm--102--disk--0-part6 16617472 41940991 25323520 12.1G 83 Linux

Partition 2 does not start on physical sector boundary.


Disk /dev/mapper/pve-vm--110--disk--0: 50 GiB, 53687091200 bytes, 104857600 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 65536 bytes / 65536 bytes
Disklabel type: dos
Disk identifier: ****

Device Boot Start End Sectors Size Id Type
/dev/mapper/pve-vm--110--disk--0-part1 * 2048 35588095 35586048 17G 83 Linux
/dev/mapper/pve-vm--110--disk--0-part2 35590142 104855551 69265410 33G 5 Extended
/dev/mapper/pve-vm--110--disk--0-part5 35590144 37588991 1998848 976M 82 Linux swap / Solaris
/dev/mapper/pve-vm--110--disk--0-part6 37591040 104855551 67264512 32.1G 83 Linux

Partition 2 does not start on physical sector boundary.


I did a (kernel) update last night. After rebooting my server, everything works fine except starting a Debian-based VM from my 2nd SSD (local-lvm2).
Bash:
kvm: -drive file=/dev/datb/vm-900-disk-0,if=none,id=drive-scsi0,format=raw,cache=none,aio=native,detect-zeroes=on: Could not open '/dev/datb/vm-900-disk-0': No such file or directory
TASK ERROR: start failed: QEMU exited with code 1
All other CTs/VMs (which are located on my 1st SSD [local-lvm]) are working fine.
I can see the 2nd SSD in the web UI with the status "available". The "vm-900-disk-0" is also listed under "VM Disks", but it looks like local-lvm2 is empty.
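
For reference, a minimal set of checks to see whether the volume group from the error message is still visible (the VG name `datb` is taken from the error above; everything else is standard LVM/util-linux tooling):
Bash:
# is the second SSD still visible as a block device?
lsblk /dev/sdb

# does LVM still see a PV / VG / LV on it?
pvs
vgs
lvs

# does the device node from the error message exist?
ls -l /dev/datb/

# if the VG shows up but is inactive, activating it may bring the node back
vgchange -ay datb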

Thanks for your help!
 
I don't know why this happened after the upgrade, but the partition table was lost.
Bash:
~: sfdisk -d /dev/sdb
sfdisk: /dev/sdb: does not contain a recognized partition table
 
I rewrote the lost partition table with TestDisk, and after a reboot (without any errors) I can see the partition again under Disks.

vgscan:
Bash:
~# vgscan
  Reading all physical volumes.  This may take a while...
  Found volume group "ts-vg" using metadata type lvm2
  Found volume group "pve" using metadata type lvm2

But the LVM thin pool is gone and its status shows 'unknown'. Can someone please help me get my thin pool back or create a new one? Thanks!
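
One thing that may still be worth checking before recreating anything: LVM keeps metadata backups under /etc/lvm/backup and /etc/lvm/archive, so if the old VG (called `datb` according to the earlier error) still has an archive entry there, a restore attempt could look roughly like this (<archive-file> is a placeholder for a file listed by the first command):
Bash:
# list archived metadata versions for the old VG, if any exist
vgcfgrestore --list datb

# thin-pool metadata can only be restored with --force, and the pool must be inactive
vgcfgrestore --force -f /etc/lvm/archive/<archive-file> datb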
 
Please provide the output of pvs, vgs and lvs -a.

If possible, please also attach the journal since the last boot: journalctl -b > journal.txt (attach the resulting journal.txt file)
 
Hello mira!

Bash:
~# pvs
  PV         VG    Fmt  Attr PSize    PFree
  /dev/sda3  pve   lvm2 a--  <931.01g 15.99g
  /dev/sdb1  ts-vg lvm2 a--    <1.78t     0

Bash:
~# vgs
  VG    #PV #LV #SN Attr   VSize    VFree
  pve     1  19   0 wz--n- <931.01g 15.99g
  ts-vg   1   3   0 wz--n-   <1.78t     0

Bash:
~# lvs -a
  LV                               VG    Attr       LSize    Pool Origin                          Data%  Meta%  Move Log Cpy%Sync Convert
  data                             pve   twi-aotz-- <794.79g                                      6.39   0.57                           
  [data_tdata]                     pve   Twi-ao---- <794.79g                                                                            
  [data_tmeta]                     pve   ewi-ao----    8.11g                                                                            
  [lvol0_pmspare]                  pve   ewi-------    8.11g                                                                            
  root                             pve   -wi-ao----   96.00g                                                                                           
  swap                             pve   -wi-ao----    8.00g                                                                            
  vm-100-disk-0                    pve   Vwi-a-tz--   32.00g data                                 18.98                                 
  vm-101-disk-0                    pve   Vwi-a-tz--   40.00g data                                 0.00                                  
  vm-102-disk-0                    pve   Vwi-aotz--   20.00g data                                 11.40                                 
  vm-110-disk-0                    pve   Vwi-aotz--   50.00g data                                 14.04                                 
  vm-200-disk-0                    pve   Vwi-aotz--   50.00g data                                 15.77                                 
  vm-201-disk-0                    pve   Vwi-aotz--    8.00g data                                 66.53                                 
  vm-202-disk-0                    pve   Vwi-aotz--    8.00g data                                 30.28                                 
  vm-203-disk-0                    pve   Vwi-a-tz--   30.00g data                                 8.19                                  
  vm-208-disk-0                    pve   Vwi-aotz--   30.00g data                                 9.26                                  
  vm-209-disk-0                    pve   Vwi-aotz--   30.00g data                                 6.94                                  
  vm-210-disk-0                    pve   Vwi-aotz--   30.00g data                                 7.00                                  
  vm-211-disk-0                    pve   Vwi-a-tz--   16.00g data                                 13.33                                 
  vm-300-disk-0                    pve   Vwi-a-tz--   13.00g data                                 11.72                                 
  home                             ts-vg -wi-a-----    1.75t                                                                            
  root                             ts-vg -wi-a-----  <27.94g                                                                            
  swap_1                           ts-vg -wi-a-----  976.00m

Thank you for your help!
 

Attachments

  • journal.txt (214.9 KB)
Could it be your disk had a second partition with a VG on top called `datb`?
In your first post you mentioned the following: kvm: -drive file=/dev/datb/vm-900-disk-0,...

Please also provide your storage config (/etc/pve/storage.cfg) and the VM config (qm config 900).

If you have a backup, I'd suggest restoring. Not sure there's anything left to rescue.
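
For reference, a restore from a vzdump backup on the CLI looks roughly like this (the archive path and target storage are placeholders, not taken from this thread):
Bash:
# restore VM 900 from a vzdump archive onto the storage of your choice;
# --force is needed because VM 900 already exists
qmrestore /mnt/pve/backup-lvm/dump/vzdump-qemu-900-<timestamp>.vma.zst 900 --storage <target-storage> --force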
 
There was only one partition, and yes, as you mentioned, it was called 'datb', so maybe something went wrong when rewriting the partition table. I have a backup, but I can't use the drive at the moment, because every time I want to add a new storage or thin pool, it says something like 'All drives in use'.

Bash:
~# qm config 900
agent: 1
boot: order=net0
cores: 4
memory: 16384
name: s90
net0: virtio=***,bridge=vmbr3,tag=20
numa: 0
onboot: 1
ostype: l26
scsi0: local-lvm2:vm-900-disk-0,backup=1,size=1866468M
smbios1: uuid=b62cada4-c00a-4195-8fed-...
sockets: 1
vmgenid: 5c92509b-90e7-41d9-a860-...

Bash:
~# nano /etc/pve/storage.cfg
dir: local
        path /var/lib/vz
        content iso,backup,vztmpl

lvmthin: local-lvm
        thinpool data
        vgname pve
        content rootdir,images

nfs: backup-lvm
        export /PVE
        path /mnt/pve/backup-lvm
        server 192.168.1.240
        content backup
        prune-backups keep-last=4
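
(For completeness: the removed `local-lvm2` entry would normally have been an `lvmthin` section like the sketch below. The VG name comes from the error message in the first post; the thin pool name is an assumption, since the original entry is gone.)
Code:
lvmthin: local-lvm2
        thinpool <thinpool-name>
        vgname datb
        content rootdir,images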
 
And /dev/sdb is a Samsung SSD with 2TB of storage, and it was all used for the disk `vm-900-disk-0`?
Did you remove `local-lvm2` already from the storage config?
 
And /dev/sdb is a Samsung SSD with 2TB of storage, and it was all used for the disk `vm-900-disk-0`?
Did you remove `local-lvm2` already from the storage config?
Yes and yes.
I removed it because I thought I could add it again.
 
I've a backup but I can't use the drive at the moment, because everytime I want to add a new storage or thinpool, it say's like 'All drives in use'.
How are you trying to add the new storage?
`All drives in use` is the error you get when you try to create a new storage on one of your disks instead of simply adding an available storage under Datacenter -> Storage.
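
For reference, adding an existing thin pool as a storage (instead of creating a new one on the disk) can also be done on the CLI; the storage ID, VG and thin pool names below are placeholders:
Bash:
# register an existing LVM-thin pool as a Proxmox storage
pvesm add lvmthin <storage-id> --vgname <vg> --thinpool <pool> --content rootdir,images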
 
I've tried to add an LVM-Thin storage instead of LVM, where I can choose an 'existing volume group'. OK, I can now see the storage with the status 'available', and it is 100% full, but vm-900 is gone.
 
Because I can't use the disk again (can't restore the backup without errors), I wiped it with:
Bash:
wipefs -a /dev/sdb
After that I created a physical volume and a volume group (VG):
Bash:
pvcreate /dev/sdb
vgcreate ssd2tb /dev/sdb
Under Datacenter -> Storage I added an LVM storage with the newly created VG.
I can now see the new storage with 100% free space.

I want to restore vm-900-disk-0, but I get the error message that there is not enough free space.

I also tried to create a new volume for the VM:
Code:
lvcreate 'ssd2tb/vm-900-disk-0' error: Volume group "ssd2tb" has insufficient free space (476932 extents): 33554432 required. at /usr/share/perl5/PVE/API2/Qemu.pm line 1442. (500)

It looks like the 2 TB SSD is broken. Can someone confirm this, or did I do something wrong?
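
To rule out a hardware problem, I'm also going to look at the SSD's SMART data (smartmontools is in the package list above):
Bash:
# read SMART health and attributes from the second SSD
smartctl -a /dev/sdb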

Thanks again.

Bash:
~# pvdisplay
  --- Physical volume ---
  PV Name               /dev/sdb
  VG Name               ssd2tb
  PV Size               <1.82 TiB / not usable <1.09 MiB
  Allocatable           yes
  PE Size               4.00 MiB
  Total PE              476932
  Free PE               476932
  Allocated PE          0
  PV UUID               kMLo9I-...
  
  --- Physical volume ---
  PV Name               /dev/sda3
  VG Name               pve
  PV Size               931.01 GiB / not usable 4.69 MiB
  Allocatable           yes
  PE Size               4.00 MiB
  Total PE              238338
  Free PE               4094
  Allocated PE          234244
  PV UUID               1t9Jmq-...

Bash:
~# vgdisplay
  --- Volume group ---
  VG Name               ssd2tb
  System ID             
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  1
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                0
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               <1.82 TiB
  PE Size               4.00 MiB
  Total PE              476932
  Alloc PE / Size       0 / 0   
  Free  PE / Size       476932 / <1.82 TiB
  VG UUID               y1ieHK-...
  
  --- Volume group ---
  VG Name               pve
  System ID             
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  257
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                36
  Open LV               10
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               <931.01 GiB
  PE Size               4.00 MiB
  Total PE              238338
  Alloc PE / Size       234244 / <915.02 GiB
  Free  PE / Size       4094 / 15.99 GiB
  VG UUID               tiuwAD-...
 
It looks like you've overprovisioned the VM disk, which is possible when using LVM-Thin.
You could create a new LVM-Thin pool instead and restore the disk there.
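
For reference, a rough CLI sketch of that suggestion, using the `ssd2tb` VG created above (the thin pool name `data`, the storage ID `ssd2tb-thin` and the backup archive path are placeholders; 99%FREE simply leaves a little room for the pool's metadata LV):
Bash:
# carve a thin pool out of the existing VG
lvcreate -l 99%FREE --thinpool data ssd2tb

# register it as an LVM-thin storage in Proxmox
pvesm add lvmthin ssd2tb-thin --vgname ssd2tb --thinpool data --content rootdir,images

# restore the backup onto the new thin-pool storage
qmrestore /mnt/pve/backup-lvm/dump/vzdump-qemu-900-<timestamp>.vma.zst 900 --storage ssd2tb-thin --force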
 
