Problems restoring a VM dump

apvargas

Hi, I am having problems with a virtual machine: it tells me that no bootable disk is present, and when I do a restore, all the backups reproduce the same error. Any idea what could cause this behavior?
 

Attachments

  • Captura de pantalla de 2021-03-05 11-48-30.png
Hi,
did you do anything special with the machine before the problem started happening? Could you share the VM configuration (qm config <ID>) and the output of pveversion -v? If you go to the Options panel for the VM in the GUI, you can check whether the correct disk is set in the boot order.
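For reference, the boot-related settings can also be read from the CLI; a small sketch, using the VM ID 110 that comes up later in this thread:
Code:
# show only the boot-related lines of the VM configuration
qm config 110 | grep -E '^(boot|bootdisk):'
# full version information of the host
pveversion -v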
 
Hi, the server was restarted in order to diagnose a DB service installed there. In fact, both disks are present in the Hardware panel of that VM.



# qm config 110
bootdisk: sata0
cores: 2
ide2: local:iso/CentOS-7-x86_64-DVD-1804.iso,media=cdrom
memory: 8192
name: Postgres-desarollo
net0: virtio=1E:8E:04:24:41:5F,bridge=vmbr0
numa: 0
ostype: l26
parent: snapshot20200520
sata0: local-lvm:vm-110-disk-1,size=75G
sata1: local-lvm:vm-110-disk-2,size=50G
scsihw: virtio-scsi-pci
smbios1: uuid=8be73bd6-bf1f-4ac6-bce8-9e932230a90c
sockets: 2


pveversion -v
proxmox-ve: 5.2-2 (running kernel: 4.15.17-1-pve)
pve-manager: 5.2-1 (running version: 5.2-1/0fcd7879)
pve-kernel-4.15: 5.2-1
pve-kernel-4.15.17-1-pve: 4.15.17-9
corosync: 2.4.2-pve5
criu: 2.11.1-1~bpo90
glusterfs-client: 3.8.8-1
ksm-control-daemon: 1.2-2
libjs-extjs: 6.0.1-2
libpve-access-control: 5.0-8
libpve-apiclient-perl: 2.0-4
libpve-common-perl: 5.0-31
libpve-guest-common-perl: 2.0-16
libpve-http-server-perl: 2.0-8
libpve-storage-perl: 5.0-23
libqb0: 1.0.1-1
lvm2: 2.02.168-pve6
lxc-pve: 3.0.0-3
lxcfs: 3.0.0-1
novnc-pve: 0.6-4
proxmox-widget-toolkit: 1.0-18
pve-cluster: 5.0-27
pve-container: 2.0-23
pve-docs: 5.2-3
pve-firewall: 3.0-8
pve-firmware: 2.0-4
pve-ha-manager: 2.0-5
pve-i18n: 1.0-5
pve-libspice-server1: 0.12.8-3
pve-qemu-kvm: 2.11.1-5
pve-xtermjs: 1.0-5
qemu-server: 5.0-26
smartmontools: 6.5+svn4324-1
spiceterm: 3.0-5
vncterm: 1.5-3
zfsutils-linux: 0.7.8-pve1~bpo9
 
And is the boot order correct? From the screenshot it looks like the VM is trying to boot from the network.

pveversion -v
proxmox-ve: 5.2-2 (running kernel: 4.15.17-1-pve)
pve-manager: 5.2-1 (running version: 5.2-1/0fcd7879)
PVE 5.x has been EOL for some time now; I'd recommend upgrading to 6.3 when possible.
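If the boot order did get changed, it can be set back from the CLI as well; a minimal sketch for this VM (ID 110), assuming sata0 is the disk that carries the bootloader:
Code:
# boot from disk first, then CD-ROM, then network (c = disk, d = cdrom, n = network)
# and make sata0 the disk that is booted from
qm set 110 -boot cdn -bootdisk sata0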
 
To add to what @oguz said, you might also want to check the partitions on the virtual disks, e.g.
Code:
fdisk -l /dev/mapper/pve-vm--110--disk--1
fdisk -l /dev/mapper/pve-vm--110--disk--2
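If those device mapper nodes are missing, the logical volumes may simply not be active; a quick check, assuming the default volume group name pve as suggested by the mapper names above:
Code:
# list the logical volumes belonging to this VM
lvs | grep vm-110
# activate them if needed
lvchange -ay pve/vm-110-disk-1 pve/vm-110-disk-2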
 
Hi, yes, the order is correct and the disk configuration of that VM has not been modified. You are right, the upgrade of that machine is still pending.
 
Hi, this is the result of running the commands:


root@urbinapveIV:/home/apvargas# fdisk -l /dev/mapper/pve-vm--110--disk--1
Disk /dev/mapper/pve-vm--110--disk--1: 75 GiB, 80530636800 bytes, 157286400 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 65536 bytes / 65536 bytes
root@urbinapveIV:/home/apvargas# fdisk -l /dev/mapper/pve-vm--110--disk--2
Disk /dev/mapper/pve-vm--110--disk--2: 50 GiB, 53687091200 bytes, 104857600 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 65536 bytes / 65536 bytes
Disklabel type: dos
Disk identifier: 0xc784d6c5

Device Boot Start End Sectors Size Id Type
/dev/mapper/pve-vm--110--disk--2-part1 2048 104857599 104855552 50G 83 Linux
root@urbinapveIV:/home/apvargas#
 
How were the disks used within the VM? It seems like only the second disk contains a partition. You could try mounting it to see what's on it.
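A minimal sketch of how that could look, assuming the partition mapping for the second disk has to be created first (the exact name of the created node may differ slightly, so check /dev/mapper/ after running kpartx):
Code:
# create device nodes for the partitions of the second disk, if not already present
kpartx -av /dev/mapper/pve-vm--110--disk--2
# mount the partition read-only and inspect it (adjust the partition node name to what kpartx reports)
mkdir -p /mnt/vm110
mount -o ro /dev/mapper/pve-vm--110--disk--2-part1 /mnt/vm110
ls /mnt/vm110
# the first disk shows no partition table at all; check whether it holds a filesystem directly
file -s /dev/mapper/pve-vm--110--disk--1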
 
