My VMs can't boot after shutdown AFTER UPDATE TO 7.3.3

Can you please check your partition tables, @welltecnologia?

Boot into a Linux or Windows live CD and have a look at the disk's partition table.

If the partition table exists, then it's not the bug described at https://bugzilla.proxmox.com/show_bug.cgi?id=2874, but most likely another problem.
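For example, booted into a Linux live CD, something like this shows whether the table is still there (the device name is just an example):

Code:
lsblk                 # list the attached disks
fdisk -l /dev/sda     # an intact disk prints its partition table here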
Hi,

One more VM, from another server, just showed the same problem.

Now a Windows Server VM.

It doesn't boot anymore.

I attached the disk to another, working VM, and there are no partitions left on the virtual disk.
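(A quick way to check an image like this directly on the host, without attaching it to another VM - just a sketch, the qcow2 path is an example for directory storage:)

Code:
modprobe nbd max_part=8
qemu-nbd --connect=/dev/nbd0 /mnt/pve/HD-VMs-1/images/103/vm-103-disk-0.qcow2   # example path
parted /dev/nbd0 print            # healthy disk => partition table printed
qemu-nbd --disconnect /dev/nbd0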

That is, the problem is being caused by PROXMOX BACKUP: for some reason it damages the partition table, and we only find out about it when we restart the VM.

A tool that was made to give us security ends up causing tragedy.

And the worst thing is that it ends up ruining ALL backups, because PBS keeps backing up the VM with the damaged partition table - that is, you have no safety net at all.

As this is a very serious problem, I have no idea how to solve it, and I don't have paid support, the only solution I see is to completely abandon PROXMOX after 5 years and go back to HYPER-V.

Using PROXMOX with this bug and no solution is suicide.
 
Just so you can have an idea of the size of my problem:

The server that had the problem is a file server, and I only have 3 backups of this VM - a 2.4 TB VM.

Besides the time it will take to restore this VM from PBS, I still run the risk that all 3 backups are damaged with this partition-table error.
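(One way to check whether a given backup is still good, without a full 2.4 TB restore, is to map the backup image read-only with proxmox-backup-client and look at its partition table; the repository, snapshot, and archive names below are placeholders:)

Code:
export PBS_REPOSITORY='backup@pbs@pbs.example.com:datastore1'
proxmox-backup-client map vm/103/2023-02-20T02:00:00Z drive-sata0.img   # prints e.g. /dev/loop0
fdisk -l /dev/loop0                      # intact backup => partition table listed
proxmox-backup-client unmap /dev/loop0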
 
Can you please check your partition tables, @welltecnologia?

Boot into a Linux or Windows live CD and have a look at the disk's partition table.

If the partition table exists, then it's not the bug described at https://bugzilla.proxmox.com/show_bug.cgi?id=2874, but most likely another problem.
Hi,

Yes, I checked and the MBR is destroyed.
And worse, the backups made by PBS end up containing the destroyed MBR as well.

That is, the backups are also being affected.

I'm now installing a new virtual machine and will have to reconfigure the whole VM.

What astonishes me, given the seriousness of the BUG, is that PROXMOX still hasn't found the cause of this error.

I'm seriously thinking about abandoning PROXMOX once and for all; it has been showing huge problems in the last few years, and if you don't pay for support, you are left alone with these bugs.

At least HYPER-V is far simpler and nowhere near as buggy as PROXMOX has been over the years.
 
sata0: HD-VMs-1:103/vm-103-disk-0.qcow2,size=1T

Do all your problematic VMs/vdisks use sata? (SATA) buses? See:
https://bugzilla.proxmox.com/show_bug.cgi?id=2874#c51

If yes, you could try scsi instead (it is the general recommendation anyway) and see if the problem goes away.
But be aware that you might need some preparation in the guest OS, especially for Windows. So better test it first with some test VMs with different guest OSes!
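A rough sketch of that switch on the CLI, using VM 103 and the qcow2 from above as an example (do it with the VM shut down, and for a Windows guest make sure the VirtIO SCSI driver is installed before moving the boot disk):

Code:
qm set 103 --scsihw virtio-scsi-pci                  # choose the SCSI controller
qm set 103 --delete sata0                            # detach; the disk shows up as unused0
qm set 103 --scsi0 HD-VMs-1:103/vm-103-disk-0.qcow2  # reattach the same volume on the SCSI bus
qm set 103 --boot order=scsi0                        # boot from the new bus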
 
Do all your problematic VMs/vdisks use sata? (SATA) buses? See:
https://bugzilla.proxmox.com/show_bug.cgi?id=2874#c51

If yes, you could try scsi instead (it is the general recommendation anyway) and see if the problem goes away.
But be aware that you might need some preparation in the guest OS, especially for Windows. So better test it first with some test VMs with different guest OSes!
OK, and how do I change from SATA to SCSI in my VMs?
 
It just happened to me too - I tried to reboot one of my Linux VMs and it failed to boot.
I don't know if this is a coincidence or not, but it happened after I upgraded Proxmox to the latest available version.

The symptoms were slightly different - my Ubuntu VMs did find the boot partition and showed me the boot menu (normal boot and troubleshooting boot), but once I pressed "normal boot" they went into an endless cycle of messages like sym0: interrupted SCRIPT address not found and other SCSI-related errors.
What I did: I stopped the VMs and changed the SCSI controller for the VMs from what was set up as the default (and what had been working for many, many months) to "VirtIO SCSI". And both affected VMs booted up straight away.
Hope this helps someone.
 
Hi,
It just happened to me too - I tried to reboot one of my Linux VMs and it failed to boot.
I don't know if this is a coincidence or not, but it happened after I upgraded Proxmox to the latest available version.

The symptoms were slightly different - my Ubuntu VMs did find the boot partition and showed me the boot menu (normal boot and troubleshooting boot), but once I pressed "normal boot" they went into an endless cycle of messages like sym0: interrupted SCRIPT address not found and other SCSI-related errors.
What I did: I stopped the VMs and changed the SCSI controller for the VMs from what was set up as the default (and what had been working for many, many months) to "VirtIO SCSI". And both affected VMs booted up straight away.
Hope this helps someone.
Yes, that does sound different. Please post the output of pveversion -v and qm config <ID> for the affected VMs. What version of Ubuntu is running in the VMs?

So you changed the controller from VirtIO SCSI single to VirtIO SCSI? Can you check if apt install pve-qemu-kvm=7.1.0-4 makes boot work again with VirtIO SCSI single (make sure to shutdown+start the VM, not reboot from within the guest, because that wouldn't pick up the newly installed QEMU)?
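For reference, that test cycle on the host would look roughly like this (the VM ID is a placeholder):

Code:
apt install pve-qemu-kvm=7.1.0-4   # pin the older QEMU build for the test
qm shutdown <ID>                   # full stop, so the new binary gets picked up
qm start <ID>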
 
Hi,

Yes, that does sound different. Please post the output of pveversion -v and qm config <ID> for the affected VMs. What version of Ubuntu is running in the VMs?

So you changed the controller from VirtIO SCSI single to VirtIO SCSI? Can you check if apt install pve-qemu-kvm=7.1.0-4 makes boot work again with VirtIO SCSI single (make sure to shutdown+start the VM, not reboot from within the guest, because that wouldn't pick up the newly installed QEMU)?
Hi fiona,
sorry for the late reply.
Here is the info you asked:

Code:
root@pve:~# pveversion -v
proxmox-ve: 7.3-1 (running kernel: 5.15.85-1-pve)
pve-manager: 7.3-6 (running version: 7.3-6/723bb6ec)
pve-kernel-helper: 7.3-4
pve-kernel-5.15: 7.3-2
pve-kernel-5.13: 7.1-9
pve-kernel-5.15.85-1-pve: 5.15.85-1
pve-kernel-5.15.83-1-pve: 5.15.83-1
pve-kernel-5.15.74-1-pve: 5.15.74-1
pve-kernel-5.15.53-1-pve: 5.15.53-1
pve-kernel-5.15.39-3-pve: 5.15.39-3
pve-kernel-5.15.39-1-pve: 5.15.39-1
pve-kernel-5.15.35-3-pve: 5.15.35-6
pve-kernel-5.15.35-2-pve: 5.15.35-5
pve-kernel-5.15.35-1-pve: 5.15.35-3
pve-kernel-5.13.19-6-pve: 5.13.19-15
pve-kernel-5.13.19-5-pve: 5.13.19-13
pve-kernel-5.13.19-4-pve: 5.13.19-9
pve-kernel-5.13.19-2-pve: 5.13.19-4
ceph-fuse: 15.2.15-pve1
corosync: 3.1.7-pve1
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown2: 3.1.0-1+pmx3
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.24-pve2
libproxmox-acme-perl: 1.4.3
libproxmox-backup-qemu0: 1.3.1-1
libpve-access-control: 7.3-1
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.3-2
libpve-guest-common-perl: 4.2-3
libpve-http-server-perl: 4.1-5
libpve-storage-perl: 7.3-2
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 5.0.2-1
lxcfs: 5.0.3-pve1
novnc-pve: 1.3.0-3
proxmox-backup-client: 2.3.3-1
proxmox-backup-file-restore: 2.3.3-1
proxmox-mail-forward: 0.1.1-1
proxmox-mini-journalreader: 1.3-1
proxmox-offline-mirror-helper: 0.5.1-1
proxmox-widget-toolkit: 3.5.5
pve-cluster: 7.3-2
pve-container: 4.4-2
pve-docs: 7.3-1
pve-edk2-firmware: 3.20220526-1
pve-firewall: 4.2-7
pve-firmware: 3.6-3
pve-ha-manager: 3.5.1
pve-i18n: 2.8-2
pve-qemu-kvm: 7.2.0-5
pve-xtermjs: 4.16.0-1
qemu-server: 7.3-3
smartmontools: 7.2-pve3
spiceterm: 3.2-2
swtpm: 0.8.0~bpo11+2
vncterm: 1.7-1
zfsutils-linux: 2.1.9-pve1
root@pve:~#
root@pve:~# qm config 211
agent: 1,fstrim_cloned_disks=1
boot: order=scsi0
cores: 5
hotplug: disk,network,usb
ide2: none,media=cdrom
memory: 5120
name: UbuntuServer
net0: virtio=4E:2D:94:F2:F0:38,bridge=vmbr0,firewall=1
net1: virtio=C6:66:C5:C3:17:60,bridge=vmbr0,firewall=1,tag=102
net2: virtio=1E:BD:BC:4B:82:BB,bridge=vmbr0,firewall=1,tag=123
onboot: 1
ostype: l26
scsi0: local-lvm:vm-211-disk-0,size=50G
scsihw: virtio-scsi-pci
smbios1: uuid=a642b287-acdd-458f-bfdc-463d94dff423
startup: order=1
usb2: host=152d:0567,usb3=1
vmgenid: 04e5e05d-f0c5-4046-8cbd-51fd7f8ea7ae
root@pve:~#

I am running Ubuntu 20.04.5 LTS

No, it was NOT running VirtIO SCSI single - I think by default, when I created the instance, it was LSI 53C895A.
The instance was created a while ago - maybe at that time it was the default controller.
But now I can confirm: when I try to create a new instance, it offers VirtIO SCSI single as the default.
 
No, it was NOT running VirtIO SCSI single - I think by default, when I created the instance, it was LSI 53C895A.
If it was the LSI controller, it's most likely the same as the issue reported here. It should be fixed in pve-qemu-kvm>=7.2.0-7 with this commit. But if you can use a different controller, I'd actually recommend doing that; the LSI one is old and not the most performant.
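To check whether the fixed build is in place - a sketch, with the VM ID as a placeholder:

Code:
pveversion -v | grep pve-qemu-kvm              # installed package version
qm status <ID> --verbose | grep running-qemu   # QEMU version a running VM actually uses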
 
