BUG?! 4.3: Snapshot -> Rollback -> Problem: VM reboots

goseph

Dec 4, 2014
Hello,

I hope you can help with this.

In Proxmox 4.2-18, everything was fine when taking snapshots of a VM and then rolling them back. I was using 4.2-18 with the regular installation routine at that time.

But after updating to 4.3-8 today (subscription available and active), the VM automatically reboots after applying a rollback.

This is happening on two totally different systems.

Every VM has this problem when I try to roll it back, but only the VM being rolled back reboots; the other VMs are unaffected.

These are the last syslog entries:
srv pvedaemon[1431]: <root@pam> snapshot VM 100: Ask3
srv pvedaemon[1160]: <root@pam> starting task UPID:srv:00000597:00005814:58152DF8:qmsnapshot:100:root@pam:
srv systemd-timesyncd[479]: interval/delta/delay/jitter/drift 256s/-0.019s/0.025s/0.018s/+38ppm
srv pvedaemon[1161]: <root@pam> starting task UPID:srv:000005AA:00005F89:58152E0B:qmrollback:100:root@pam:
srv pvedaemon[1450]: <root@pam> rollback snapshot VM 100: Ask3
srv kernel: vmbr0: port 2(tap100i0) entered disabled state
srv pvedaemon[1162]: client closed connection
srv systemd[1]: Starting 100.scope.
srv systemd[1]: Started 100.scope.
srv kernel: device tap100i0 entered promiscuous mode
srv kernel: vmbr0: port 2(tap100i0) entered forwarding state
srv kernel: vmbr0: port 2(tap100i0) entered forwarding state
srv pvedaemon[1162]: got timeout
srv kernel: kvm: zapping shadow pages for mmio generation wraparound
srv kernel: kvm: zapping shadow pages for mmio generation wraparound
srv kernel: kvm [1479]: vcpu0 unhandled rdmsr: 0xc001100d
srv kernel: kvm [1479]: vcpu1 unhandled rdmsr: 0xc001100d

Please tell me which information you need and how I can get it for you.

Thanks a lot!
 
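(For anyone hitting the same issue: a sketch of the commands that usually collect the relevant diagnostics on the Proxmox host. The VM ID 100 is a placeholder; adjust it to your setup. This is a host-specific CLI fragment, so run it on the affected node itself.)

```shell
# Exact package versions of the whole PVE stack (kernel, qemu, pve-manager, ...):
pveversion -v

# Configuration of the affected VM (ID 100 here), including its snapshots:
qm config 100

# Recent syslog lines around the time of the rollback:
journalctl -n 100 --no-pager
```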
The VM always "reboots" after a rollback.

But if you snapshotted the memory, the saved memory state is loaded at VM start (and this is transparent to you).

So the question is:
- did you take the snapshot with the memory state saved?
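(For reference, both variants can be taken from the CLI roughly like this. VM ID 100 and the snapshot names are placeholders; `--vmstate 1` corresponds to the "Include RAM" checkbox in the GUI.)

```shell
# Snapshot including RAM/vmstate:
qm snapshot 100 withram --vmstate 1

# Snapshot of the disks only, no memory state:
qm snapshot 100 noram

# Roll back to a snapshot; with a vmstate snapshot the VM resumes
# from the saved memory instead of booting fresh:
qm rollback 100 withram
```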
 
First: thanks a lot for your reply.

Yes, the checkbox to save the RAM is set.

This was working just fine in 4.2. After updating to 4.3-8 it no longer works, on both systems.
It does not matter which VM I use; a rollback without the VM rebooting is no longer possible.
 
What happens when you launch the rollback?
Do you see the VM rebooting? (That would mean the VM crashed just after loading the memory state.)
Any info in your guest logs?
Yes, I can see the VM rebooting. If noVNC is running, it loses the connection, and after it reconnects I can see the VM rebooting.

The snapshot works, but the VM reboots after the rollback.

This looks like a bug to me, since it was working fine in 4.2 and now happens on two different systems with different hardware on 4.3-8.

Guest VM: Debian in this case. Which log would you like to see?

Thank you!
 
Also, does it occur when a snapshot was taken previously on the older Proxmox version and you try to roll it back on 4.3?

Or can you also reproduce it by taking a new snapshot on Proxmox 4.3 and rolling it back on 4.3?

Can you send the /etc/pve/qemu-server/vmid.conf of the VM?
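(For readers: the VM config lives at /etc/pve/qemu-server/&lt;vmid&gt;.conf. A trimmed, entirely hypothetical example for a VM 100 with a vmstate snapshot named Ask3 might look like the fragment below; every value is a placeholder, and a real file contains more keys.)

```
bootdisk: virtio0
cores: 2
memory: 2048
name: debian-vm
net0: virtio=AA:BB:CC:DD:EE:FF,bridge=vmbr0
parent: Ask3
virtio0: local-lvm:vm-100-disk-1,size=32G

[Ask3]
memory: 2048
virtio0: local-lvm:vm-100-disk-1,size=32G
vmstate: local-lvm:vm-100-state-Ask3
```

The `parent` key in the main section names the current snapshot, and the `vmstate` key in the snapshot section points at the saved RAM image that gets loaded on rollback.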
 
Note that I'm able to reproduce it here: snapshot taken on qemu 2.7.0-4 and rollback on qemu 2.7.0-4.

I'll do more tests.
So you can reproduce this bug? That would be great, in a way.
The snapshots were taken on the latest 4.3 and rolled back on 4.3, too.

Thanks!
 
I have also tested on the latest version on the enterprise repository and found the same issue. Maybe restoring the saved memory causes a kernel panic?
 
Is someone familiar with the bug tracker? I see this as a bigger issue. Do you agree?

Thanks for the support so far.
 
snapshot on 2.6.1-6 > rollback on 2.6.1-6: OK
snapshot on 2.6.1-6 > rollback on 2.7.0-4: OK
snapshot on 2.7.0-4 > rollback on 2.7.0-4: broken

So the problem seems to be in qemu 2.7. I'll send a bug report to the Proxmox dev mailing list.
Thanks for the report!
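(To check which qemu build a node is actually running, and thus whether it falls into the broken row above, something along these lines should work on the Proxmox host; the package name `pve-qemu-kvm` is the one used in PVE 4.x.)

```shell
# The full Proxmox version stack, including the qemu package:
pveversion -v

# Or query the qemu package directly:
dpkg -s pve-qemu-kvm | grep '^Version'

# Version of the running kvm binary:
kvm --version
```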
 
Thanks!
Will you report back here, or how do I best stay up to date on this?
 
I'll report here if I have some news.
Thank you.

I also tested here with Windows on ZFS; that causes no problem. But with Linux VMs... they do not reboot, but you have to reset them manually: they hang in a strange state.
 
I followed the bug thread... When can we count on the fix reaching the enterprise repo? We need this for a special system. There are about 4 or 5 live snapshots, and we must be able to roll them back for configuration. A reboot at one of these snapshot states would destroy the config in the VM.

Thanks a lot
 
