disks become unwritable after snapshot

drjaymz@

Still having the same issue I've always had, and it's really a showstopper for using Proxmox for anything important.

I have a VM imported from legacy KVM (a previous version of QEMU). Everything runs perfectly fine, except that when you perform a snapshot-mode backup, about 1 in 10 times it breaks the disks completely, to the point where the only resolution is to destroy the VM and restart it, at which point it is corrupted. When the problem occurs you can still log in to the VM (Ubuntu 12 or 14, I think), but you cannot run any commands that require disk access; if you do, you get something like: Input/output error.

Now this issue has gone on long enough; it's been in the forums, unsolved, for two years at this point. Usually people assume it's to do with fs-freeze and fs-thaw because an error is thrown on fs-thaw, but it is NOT the guest agent fs-freeze or fs-thaw that is the problem - they just happen to log to a still-writable log, so that is where you first see the error when the thaw realises the disks have vanished. We don't have the guest agent installed, and it makes no difference if you do anyway. It doesn't have to be that VM: simply create a new one, or a container, stick MariaDB on it, set a backup job that backs it up every hour, and soon enough you'll see that your database is broken and queries get stuck. Filesystems don't become read-only, they vanish entirely.

I run a zpool scrub on the pool afterwards and everything is 100%; no errors are detected.
Another thing I have noticed that could be loosely related: if I have created a snapshot successfully and then create a clone based on it, during the cloning process the exact same thing happens to the source VM - it locks up, the disks are unwritable, and it is unrecoverable until restart. Again, about 1 in 10.

The underlying disks look perfectly fine; they are not using hardware RAID, just attached and combined into a zpool.

Configuration:
It's a Dell PowerEdge 450 with a PERC H745, but I have seen the same issue on others.
PVE is 7.4-3.
HA cluster of 3 nodes, plus PBS.

Nothing fancy in the config: machine default, disk controller default, BIOS default. PVE settings are largely default, we don't have anything fancy installed, and pretty much the entire config is via the UI.

It doesn't matter what type of backup you're using; if snapshot mode is selected, the problem occurs. It means you can't really use this in any mission-critical scenario as I'd hoped. Not only that, it means you can't use PBS or make use of any of the features in either, because you can't trust it. For a while it looked like backup to PBS wasn't causing the issue, but it is.

All day long we have replication running - isn't that doing exactly the same thing underneath, and if so, why doesn't that cause it to fail? We have that running every 15 minutes.

When it screws up:

[screenshot: VM summary in the broken state]

Bootdisk size here shows "0B".
 
I'm going to ask a silly question: does stop mode always work fine for backups? [Yes, I know it is not ideal, but I'm interested in whether it also fails with a frequent backup cadence on your setup. Just use a demo instance.]
 
I don't think that's a silly question. I have never seen the problem from a stopped VM, but I have not used that mode often because stopped isn't very useful. You'll remember that to cause the issue of disks disappearing the VM has to be running, and restarting it restores them; ergo, if it's stopped then by the time it's started you probably don't have an issue. You could try to create a clone from a stopped snapshot and see if that causes the problem; if it were repeatable every time, that would be worth doing. If it's only every now and then, that's going to be hard to find. The trouble is that "rare" still becomes "often" when you are doing hourly snapshots on many machines.
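(For reference, a one-off stop-mode backup can be triggered from the host CLI to test this; the VMID and storage name below are only placeholders.)

Code:
# run a single stop-mode backup of VM 102 to the storage named "local"
vzdump 102 --mode stop --storage local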
 
Agreed, but here's the thing: what actually happens on your setup? [Could be an interesting experiment; perhaps it does not work as expected, revealing some other overlooked issue which is a cause of the main fault.]

Could it be a race condition? It would certainly explain the randomness, and one place where it would be logged is the main Proxmox sysvol. Do you have any interesting data from there when a backup failure occurs? [Perhaps time goes backwards on the VM...]
 
The syslog is as if time stops at the point we have the problem. Note that the faulty snapshot occurred today at 7:11 and the last item in the syslog is 7:14, which I think is some time after the snapshot concludes - I think it concluded at about 7:11:08. Annoyingly, if the syslog had been sent to a syslog server, or had stayed writable, maybe we'd have seen something useful. I have looked to see if I can see anything in the PVE syslog and there's not a lot, just that the snapshot cron job occurred. I also tried looking at the log for the backup, but by the time I got to it, subsequent backups had run and it doesn't seem to keep the log. But previously the log showed nothing unusual, and whilst the VM is borked you can continue to snapshot it and the snapshots all run correctly. It's the VM that can't see its disks.
 
Hi,
please post the output of pveversion -v. What does the load on your system look like during backup?
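(If it helps, the load can be watched on the host while a backup runs with the usual tools; none of this is a prescribed procedure, just a starting point, and iostat needs the sysstat package installed.)

Code:
uptime                 # load average
zpool iostat -v 5      # per-device I/O on the ZFS pool
iostat -xm 5           # per-disk utilisation (sysstat package)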

Now this issue has gone on long enough; it's been in the forums, unsolved, for two years at this point. Usually people assume it's to do with fs-freeze and fs-thaw because an error is thrown on fs-thaw, but it is NOT the guest agent fs-freeze or fs-thaw that is the problem - they just happen to log to a still-writable log, so that is where you first see the error when the thaw realises the disks have vanished.
There are actually issues with fsfreeze in some cases, and that's what most other reports are about. I haven't heard others talk about vanished disks, nor about having the same issues with containers. I'd guess that your issue is different. What exactly do you mean by "vanished disks"? Is the filesystem on it gone, like you say below for containers?

We don't have the guest agent installed, and it makes no difference if you do anyway. It doesn't have to be that VM: simply create a new one, or a container, stick MariaDB on it, set a backup job that backs it up every hour, and soon enough you'll see that your database is broken and queries get stuck. Filesystems don't become read-only, they vanish entirely.
VMs and containers use completely different backup mechanisms (block-based and integrated in QEMU for VMs, file-based "from the outside" for containers). If you use snapshot backup mode, QEMU will do an overlay in its block layer, but containers will use a ZFS snapshot. So it's really surprising that you experience the issue with both.

It doesn't matter what type of backup you're using; if snapshot mode is selected, the problem occurs. It means you can't really use this in any mission-critical scenario as I'd hoped. Not only that, it means you can't use PBS or make use of any of the features in either, because you can't trust it. For a while it looked like backup to PBS wasn't causing the issue, but it is.
Have you tried backups to a target other than PBS? I'm not saying there can't be a PBS-related bug here, but thousands and thousands of people are using it every day without such issues, so I rather guess it has to be something specific to your setup.

All day long we have replication running - isn't that doing exactly the same thing underneath, and if so, why doesn't that cause it to fail? We have that running every 15 minutes.
Replication uses ZFS snapshots, no QEMU involved. But it also isn't for containers.
 
Hi, thanks for your reply.

pveversion output:

Code:
proxmox-ve: 7.4-1 (running kernel: 5.15.64-1-pve)
pve-manager: 7.4-3 (running version: 7.4-3/9002ab8a)
pve-kernel-5.15: 7.4-1
pve-kernel-5.13: 7.1-9
pve-kernel-5.15.104-1-pve: 5.15.104-2
pve-kernel-5.15.83-1-pve: 5.15.83-1
pve-kernel-5.15.64-1-pve: 5.15.64-1
pve-kernel-5.13.19-6-pve: 5.13.19-15
pve-kernel-5.13.19-2-pve: 5.13.19-4
ceph-fuse: 15.2.15-pve1
corosync: 3.1.7-pve1
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown2: 3.1.0-1+pmx3
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.24-pve2
libproxmox-acme-perl: 1.4.4
libproxmox-backup-qemu0: 1.3.1-1
libproxmox-rs-perl: 0.2.1
libpve-access-control: 7.4-2
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.3-4
libpve-guest-common-perl: 4.2-4
libpve-http-server-perl: 4.2-3
libpve-rs-perl: 0.7.5
libpve-storage-perl: 7.4-2
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 5.0.2-2
lxcfs: 5.0.3-pve1
novnc-pve: 1.4.0-1
proxmox-backup-client: 2.4.1-1
proxmox-backup-file-restore: 2.4.1-1
proxmox-kernel-helper: 7.4-1
proxmox-mail-forward: 0.1.1-1
proxmox-mini-journalreader: 1.3-1
proxmox-offline-mirror-helper: 0.5.1-1
proxmox-widget-toolkit: 3.6.5
pve-cluster: 7.3-3
pve-container: 4.4-3
pve-docs: 7.4-2
pve-edk2-firmware: 3.20230228-2
pve-firewall: 4.3-1
pve-firmware: 3.6-4
pve-ha-manager: 3.6.0
pve-i18n: 2.12-1
pve-qemu-kvm: 7.2.0-8
pve-xtermjs: 4.16.0-1
qemu-server: 7.4-3
smartmontools: 7.2-pve3
spiceterm: 3.2-2
swtpm: 0.8.0~bpo11+3
vncterm: 1.7-1
zfsutils-linux: 2.1.9-pve1

System load during backup is extremely low - in the screenshot below, a backup to PBS is running.

[screenshot: system load on the host while a backup to PBS is running]

I'm concentrating on the VM issue at the moment; the issues with containers on snapshots seemed to be related to MariaDB version 10.9, I haven't seen them since 10.11, and I have had far fewer issues with containers (everything new I build is done that way). So we can ignore that for now; I think MariaDB didn't like disk access being delayed beyond a few ms and never recovered.

On the vanished disks: it's very difficult to see what the problem is once it's encountered, because you cannot run most binaries within the VM (they are located on disk), and I can only issue any command if I already had an SSH connection open.
An example would be that we have a disk mounted on /u2. When you try to list or access that folder you get Input/output error. In the GUI the bootdisk size becomes 0B. I'd like to get more information, e.g. by calling mount or df, but these won't run because they require binaries on disk.
You appear not to be able to read or write at all from the mounted filesystems, and it appears to be all of them.
To cause the problem you have to either back up to local or PBS (both caused it) or, I think, clone. The symptoms are identical to the QEMU guest agent thaw issue, in that the disk isn't writable and it is only resolved by a restart of the VM. In my case I am not running the guest agent, because my VM is too old for it, so it's not the thawing that's the issue, but the underlying issue *could* be the same thing.

The problem appears when using local backup, backup to NFS, or PBS, so it's not the mechanism per se; it appears to be any action that reads the snapshot. But there is no I/O bottlenecking I can see, and even if there were, I'd expect it to run slowly, not break the VM forever. For info, the VMs are old warehouse management systems, running two dozen SSH users with an Informix backend. They run on SuSE 9.0 with kernel 2.4.21 from way back when. They are imported Xen disks and use kernel / initial ramfs files on boot up, and before they were on here they ran on KVM/QEMU servers, so Proxmox wasn't really a jump in technology.

My plan is to clone the VM and try to provoke the issue, and maybe I can get a bit more information - for example, by logging syslog to another server I may be able to see if there are kernel panics etc. Any other suggestions greatly appreciated.
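(A minimal sketch of what that could look like, assuming the old guest still uses a classic syslogd with /etc/syslog.conf; the IP address is a placeholder for the log server.)

Code:
# on the guest: forward all syslog messages to a remote syslog server
echo '*.*    @192.168.1.50' >> /etc/syslog.conf
/etc/init.d/syslog restart
# the receiving syslogd must be started with remote reception enabled (e.g. syslogd -r)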
I'm willing to bet it is something weird in my setup - either the specific hardware or that specific kernel for the VM - and yes, I really wanted to make use of the system thousands are using, because it is the ideal management and backup solution for these systems. But you'll also understand that if it locks up on backup, that is an issue; it never locks up at any other time.
 
I also forgot to mention: on restart the VM has disk errors consistent with a power-off - which I guess is what you'd expect, and snapshots also look like that, since the system is running at the point of the snapshot. If I run a zpool scrub and check the pool, it's perfectly fine; no errors are found. So I don't think there is any issue there specifically.
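(For completeness, the check amounts to something like the following; "tank" stands in for the actual pool name.)

Code:
zpool scrub tank
# once the scrub has finished:
zpool status -v tank    # should report "No known data errors"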
 
I'm concentrating on the VM issue at the moment; the issues with containers on snapshots seemed to be related to MariaDB version 10.9, I haven't seen them since 10.11, and I have had far fewer issues with containers (everything new I build is done that way). So we can ignore that for now; I think MariaDB didn't like disk access being delayed beyond a few ms and never recovered.
Okay, then it might be a QEMU-related issue after all.

On the vanished disks: it's very difficult to see what the problem is once it's encountered, because you cannot run most binaries within the VM (they are located on disk), and I can only issue any command if I already had an SSH connection open.
An example would be that we have a disk mounted on /u2. When you try to list or access that folder you get Input/output error. In the GUI the bootdisk size becomes 0B. I'd like to get more information, e.g. by calling mount or df, but these won't run because they require binaries on disk.
You appear not to be able to read or write at all from the mounted filesystems, and it appears to be all of them.
To cause the problem you have to either back up to local or PBS (both caused it) or, I think, clone. The symptoms are identical to the QEMU guest agent thaw issue, in that the disk isn't writable and it is only resolved by a restart of the VM. In my case I am not running the guest agent, because my VM is too old for it, so it's not the thawing that's the issue, but the underlying issue *could* be the same thing.
I/O to the disks should only be blocked for the short time that the backup job is created, i.e. in between the messages
Code:
INFO: creating Proxmox Backup Server archive 'vm/102/2023-05-09T10:54:20Z'
INFO: started backup task '2f40be79-ca15-4a19-af74-f437266ce76e'
in the log. Since you are not using the guest agent, we can of course rule out that it is making the disks read only.

EDIT: I should also mention that IO is bottlenecked by the backup target (because when a guest writes to a not-yet-backed-up sector, the old data needs to be backed up first). But you mentioned it also happens with local storage as the target, so I guess we can rule that out too.

Can you try creating a script named query-block.pm with the following contents
Code:
#!/usr/bin/perl

use strict;
use warnings;

use Data::Dumper;
$Data::Dumper::Sortkeys = 1;

use PVE::QemuServer::Monitor qw(mon_cmd);

my $vmid = shift or die "need to specify vmid\n";

my $res = eval { mon_cmd($vmid, "query-block" ) };
die $@ if $@;
print Dumper($res);
and running it with perl query-block.pm <ID> with the VM's ID the next time the issue happens? Then we'll get QEMU's view on the disks.
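(If the full dump is too noisy, the interesting fields can be pulled out directly; the field names come from QEMU's query-block output, and the VMID is just an example.)

Code:
perl query-block.pm 102 | grep -E "'(device|io-status|ro|locked)'"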

The problem appears when using local backup, backup to NFS, or PBS, so it's not the mechanism per se; it appears to be any action that reads the snapshot. But there is no I/O bottlenecking I can see, and even if there were, I'd expect it to run slowly, not break the VM forever. For info, the VMs are old warehouse management systems, running two dozen SSH users with an Informix backend. They run on SuSE 9.0 with kernel 2.4.21 from way back when. They are imported Xen disks and use kernel / initial ramfs files on boot up, and before they were on here they ran on KVM/QEMU servers, so Proxmox wasn't really a jump in technology.
Okay, that is pretty old :) Did you ever experience the issue on VMs with more modern kernels?

My plan is to clone the VM and try to provoke the issue, and maybe I can get a bit more information - for example, by logging syslog to another server I may be able to see if there are kernel panics etc. Any other suggestions greatly appreciated.
I'm willing to bet it is something weird in my setup - either the specific hardware or that specific kernel for the VM - and yes, I really wanted to make use of the system thousands are using, because it is the ideal management and backup solution for these systems. But you'll also understand that if it locks up on backup, that is an issue; it never locks up at any other time.
Sure, it should not happen.
 
Thanks, I'll give this a go and see if I can make it happen on a clone. I expect it will be awkward and not want to happen if we're watching it.

The kernel is pretty old. I looked into what it would take to transfer to a more modern VM or container, and we have a lot of custom binaries that may prove difficult or impossible to compile. It's probably got to be migrated at some point, but I think we're far from alone in having core business functionality that is this old. My position is really that I have to make this work with that VM. I suspect that the kernel has an issue with I/O being momentarily blocked.

Perhaps one other point to note is that the replication and indeed the next backup still ran when the VM was broken. So outside the VM I think it was still accessible.
 
OK, I have managed to replicate the issue with a cloned VM, and it's not quite as reported previously.
The filesystems are mounted. I can call all binaries in /bin, but anything in /usr/bin results in Input/output error. Yet as root I can touch a file in there and CAN write to it. Totally bizarre.

I snapshotted the broken VM and then rebooted, and on reboot it's fine again. Then I restored the snapshot and it's back to the broken state! So this confirms that it is the VM itself that has broken.

I don't think the query-block output helps, but here it is.

The first report is the broken state, the second is after a reboot and working:
 


OK, I have managed to replicate the issue with a cloned VM, and it's not quite as reported previously.
The filesystems are mounted. I can call all binaries in /bin, but anything in /usr/bin results in Input/output error. Yet as root I can touch a file in there and CAN write to it. Totally bizarre.
Are /bin and /usr/bin different file systems? Are they on the same disk?

I don't think the query-block output helps, but here it is.

The first report is the broken state, the second is after a reboot and working:
Well, at least we can see that the disks are not marked as read-only by QEMU at this level. And the io-status is ok, so yes, the I/O errors might come from within the guest.
 
AFAIK /bin and /usr/bin are both part of the root fs, so they are the same. This is a bit like the old days when you had the cylinders, heads and sectors set wrong: some files were readable, others were at the wrong addresses. Any writing to the disk in this state probably corrupts it.

It's as if the disk addresses have moved but the guest doesn't know about it, or something between the two hasn't worked.
If you reboot the VM it's happy. I think something is stale in the VM's view of the disk. It's also quite interesting that:
1) if you snapshot the VM and restore the state, it's still broken, but it's OK when you restart it.
2) when it's broken, in these folders you can touch a new file and can write to it OK, but when you list or read from disk, some files work and others result in an I/O error.

Exactly what happens when a snapshot is created?
 
1) if you snapshot the VM and restore the state, it's still broken, but it's OK when you restart it.
Do you include RAM in the snapshot? Snapshots are supposed to be a capture of the exact state the guest is in, so that's not surprising, but working as intended. It does rule out that the QEMU instance itself is in a broken state (since restoring the snapshot leads to a fresh QEMU instance).

Exactly what happens when a snapshot is created?
It's not actually a snapshot. QEMU inserts a new block node as an overlay. This is done in such a way that already in-flight requests are finished first and new ones need to wait until the overlay is there, so the guest should never notice. Reads behave just as before. New writes will first check whether the sector has already been backed up; if not, the old data is backed up first, before the new write is done. When the backup finishes, the overlay is removed, again in a safe way that properly handles in-flight requests.
 
I know it maintains state - I mean that it's telling that it remains broken when a new instance of QEMU is created.

As I see it, the "snapshot" should be completely transparent to the guest, but for whatever reason in this instance it's not. And it's interesting that it's not always, just sometimes, which suggests it depends on what the guest was doing at the point we have a problem. About the only thing I can think of is that something is blocked momentarily, or the I/O returns an error or state the guest doesn't handle properly.
The I/O error is indicative of files/sectors not being where the guest thinks they should be.

It could be that the problem occurs every time a snapshot is made, but we only notice when certain files are affected. That's obviously no good.

As it is with legacy business systems, we are not really able to migrate the warehousing and it looks like this solution isn't compatible for some reason.
 
Just had it happen on another server (same kernel). And going back to the previous question: why doesn't replication cause the issue? Also, people are using the autosnap script, which as far as I can see creates a snapshot, and that also doesn't cause a problem.

Since replication runs all day long and doesn't seem to cause an issue, why can we not back up one of the replicated images? The replicated image doesn't have a VM dangling off of it.
 
As I already said, replication is done purely on the ZFS level.

Did you get any logs from within the guest? Maybe use netconsole or similar to get the log even if the filesystem cannot be written to anymore.
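(Rough netconsole usage for reference; whether the module exists on such an old guest kernel is an open question, the addresses and ports below are placeholders, and netcat flags vary between flavours.)

Code:
# on the guest: send kernel messages via UDP to 192.168.1.50:6666 out of eth0
modprobe netconsole netconsole=@/eth0,6666@192.168.1.50/
# on the receiving machine: listen for them
nc -u -l -p 6666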
 
This may yet be related to QEMU after all.
I'm running kernel 2.4.21 and haven't seen this on any other VMs with a later kernel. So we seem to have an incompatibility between QEMU and the 2.4 kernel.

But wait... under OS type there are two Linux kernel options:

l24 - Linux 2.4 Kernel
l26 - Linux 2.6 - 6.X Kernel

Sadly nobody has a clue what setting it to 2.4 actually does, but it must be there for a reason. I know that people who have tried to install a 2.4 kernel while it is set to 2.6+ have had failures, but I couldn't find any specific detail.
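(One way to see concretely what it changes is to compare the QEMU command line PVE generates for each setting; qm showcmd prints it without starting anything, and the VMID here is illustrative.)

Code:
qm showcmd 102 --pretty > /tmp/cmd-l24.txt
qm set 102 --ostype l26            # or change the OS type in the GUI
qm showcmd 102 --pretty > /tmp/cmd-l26.txt
diff /tmp/cmd-l24.txt /tmp/cmd-l26.txt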

Our legacy QEMU config never had any issues on KVM, but it could be that they are of a similar vintage. Below is the original config. It doesn't make reference to ostype, so it's either not available or wasn't set. This is from the machine I imported it from, which didn't have the above issues. There are a few options that seemed to have no equivalent.

So if anybody knows what the ostype l24 actually does then let me know.

Code:
<domain type='kvm' id='38'>
<name>zzzzz</name>
<uuid>096b33ae-6068-6fce-fc24-b68b604948c4</uuid>
<memory unit='KiB'>2097152</memory>
<currentMemory unit='KiB'>2097152</currentMemory>
<vcpu placement='static'>8</vcpu>
<resource>
   <partition>/machine</partition>
</resource>
<os>
   <type arch='x86_64' machine='pc-i440fx-bionic'>hvm</type>
   <kernel>/var/lib/libvirt/boot/vmlinuz-2.4</kernel>
   <initrd>/var/lib/libvirt/boot/initrd.img-2.4</initrd>
   <cmdline>root=/dev/vda ro vdso=0 showopts vga=normal console=tty1 console=ttyS0,115200 3</cmdline>
   <boot dev='hd'/>
   <bootmenu enable='no'/>
</os>
<features>
   <acpi/>
   <apic/>
   <pae/>
</features>
<clock offset='utc'>
   <timer name='rtc' tickpolicy='catchup'/>
   <timer name='pit' tickpolicy='delay'/>
   <timer name='hpet' present='no'/>
</clock>
<on_poweroff>destroy</on_poweroff>
<on_reboot>restart</on_reboot>
<on_crash>restart</on_crash>
<devices>
   <emulator>/usr/bin/kvm</emulator>
   <disk type='block' device='disk'>
     <driver name='qemu' type='raw'/>
     <source dev='/dev/data/wmsvb1-disk-root'/>
     <backingStore/>
     <target dev='vda' bus='virtio'/>
     <alias name='virtio-disk0'/>
     <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
   </disk>
   <disk type='block' device='disk'>
     <driver name='qemu' type='raw'/>
     <source dev='/dev/data/wmsvb1-swap'/>
     <backingStore/>
     <target dev='vdb' bus='virtio'/>
     <alias name='virtio-disk1'/>
     <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
   </disk>
   <disk type='block' device='disk'>
     <driver name='qemu' type='raw'/>
     <source dev='/dev/data/wmsvb1-disk-u2'/>
     <backingStore/>
     <target dev='vdc' bus='virtio'/>
     <alias name='virtio-disk2'/>
     <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
   </disk>
   <controller type='ide' index='0'>
     <alias name='ide'/>
     <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
   </controller>
   <controller type='usb' index='0' model='piix3-uhci'>
     <alias name='usb'/>
     <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
   </controller>
   <controller type='pci' index='0' model='pci-root'>
     <alias name='pci.0'/>
   </controller>
   <interface type='bridge'>
     <mac address='52:54:00:77:79:4a'/>
     <source bridge='br0'/>
     <target dev='vnet28'/>
     <model type='virtio'/>
     <alias name='net0'/>
     <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
   </interface>
   <serial type='pty'>
     <source path='/dev/pts/29'/>
     <target type='isa-serial' port='0'>
       <model name='isa-serial'/>
     </target>
     <alias name='serial0'/>
   </serial>
   <console type='pty' tty='/dev/pts/29'>
     <source path='/dev/pts/29'/>
     <target type='serial' port='0'/>
     <alias name='serial0'/>
   </console>
   <input type='mouse' bus='ps2'>
     <alias name='input0'/>
   </input>
   <input type='keyboard' bus='ps2'>
     <alias name='input1'/>
   </input>
   <graphics type='vnc' port='5928' autoport='yes' listen='0.0.0.0' keymap='en-gb'>
     <listen type='address' address='0.0.0.0'/>
   </graphics>
   <video>
     <model type='cirrus' vram='16384' heads='1' primary='yes'/>
     <alias name='video0'/>
     <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
   </video>
   <memballoon model='virtio'>
     <alias name='balloon0'/>
     <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
   </memballoon>
</devices>
<seclabel type='dynamic' model='dac' relabel='yes'>
   <label>+64055:+115</label>
   <imagelabel>+64055:+115</imagelabel>
</seclabel>
</domain>
 
So if anybody knows what the ostype l24 actually does then let me know.
The only differences from l26 that I see (I grepped for l24 and l26 in our code base) are some USB-related things (because they wouldn't be supported with this old kernel) and making sure to explicitly keep the old setting ICH9-LPC.acpi-pci-hotplug-with-bridge-support=off, because the default was changed to on in QEMU 6.1 and not doing so might confuse systemd because of different PCI numbering. But that does not sound relevant to your issue either.
 

It was obviously important enough to need its own option, and despite huge changes beyond 2.6 there are no further special versions.
In my travels I did look at the differences between 2.4 and 2.6 from a kernel point of view, and they would seem to be quite vast - far too many to mention, but plenty of fundamental changes.
 
