[SOLVED] [pve8] kernel: EDID has corrupt header

Taylan

Member
Oct 19, 2020
Hello everyone,


After upgrading to Proxmox 8 (kernel 6.2.16-3-pve), the syslog is flooded with the following messages:

Bash:
...
Jun 23 15:12:42 Proxmox kernel: EDID has corrupt header
Jun 23 15:12:42 Proxmox kernel: EDID block 0 is all zeroes
...

This is a headless server, no monitor is connected. This behaviour is new after the upgrade.

I have the following modules active in `/etc/modules`:
Bash:
kvmgt

# Generated by sensors-detect on Mon Jan 17 13:04:59 2022
# Chip drivers
coretemp
jc42
nct6775

And the following in `/etc/kernel/cmdline`:
Bash:
root=ZFS=rpool/ROOT/pve-1 boot=zfs intel_iommu=on i915.enable_gvt=1 quiet loglevel=3 vga=current initcall_blacklist=sysfb_init
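As an aside, a quick way to see which i915 parameters a cmdline actually carries is to split it into tokens. A minimal sketch; the string below is just a copy of the example above (on a live system you could read `/proc/cmdline` instead):

```shell
# Split a kernel cmdline into tokens and show the i915-specific ones.
cmdline='root=ZFS=rpool/ROOT/pve-1 boot=zfs intel_iommu=on i915.enable_gvt=1 quiet loglevel=3 vga=current initcall_blacklist=sysfb_init'
printf '%s\n' "$cmdline" | tr ' ' '\n' | grep '^i915\.'
# -> i915.enable_gvt=1
```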

Any help here?


Thanks...


Edit0: I looked through the logs in dmesg; it seems to be a long-standing bug:
https://github.com/intel/gvt-linux/issues/77

Edit1: I just found a workaround for everyone affected by this.
This issue is not just flooding the kernel logs; the server was also stalling every 1-2 seconds.

The workaround is disabling the KMS poller:
Bash:
echo "options drm_kms_helper poll=0" >> /etc/modprobe.d/modprobe.conf
update-initramfs -u -k all

See section 3.4 in https://wiki.archlinux.org/title/kernel_mode_setting for more information.
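After a reboot you can check whether the option actually reached the module; `/sys/module/drm_kms_helper/parameters/poll` is the sysfs path for that parameter (guarded here in case the module isn't loaded):

```shell
# Read back the drm_kms_helper "poll" parameter; N means polling is disabled.
p=/sys/module/drm_kms_helper/parameters/poll
if [ -r "$p" ]; then
    cat "$p"
else
    echo "drm_kms_helper not loaded"
fi
```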

Edit2: The workaround above disables output polling, so the screen (IPMI console) stays black; be very careful if you have to type a decryption password at boot. A more user-friendly workaround is to set an arbitrary EDID and enable fastboot, so DRM doesn't try to set modes:

Code:
echo " i915.fastboot=1 drm.edid_firmware=edid/1280x1024.bin" >> /etc/kernel/cmdline
update-initramfs -u -k all && pve-efiboot-tool refresh
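One thing to double-check after the `echo >> ...` above: as far as I know, `/etc/kernel/cmdline` is read as a single line by the Proxmox boot tooling, so make sure the appended parameters ended up on that first line rather than on a new one. After a reboot, you can confirm the running kernel actually picked up the override:

```shell
# Confirm the EDID override reached the running kernel's cmdline.
grep -o 'drm\.edid_firmware=[^ ]*' /proc/cmdline \
    || echo "drm.edid_firmware not set in the running kernel"
```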
 
Just curious: might it be safer, from a long-run perspective (PVE maintenance), to disable the i915 module entirely?
 
I don't think so; not every Proxmox user is in an enterprise environment.
For me it's home-lab use: I pass my GPU through to VMs and containers.

It's a long-standing bug, and it might get fixed after all.
 
Yes, but the config is for an Intel iGPU.

Don't forget to do this afterwards:
Code:
update-initramfs -u -k all
 
I have the same CPU; it has an Intel iGPU, so the config above should work as it is.
Thanks, but I don't think I got it.

I created this file: /etc/modprobe.d/modprobe.conf with this content:

Bash:
i915.fastboot=1 drm.edid_firmware=edid/1280x1024.bin

Then I ran: update-initramfs -u -k all

but received this error:

Bash:
libkmod: ERROR ../libkmod/libkmod-config.c:712 kmod_config_parse: /etc/modprobe.d/modprobe.conf line 1: ignoring bad line starting with 'i915.fastboot=1'
 
Sorry, I didn't pay enough attention.

Those lines are kernel parameters; they should be added to the kernel cmdline, not to a modprobe.d file.

So the workaround is this:
Code:
echo " i915.fastboot=1 drm.edid_firmware=edid/1280x1024.bin" >> /etc/kernel/cmdline
update-initramfs -u -k all && pve-efiboot-tool refresh

My config looks like this:
Code:
# cat /etc/kernel/cmdline
root=ZFS=rpool/ROOT/pve-1 boot=zfs intel_iommu=on iommu=pt i915.enable_gvt=1 i915.enable_guc=0 pcie_acs_override=downstream,multifunction initcall_blacklist=sysfb_init quiet loglevel=3 vga=current i915.fastboot=1 drm.edid_firmware=edid/1280x1024.bin
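For completeness, the reason libkmod complained is that modprobe.d files and the kernel cmdline use different syntaxes: modprobe.d expects `options <module> <param>=<value>` lines, while `module.param=value` tokens only work on the cmdline. A small sketch of the modprobe.d form (the `/tmp` path is only for illustration; a real file would live in `/etc/modprobe.d/`):

```shell
# modprobe.d syntax: "options <module> <param>=<value>".
# Writing to /tmp only to illustrate; do not use this path in production.
echo "options i915 fastboot=1" > /tmp/i915-example.conf
cat /tmp/i915-example.conf   # -> options i915 fastboot=1
```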
 
My actual /etc/kernel/cmdline contains this:

Bash:
root=ZFS=rpool/ROOT/pve-1 boot=zfs

Should I change it like this:

Bash:
root=ZFS=rpool/ROOT/pve-1 boot=zfs i915.fastboot=1 drm.edid_firmware=edid/1280x1024.bin

and then run update-initramfs -u -k all?
 
Thanks, it seems to work. I also tested the KVM and it worked.

When the fix becomes available, can I undo the changes by reverting that line and running update-initramfs -u -k all again? I hope it won't break everything.
You're welcome, but if you aren't sure what every piece of that config does, you shouldn't just copy & paste it. Try to understand it first; that would be my advice.

You trusted some stranger on the internet, so trust again when I say: sure, it'll be OK ;)
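To answer the undo question concretely: yes, removing the two parameters from `/etc/kernel/cmdline` and rebuilding the boot entries reverts the workaround. A sketch of the edit, shown on a plain string so nothing on a real system is touched (on the server you would edit the file itself, then run `update-initramfs -u -k all && pve-efiboot-tool refresh`):

```shell
# Remove the two workaround parameters from a cmdline string.
cmdline='root=ZFS=rpool/ROOT/pve-1 boot=zfs i915.fastboot=1 drm.edid_firmware=edid/1280x1024.bin'
printf '%s\n' "$cmdline" \
    | sed -e 's/ i915\.fastboot=1//' -e 's# drm\.edid_firmware=edid/1280x1024\.bin##'
# -> root=ZFS=rpool/ROOT/pve-1 boot=zfs
```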
 
You are right. However, it's a brand-new server, so it doesn't matter if it falls apart for now :D
 
I'm facing the same issue on an OVH server with a Xeon E-2274G CPU, and unfortunately the edit2 workaround has no effect.
 
It still works here, so it's probably a different issue. Is this a cloud server? Do they have some kind of remote-console software for the IPMI? Maybe ask them, or double-check your cmdline file, and don't forget to rebuild the initramfs with the new information:
update-initramfs -u -k all
pve-efiboot-tool refresh


Edit: If you don't mind booting with a blank screen (no password entry for disk decryption, etc.), you could try the previous workaround.
 
It's a dedicated server, with access through IPMI. I've double-checked the cmdline file, executed the update-initramfs/pve-efiboot-tool commands, and rebooted twice; the EDID message is still there.
 
Are you booting via "proxmox-boot-tool"?

What is the output of the following?
Code:
proxmox-boot-tool status

And in the meantime, please post the content of your `/etc/kernel/cmdline`.
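To gather both pieces of information in one go, something like this works (guarded, since to my knowledge `/etc/kernel/cmdline` only exists on systems booted via proxmox-boot-tool/systemd-boot; GRUB setups keep the cmdline in `/etc/default/grub` instead):

```shell
# Print the static cmdline and the boot-tool status, with fallbacks.
cat /etc/kernel/cmdline 2>/dev/null || echo "/etc/kernel/cmdline not present"
if command -v proxmox-boot-tool >/dev/null 2>&1; then
    proxmox-boot-tool status
else
    echo "proxmox-boot-tool not found"
fi
```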
 
