Building custom Kernel (6.11) but unable to edit config

bugmenot

Hey guys,

I'm trying to build a custom kernel, but whatever I do I can't get my changes to the kernel config to stick; it always reverts to the default.

Bash:
git clone git://git.proxmox.com/git/pve-kernel.git
cd pve-kernel

nano Makefile
# I edit this line: EXTRAVERSION=-$(KREL)-pve-custom

make build-dir-fresh

#Copy custom config
cp /path/to/my/custom.config proxmox-kernel-6.11.11-1/ubuntu-kernel/.config

make deb

After this I extract the config from the .deb file and it has reverted to the original config.

Bash:
dpkg-deb -x proxmox-kernel-6.11.11-1-pve-custom_6.11.11-1_amd64.deb extracted_kernel
find extracted_kernel -name "config-*"
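
To confirm the revert, I diff my custom config against the one that ends up in the package (using the path the find above reports):

Bash:
diff /path/to/my/custom.config extracted_kernel/boot/config-*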

Any idea how to get a custom config in? I've also tried editing the config-6.11.11.org file before running "make deb", but it always seems to revert to the original config file.
 
So I'm now setting the config overrides in debian/rules, and they are reflected in proxmox-kernel-6.11.11/debian/rules after running make build-dir-fresh.
[screenshot: config overrides added in debian/rules]

Note that I added CONFIG_MLX5_VFIO_PCI as well, since I need that too.
After building the .deb files with make deb, the option still seems to be turned off, even though I see the settings reflected in the make output:

[screenshot: make output showing the config options being applied]
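
For reference, this is how I spot-check the generated .config in the build directory (the same path I copied my custom config to above):

Bash:
grep -E 'CONFIG_VFIO_PCI=|CONFIG_MLX5_VFIO_PCI=' proxmox-kernel-6.11.11-1/ubuntu-kernel/.config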

I unpack the kernel .deb with this command and view the config:
Bash:
dpkg-deb -x proxmox-kernel-6.11.11-1-pve-custom_6.11.11-1_amd64.deb extracted_kernel
vi extracted_kernel/boot/config-6.11.11-1-pve-custom

Afterwards I view the extracted config, and these settings are still at their defaults:

[screenshot: extracted config with the options still at their defaults]
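
For completeness, this is the quick check I use instead of scrolling through it in vi:

Bash:
grep -E 'CONFIG_VFIO_PCI=|CONFIG_MLX5_VFIO_PCI=' extracted_kernel/boot/config-6.11.11-1-pve-custom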

Any thoughts?
 
I think that for some reason the build takes my currently running kernel's config and uses that as the configuration. Any ideas on how I can work around this?
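
This is how I'd test that suspicion, comparing the running kernel's shipped config with the one from my freshly built package:

Bash:
# if the build really picked up the running kernel's config, these should largely match
diff /boot/config-$(uname -r) extracted_kernel/boot/config-6.11.11-1-pve-custom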
 
Kconfig is complex. In this case, you are trying to enable settings that are actually configured automatically based on other settings. Do you actually need those to be built in? Usually, having things built as a module (m) is okay ;)
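
You can see that in the Kconfig entries themselves. For example, to look at what MLX5_VFIO_PCI depends on and selects (path as in current mainline trees, run inside the unpacked kernel source):

Bash:
# 'depends on' controls whether the symbol can be set at all,
# anything under 'select' gets turned on automatically with it
grep -A10 'config MLX5_VFIO_PCI' drivers/vfio/pci/mlx5/Kconfig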
 
What I'm trying to do is live migration with PCIe passthrough. I've enabled all the modules:

Bash:
lsmod | grep vfio
mlx5_vfio_pci          49152  0
vfio_pci               16384  2
vfio_pci_core          86016  2 mlx5_vfio_pci,vfio_pci
vfio_iommu_type1       49152  2
vfio                   65536  13 mlx5_vfio_pci,vfio_pci_core,vfio_iommu_type1,vfio_pci
iommufd               102400  1 vfio
mlx5_core            2347008  3 mlx5_vfio_pci,mlx5_vdpa,mlx5_ib

Once I initiate the migration I get this error:

Bash:
qm monitor 309
Entering QEMU Monitor for VM 309 - type 'help' for help
qm> migrate -d tcp:10.0.123.112:5901
Error: 0000:81:00.7: VFIO migration is not supported in kernel

The error says VFIO migration is not supported in the kernel, which is why I'm trying to compile a kernel with these VFIO options enabled by default.
 
Whether the VFIO driver is compiled as a module or built in doesn't make a difference for that. As far as I can tell, that error means that your driver (or device) doesn't support tracking its state in a fashion that would allow live migration.
 
What is the device you're trying to live-migrate? What does your QEMU config (qm config ID) look like?

There are a few more things to do for live migration than just the kernel module. The device (normally) has to be marked as migratable, but we don't do that for PCI devices.
Also, sometimes virtual functions, for example, need to retain the correct driver, but by default we rebind the device to the generic 'vfio-pci' driver (which might not work for migration).
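
You can check which driver a given function is currently bound to with lspci, e.g. for the address from your error message:

Bash:
# "Kernel driver in use:" shows whether it's mlx5_vfio_pci or the generic vfio-pci
lspci -nnk -s 81:00.7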

E.g. I'm currently working on NVIDIA vGPU live migration integration: https://lore.proxmox.com/pve-devel/4fc9a8ef-f263-4906-bf39-3c7561c2a653@proxmox.com/T/#t
There I add 'enable-migration=on' to the hostpci device and leave the driver that NVIDIA loads onto the virtual functions in place.
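
With those patches applied (they are not part of a released qemu-server yet), the hostpci entry would be set roughly like this:

Bash:
# 'enable-migration' only exists with the work-in-progress series linked above
qm set 309 -hostpci0 0000:81:00.7,enable-migration=on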