Hello,
I'm currently setting up a test environment based on a two-node Proxmox cluster.
The two nodes are identical machines (micro-PCs).
I followed the exact same installation procedure on both nodes, booting from proxmox-ve_8.3-1.iso.
Both installations went off without a hitch.
I then went through some configuration settings and switched the repositories to the No-Subscription ones (from the GUI).
The actual update was then carried out from the command line:
Bash:
apt update
apt -y upgrade
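For reference, the No-Subscription entry the GUI adds should look something like this (assuming the stock PVE 8 / Debian Bookworm layout; the exact file it lands in may differ):
Code:
deb http://download.proxmox.com/debian/pve bookworm pve-no-subscription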
Both systems were then rebooted, and a peculiar behavior appeared: node1 came back up on kernel 6.8.12-4-pve, while node2 came back up on kernel 6.8.12-5-pve.
I tried pinning kernel 6.8.12-5-pve on node1:
Bash:
root@node1:~# uname -r
6.8.12-4-pve
root@node1:~# proxmox-boot-tool kernel list
Manually selected kernels:
None.
Automatically selected kernels:
6.8.12-4-pve
6.8.12-5-pve
root@node1:~# proxmox-boot-tool status
Re-executing '/usr/sbin/proxmox-boot-tool' in new private mount namespace..
System currently booted with uefi
AD85-A0BC is configured with: uefi (versions: 6.8.12-4-pve, 6.8.12-5-pve)
root@node1:~# proxmox-boot-tool kernel pin 6.8.12-5-pve
Set kernel '6.8.12-5-pve' in /etc/kernel/proxmox-boot-pin.
Refresh the actual boot ESPs now? [yN] y
Running hook script 'proxmox-auto-removal'..
Running hook script 'zz-proxmox-boot'..
Re-executing '/etc/kernel/postinst.d/zz-proxmox-boot' in new private mount namespace..
Copying and configuring kernels on /dev/disk/by-uuid/AD85-A0BC
Copying kernel and creating boot-entry for 6.8.12-4-pve
Copying kernel and creating boot-entry for 6.8.12-5-pve
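For the record, here is how the pin state can be double-checked after such a failed reboot (a minimal checklist; AD85-A0BC is simply the ESP UUID from my node1):
Bash:
# Show which kernel proxmox-boot-tool recorded as pinned
cat /etc/kernel/proxmox-boot-pin

# List the UEFI boot entries to confirm the firmware really boots
# the ESP that proxmox-boot-tool manages, not another loader
efibootmgr -v

# Re-copy the kernels and regenerate the loader entries on the ESP
proxmox-boot-tool refresh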
Rebooting the system didn't load the 6.8.12-5-pve kernel; it loaded 6.8.12-4-pve again.
I found no way to boot the pinned kernel, even after a re-install from scratch!
Since the objective is to create a cluster, it seems wiser to have all nodes running the same kernel version; am I wrong?
Moreover, I suppose the latest kernel version automatically becomes the default whenever an upgrade takes place, so it would be tedious to have to re-pin after each new kernel version.
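If I read the proxmox-boot-tool documentation correctly, a pin is supposed to persist across kernel upgrades until it is explicitly removed, so re-pinning after every update shouldn't normally be necessary (this is my understanding, not something I could verify given the behavior above):
Bash:
# Pin a specific kernel as the default until the pin is removed
proxmox-boot-tool kernel pin 6.8.12-5-pve

# Pin a kernel for the next boot only, then revert to the default
proxmox-boot-tool kernel pin 6.8.12-5-pve --next-boot

# Remove the pin so the newest kernel becomes the default again
proxmox-boot-tool kernel unpin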
In the end, I went the other way around and pinned node2 to the 6.8.12-4-pve kernel, which worked as expected on the next node2 reboot. But obviously this is the exact opposite of what I want.
I'm out of ideas on where to go from here.
If anyone reading this has a lead I could follow, it would be welcome.
Thanks.