Try to read the entire thread before making assumptions. The reason is explained here: https://forum.proxmox.com/threads/pve-8-0-and-8-1-hangs-on-boot.137033/post-609320 which also explains why this has nothing to do with the kernel but was related to leaving out a module from the initramfs which was...
Try to read the entire thread before making assumptions. The reason is explained here: https://forum.proxmox.com/threads/pve-8-0-and-8-1-hangs-on-boot.137033/post-609320 which also makes it obvious why you see no problems on Proxmox 7.4
Yep, that did the trick :)
The fix enables booting on these kernels:
- 5.15.126-1-pve
- 6.2.16-19-pve
- 6.5.11-4-pve
Thanks Thomas :cool:
To recap in case others come to this thread:
echo "simplefb" >> /etc/initramfs-tools/modules
update-initramfs -u -k 6.5.11-4-pve (or update-initramfs -u...
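As a sanity check (not part of the original recap, and the exact initrd file name may differ on your system), you can confirm that the module actually ended up in the rebuilt initramfs with:
lsinitramfs /boot/initrd.img-6.5.11-4-pve | grep simplefb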
It seems it is the initrd build system on 8.0 and 8.1 which is producing unbootable initrd files, because if I build an initrd file for pve-kernel-5.15.126-1-pve: 5.15.126-1 on 8.1, the system cannot boot using pve-kernel-5.15.126-1-pve: 5.15.126-1 and shows exactly the same as for kernel > 6.1...
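If someone wants to compare what differs, one rough way (just a sketch; the first path is a placeholder for an initrd you kept from a 7.4 build) is to diff the file lists of a working and a non-working initrd:
diff <(lsinitramfs /path/to/initrd-built-on-7.4) <(lsinitramfs /boot/initrd.img-5.15.126-1-pve) | grep '\.ko'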
nomodeset was enabled.
This is what I see after adding earlyprintk=vga, which is exactly the same as I saw before: no output whatsoever
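For anyone wanting to reproduce this, how the kernel command line parameter gets added depends on the bootloader (both variants below assume a stock Proxmox setup):
# GRUB: add earlyprintk=vga to GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub, then run
update-grub
# systemd-boot managed by proxmox-boot-tool: add it to /etc/kernel/cmdline, then run
proxmox-boot-tool refresh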
To me it looks like a kernel segfault.
Networking is enabled (management interface):
$ ping -c1 esx2
PING esx2.datanom.net (172.16.3.9) 56(84) bytes of data...
Hi all,
Serious problem with proxmox-kernel-6.2 (proxmox-kernel-6.2.16-19-pve) and proxmox-kernel-6.5 (proxmox-kernel-6.5.11-4-pve-signed); the previous kernels, proxmox-kernel-6.1 and the one I currently boot on, pve-kernel-5.15.126-1-pve, work as expected. The symptom is that when the boot sequence...
After upgrading to this kernel, sensor readings through i2c are broken. The console is spammed with these messages:
[24271.599465] i2c i2c-0: Failed! (01)
Booting on kernel pve-kernel-5.15.83-1-pve restores the sensor readings and there are no i2c errors in the log.
Hardware:
System Information...
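If staying on the older kernel is the workaround for now, it can be made the default across reboots (assuming a Proxmox release where proxmox-boot-tool already has the kernel pin subcommand):
proxmox-boot-tool kernel pin 5.15.83-1-pve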
I can recommend using OmniosCE. I have been running it here for more than 10 years without any problems or issues. Just remember to separate iSCSI traffic onto a different network than the network for the Proxmox cluster, preferably using 10Gbit or better if using it in an enterprise setup.
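As a rough sketch of what a separate storage network means in practice (interface name, address and MTU below are made-up examples, not from my setup), a dedicated storage NIC in /etc/network/interfaces could look like:
auto ens2f0
iface ens2f0 inet static
        address 10.10.10.11/24
        mtu 9000
# dedicated 10Gbit iSCSI/NFS network, not attached to vmbr0 or the cluster network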
A question: is it really optimal that mounts are blocking?
Would it not be better if mounts were non-blocking and used a timeout instead, and in case a mount hit the timeout, refuse to start VMs/CTs with block devices on these mounts?
Sorry, missed that info ;-)
New information:
Only problems with NFS mounts to Qnap.
NFSv4.2 does not work
NFSv4.1 does not work
NFSv4.0 works
NFSv3 does not work.
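So the workaround is to force NFSv4.0 on that storage. A sketch of what that can look like in /etc/pve/storage.cfg (storage name, export path and server IP are placeholders, not my actual setup):
nfs: qnap-nfs
        export /share/pve
        path /mnt/pve/qnap-nfs
        server 192.168.1.50
        content images,backup
        options vers=4.0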
As can be read here: https://forum.proxmox.com/threads/proxmox-ve-7-2-released.108970/post-468736
kernel pve-kernel-5.15.30-2-pve does not fix the issue. As a remark, I have a Qnap which has an iSCSI and an NFS mount, but also other storages; apparently, if the Qnap mount fails the GUI...
pve-kernel-5.15.30-2-pve completely breaks GUI functionality. The cluster works as expected and can be managed via CLI on the different nodes (stop, start, and migrate - also VMs running in HA mode) but there is no way to do it from the GUI. Booting nodes using pve-kernel-5.13.19-6-pve and the GUI...
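For reference, the CLI equivalents I mean are roughly these (VMIDs, CT IDs and node names are just examples):
qm stop 100
qm start 100
qm migrate 100 pve2 --online
pct migrate 200 pve2
ha-manager migrate vm:100 pve2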
Not only that, but if you run microservices or Docker-based services, spinning up a complete operating system is a waste of resources. Only if your microservices or Docker-based services must have 100% uptime should you go for a VM, since LXC containers still cannot be live migrated.
I think that part is from a time when unprivileged containers were not production ready and the default was to run a privileged container. A privileged container's root is mapped to the host's root, so breaking out of the container means that you get root privileges on the host, while breaking out...
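To make the mapping concrete (container ID 101 is just an example, and these are the stock Proxmox defaults as far as I know): an unprivileged container has unprivileged: 1 in /etc/pve/lxc/101.conf and uses the host's /etc/subuid and /etc/subgid entries, which by default contain
root:100000:65536
so uid 0 inside the container runs as the unprivileged uid 100000 on the host, while in a privileged container uid 0 inside really is uid 0 on the host.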
From https://[your_pve_host]/pve-docs/chapter-pct.html#pct_options:
keyctl=<boolean> (default = 0)
For unprivileged containers only: Allow the use of the keyctl() system call. This is required to use docker inside a container. By default unprivileged containers will see this system call as...
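In practice you enable it on an existing container with something like this (100 is an example VMID; nesting=1 is usually also wanted for Docker):
pct set 100 --features keyctl=1,nesting=1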