Search results

  1. Boot Failure on upgrade to VE 7.2 with kernel 5.15.35-1-pve

    OK, if I look at how it works for Debian (I didn't know the kernel was an Ubuntu version), I'd expect that PVE would just use the LTS 5.10 kernel which comes with Debian Bullseye. Patched for PVE of course, but still the 5.10 version. Things that happened to some of us in the last week could...
  2. Boot Failure on upgrade to VE 7.2 with kernel 5.15.35-1-pve

    The issue is that these DL360 G6 servers are from the same batch, with identical hardware and identical firmware. I have access to iLO, but not to the console. I can ask a colleague to keep an eye on the boot process, but I'd rather do that myself. Another thing is: why change the kernel in stable...
  3. Boot Failure on upgrade to VE 7.2 with kernel 5.15.35-1-pve

    Do you recommend adding "intel_iommu=on iommu=pt" to all systems, even those that start without any problems? I have multiple DL360 G6 Proxmox clusters; some start, others need to be started using the old kernel, which is quite weird IMHO. Same hardware, same install, same version. Another thing... (a hedged sketch for setting these options follows after these results)
  4. Boot Failure on upgrade to VE 7.2 with kernel 5.15.35-1-pve

    Is this a PVE-specific issue or a general kernel issue?
  5. Boot Failure on upgrade to VE 7.2 with kernel 5.15.35-1-pve

    As stated previously, run as root: proxmox-boot-tool kernel pin 5.13.19-6-pve. Then you won't get a bad surprise after an outage. (see the pinning sketch after these results)
  6. Boot Failure on upgrade to VE 7.2 with kernel 5.15.35-1-pve

    Same here: two DL360 G6 machines with HPE Smart Array P410. Pinned 5.13.19-6-pve; that one works well. Another set of the same DL360 servers boots fine with 5.15.35-1-pve, but those use GlusterFS for the VMs. I did no further investigation, as I'm 1000 km from these machines.
  7. Console output stops after udev

    Nope, just a VM. I copied Buster to a new VM and upgraded it to Bullseye. The Buster VM still works. I retried, same result. Btw: Proxmox 6. It is a SysV init system, no systemd.
  8. Console output stops after udev

    I just updated a Debian Buster to Debian Bullseye. It works and I can access the machine using SSH, but the Proxmox console stops after: Waiting for /dev to be fully populated... The ttys are running: 1191 tty1 Ss+ 0:00 /sbin/getty 38400 tty1 1192 tty2 Ss+ 0:00 /sbin/getty 38400...
  9. accessing glusterfs

    Another annoying thing: I run GlusterFS on ext4-formatted /dev/mapper/pve-data partitions, but apparently the Gluster volume is not online when the VM starts (there is only 1 test VM): TASK ERROR: unable to activate storage 'gvol0' - directory '/mnt/pve/gvol0' does not exist or is unreachable... (a hedged mitigation sketch follows after these results)
  10. accessing glusterfs

    OK, thanks. So if I add some extra Proxmox nodes, the extra node will always use its local drive.
  11. accessing glusterfs

    Hello list, I have 2 machines in 1 cluster (pmox5) where I have added glusterfs-server. On both machines the Gluster share is mounted as: pmox1:gvol0 /mnt/pve/gvol0 fuse.glusterfs rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072 0 0 On both servers it points to... (a hedged fstab variant follows after these results)
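
For result 3 above, a minimal sketch of how "intel_iommu=on iommu=pt" could be added on a PVE 7 host. It assumes a GRUB-booted system with the stock "quiet" command line; check the actual file contents on your own machines before rebooting.

    # /etc/default/grub: extend the default kernel command line
    GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"

    # then, as root, regenerate the boot configuration:
    update-grub

    # hosts booting via systemd-boot (e.g. ZFS on UEFI) keep the command
    # line in /etc/kernel/cmdline instead; after editing that file, run:
    proxmox-boot-tool refresh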
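
For result 5, the pinning workflow in full, so a power outage does not silently boot the problematic 5.15.35-1-pve kernel. The pin/unpin subcommands assume a current PVE 7 proxmox-boot-tool; see "proxmox-boot-tool help" if yours predates them.

    # run as root
    proxmox-boot-tool kernel list                # show installed kernels
    proxmox-boot-tool kernel pin 5.13.19-6-pve   # always boot this version
    # once a fixed kernel ships, return to the default selection:
    proxmox-boot-tool kernel unpin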
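
For result 9, the TASK ERROR suggests PVE activates the 'gvol0' storage before the GlusterFS fuse mount is up. A hedged mitigation, assuming gvol0 is a directory storage backed by an fstab mount: mark the path as a required mountpoint and order the mount after the network.

    # as root: refuse to activate the storage until something is actually
    # mounted at /mnt/pve/gvol0 (is_mountpoint is a documented option of
    # directory storages)
    pvesm set gvol0 --is_mountpoint yes

    # /etc/fstab: _netdev delays the mount until the network is up
    pmox1:gvol0 /mnt/pve/gvol0 fuse.glusterfs defaults,_netdev 0 0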
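
For result 11, a hedged /etc/fstab variant for the two-node setup. Mounting from pmox1 on both servers makes pmox1 a single point of failure when the client fetches the volume description at mount time; the GlusterFS fuse client accepts a backupvolfile-server option for exactly that. The host name pmox2 is an assumption for the second node.

    # assumes the second node is reachable as pmox2
    pmox1:gvol0 /mnt/pve/gvol0 fuse.glusterfs defaults,_netdev,backupvolfile-server=pmox2 0 0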
