Boot failed after upgrading to 7.2-4: stuck in initramfs

Hello,

After upgrading to version 7.2-4, I am taken to the following screen when booting up:

(attached screenshots: 1.jpeg, 2.jpeg, 3.jpeg, 4.jpeg)

I tried to boot with an older kernel version; the system started, but I encountered the following problems when trying to start a VM.

Code:
  WARNING: Activation disabled. No device-mapper interaction will be attempted.
  WARNING: Activation disabled. No device-mapper interaction will be attempted.
  WARNING: Activation disabled. No device-mapper interaction will be attempted.
kvm: -drive file=/mnt/pve/iso_storage/template/iso/virtio-win-0.1.141.iso,if=none,id=drive-ide0,media=cdrom,aio=io_uring: Unable to use io_uring: failed to init linux io_uring ring: Function not implemented
TASK ERROR: start failed: QEMU exited with code 1
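
(Side note: the io_uring error above is expected when running the newer QEMU build on the old 4.15 kernel, which predates io_uring support. If staying on that kernel temporarily, one possible workaround would be to switch the affected drive's async I/O mode away from io_uring; a sketch only, assuming the drive is ide0 of a hypothetical VM 100, with the storage/ISO names taken from the error message:)

Code:
# hypothetical VM ID; storage and ISO taken from the error message above - adjust to your setup
qm set 100 --ide0 iso_storage:iso/virtio-win-0.1.141.iso,media=cdrom,aio=threads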

pveversion -v:

Code:
proxmox-ve: 7.2-1 (running kernel: 4.15.18-2-pve)
pve-manager: 7.2-4 (running version: 7.2-4/ca9d43cc)
pve-kernel-5.15: 7.2-3
pve-kernel-helper: 7.2-3
pve-kernel-5.15.35-1-pve: 5.15.35-3
pve-kernel-5.15.7-1-pve: 5.15.7-1
pve-kernel-4.15: 5.4-19
pve-kernel-4.15.18-30-pve: 4.15.18-58
pve-kernel-4.15.18-9-pve: 4.15.18-30
pve-kernel-4.15.18-2-pve: 4.15.18-21
pve-kernel-4.15.17-1-pve: 4.15.17-9
pve-kernel-4.13.13-2-pve: 4.13.13-33
ceph-fuse: 16.2.7
corosync: 3.1.5-pve2
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown: 0.8.36+pve1
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.22-pve2
libproxmox-acme-perl: 1.4.2
libproxmox-backup-qemu0: 1.3.1-1
libpve-access-control: 7.1-8
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.2-1
libpve-guest-common-perl: 4.1-2
libpve-http-server-perl: 4.1-2
libpve-storage-perl: 7.2-4
libqb0: 1.0.5-1~bpo9+2
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 4.0.12-1
lxcfs: 4.0.12-pve1
novnc-pve: 1.3.0-3
proxmox-backup-client: 2.2.1-1
proxmox-backup-file-restore: 2.2.1-1
proxmox-mini-journalreader: 1.3-1
proxmox-widget-toolkit: 3.5.1
pve-cluster: 7.2-1
pve-container: 4.2-1
pve-docs: 7.2-2
pve-edk2-firmware: 3.20210831-2
pve-firewall: 4.2-5
pve-firmware: 3.4-2
pve-ha-manager: 3.3-4
pve-i18n: 2.7-2
pve-qemu-kvm: 6.2.0-7
pve-xtermjs: 4.16.0-1
qemu-server: 7.2-3
smartmontools: 7.2-pve3
spiceterm: 3.2-2
swtpm: 0.7.1~bpo11+1
vncterm: 1.7-1
zfsutils-linux: 2.1.4-pve1


Can anyone help me with that?
 
from which version did you upgrade?
 
you don't have a PVE 6 kernel installed, but you do have PVE 5 and 7 kernels installed...

Code:
pve-kernel-5.15.35-1-pve: 5.15.35-3
pve-kernel-5.15.7-1-pve: 5.15.7-1
pve-kernel-4.15: 5.4-19
pve-kernel-4.15.18-30-pve: 4.15.18-58
pve-kernel-4.15.18-9-pve: 4.15.18-30
pve-kernel-4.15.18-2-pve: 4.15.18-21
pve-kernel-4.15.17-1-pve: 4.15.17-9
pve-kernel-4.13.13-2-pve: 4.13.13-33

so I highly doubt that ;)

I suggest the following:
- make backups of all your guests and important config files (first!)
- update your initramfs, retry booting the PVE 7 kernel
- if that fails, try to collect a log of the failed boot and post it here
 
I am not very familiar with this virtualization environment; could you please specify step by step how to proceed? Thank you.
 
  1. backup all your guests (this should have been done before the upgrade as well, but better twice than not at all)
  2. run update-initramfs -u -k all (see the sketch below)
  3. reboot into the 5.15.35-1-pve kernel
  4. if the reboot fails, collect the output (e.g., via serial console, ...)
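
A rough sketch of those steps on the shell, assuming a single guest with the hypothetical VM ID 100 and a backup storage named 'backup' (adjust both to your setup):

Code:
# 1. back up the guest (repeat per VMID, or use the GUI instead)
vzdump 100 --storage backup --mode snapshot
# 2. rebuild the initramfs for all installed kernels
update-initramfs -u -k all
# 3. reboot and pick the 5.15.35-1-pve entry in the boot menu
reboot
# 4. if the boot fails again, note down the last messages on the (serial) console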
 
(attached screenshots: 1.jpg, 2.jpg, 3.jpg, 4.jpg)

Same problems. At this point, even if I try to boot with an older kernel version, it stops at the initramfs prompt.
 
could you please get the contents of '/etc/lvm/lvm.conf' inside the initramfs and the kernel cmdline ('cat /proc/cmdline')?
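
for example, directly at the (initramfs) prompt:

Code:
(initramfs) cat /proc/cmdline
(initramfs) cat /etc/lvm/lvm.conf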
 
cat /proc/cmdline:
(attached screenshot: cmdline.jpg)
/etc/lvm/lvm.conf

Code:
config {
    checks = 1
    abort_on_errors = 0
    profile_dir = "/etc/lvm/profile"
}

devices {
    dir = "/dev"
    scan = [ "/dev" ]
    obtain_device_list_from_udev = 1
    external_device_info_source = "none"
    sysfs_scan = 1
    scan_lvs = 0
    multipath_component_detection = 1
    md_component_detection = 1
    fw_raid_component_detection = 0
    md_chunk_alignment = 1
    data_alignment_detection = 1
    data_alignment = 0
    data_alignment_offset_detection = 1
    ignore_suspended_devices = 0
    ignore_lvm_mirrors = 1
    require_restorefile_with_uuid = 1
    pv_min_size = 2048
    issue_discards = 0
    allow_changes_with_duplicate_pvs = 0
    allow_mixed_block_sizes = 0
}

allocation {
    maximise_cling = 1
    use_blkid_wiping = 1
    wipe_signatures_when_zeroing_new_lvs = 1
    mirror_logs_require_separate_pvs = 0
}

log {
    verbose = 0
    silent = 0
    syslog = 1
    overwrite = 0
    level = 0
    command_names = 0
    prefix = "  "
    activation = 0
    debug_classes = [ "memory", "devices", "io", "activation", "allocation", "metadata", "cache", "locking", "lvmpolld", "dbus" ]
}

backup {
    backup = 1
    backup_dir = "/etc/lvm/backup"
    archive = 1
    archive_dir = "/etc/lvm/archive"
    retain_min = 10
    retain_days = 30
}

shell {
    history_size = 100
}

global {
    umask = 077
    test = 0
    units = "r"
    si_unit_consistency = 1
    suffix = 1
    activation = 0
    proc = "/proc"
    etc = "/etc"
    wait_for_locks = 1
    locking_dir = "/run/lock/lvm"
    prioritise_write_locks = 1
    abort_on_internal_errors = 0
    metadata_read_only = 0
    mirror_segtype_default = "raid1"
    raid10_segtype_default = "raid10"
    sparse_segtype_default = "thin"
    use_lvmlockd = 0
    system_id_source = "none"
    use_lvmpolld = 1
    notify_dbus = 1
}

activation {
    checks = 0
    udev_sync = 1
    udev_rules = 1
    retry_deactivation = 1
    missing_stripe_filler = "error"
    raid_region_size = 2048
    auto_activation_volume_list = [ "mandragora", "vm-volume" ]
    raid_fault_policy = "warn"
    mirror_image_fault_policy = "remove"
    mirror_log_fault_policy = "allocate"
    snapshot_autoextend_threshold = 100
    snapshot_autoextend_percent = 20
    thin_pool_autoextend_threshold = 100
    thin_pool_autoextend_percent = 20
    monitoring = 1
    activation_mode = "degraded"
}

dmeventd {
}

devices {
     global_filter=["r|/dev/zd.*|"]
}
 
this line looks suspicious:

Code:
    auto_activation_volume_list = [ "mandragora", "vm-volume" ]

no idea where those entries come from (certainly not from anything PVE does ;)), but they likely prevent auto-activation of your actual VG 'pve'.
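
for reference: if such a list is kept at all, the root VG has to be listed too, otherwise it is not auto-activated at boot. E.g. something like:

Code:
    # either remove the line entirely, or make sure the root VG 'pve' is included:
    auto_activation_volume_list = [ "pve", "mandragora", "vm-volume" ]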

I'd try the following (see the sketch below):
- in the initramfs prompt, run lvm vgchange -ay pve (should activate the VG)
- press Ctrl-D to let the initrd attempt booting again
- if that doesn't work:
-- mount the root LV to the target directory (IIRC 'root')
-- press Ctrl-D to let the initrd attempt booting again
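
something like this at the (initramfs) prompt, assuming the default PVE volume names (VG 'pve', root LV 'root', target directory '/root'):

Code:
lvm vgchange -ay pve
mount /dev/mapper/pve-root /root
exit    # same effect as Ctrl-D: continue booting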

if that doesn't work: booting a live ISO, activating your 'pve' VG, mounting the root LV and the other partitions of your main disk, chrooting into that directory, editing lvm.conf and updating the initramfs should work as well (the standard 'chroot recovery' procedure; you should find plenty of howtos for the exact steps!)
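
roughly, from a live ISO (a sketch only; device and volume names assume the default PVE layout with VG 'pve' and root LV 'root', adjust to your disks):

Code:
# activate the VG and mount the root filesystem
vgchange -ay pve
mount /dev/pve/root /mnt
# if /boot or /boot/efi are separate partitions, mount them under /mnt as well
# bind-mount the pseudo filesystems and change root
mount --bind /dev  /mnt/dev
mount --bind /proc /mnt/proc
mount --bind /sys  /mnt/sys
chroot /mnt
# inside the chroot: drop the auto_activation_volume_list line from lvm.conf,
# then rebuild the initramfs
nano /etc/lvm/lvm.conf
update-initramfs -u -k all
exit
reboot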
 
I disabled the line: auto_activation_volume_list = [ "mandragora", "vm-volume" ]

lvm vgchange -ay pve:

(attached screenshot: 5.JPG)

Not working.

Can you please be a little more explicit about this step: "mount the root LV to the target directory (IIRC 'root')"?
 
if activation doesn't work, that won't help either. please proceed with the live CD route.
 
