timed out for waiting the udev queue being empty

Kriengkrai.K9
May 19, 2024
Code:
timed out for waiting the udev queue being empty
Gave up waiting for root file system device. Common problems:
- Boot args (cat /proc/cmdline)
- Check rootdelay= (did the system wait long enough?)
- Missing modules (cat /proc/modules; ls /dev)
ALERT! /dev/mapper/pve-root does not exist. Dropping to a shell!
(initramfs)
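For anyone who ends up at that prompt: the message itself lists the things to check, and the array can sometimes be brought up by hand. A rough sketch of what can be tried from the (initramfs) shell (device names and the availability of the LVM tools depend on your setup, so treat this as an outline, not a recipe):

Bash:
# what the message suggests checking: boot args, modules, visible devices
cat /proc/cmdline
grep -E 'raid|md_mod|nvme' /proc/modules
ls /dev/md* /dev/mapper 2>/dev/null

# was the md array assembled at all?
cat /proc/mdstat

# try assembling it by hand, then activating LVM on top of it
mdadm --assemble --scan
vgchange -ay      # only works if the LVM tools are present in the initramfs
exit              # if /dev/mapper/pve-root now exists, this lets the boot continue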
 
Hi

I have exactly the same problem. I have installed the latest version, PVE 8.2.2. The root is on an md0 RAID1 array; everything else is on ZFS.

The operating system boots inconsistently; that is, during one restart, it may hang for 2-4 minutes and then load, while during another, it may hang for 5 minutes and fail to load. This happens completely randomly.


The problem is observed specifically during boot-up. No issues have been observed during operation so far.

Key moments are captured in screenshots:

Grub - 1:16


Immediately after Grub - 1:19
Booting Proxmox VE GNU/Linux
Loading Initial ramdisk...

While the OS is loading, it hangs at this point for 2 minutes and then begins to boot. Normally, after GRUB, the boot should start immediately, or within 2 seconds at most. However, in the case where the OS did not load, it hung on this screen until 03:30, i.e. it hung for 2 minutes and 20 seconds.

Here it writes: Timed out waiting for the udev queue to be empty.
And it hangs like this for almost 2 more minutes, until 05:14.

At 05:14: Timed out waiting for the udev queue to be empty. mdadm: error opening /dev/md?*: No such file or directory.
The root filesystem is on exactly that RAID array.

After another 3 seconds it moves on to the next screen.


At that screen it does not respond to the keyboard at all, not even to Ctrl+Alt+Del.
After one or two restarts, the system will boot up with a long hang lasting 2-4 minutes.
All hardware components were replaced one by one from a spare-parts kit for testing (motherboard, RAM, power supply, NVMe disks, SAS controllers); only the processor was not replaced. Hardware issues can therefore almost certainly be ruled out.
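Since the messages point at a timeout rather than a broken device, the rootdelay= hint from the initramfs output above is a cheap experiment: give the kernel more time to wait for the root device before giving up. A sketch, assuming GRUB is the bootloader and 60 seconds is just an arbitrary example value:

Bash:
# append rootdelay to the kernel command line in /etc/default/grub, e.g.:
#   GRUB_CMDLINE_LINUX_DEFAULT="quiet rootdelay=60"
nano /etc/default/grub

update-grub                  # regenerate the GRUB config
# or, on systems whose boot entries are managed by proxmox-boot-tool:
proxmox-boot-tool refresh

This does not explain why udev takes so long; it only keeps the initramfs from giving up too early.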

An important observation is that if all HDD and SSD drives are disconnected, leaving only the NVMe drives where the root is located, the system boots immediately without any delays.
It seems like the bootloader doesn't like that there are so many drives in the server—18 in total.
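If it helps to narrow this down, the difference between "all 18 drives attached" and "NVMe only" can be measured after a successful boot with standard tooling; nothing below is specific to this setup:

Bash:
# per-phase boot timing; the initrd phase is where the udev wait happens
systemd-analyze

# early-boot messages about udev, md assembly and device probing
dmesg | grep -iE 'udev|md[0-9]|timed out'
journalctl -b -p warning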

If you need me to send any additional data or command outputs, I'll send them all.
 
Kriengkrai.K9
  • What PVE version do you use?
  • Does your system not boot at all, or does it boot intermittently like mine?
  • How many disks are in your system?
 
What I didn't realise is that you can use GRUB's advanced boot options to select a previous version of PVE / PBS.
This enabled me to boot correctly.
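For reference, the same thing can also be set permanently from a running system instead of catching the GRUB menu on every boot; a sketch using proxmox-boot-tool (the kernel version below is only an example taken from the version lists later in the thread; check the tool's usage output for the exact syntax on your version):

Bash:
proxmox-boot-tool kernel list               # show the kernels that are installed
proxmox-boot-tool kernel pin 6.8.4-3-pve    # example: always boot this version by default
proxmox-boot-tool kernel unpin              # later: go back to the newest kernel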
 
The problem is probably widespread; I hope it will be fixed soon.

Kriengkrai.K9,
Do you have this problem after the update too?


Try to boot your PVE using the installation ISO: advanced mode - Rescue Boot.

As I understand it, the problem is in the bootloader; in Rescue Boot mode, booting bypasses the regular bootloader.
Hopefully that helps you.
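If the bootloader really is the suspect, it may also be worth re-installing or refreshing it once the box is up via Rescue Boot. A hedged sketch; which branch applies depends on whether the ESPs are managed by proxmox-boot-tool or by a classic GRUB install (replace /dev/sdX with the real boot disk):

Bash:
proxmox-boot-tool status      # shows whether the ESPs are managed by proxmox-boot-tool
proxmox-boot-tool refresh     # re-copies kernels/initrds and regenerates the boot entries

# classic GRUB on legacy BIOS instead:
grub-install /dev/sdX
update-grub
update-initramfs -u -k all    # rebuild the initramfs images as well, since the hang happens there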


Screenshots are attached.
 
This just happened to me twice as well; it refused to boot on the first attempt, too.
I had to use Rescue Boot and free up some space in the /boot directory, even though there appeared to be enough space in the first place.
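In case someone else runs into the same thing, a sketch of checking and freeing /boot (the package name is only an example taken from the pveversion output later in the thread; keep the running kernel and at least one known-good older one):

Bash:
df -h /boot
ls -lh /boot/vmlinuz-* /boot/initrd.img-*

dpkg -l 'proxmox-kernel-*' 'pve-kernel-*' | grep ^ii    # installed kernel packages
apt remove proxmox-kernel-6.8.4-2-pve-signed            # example: remove an old, unused kernel
update-initramfs -u                                     # apt normally triggers this on its own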
 
Hello

Just to confirm that you are not alone on this issue: I have this problem occurring on a brand-new build on an old Dell PowerEdge server. It boots up sporadically and often fails exactly as Dimitris7 describes; I would be uploading exactly the same screenshots as he did.

Purely FYI: I'm on the latest Proxmox version, booting off a single SSD, with two RAID1 SATA drives for data/systems etc. I mention this because, although all drives are found during boot-up, when I take the two RAID drives out it 'seems' to boot up faster and more reliably (not 100%) into Proxmox.

I'm going to try the Rescue Boot option, but I've also downloaded the previous stable build, 8.1 from Feb 2024, to try; with very little configured or set up so far, I haven't much to lose.
 
I resolved my issue by replacing my entire server and swapping the drives over... I suspect that my issue might have been due to underlying hardware failure.
 
Similar issue here: Dell T440 with a PERC controller, booting from two SSDs in RAID1, plus four 4 TB spinners (WD Red) in RAID5.

In my case the system always boots, but the behaviour is a little different:
once GRUB starts I get a blank screen with a cursor, and it can stay in that state for more than 5 minutes. In the meantime I can see the HDD LEDs flashing in turn, and then I get:

Code:
Timed out for waiting the udev queue being empty.
Found volume group "Raid5" using metadata type lvm2
Found volume group "pve" using metadata type lvm2
6 logical volume(s) in volume group "Raid5" now active
13 logical volume(s) in volume group "pve" now active
/dev/mapper/pve-root: clean, 103480/6291456 files, 6432038/25165824 blocks

Then the normal login prompt appears.
 
Hello everyone!

The issue persists. I have a hypothesis that if I move the root and bootloader to different disks, the problem might go away. It’s possible that this failure occurs specifically on my NVMe disks for some reason. This is just a hypothesis, and I will check it and update you.

My version:

Bash:
root@:~# pveversion -v
proxmox-ve: 8.2.0 (running kernel: 6.8.12-1-pve)
pve-manager: 8.2.4 (running version: 8.2.4/faa83925c9641325)
proxmox-kernel-helper: 8.1.0
proxmox-kernel-6.8: 6.8.12-1
proxmox-kernel-6.8.12-1-pve-signed: 6.8.12-1
proxmox-kernel-6.8.4-3-pve-signed: 6.8.4-3
proxmox-kernel-6.8.4-2-pve-signed: 6.8.4-2
ceph-fuse: 17.2.7-pve3
corosync: 3.1.7-pve3
criu: 3.17.1-2
glusterfs-client: 10.3-5
ifupdown2: 3.2.0-1+pmx9
ksm-control-daemon: 1.5-1
libjs-extjs: 7.0.0-4
libknet1: 1.28-pve1
libproxmox-acme-perl: 1.5.1
libproxmox-backup-qemu0: 1.4.1
libproxmox-rs-perl: 0.3.3
libpve-access-control: 8.1.4
libpve-apiclient-perl: 3.3.2
libpve-cluster-api-perl: 8.0.7
libpve-cluster-perl: 8.0.7
libpve-common-perl: 8.2.2
libpve-guest-common-perl: 5.1.4
libpve-http-server-perl: 5.1.0
libpve-network-perl: 0.9.8
libpve-rs-perl: 0.8.9
libpve-storage-perl: 8.2.3
libspice-server1: 0.15.1-1
lvm2: 2.03.16-2
lxc-pve: 6.0.0-1
lxcfs: 6.0.0-pve2
novnc-pve: 1.4.0-3
proxmox-backup-client: 3.2.7-1
proxmox-backup-file-restore: 3.2.7-1
proxmox-firewall: 0.5.0
proxmox-kernel-helper: 8.1.0
proxmox-mail-forward: 0.2.3
proxmox-mini-journalreader: 1.4.0
proxmox-offline-mirror-helper: 0.6.6
proxmox-widget-toolkit: 4.2.3
pve-cluster: 8.0.7
pve-container: 5.1.12
pve-docs: 8.2.3
pve-edk2-firmware: 4.2023.08-4
pve-esxi-import-tools: 0.7.1
pve-firewall: 5.0.7
pve-firmware: 3.13-1
pve-ha-manager: 4.0.5
pve-i18n: 3.2.2
pve-qemu-kvm: 9.0.2-2
pve-xtermjs: 5.3.0-3
qemu-server: 8.2.4
smartmontools: 7.3-pve1
spiceterm: 3.3.0
swtpm: 0.8.0+pve1
vncterm: 1.8.0
zfsutils-linux: 2.2.4-pve1
 
Hi, no NVMes on my side, only SanDisk SSDs as boot disks (RAID1).

Both RAID units are made with the Dell PERC controller.

Not sure, but I think this was not happening with Proxmox v7.

My setup:
Bash:
root@pve:~# pveversion -v
proxmox-ve: 8.2.0 (running kernel: 6.8.8-4-pve)
pve-manager: 8.2.3 (running version: 8.2.3/b4648c690591095f)
proxmox-kernel-helper: 8.1.0
pve-kernel-5.15: 7.4-15
pve-kernel-5.13: 7.1-9
proxmox-kernel-6.8: 6.8.8-4
proxmox-kernel-6.8.8-4-pve-signed: 6.8.8-4
pve-kernel-5.4: 6.4-15
pve-kernel-5.15.158-2-pve: 5.15.158-2
pve-kernel-5.15.39-3-pve: 5.15.39-3
pve-kernel-5.13.19-6-pve: 5.13.19-15
pve-kernel-5.4.174-2-pve: 5.4.174-2
pve-kernel-5.4.34-1-pve: 5.4.34-2
ceph-fuse: 16.2.11+ds-2
corosync: 3.1.7-pve3
criu: 3.17.1-2
glusterfs-client: 10.3-5
ifupdown: residual config
ifupdown2: 3.2.0-1+pmx9
ksm-control-daemon: 1.5-1
libjs-extjs: 7.0.0-4
libknet1: 1.28-pve1
libproxmox-acme-perl: 1.5.1
libproxmox-backup-qemu0: 1.4.1
libproxmox-rs-perl: 0.3.3
libpve-access-control: 8.1.4
libpve-apiclient-perl: 3.3.2
libpve-cluster-api-perl: 8.0.7
libpve-cluster-perl: 8.0.7
libpve-common-perl: 8.2.1
libpve-guest-common-perl: 5.1.3
libpve-http-server-perl: 5.1.0
libpve-network-perl: 0.9.8
libpve-rs-perl: 0.8.9
libpve-storage-perl: 8.2.3
libqb0: 1.0.5-1
libspice-server1: 0.15.1-1
lvm2: 2.03.16-2
lxc-pve: 6.0.0-1
lxcfs: 6.0.0-pve2
novnc-pve: 1.4.0-3
proxmox-backup-client: 3.2.3-1
proxmox-backup-file-restore: 3.2.3-1
proxmox-firewall: 0.5.0
proxmox-kernel-helper: 8.1.0
proxmox-mail-forward: 0.2.3
proxmox-mini-journalreader: 1.4.0
proxmox-offline-mirror-helper: 0.6.6
proxmox-widget-toolkit: 4.2.3
pve-cluster: 8.0.7
pve-container: 5.1.10
pve-docs: 8.2.2
pve-edk2-firmware: 4.2023.08-4
pve-esxi-import-tools: 0.7.1
pve-firewall: 5.0.7
pve-firmware: 3.13-1
pve-ha-manager: 4.0.5
pve-i18n: 3.2.2
pve-qemu-kvm: 8.1.5-6
pve-xtermjs: 5.3.0-3
qemu-server: 8.2.3
smartmontools: 7.3-pve1
spiceterm: 3.3.0
swtpm: 0.8.0+pve1
vncterm: 1.8.0
zfsutils-linux: 2.2.4-pve1
 
Searching around for this kind of issue, I found that disabling mapping checks avoids it, but that seems to me like a way to obfuscate the problem; I would like to know what the problem actually is and how to SOLVE it, not HIDE it.
 
I found that disabling mapping checks avoids it
Please describe in more detail: what exactly did you do? What do you mean by "disabling mapping checks"?
 
