timed out for waiting the udev queue being empty

Thanks for the detailed writeup. I have the same issue and followed the instructions. Limiting LVM scanning seemed to work (it passed the mentioned tests), but after I made the mdadm changes and rebooted, I was still greeted with a long boot and the udev error.

cat /proc/mdstat

Personalities :
unused devices: <none>
As far as I understand, the "Timed out for waiting the udev queue being empty" error appears when the system stalls for more than about two minutes during boot.

As I understand it, during early boot the system (udev together with LVM and mdadm) scans all the disks, searching for file systems, LVM volumes, and RAID metadata on them.

If there are many disks with many partitions and block devices on them, that scan can take a long time, easily more than two minutes.
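
Just to make the message concrete: once the system is up, you can run the same wait by hand (a minimal sketch; the exact timeout used by the Debian/Proxmox initramfs may differ):

udevadm settle --timeout=120   # wait until the udev event queue is empty, for at most 120 s
echo $?                        # 0 = queue drained in time, non-zero = it timed out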

Configuring LVM and mdadm this way reduced the time the system spends during boot, because it no longer searches for LVM volumes and RAID arrays on the ZFS and Ceph disks.
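
For reference, a rough sketch of the kind of configuration meant here (the filter patterns for ZFS zvols and Ceph RBD devices are only examples and must match your own setup; the file paths are the Debian/Proxmox defaults):

# /etc/lvm/lvm.conf -- do not scan ZFS zvols or Ceph RBD devices for LVM metadata
devices {
    global_filter = [ "r|/dev/zd.*|", "r|/dev/rbd.*|", "a|.*|" ]
}

# /etc/mdadm/mdadm.conf -- disable auto-assembly if you have no MD RAID arrays
# (as the empty /proc/mdstat above suggests)
AUTO -all

# rebuild the initramfs so both changes also take effect during early boot
update-initramfs -u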

If this error appears, but the system starts normally, then everything is fine.

I have only seen this problem on servers with many disks; the server where it occurred has about 20 disks.

Before changing the settings, the boot time was 5-6 minutes and the system did not boot every time; sometimes it dropped into an initramfs error (screenshots in my previous posts).

The initramfs error only appeared when startup took longer than usual, which is why I conclude that once a certain timeout is exceeded, the boot drops into that error.

By reducing the boot time through the LVM and mdadm configuration, I solved this problem.

Now the server with 20 hard drives boots in 2-4 minutes instead of the earlier 5-6.

How many hard drives do you have in your server, and how long does it take to boot?
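
If you want to compare numbers, one way to measure it on a systemd-based system such as Proxmox VE (note this does not count the server's own POST time):

systemd-analyze                      # total kernel + userspace boot time
systemd-analyze blame | head -n 15   # the slowest units, e.g. udev or device scans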
 
Ah great info and good spot.

I do, as it happens, have lots of drives: 15 including the OS drive, a couple of which are aging and have bad-block issues. They are mounted in fstab by filesystem UUID, so maybe it is what it is.

This is a Dell R620, which has a VERY long POST anyway. I'm not running hardware diagnostics on every boot, but it still takes forever to get through the various stages (RAID, inventory, etc.). I guess the mentioned timeout only starts counting after GRUB? Unfortunately I didn't time it before the changes, but from reboot to console/terminal it's ~12 minutes, with probably half of that being POST.

I do have a ZFS pool... why does cat /proc/mdstat report "unused devices: <none>"?
 
