Search results

  1. J

    timed out for waiting the udev queue being empty

    It’s known to work: https://forum.proxmox.com/threads/timed-out-for-waiting-for-udev-queue-being-empty.129481/ Add thin_check_options = [ "-q", "--skip-mappings" ] to /etc/lvm/lvm.conf, then run update-initramfs -u and reboot.
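A minimal sketch of that fix, assuming a stock Proxmox `/etc/lvm/lvm.conf` with an unindented `global {` section opener. The sketch works on a local copy so it can be tried safely; on a real node edit `/etc/lvm/lvm.conf` itself, then run `update-initramfs -u` and reboot:

```shell
# Work on a copy; fall back to a minimal global section if lvm.conf is absent.
cp /etc/lvm/lvm.conf lvm.conf.work 2>/dev/null || printf 'global {\n}\n' > lvm.conf.work

# Insert the thin_check_options line right after the "global {" opener.
sed -i '/^global {/a thin_check_options = [ "-q", "--skip-mappings" ]' lvm.conf.work

# Show the inserted line for verification.
grep -n 'thin_check_options' lvm.conf.work
```

Note that `--skip-mappings` only shortens the thin-pool metadata check at activation time; as the later post in this thread points out, it works around the udev timeout rather than fixing its root cause.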
  2. J

    timed out for waiting the udev queue being empty

    Oops... this is the German forum... :oops: Maybe an admin could move this to the right section...
  3. J

    timed out for waiting the udev queue being empty

    Searching for this kind of issue, I found that disabling the mapping checks avoids it, but that seems to me like a way to obfuscate the problem. I'd like to know what the problem is and how to SOLVE it, not HIDE it.
  4. J

    timed out for waiting the udev queue being empty

    Hi, no NVMe's on my side, only SanDisk SSDs as boot disks (RAID1). Both RAID units are made with a Dell PERC controller. Not sure, but I think this was not happening with Proxmox v7. My setup: root@pve:~# pveversion -v proxmox-ve: 8.2.0 (running kernel: 6.8.8-4-pve) pve-manager: 8.2.3...
  5. J

    timed out for waiting the udev queue being empty

    Similar matter here: Dell T440 with PERC, booting from two SSDs in RAID1, plus 4 4TB spinners (WD Red) in RAID5. In my case the system always boots and the behaviour is a little different: once GRUB starts I get a blank screen with a cursor, and it can stay more than 5 minutes in that situation, in the...
  6. J

    Advice about shared storage

    Hi, I decided to continue with this thread one year later, and the question is the same: 2 VMs with Windows Server 2022, and 1 storage that must be shared between both VMs. Apart from mounting the storage in one VM, sharing the unit, and mounting that share in the remaining VM, is there anybody that could...
  7. J

    Unable to clone CT with MP0 at different storage

    Hi, I’m sure the file system at mp0 is writable. It contains the data directory for a Moodle system that is in production, so it must be writable.
  8. J

    Unable to clone CT with MP0 at different storage

    The RAID5 storage is 4 mechanical 4TB HDDs installed in the server behind a hardware RAID card. They appear to the system as another disk drive, like local and local-lvm, which are on a two-SSD mirror.
  9. J

    Unable to clone CT with MP0 at different storage

    Hi, of course:

        dir: local
            path /var/lib/vz
            content iso,backup,vztmpl

        lvmthin: local-lvm
            thinpool data
            vgname pve
            content rootdir,images

        lvmthin: Raid5
            thinpool Raid5
            vgname Raid5
            content images,rootdir
            nodes pve

        nfs...
  10. J

    Unable to clone CT with MP0 at different storage

    Hi, I'm trying to clone a standard Debian 10 container, but I get this when issuing the clone (rootfs is ok):

        create full clone of mountpoint rootfs (local-lvm:vm-108-disk-0)
        Logical volume "vm-104-disk-0" created.
        Creating filesystem with 5242880 4k blocks and 1310720 inodes
        Filesystem UUID...
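For reference, a clone like the one in the snippet above can be issued from the CLI with `pct clone`. A hypothetical sketch, where the new ID 120 and the storage name Raid5 are assumptions taken from the surrounding posts:

```shell
# Hypothetical sketch: full-clone CT 108 to new ID 120, placing its volumes
# on the target storage (IDs and storage name assumed from the posts above).
pct clone 108 120 --full 1 --storage Raid5

# Before cloning, check where rootfs and mp0 currently live:
# pct config 108
```

The `--storage` option only applies to full clones; for a clone to succeed on the chosen storage, that storage must allow the `rootdir` content type, as shown in the storage configuration above.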
  11. J

    Advice about shared storage

    Hi, I have a Proxmox 6 install with a couple of 240GB mirrored SSDs for booting and Proxmox, a couple of 2TB mirrored SSDs for containers, and 4 2TB SSDs in a RAID5 config for storage. The machine is a Dell T440 and it has a PERC RAID card with battery and 1GB of cache. All of this is installed...

About

The Proxmox community has been around for many years and offers help and support for Proxmox VE, Proxmox Backup Server, and Proxmox Mail Gateway.
We think our community is one of the best thanks to people like you!
