How to make VM ignore missing luks drives?

KevinH90

Aug 16, 2024
Hi. I have a LUKS-encrypted external hard drive plugged into my host machine. I use the host machine to open/decrypt the drive and map it to /dev/mapper/my-luks. That device is then assigned to a VM as a secondary data (non-boot) drive, which the VM mounts. Everything works great, except when the drive is not plugged into the host. In that scenario, the VM fails to boot with the error "Could not open '/dev/mapper/my-luks': No such file or directory". This is to be expected. However, I would like the VM to simply ignore the error and proceed to boot when the drive is not plugged in. Is there a way to do this? I have tried setting ignore_msrs to true on the host, but the VM still failed to boot. Many thanks for any insight.
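
For reference, the host-side commands look roughly like this (/dev/sdX, VM ID 100, and the scsi1 slot are placeholders for my actual setup):

Code:
# Decrypt the external drive on the host and map it under /dev/mapper
cryptsetup open /dev/sdX my-luks

# Attach the mapped device to the VM as a secondary (non-boot) disk
qm set 100 -scsi1 /dev/mapper/my-luks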
 
I think the simplest thing would be to detach it from your VM under the Hardware tab when you need to start without it, then re-attach it when needed. If anyone else has a better solution, I'd love to hear it.

I believe this could be automated with a hookscript (see the minimal skeleton below) that:

1. Checks whether the device exists on the host
2. Removes it from the VM config at VM start if it does not exist
3. Takes no additional action if it does exist

Official PVE Hookscript Doc
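
Per the docs linked above, Proxmox invokes a hookscript with the VM ID and the phase as arguments, so a minimal pre-start skeleton (an untested sketch, using the /dev/mapper path from the original post) might look like:

Code:
#!/bin/bash
# Proxmox passes two arguments to a hookscript: the VM ID and the phase
VMID="$1"
PHASE="$2"

if [ "$PHASE" = "pre-start" ]; then
    # The existence check / detach logic would go here, e.g.:
    # [ -e /dev/mapper/my-luks ] || qm set "$VMID" -delete <driveid>
    :
fi

# A non-zero exit during pre-start aborts the VM start, so exit 0 explicitly
exit 0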
 
Would fstab actually prevent Proxmox from erroring out due to a missing resource on VM boot? If that's the case, that's definitely a much easier option.
 
Hi. Thank you for the replies.

I do not know how this can be addressed with fstab on the VM, since the VM would have to at least begin booting before it can read fstab. The error occurs on the host machine when I try to boot the VM and completely prevents the VM from even starting the boot process, so the VM never gets to the point where fstab is read. (In fact, there is currently no entry for the drive in the VM's fstab at all, as I am not trying to auto-mount it during boot.)

I am currently having to manually add or remove the drive from the VM hardware list as sva described (depending on whether the external drive is plugged into the host), and I am looking into scripting this.

My hope was that Proxmox or QEMU would have some sort of simple option that would just prevent VM boot failure due to missing non-essential drives. But alas, I have still not found such a thing.
 
"nofail" in fstab is the easiest way to keep the VM itself from having a boot problem.
The errors in Proxmox won't disappear, because they are legitimate errors (the device request cannot succeed while the drive is unplugged), but they just go into the rotating logs, and you know the reason for them.
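
A minimal example of such an entry inside the guest (the device and mount point are placeholders for your own setup):

Code:
# /etc/fstab inside the VM: "nofail" lets the boot continue if the disk is absent
/dev/sdb1  /mnt/data  ext4  defaults,nofail  0  2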
 
Please run through this and confirm; it's not tested. I didn't add anything destructive, but I always err on the side of caution when it comes to storage. I recommend trying it against a throwaway VM with a flash drive you don't care about until you confirm it behaves. I imagine this logic should be fairly close to what you're looking for. I'm interested to see how this goes; I can work on making it more robust/clean after your testing.


Code:
#!/bin/bash
# Proxmox hookscript: invoked with the VM ID and the current phase
VMID="$1"
PHASE="$2"
DRIVE_PATH="/dev/mapper/my-luks"   # the mapped LUKS device on the host
DRIVE_ID="<actualdriveid>"         # the VM's drive slot, e.g. scsi1

if [ "$PHASE" = "pre-start" ]; then
    # Check if the drive is already in the VM config
    if qm config "$VMID" | grep -q "^$DRIVE_ID:"; then
        DRIVE_CONFIGURED=true
    else
        DRIVE_CONFIGURED=false
    fi

    if [ -e "$DRIVE_PATH" ]; then
        # Drive exists on the host
        if ! $DRIVE_CONFIGURED; then
            # Drive exists but is not configured: add it
            echo "Drive exists. Adding it to the VM"
            qm set "$VMID" -$DRIVE_ID "$DRIVE_PATH"
        else
            echo "Drive exists and is already configured. No action needed."
        fi
    else
        # Drive does not exist on the host
        if $DRIVE_CONFIGURED; then
            # Drive is configured but missing: remove it so the VM can boot
            echo "Drive doesn't exist. Removing it from the VM"
            qm set "$VMID" -delete "$DRIVE_ID"
        else
            echo "Drive doesn't exist and is not configured. No action needed."
        fi
    fi
fi

exit 0
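
If you do test it, the script also needs to be executable and registered as the VM's hookscript. Assuming the default "local" storage with the snippets content type enabled (the filename is just an example):

Code:
chmod +x /var/lib/vz/snippets/luks-hook.sh
qm set <vmid> --hookscript local:snippets/luks-hook.sh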
