I recently performed a pveupgrade. During the update my VM went haywire and became unresponsive. I rebooted the host via SSH and after coming back online I was unable to mount my LUKS-encrypted disk image. I'm getting "No key available with this passphrase." -- Please keep reading before you say "wrong password" or "container is corrupt lil homie" -- I'm almost certain the image is NOT corrupt and I know for an absolute fact that the password is correct for two reasons:
1) Using the KeePassXC password manager, I have tried both the copy & paste feature and the auto-type feature (where it switches back to the previous window and automagically types the password using some kind of "SendKeys" API to simulate typing on the keyboard).
2) I've dumped the header and confirmed it's not corrupt and that the password is correct (see below).
Due to unfortunate circumstances I can't scp a copy of the image to where I need it, so I'm kinda stuck with it where it is right now. I'm almost completely convinced something changed on my system during the upgrade/crash/reboot. Let me back up a little; here's some detail:
The following is what was upgraded about a week prior to my issue; I'm not sure if this is relevant, but I'm including it for completeness:
Code:
Start-Date: 2024-07-02 10:53:36
Commandline: apt-get dist-upgrade
Install: linux-image-6.1.0-22-amd64:amd64 (6.1.94-1, automatic)
Upgrade: libcurl4:amd64 (7.88.1-10+deb12u5, 7.88.1-10+deb12u6), udev:amd64 (252.22-1~deb12u1, 252.26-1~deb12u2), python3.11:amd64 (3.11.2-6, 3.11.2-6+deb12u2),
libcurl3-gnutls:amd64 (7.88.1-10+deb12u5, 7.88.1-10+deb12u6), openssh-client:amd64 (1:9.2p1-2+deb12u2, 1:9.2p1-2+deb12u3), libgdk-pixbuf2.0-bin:amd64 (2.42.10+dfsg-1+b1, 2.42.10+dfsg-1+deb12u1),
systemd-timesyncd:amd64 (252.22-1~deb12u1, 252.26-1~deb12u2), libpam-systemd:amd64 (252.22-1~deb12u1, 252.26-1~deb12u2), libsystemd0:amd64 (252.22-1~deb12u1, 252.26-1~deb12u2),
libfreetype6:amd64 (2.12.1+dfsg-5, 2.12.1+dfsg-5+deb12u3), libcjson1:amd64 (1.7.15-1, 1.7.15-1+deb12u1), libnss-systemd:amd64 (252.22-1~deb12u1, 252.26-1~deb12u2),
openssh-server:amd64 (1:9.2p1-2+deb12u2, 1:9.2p1-2+deb12u3), libpython3.11-minimal:amd64 (3.11.2-6, 3.11.2-6+deb12u2), libgdk-pixbuf-2.0-0:amd64 (2.42.10+dfsg-1+b1, 2.42.10+dfsg-1+deb12u1),
libglib2.0-data:amd64 (2.74.6-2+deb12u2, 2.74.6-2+deb12u3), systemd:amd64 (252.22-1~deb12u1, 252.26-1~deb12u2), libudev1:amd64 (252.22-1~deb12u1, 252.26-1~deb12u2),
libssl3:amd64 (3.0.11-1~deb12u2, 3.0.13-1~deb12u1), libpython3.11:amd64 (3.11.2-6, 3.11.2-6+deb12u2), linux-image-amd64:amd64 (6.1.90-1, 6.1.94-1),
bash:amd64 (5.2.15-2+b2, 5.2.15-2+b7), base-files:amd64 (12.4+deb12u5, 12.4+deb12u6), libpython3.11-stdlib:amd64 (3.11.2-6, 3.11.2-6+deb12u2),
gnutls-bin:amd64 (3.7.9-2+deb12u2, 3.7.9-2+deb12u3), distro-info-data:amd64 (0.58+deb12u1, 0.58+deb12u2), libseccomp2:amd64 (2.5.4-1+b3, 2.5.4-1+deb12u1),
libglib2.0-0:amd64 (2.74.6-2+deb12u2, 2.74.6-2+deb12u3), openssh-sftp-server:amd64 (1:9.2p1-2+deb12u2, 1:9.2p1-2+deb12u3), nano:amd64 (7.2-1, 7.2-1+deb12u1),
python3.11-minimal:amd64 (3.11.2-6, 3.11.2-6+deb12u2), libsystemd-shared:amd64 (252.22-1~deb12u1, 252.26-1~deb12u2), systemd-sysv:amd64 (252.22-1~deb12u1, 252.26-1~deb12u2),
python3-idna:amd64 (3.3-1, 3.3-1+deb12u1), libgnutls30:amd64 (3.7.9-2+deb12u2, 3.7.9-2+deb12u3), curl:amd64 (7.88.1-10+deb12u5, 7.88.1-10+deb12u6),
libgnutlsxx30:amd64 (3.7.9-2+deb12u2, 3.7.9-2+deb12u3), libgnutls-dane0:amd64 (3.7.9-2+deb12u2, 3.7.9-2+deb12u3), dns-root-data:amd64 (2023010101, 2024041801~deb12u1),
postfix:amd64 (3.7.10-0+deb12u1, 3.7.11-0+deb12u1), openssl:amd64 (3.0.11-1~deb12u2, 3.0.13-1~deb12u1), libgdk-pixbuf2.0-common:amd64 (2.42.10+dfsg-1, 2.42.10+dfsg-1+deb12u1)
End-Date: 2024-07-02 11:03:40
Here are the upgrades that ran immediately before I rebooted the system and hit this LUKS issue.
Code:
# apt update
# apt list --upgradable
# pveupgrade
Start-Date: 2024-07-14 04:26:54
Commandline: apt-get dist-upgrade
Upgrade: krb5-locales:amd64 (1.20.1-2+deb12u1, 1.20.1-2+deb12u2), libgssapi-krb5-2:amd64 (1.20.1-2+deb12u1, 1.20.1-2+deb12u2), pve-qemu-kvm:amd64 (9.0.0-3, 9.0.0-6),
libkrb5support0:amd64 (1.20.1-2+deb12u1, 1.20.1-2+deb12u2), proxmox-backup-file-restore:amd64 (3.2.4-1, 3.2.7-1), libkrb5-3:amd64 (1.20.1-2+deb12u1, 1.20.1-2+deb12u2),
libk5crypto3:amd64 (1.20.1-2+deb12u1, 1.20.1-2+deb12u2), proxmox-backup-client:amd64 (3.2.4-1, 3.2.7-1)
End-Date: 2024-07-14 04:26:58
EDIT: I found a post from a few weeks ago where I included the output of pveversion -v. Here is exactly what was updated before rebooting!
Code:
[user@home ~]$ pveversion -v > current-output_Jul-14-2024.txt
[user@home ~]$ diff current-output_Jul-14-2024.txt previous-output_Jun-25-2024.txt
1c1
< proxmox-ve: 8.2.0 (running kernel: 6.8.8-2-pve)
---
> proxmox-ve: 8.2.0 (running kernel: 6.8.8-1-pve)
5,6c5
< proxmox-kernel-6.8: 6.8.8-2
< proxmox-kernel-6.8.8-2-pve-signed: 6.8.8-2
---
> proxmox-kernel-6.8: 6.8.8-1
31c30
< libpve-storage-perl: 8.2.3
---
> libpve-storage-perl: 8.2.2
37,38c36,37
< proxmox-backup-client: 3.2.7-1
< proxmox-backup-file-restore: 3.2.7-1
---
> proxmox-backup-client: 3.2.4-1
> proxmox-backup-file-restore: 3.2.4-1
54c53
< pve-qemu-kvm: 9.0.0-6
---
> pve-qemu-kvm: 8.1.5-6
I noticed a kernel fault (?) during the update. Again, not sure if this is relevant, but it's here for completeness:
Code:
[224718.034464] BUG: unable to handle page fault for address: 0000615b2d0cdca3
[224718.034476] #PF: supervisor read access in kernel mode
[224718.034483] #PF: error_code(0x0001) - permissions violation
[224718.034488] PGD 121f11067 P4D 121f11067 PUD 182b8a067 PMD 10ce6a067 PTE 163f4f025
[224718.034500] Oops: 0001 [#1] PREEMPT SMP NOPTI
[224718.034507] CPU: 3 PID: 3125 Comm: kvm Tainted: P O 6.8.8-2-pve #1
[224718.034516] Hardware name: Gigabyte Technology Co., Ltd. B450M DS3H/B450M DS3H-CF, BIOS F66 03/22/2024
[224718.034523] RIP: 0010:__check_heap_object+0x61/0x120
[224718.034534] Code: 2b 15 ab 59 69 01 48 c1 fa 06 21 c8 48 c1 e2 0c 48 03 15 aa 59 69 01 48 39 d7 0f 82 8c 00 00 00 48 89 fb 84 c0 75 50 48 89 f8 <41> 8b 4d 18 48 29 d0 48 99 48 f7 f9 89 d3 66 90 41 8b 85 d0 00 00
[224718.034548] RSP: 0018:ffffc1520264f840 EFLAGS: 00010246
[224718.034555] RAX: ffff9e3cd445fe0c RBX: ffff9e3cd445fe0c RCX: 0000000000000000
[224718.034562] RDX: ffff9e3cd445c000 RSI: 0000000000000168 RDI: ffff9e3cd445fe0c
[224718.034569] RBP: ffffc1520264f860 R08: 0000000000000000 R09: 0000000000000000
[224718.034576] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000168
[224718.034583] R13: 0000615b2d0cdc8b R14: 0000000000000000 R15: 0000615b30858a00
[224718.034590] FS: 000079dae6866300(0000) GS:ffff9e4bbe380000(0000) knlGS:0000000000000000
[224718.034598] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[224718.034604] CR2: 0000615b2d0cdca3 CR3: 000000010d170000 CR4: 0000000000350ef0
[224718.034611] Call Trace:
[224718.034616] <TASK>
[224718.034622] ? show_regs+0x6d/0x80
[224718.034630] ? __die+0x24/0x80
[224718.034637] ? page_fault_oops+0x176/0x500
[224718.034644] ? srso_return_thunk+0x5/0x5f
[224718.034655] ? do_user_addr_fault+0x2f9/0x6b0
[224718.034663] ? exc_page_fault+0x83/0x1b0
[224718.034671] ? asm_exc_page_fault+0x27/0x30
[224718.034682] ? __check_heap_object+0x61/0x120
[224718.034690] __check_object_size+0x293/0x300
[224718.034698] do_sys_poll+0x120/0x610
[224718.034728] ? _copy_from_user+0x2f/0x80
[224718.034735] ? srso_return_thunk+0x5/0x5f
[224718.034743] __x64_sys_ppoll+0xde/0x170
[224718.034751] x64_sys_call+0x18e5/0x24b0
[224718.034758] do_syscall_64+0x81/0x170
[224718.034765] ? __pfx_pollwake+0x10/0x10
[224718.034770] ? srso_return_thunk+0x5/0x5f
[224718.034776] ? _copy_to_user+0x25/0x50
[224718.034782] ? srso_return_thunk+0x5/0x5f
[224718.034788] ? put_timespec64+0x3d/0x70
[224718.034794] ? srso_return_thunk+0x5/0x5f
[224718.034800] ? poll_select_finish+0x1ed/0x260
[224718.034807] ? srso_return_thunk+0x5/0x5f
[224718.034812] ? __rseq_handle_notify_resume+0xa5/0x4d0
[224718.034823] ? srso_return_thunk+0x5/0x5f
[224718.034829] ? syscall_exit_to_user_mode+0x89/0x260
[224718.034836] ? srso_return_thunk+0x5/0x5f
[224718.034842] ? do_syscall_64+0x8d/0x170
[224718.034848] ? srso_return_thunk+0x5/0x5f
[224718.034854] ? syscall_exit_to_user_mode+0x89/0x260
[224718.034861] ? srso_return_thunk+0x5/0x5f
[224718.034867] ? do_syscall_64+0x8d/0x170
[224718.034873] ? srso_return_thunk+0x5/0x5f
[224718.034879] ? do_syscall_64+0x8d/0x170
[224718.034884] ? do_syscall_64+0x8d/0x170
[224718.034890] ? do_syscall_64+0x8d/0x170
[224718.034895] ? srso_return_thunk+0x5/0x5f
[224718.034902] entry_SYSCALL_64_after_hwframe+0x78/0x80
[224718.034909] RIP: 0033:0x79daf158b256
[224718.034925] Code: 7c 24 08 e8 6c 95 f8 ff 4c 8b 54 24 18 48 8b 74 24 10 41 b8 08 00 00 00 41 89 c1 48 8b 7c 24 08 4c 89 e2 b8 0f 01 00 00 0f 05 <48> 3d 00 f0 ff ff 77 32 44 89 cf 89 44 24 08 e8 b6 95 f8 ff 8b 44
[224718.034939] RSP: 002b:00007ffd3b5ec690 EFLAGS: 00000293 ORIG_RAX: 000000000000010f
[224718.034947] RAX: ffffffffffffffda RBX: 0000615b2f6322d0 RCX: 000079daf158b256
[224718.034954] RDX: 00007ffd3b5ec6b0 RSI: 000000000000004b RDI: 0000615b30858910
[224718.034961] RBP: 00007ffd3b5ec71c R08: 0000000000000008 R09: 0000000000000000
[224718.034968] R10: 0000000000000000 R11: 0000000000000293 R12: 00007ffd3b5ec6b0
[224718.034974] R13: 0000615b2f6322d0 R14: 0000615b2dfa07c8 R15: 00007ffd3b5ec720
[224718.034986] </TASK>
[224718.034989] Modules linked in: veth dm_crypt ebtable_filter ebtables ip_set ip6table_raw iptable_raw ip6table_filter ip6_tables iptable_filter nf_tables libcrc32c bonding tls softdog sunrpc nfnetlink_log nfnetlink binfmt_misc amdgpu intel_rapl_msr intel_rapl_common snd_hda_codec_realtek snd_hda_codec_generic radeon amdxcp drm_exec edac_mce_amd snd_hda_intel gpu_sched crct10dif_pclmul drm_buddy snd_intel_dspcfg polyval_clmulni drm_suballoc_helper snd_intel_sdw_acpi polyval_generic ghash_clmulni_intel drm_ttm_helper snd_hda_codec sha256_ssse3 ttm sha1_ssse3 aesni_intel snd_hda_core drm_display_helper snd_hwdep crypto_simd zfs(PO) cec cryptd snd_pcm rc_core i2c_algo_bit snd_timer video snd soundcore pcspkr gigabyte_wmi wmi_bmof rapl k10temp spl(O) mac_hid vhost_net vhost vhost_iotlb tap kvm_amd ccp kvm irqbypass efi_pstore dmi_sysfs ip_tables x_tables autofs4 xhci_pci xhci_pci_renesas r8169 crc32_pclmul e1000e i2c_piix4 realtek ahci xhci_hcd libahci wmi gpio_amdpt
[224718.035145] CR2: 0000615b2d0cdca3
[224718.035150] ---[ end trace 0000000000000000 ]---
[224718.035155] RIP: 0010:__check_heap_object+0x61/0x120
[224718.035162] Code: 2b 15 ab 59 69 01 48 c1 fa 06 21 c8 48 c1 e2 0c 48 03 15 aa 59 69 01 48 39 d7 0f 82 8c 00 00 00 48 89 fb 84 c0 75 50 48 89 f8 <41> 8b 4d 18 48 29 d0 48 99 48 f7 f9 89 d3 66 90 41 8b 85 d0 00 00
[224718.035176] RSP: 0018:ffffc1520264f840 EFLAGS: 00010246
[224718.035182] RAX: ffff9e3cd445fe0c RBX: ffff9e3cd445fe0c RCX: 0000000000000000
[224718.035189] RDX: ffff9e3cd445c000 RSI: 0000000000000168 RDI: ffff9e3cd445fe0c
[224718.035196] RBP: ffffc1520264f860 R08: 0000000000000000 R09: 0000000000000000
[224718.035203] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000168
[224718.035209] R13: 0000615b2d0cdc8b R14: 0000000000000000 R15: 0000615b30858a00
[224718.035216] FS: 000079dae6866300(0000) GS:ffff9e4bbe380000(0000) knlGS:0000000000000000
[224718.035224] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[224718.035230] CR2: 0000615b2d0cdca3 CR3: 000000010d170000 CR4: 0000000000350ef0
[224718.035237] note: kvm[3125] exited with irqs disabled
[332894.903902] perf: interrupt took too long (3106 > 2500), lowering kernel.perf_event_max_sample_rate to 64000
[636614.451687] hrtimer: interrupt took 5441 ns
During the update, my SSH sessions started printing a notice every 20–30 seconds. I accidentally deleted the file where I put a copy of the full message, but it was something like this:
Code:
[123456.78] watchdog: BUG: soft lockup - CPU#? stuck for 22s! [??]
The VM became unstable and I think it locked up at one point, but all of the above is on/from the hostnode.
I rebooted the host. When it came back up, I attempted to mount a LUKS image as I always do:
Code:
root@hnode:/root# cryptsetup open /root/luks240.bin luks240
Enter passphrase for /root/luks240.bin:
No key available with this passphrase.
Enter passphrase for /root/luks240.bin:
No key available with this passphrase.
Enter passphrase for /root/luks240.bin:
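In case it helps anyone suggest a next step, these are the kinds of checks I can still run on the hostnode to see whether the host's crypto stack itself broke in the upgrade (just a sketch of ideas, nothing conclusive yet):

```shell
# Inspect the on-disk header in place (read-only, no passphrase needed)
cryptsetup luksDump /root/luks240.bin

# Exercise the kernel crypto the unlock path depends on (KDF + ciphers)
cryptsetup benchmark

# Look for dm-crypt / crypto module errors since boot
dmesg | grep -iE 'dm.?crypt|crypto|aes'
```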
wat do?
tl;dr: Performed pveupgrade and now a LUKS container image cannot be opened ("No key available with this passphrase"), even though the password is correct and can be verified on a different computer via a dumped header.
I've read a handful of posts around the internet and these forums describing similar situations, but none quite like this. A couple of posts mentioned mismatched keyboard layouts between when the password was generated and the most recent attempt to unlock the container. I'm not sure that's even a thing over an SSH session, and my sshd config doesn't seem to have changed.
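If a layout or terminal-input mismatch really were the culprit, one way to take the keyboard out of the equation entirely would be to feed the passphrase on stdin (a sketch, not part of my normal workflow; 'MyPassphrase' is a placeholder):

```shell
# Feed the passphrase on stdin so no keymap/terminal translation is involved.
# With --key-file=- cryptsetup uses the stdin bytes verbatim, so printf '%s'
# (no trailing newline) matches what interactive entry would hash.
printf '%s' 'MyPassphrase' | cryptsetup open --key-file=- /root/luks240.bin luks240
```

If this unlocks while interactive entry fails, the problem is in the input path, not the header.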
Here's what I know:
- On the hostnode I cannot
cryptsetup open
my LUKS container due to the error "No key available with this passphrase", even though the password is being entered into my SSH client correctly (the same exact way I've always done it).
- On the hostnode I can save a
cryptsetup luksHeaderBackup
to a file, scp the file off the server from my home computer, and then successfully use
cryptsetup luksOpen --test-passphrase
to verify the header is intact and the password I have is accurate.
- I created a temporary LUKS container file with a 1-character password and scp'd it to the server; I am not able to open/mount it on the hostnode, whereas it opens without issue on my home computer:
Code:
[user@home ~/server/luks-thing]$ fallocate -l 24M tmp3.bin
[user@home ~/server/luks-thing]$ sudo cryptsetup luksFormat tmp3.bin
WARNING!
========
This will overwrite data on tmp3.bin irrevocably.
Are you sure? (Type 'yes' in capital letters): YES
Enter passphrase for tmp3.bin: A
Verify passphrase: A
[user@home ~/server/luks-thing]$ sudo cryptsetup open tmp3.bin t3
Enter passphrase for tmp3.bin: A
[user@home ~/server/luks-thing]$ sudo mkfs.ext4 /dev/mapper/t3
mke2fs 1.46.5 (30-Dec-2021)
Creating filesystem with 2048 4k blocks and 2048 inodes
Allocating group tables: done
Writing inode tables: done
Creating journal (1024 blocks): done
Writing superblocks and filesystem accounting information: done
[user@home ~/server/luks-thing]$ sudo mount /dev/mapper/t3 t3mnt
[user@home ~/server/luks-thing]$ ls t3mnt/
lost+found
[user@home ~/server/luks-thing]$ sudo umount t3mnt
[user@home ~/server/luks-thing]$ sudo cryptsetup close t3
[user@home ~/server/luks-thing]$ scp ./tmp3.bin root@hnode:/root/tmp3.bin
#######################################################
root@hnode:~# cryptsetup luksOpen tmp3.bin t
Enter passphrase for tmp3.bin: A
No key available with this passphrase.
Enter passphrase for tmp3.bin:
No key available with this passphrase.
Enter passphrase for tmp3.bin:
No key available with this passphrase.
root@hnode:~#
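Next things I plan to try: confirm the bytes on both ends are actually identical (to rule out a transfer or storage problem) and re-run the failing open with debug output. A sketch:

```shell
# Run on BOTH the hostnode and my home computer: digests must match exactly,
# otherwise the copy on the server is not the file I tested at home
sha256sum tmp3.bin

# Verbose trace of the failing unlock attempt on the hostnode
cryptsetup --debug open tmp3.bin t
```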