pveupgrade: Now my LUKS containers won't open due to bad password. Password is correct. Header not corrupt.

sillyquota (New Member, joined Jun 25, 2024):
I recently performed a pveupgrade. During the update my VM went haywire and became unresponsive. I rebooted the host via SSH and after coming back online I was unable to mount my LUKS-encrypted disk image. I'm getting "No key available with this passphrase." -- Please keep reading before you say "wrong password" or "container is corrupt lil homie" -- I'm almost certain the image is NOT corrupt and I know for an absolute fact that the password is correct for two reasons:
1) Using the KeePassXC password manager, I have used both the copy & paste feature and the auto-type feature (where it switches back to the previous window and automagically types the password using some kind of "SendKeys" API to simulate typing on the keyboard).
2) I've dumped the header and confirmed it's not corrupt and that the password is correct (see below).
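For anyone wanting to reproduce that check, this is roughly what I did (paths are examples from my setup):

```shell
# On the server: back up the LUKS header (metadata + keyslots only, no data)
cryptsetup luksHeaderBackup /root/luks240.bin \
    --header-backup-file /root/luks240-header.img

# On another machine, after copying the backup over:
# --test-passphrase checks a keyslot without activating any mapping
cryptsetup luksOpen --test-passphrase /tmp/luks240-header.img \
    && echo "passphrase OK"
```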

Due to unfortunate circumstances I can't scp a copy of the image to where I need it... so I'm kinda stuck with it where it is right now. I'm almost completely convinced something changed on my system during the upgrade/crash/reboot. Let me back up a little; here's some detail:

The following is what was upgraded just over a week prior to my issue; I'm not sure if it's relevant, but I'm including it for completeness:
Code:
Start-Date: 2024-07-02  10:53:36
Commandline: apt-get dist-upgrade
Install: linux-image-6.1.0-22-amd64:amd64 (6.1.94-1, automatic)
Upgrade: libcurl4:amd64 (7.88.1-10+deb12u5, 7.88.1-10+deb12u6), udev:amd64 (252.22-1~deb12u1, 252.26-1~deb12u2), python3.11:amd64 (3.11.2-6, 3.11.2-6+deb12u2), libcurl3-gnutls:amd64 (7.88.1-10+deb12u5, 7.88.1-10+deb12u6), openssh-client:amd64 (1:9.2p1-2+deb12u2, 1:9.2p1-2+deb12u3), libgdk-pixbuf2.0-bin:amd64 (2.42.10+dfsg-1+b1, 2.42.10+dfsg-1+deb12u1), systemd-timesyncd:amd64 (252.22-1~deb12u1, 252.26-1~deb12u2), libpam-systemd:amd64 (252.22-1~deb12u1, 252.26-1~deb12u2), libsystemd0:amd64 (252.22-1~deb12u1, 252.26-1~deb12u2), libfreetype6:amd64 (2.12.1+dfsg-5, 2.12.1+dfsg-5+deb12u3), libcjson1:amd64 (1.7.15-1, 1.7.15-1+deb12u1), libnss-systemd:amd64 (252.22-1~deb12u1, 252.26-1~deb12u2), openssh-server:amd64 (1:9.2p1-2+deb12u2, 1:9.2p1-2+deb12u3), libpython3.11-minimal:amd64 (3.11.2-6, 3.11.2-6+deb12u2), libgdk-pixbuf-2.0-0:amd64 (2.42.10+dfsg-1+b1, 2.42.10+dfsg-1+deb12u1), libglib2.0-data:amd64 (2.74.6-2+deb12u2, 2.74.6-2+deb12u3), systemd:amd64 (252.22-1~deb12u1, 252.26-1~deb12u2), libudev1:amd64 (252.22-1~deb12u1, 252.26-1~deb12u2), libssl3:amd64 (3.0.11-1~deb12u2, 3.0.13-1~deb12u1), libpython3.11:amd64 (3.11.2-6, 3.11.2-6+deb12u2), linux-image-amd64:amd64 (6.1.90-1, 6.1.94-1), bash:amd64 (5.2.15-2+b2, 5.2.15-2+b7), base-files:amd64 (12.4+deb12u5, 12.4+deb12u6), libpython3.11-stdlib:amd64 (3.11.2-6, 3.11.2-6+deb12u2), gnutls-bin:amd64 (3.7.9-2+deb12u2, 3.7.9-2+deb12u3), distro-info-data:amd64 (0.58+deb12u1, 0.58+deb12u2), libseccomp2:amd64 (2.5.4-1+b3, 2.5.4-1+deb12u1), libglib2.0-0:amd64 (2.74.6-2+deb12u2, 2.74.6-2+deb12u3), openssh-sftp-server:amd64 (1:9.2p1-2+deb12u2, 1:9.2p1-2+deb12u3), nano:amd64 (7.2-1, 7.2-1+deb12u1), python3.11-minimal:amd64 (3.11.2-6, 3.11.2-6+deb12u2), libsystemd-shared:amd64 (252.22-1~deb12u1, 252.26-1~deb12u2), systemd-sysv:amd64 (252.22-1~deb12u1, 252.26-1~deb12u2), python3-idna:amd64 (3.3-1, 3.3-1+deb12u1), libgnutls30:amd64 (3.7.9-2+deb12u2, 3.7.9-2+deb12u3), curl:amd64 (7.88.1-10+deb12u5, 7.88.1-10+deb12u6), libgnutlsxx30:amd64 (3.7.9-2+deb12u2, 3.7.9-2+deb12u3), libgnutls-dane0:amd64 (3.7.9-2+deb12u2, 3.7.9-2+deb12u3), dns-root-data:amd64 (2023010101, 2024041801~deb12u1), postfix:amd64 (3.7.10-0+deb12u1, 3.7.11-0+deb12u1), openssl:amd64 (3.0.11-1~deb12u2, 3.0.13-1~deb12u1), libgdk-pixbuf2.0-common:amd64 (2.42.10+dfsg-1, 2.42.10+dfsg-1+deb12u1)
End-Date: 2024-07-02  11:03:40

Here are the upgrades that were processed immediately before rebooting the system and hitting this LUKS issue.
Code:
# apt update
# apt list --upgradable
# pveupgrade
Start-Date: 2024-07-14  04:26:54
Commandline: apt-get dist-upgrade
Upgrade: krb5-locales:amd64 (1.20.1-2+deb12u1, 1.20.1-2+deb12u2), libgssapi-krb5-2:amd64 (1.20.1-2+deb12u1, 1.20.1-2+deb12u2), pve-qemu-kvm:amd64 (9.0.0-3, 9.0.0-6), libkrb5support0:amd64 (1.20.1-2+deb12u1, 1.20.1-2+deb12u2), proxmox-backup-file-restore:amd64 (3.2.4-1, 3.2.7-1), libkrb5-3:amd64 (1.20.1-2+deb12u1, 1.20.1-2+deb12u2), libk5crypto3:amd64 (1.20.1-2+deb12u1, 1.20.1-2+deb12u2), proxmox-backup-client:amd64 (3.2.4-1, 3.2.7-1)
End-Date: 2024-07-14  04:26:58

EDIT: I found a post from a few weeks ago where I included the output of pveversion -v. Here is exactly what was updated before rebooting!
Code:
[user@home ~]$ pveversion -v > current-output_Jul-14-2024.txt
[user@home ~]$ diff current-output_Jul-14-2024.txt previous-output_Jun-25-2024.txt
1c1
< proxmox-ve: 8.2.0 (running kernel: 6.8.8-2-pve)
---
> proxmox-ve: 8.2.0 (running kernel: 6.8.8-1-pve)
5,6c5
< proxmox-kernel-6.8: 6.8.8-2
< proxmox-kernel-6.8.8-2-pve-signed: 6.8.8-2
---
> proxmox-kernel-6.8: 6.8.8-1
31c30
< libpve-storage-perl: 8.2.3
---
> libpve-storage-perl: 8.2.2
37,38c36,37
< proxmox-backup-client: 3.2.7-1
< proxmox-backup-file-restore: 3.2.7-1
---
> proxmox-backup-client: 3.2.4-1
> proxmox-backup-file-restore: 3.2.4-1
54c53
< pve-qemu-kvm: 9.0.0-6
---
> pve-qemu-kvm: 8.1.5-6


I noticed a kernel oops (page fault) during the update. Again, not sure if this is relevant, but it's here for completeness:
Code:
[224718.034464] BUG: unable to handle page fault for address: 0000615b2d0cdca3
[224718.034476] #PF: supervisor read access in kernel mode
[224718.034483] #PF: error_code(0x0001) - permissions violation
[224718.034488] PGD 121f11067 P4D 121f11067 PUD 182b8a067 PMD 10ce6a067 PTE 163f4f025
[224718.034500] Oops: 0001 [#1] PREEMPT SMP NOPTI
[224718.034507] CPU: 3 PID: 3125 Comm: kvm Tainted: P           O       6.8.8-2-pve #1
[224718.034516] Hardware name: Gigabyte Technology Co., Ltd. B450M DS3H/B450M DS3H-CF, BIOS F66 03/22/2024
[224718.034523] RIP: 0010:__check_heap_object+0x61/0x120
[224718.034534] Code: 2b 15 ab 59 69 01 48 c1 fa 06 21 c8 48 c1 e2 0c 48 03 15 aa 59 69 01 48 39 d7 0f 82 8c 00 00 00 48 89 fb 84 c0 75 50 48 89 f8 <41> 8b 4d 18 48 29 d0 48 99 48 f7 f9 89 d3 66 90 41 8b 85 d0 00 00
[224718.034548] RSP: 0018:ffffc1520264f840 EFLAGS: 00010246
[224718.034555] RAX: ffff9e3cd445fe0c RBX: ffff9e3cd445fe0c RCX: 0000000000000000
[224718.034562] RDX: ffff9e3cd445c000 RSI: 0000000000000168 RDI: ffff9e3cd445fe0c
[224718.034569] RBP: ffffc1520264f860 R08: 0000000000000000 R09: 0000000000000000
[224718.034576] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000168
[224718.034583] R13: 0000615b2d0cdc8b R14: 0000000000000000 R15: 0000615b30858a00
[224718.034590] FS:  000079dae6866300(0000) GS:ffff9e4bbe380000(0000) knlGS:0000000000000000
[224718.034598] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[224718.034604] CR2: 0000615b2d0cdca3 CR3: 000000010d170000 CR4: 0000000000350ef0
[224718.034611] Call Trace:
[224718.034616]  <TASK>
[224718.034622]  ? show_regs+0x6d/0x80
[224718.034630]  ? __die+0x24/0x80
[224718.034637]  ? page_fault_oops+0x176/0x500
[224718.034644]  ? srso_return_thunk+0x5/0x5f
[224718.034655]  ? do_user_addr_fault+0x2f9/0x6b0
[224718.034663]  ? exc_page_fault+0x83/0x1b0
[224718.034671]  ? asm_exc_page_fault+0x27/0x30
[224718.034682]  ? __check_heap_object+0x61/0x120
[224718.034690]  __check_object_size+0x293/0x300
[224718.034698]  do_sys_poll+0x120/0x610
[224718.034728]  ? _copy_from_user+0x2f/0x80
[224718.034735]  ? srso_return_thunk+0x5/0x5f
[224718.034743]  __x64_sys_ppoll+0xde/0x170
[224718.034751]  x64_sys_call+0x18e5/0x24b0
[224718.034758]  do_syscall_64+0x81/0x170
[224718.034765]  ? __pfx_pollwake+0x10/0x10
[224718.034770]  ? srso_return_thunk+0x5/0x5f
[224718.034776]  ? _copy_to_user+0x25/0x50
[224718.034782]  ? srso_return_thunk+0x5/0x5f
[224718.034788]  ? put_timespec64+0x3d/0x70
[224718.034794]  ? srso_return_thunk+0x5/0x5f
[224718.034800]  ? poll_select_finish+0x1ed/0x260
[224718.034807]  ? srso_return_thunk+0x5/0x5f
[224718.034812]  ? __rseq_handle_notify_resume+0xa5/0x4d0
[224718.034823]  ? srso_return_thunk+0x5/0x5f
[224718.034829]  ? syscall_exit_to_user_mode+0x89/0x260
[224718.034836]  ? srso_return_thunk+0x5/0x5f
[224718.034842]  ? do_syscall_64+0x8d/0x170
[224718.034848]  ? srso_return_thunk+0x5/0x5f
[224718.034854]  ? syscall_exit_to_user_mode+0x89/0x260
[224718.034861]  ? srso_return_thunk+0x5/0x5f
[224718.034867]  ? do_syscall_64+0x8d/0x170
[224718.034873]  ? srso_return_thunk+0x5/0x5f
[224718.034879]  ? do_syscall_64+0x8d/0x170
[224718.034884]  ? do_syscall_64+0x8d/0x170
[224718.034890]  ? do_syscall_64+0x8d/0x170
[224718.034895]  ? srso_return_thunk+0x5/0x5f
[224718.034902]  entry_SYSCALL_64_after_hwframe+0x78/0x80
[224718.034909] RIP: 0033:0x79daf158b256
[224718.034925] Code: 7c 24 08 e8 6c 95 f8 ff 4c 8b 54 24 18 48 8b 74 24 10 41 b8 08 00 00 00 41 89 c1 48 8b 7c 24 08 4c 89 e2 b8 0f 01 00 00 0f 05 <48> 3d 00 f0 ff ff 77 32 44 89 cf 89 44 24 08 e8 b6 95 f8 ff 8b 44
[224718.034939] RSP: 002b:00007ffd3b5ec690 EFLAGS: 00000293 ORIG_RAX: 000000000000010f
[224718.034947] RAX: ffffffffffffffda RBX: 0000615b2f6322d0 RCX: 000079daf158b256
[224718.034954] RDX: 00007ffd3b5ec6b0 RSI: 000000000000004b RDI: 0000615b30858910
[224718.034961] RBP: 00007ffd3b5ec71c R08: 0000000000000008 R09: 0000000000000000
[224718.034968] R10: 0000000000000000 R11: 0000000000000293 R12: 00007ffd3b5ec6b0
[224718.034974] R13: 0000615b2f6322d0 R14: 0000615b2dfa07c8 R15: 00007ffd3b5ec720
[224718.034986]  </TASK>
[224718.034989] Modules linked in: veth dm_crypt ebtable_filter ebtables ip_set ip6table_raw iptable_raw ip6table_filter ip6_tables iptable_filter nf_tables libcrc32c bonding tls softdog sunrpc nfnetlink_log nfnetlink binfmt_misc amdgpu intel_rapl_msr intel_rapl_common snd_hda_codec_realtek snd_hda_codec_generic radeon amdxcp drm_exec edac_mce_amd snd_hda_intel gpu_sched crct10dif_pclmul drm_buddy snd_intel_dspcfg polyval_clmulni drm_suballoc_helper snd_intel_sdw_acpi polyval_generic ghash_clmulni_intel drm_ttm_helper snd_hda_codec sha256_ssse3 ttm sha1_ssse3 aesni_intel snd_hda_core drm_display_helper snd_hwdep crypto_simd zfs(PO) cec cryptd snd_pcm rc_core i2c_algo_bit snd_timer video snd soundcore pcspkr gigabyte_wmi wmi_bmof rapl k10temp spl(O) mac_hid vhost_net vhost vhost_iotlb tap kvm_amd ccp kvm irqbypass efi_pstore dmi_sysfs ip_tables x_tables autofs4 xhci_pci xhci_pci_renesas r8169 crc32_pclmul e1000e i2c_piix4 realtek ahci xhci_hcd libahci wmi gpio_amdpt
[224718.035145] CR2: 0000615b2d0cdca3
[224718.035150] ---[ end trace 0000000000000000 ]---
[224718.035155] RIP: 0010:__check_heap_object+0x61/0x120
[224718.035162] Code: 2b 15 ab 59 69 01 48 c1 fa 06 21 c8 48 c1 e2 0c 48 03 15 aa 59 69 01 48 39 d7 0f 82 8c 00 00 00 48 89 fb 84 c0 75 50 48 89 f8 <41> 8b 4d 18 48 29 d0 48 99 48 f7 f9 89 d3 66 90 41 8b 85 d0 00 00
[224718.035176] RSP: 0018:ffffc1520264f840 EFLAGS: 00010246
[224718.035182] RAX: ffff9e3cd445fe0c RBX: ffff9e3cd445fe0c RCX: 0000000000000000
[224718.035189] RDX: ffff9e3cd445c000 RSI: 0000000000000168 RDI: ffff9e3cd445fe0c
[224718.035196] RBP: ffffc1520264f860 R08: 0000000000000000 R09: 0000000000000000
[224718.035203] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000168
[224718.035209] R13: 0000615b2d0cdc8b R14: 0000000000000000 R15: 0000615b30858a00
[224718.035216] FS:  000079dae6866300(0000) GS:ffff9e4bbe380000(0000) knlGS:0000000000000000
[224718.035224] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[224718.035230] CR2: 0000615b2d0cdca3 CR3: 000000010d170000 CR4: 0000000000350ef0
[224718.035237] note: kvm[3125] exited with irqs disabled
[332894.903902] perf: interrupt took too long (3106 > 2500), lowering kernel.perf_event_max_sample_rate to 64000
[636614.451687] hrtimer: interrupt took 5441 ns
During the update, my SSH sessions started printing a notice every 20~30 seconds. I accidentally deleted the file where I put a copy of the full message but it was something like this:
Code:
[123456.78] watchdog: BUG: soft lockup - CPU#? stuck for 22s! [??]
The VM became unstable and I think it locked up at one point, but all of the above is on/from the hostnode.

I rebooted the host. When it came back up, I attempted to mount a LUKS image as I always do:
Code:
root@hnode:/root# cryptsetup open /root/luks240.bin luks240
Enter passphrase for /root/luks240.bin:
No key available with this passphrase.
Enter passphrase for /root/luks240.bin:
No key available with this passphrase.
Enter passphrase for /root/luks240.bin:

What do I do?

tl;dr: Performed pveupgrade and now LUKS container images cannot be opened due to an invalid-password error, but the password is accurate and can be verified on a different computer via a dumped header.

I've read a handful of posts around the internet and on these forums about similar situations, but none quite like this. A couple of posts mentioned mismatched keyboard layouts between when the password was set and the most recent attempt to unlock the container. I'm not sure that's even a thing over an SSH session, and my sshd config doesn't seem to have changed.
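If it were a layout issue, one way to rule it out would be to feed the passphrase on stdin instead of typing it, and to dump the exact bytes being sent (a sketch; LUKS_PASS is a placeholder variable, not something from my setup):

```shell
# Dump the exact passphrase bytes: a layout/locale problem would show up
# here as unexpected byte values (od is used since it is always present)
printf '%s' "$LUKS_PASS" | od -An -tx1

# Bypass the terminal entirely: --key-file=- reads the passphrase from
# stdin; printf '%s' emits no trailing newline, matching interactive entry
printf '%s' "$LUKS_PASS" | cryptsetup open --key-file=- /root/luks240.bin luks240
```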

Here's what I know:
- On the hostnode I cannot cryptsetup open my LUKS container; it fails with "No key available with this passphrase" even though the password is being entered into my SSH client correctly (the same exact way I've always done it).
- On the hostnode I can save a cryptsetup luksHeaderBackup to a file, scp the file off the server from my home computer, and then successfully use cryptsetup luksOpen --test-passphrase to verify the header is intact and the password I have is accurate.
- I created a temporary LUKS container file with a 1-character password and scp'd it to the server; I am not able to open/mount it on the hostnode, whereas it opens without issue on my home computer:
Code:
[user@home ~/server/luks-thing]$ fallocate -l 24M tmp3.bin
[user@home ~/server/luks-thing]$ sudo cryptsetup luksFormat tmp3.bin

WARNING!
========
This will overwrite data on tmp3.bin irrevocably.

Are you sure? (Type 'yes' in capital letters): YES
Enter passphrase for tmp3.bin: A
Verify passphrase: A
[user@home ~/server/luks-thing]$ sudo cryptsetup open tmp3.bin t3
Enter passphrase for tmp3.bin: A
[user@home ~/server/luks-thing]$ sudo mkfs.ext4 /dev/mapper/t3
mke2fs 1.46.5 (30-Dec-2021)
Creating filesystem with 2048 4k blocks and 2048 inodes

Allocating group tables: done                            
Writing inode tables: done                            
Creating journal (1024 blocks): done
Writing superblocks and filesystem accounting information: done

[user@home ~/server/luks-thing]$ sudo mount /dev/mapper/t3 t3mnt
[user@home ~/server/luks-thing]$ ls t3mnt/
lost+found
[user@home ~/server/luks-thing]$ sudo umount t3mnt
[user@home ~/server/luks-thing]$ sudo cryptsetup close t3
[user@home ~/server/luks-thing]$ scp ./tmp3.bin root@hnode:/root/tmp3.bin
#######################################################
root@hnode:~# cryptsetup luksOpen tmp3.bin t
Enter passphrase for tmp3.bin: A
No key available with this passphrase.
Enter passphrase for tmp3.bin:
No key available with this passphrase.
Enter passphrase for tmp3.bin:
No key available with this passphrase.
root@hnode:~#
 
Firstly, I don't use LUKS, so I'm poking blind trying to help.

Try live booting the server with Linux media & see if it can open the Luks image.
Alternatively, copy the image to a USB stick & try opening it on different node/pc.

But I guess we can assume that the above will work; your main problem is getting this to work correctly within your PVE environment. It would appear that some update has broken it. (I do remember reading in the past of users encountering similar errors with LUKS when updating their Linux OS/kernel.)

Assuming your node is fully updated: what version of LUKS are you using? My fully updated PVE node, where LUKS is not installed, shows:
Code:
# apt policy cryptsetup
cryptsetup:
  Installed: (none)
  Candidate: 2:2.6.1-4~deb12u2
  Version table:
     2:2.6.1-4~deb12u2 500
        500 http://ftp.debian.org/debian bookworm/main amd64 Packages

You could consider reinstalling the cryptsetup package (maybe remove & install) and see if that fixes things.

Edit: Maybe try pinning to a previous kernel?
 
- I created a temporary LUKS container file with a 1-character password and scp'd it to the server; I am not able to open/mount it on the hostnode, whereas it opens without issue on my home computer:

Which "1-character" password is it, if it's not a secret? :)

What does modinfo dm-crypt say on each (working and non-working) system?

Did you try to boot with some older kernel?
 
Thanks for quick replies!

Firstly, I don't use LUKS, so I'm poking blind trying to help.
Regardless, I appreciate your effort!

Try live booting the server with Linux media & see if it can open the Luks image
Dang... I thought this was going to work. I'm not able to open the existing container in a SysRescue 9.05 (w/ kernel 5.15.74-1-lts) live CD, nor am I able to create a new container and then open it. I will try SysRescue 4.3.1.
Code:
[root@sysrescue /mnt]# fallocate -l 64M t.bin
[root@sysrescue /mnt]# cryptsetup luksFormat t.bin

WARNING!
========
This will overwrite data on t.bin irrevocably.

Are you sure? (Type 'yes' in capital letters): YES
Enter passphrase for t.bin: t
Verify passphrase: t
[root@sysrescue /mnt]# file t.bin
t.bin: LUKS encrypted file, ver 2, header size 16384, ID 3, algo sha256, salt 0xa84815eed44dba12..., UUID: 75dc212b-a802-483c-aeea-cae67a603b42, crc 0x70fb64fd367b9593..., at 0x1000 {"keyslots":{"0":{"type":"luks2","key_size":64,"af":{"type":"luks1","stripes":4000,"hash":"sha256"},"area":{"type":"raw","offse
[root@sysrescue /mnt]# cryptsetup isLuks -v t.bin
Command successful.
[root@sysrescue /mnt]# cryptsetup luksOpen -v --debug t.bin t
# cryptsetup 2.5.0 processing "cryptsetup luksOpen -v --debug t.bin t"
# Verifying parameters for command open.
# Running command open.
# Locking memory.
# Installing SIGINT/SIGTERM handler.
# Unblocking interruption on signal.
# Allocating context for crypt device t.bin.
# Trying to open and read device t.bin with direct-io.
# Trying to open device t.bin without direct-io.
# Initialising device-mapper backend library.
# Trying to load any crypt type from device t.bin.
# Crypto backend (OpenSSL 1.1.1q  5 Jul 2022) initialized in cryptsetup library version 2.5.0.
# Detected kernel Linux 5.15.74-1-lts x86_64.
# Loading LUKS2 header (repair disabled).
# Acquiring read lock for device t.bin.
# Verifying lock handle for t.bin.
# Device t.bin READ lock taken.
# Trying to read primary LUKS2 header at offset 0x0.
# Opening locked device t.bin
# Verifying locked device handle (regular file)
# LUKS2 header version 2 of size 16384 bytes, checksum sha256.
# Checksum:70fb64fd367b9593f29f13e289932e64a64eaa2d254294bdad9345aa72036795 (on-disk)
# Checksum:70fb64fd367b9593f29f13e289932e64a64eaa2d254294bdad9345aa72036795 (in-memory)
# Trying to read secondary LUKS2 header at offset 0x4000.
# Reusing open ro fd on device t.bin
# LUKS2 header version 2 of size 16384 bytes, checksum sha256.
# Checksum:edf6cf02a158c5e475c05e082ce16e0d77e01850e5d9f45123e4d199eb0d1d51 (on-disk)
# Checksum:edf6cf02a158c5e475c05e082ce16e0d77e01850e5d9f45123e4d199eb0d1d51 (in-memory)
# Device size 67108864, offset 16777216.
# Device t.bin READ lock released.
# PBKDF argon2id, time_ms 2000 (iterations 0), max_memory_kb 1048576, parallel_threads 4.
# Activating volume t using token (any type) -1.
# dm version   [ opencount flush ]   [16384] (*1)
# dm versions   [ opencount flush ]   [16384] (*1)
# Detected dm-ioctl version 4.45.0.
# Detected dm-crypt version 1.23.0.
# Device-mapper backend running with UDEV support enabled.
# dm status t  [ opencount noflush ]   [16384] (*1)
No usable token is available.
# Interactive passphrase entry requested.
Enter passphrase for t.bin: t
# Activating volume t [keyslot -1] using passphrase.
# dm versions   [ opencount flush ]   [16384] (*1)
# dm status t  [ opencount noflush ]   [16384] (*1)
# Keyslot 0 priority 1 != 2 (required), skipped.
# Trying to open LUKS2 keyslot 0.
# Running keyslot key derivation.
# Reading keyslot area [0x8000].
# Acquiring read lock for device t.bin.
# Verifying lock handle for t.bin.
# Device t.bin READ lock taken.
# Reusing open ro fd on device t.bin
# Device t.bin READ lock released.
# Verifying key from keyslot 0, digest 0.
# Digest 0 (pbkdf2) verify failed with -1.
No key available with this passphrase.
# Interactive passphrase entry requested.
Enter passphrase for t.bin:

Alternatively, copy the image to a USB stick & try opening it on different node/pc.
Unfortunately this is an unmanaged server in a datacenter, so I don't have physical access. It's especially frustrating and worrying because this is a relatively recent install, so I hadn't gotten around to making backups of the container :(

Assuming your node is fully updated - what version of Luks are you using
On PVE host:
Code:
cryptsetup:
  Installed: 2:2.6.1-4~deb12u2
  Candidate: 2:2.6.1-4~deb12u2
  Version table:
 *** 2:2.6.1-4~deb12u2 500
        500 http://192.187.120.86/repo/debian bookworm/main amd64 Packages
        100 /var/lib/dpkg/status
On home computer:
Code:
$ apt policy cryptsetup
cryptsetup:
  Installed: 2:2.4.3-1ubuntu1.2
  Candidate: 2:2.4.3-1ubuntu1.2
  Version table:
 *** 2:2.4.3-1ubuntu1.2 500
        500 http://archive.ubuntu.com/ubuntu jammy-updates/main amd64 Packages
        100 /var/lib/dpkg/status
     2:2.4.3-1ubuntu1 500
        500 http://archive.ubuntu.com/ubuntu jammy/main amd64 Packages

You could consider, reinstalling the cryptsetup package
I ran apt-cache depends cryptsetup | grep '[ |]Depends: [^<]' | cut -d: -f2 | tr -d ' ' | cat, which returned cryptsetup-bin dmsetup debconf libc6, but removing those (in order to re-install) seems like it would break everything. Then I looked at apt -s remove cryptsetup-bin dmsetup, but that looks like it's going to remove pretty much every PVE package on the system. Alas, removing/reinstalling cryptsetup-bin and cryptsetup didn't help.
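For reference, apt can reinstall a package in place without removing anything, which avoids the dependency cascade above:

```shell
# Reinstall in place; no packages are removed and dependencies are untouched
apt reinstall cryptsetup cryptsetup-bin

# Equivalent spelling on older apt versions
apt-get install --reinstall cryptsetup cryptsetup-bin
```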

Edit: Maybe try pinning to a previous kernel?
Did you try to boot with some older kernel?
I tried booting a few of the previous kernels: 6.8.8-1-pve and 6.2.16-20-pve (at the time of writing, the latest available is 6.8.8-2-pve), but I still got the same error message.
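For anyone else trying this: on PVE, selecting an older kernel for every boot can be done with proxmox-boot-tool (the version string below is the one from my diff; adjust as needed):

```shell
# Show the kernels the boot tool knows about
proxmox-boot-tool kernel list

# Pin a specific older kernel so it is booted by default
proxmox-boot-tool kernel pin 6.8.8-1-pve

# Remove the pin once done testing
proxmox-boot-tool kernel unpin
```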

Which "1-character" password is it, if it's not a secret? :)

What does modinfo dm-crypt say on each (working and non-working) system?
A :p
On my home computer (where I can successfully unlock the header using --test-passphrase):
Code:
$ uname -a
Linux home 5.15.0-113-generic #123-Ubuntu SMP Mon Jun 10 08:16:17 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
Code:
filename:       /lib/modules/5.15.0-113-generic/kernel/drivers/md/dm-crypt.ko
license:        GPL
description:    device-mapper target for transparent encryption / decryption
author:         Jana Saout <jana@saout.de>
srcversion:     D347A276C03D4E879B9CDA6
depends:     
retpoline:      Y
intree:         Y
name:           dm_crypt
vermagic:       5.15.0-113-generic SMP mod_unload modversions
sig_id:         PKCS#7
signer:         Build time autogenerated kernel key
sig_key:        1E:F3:F8:5A:EF:E4:E4:92:77:81:41:00:76:3C:F0:64:09:A8:E2:8D
sig_hashalgo:   sha512
signature:      85:A1:D0:22:8B:03:7B:6B:78:33:1C:C6:5D:ED:98:3C:0E:AE:6E:19:
        0F:13:04:6C:CC:10:14:40:56:29:B2:A4:15:9A:DA:75:36:77:CF:FF:
        2F:B7:3F:A1:E0:B1:2E:BD:81:04:20:5D:0B:BE:A5:1B:F5:98:91:0D:
        E4:49:E3:44:02:5B:FD:C6:E1:65:8C:C1:0A:7C:9B:A4:7E:F1:F6:E7:
        CE:C8:2F:C7:C6:FC:7A:82:DC:17:E9:93:1B:90:0A:58:D3:3D:12:57:
        9E:E0:FE:AC:31:C5:BC:1E:51:17:9D:B8:5A:83:B1:4A:86:39:79:39:
        A0:D6:17:7D:9F:75:FF:22:A2:21:78:6A:4D:9B:51:3A:43:BF:70:77:
        3D:C6:E6:16:87:57:09:4C:80:A7:BC:02:FF:81:17:0A:F8:95:1E:3E:
        98:09:18:79:3C:67:50:D6:D0:86:5C:4A:67:9D:18:83:DD:56:B2:F5:
        6D:52:81:DA:49:68:7D:84:F2:58:33:59:22:94:34:29:2A:09:9D:59:
        EF:ED:85:A5:C3:84:A9:A5:AD:15:27:65:A0:07:DF:9E:13:22:E0:99:
        F3:AD:6D:3F:28:BA:58:4E:DF:A8:DD:BC:DE:B2:19:63:2A:12:20:78:
        DF:5B:1D:45:18:EF:D3:3E:07:EB:04:90:57:58:C5:F4:05:C7:E6:10:
        24:60:08:C1:F0:A3:68:64:28:A9:D6:1E:4F:F2:1A:7E:78:9A:58:BD:
        63:F3:6A:06:FF:CE:44:77:64:AE:59:B5:B1:DD:E1:06:5B:6B:90:9A:
        0A:32:46:01:0B:A5:EE:A6:20:F2:AA:CB:A8:DC:A9:99:7E:29:DE:E0:
        70:B3:CF:5E:EB:82:DF:A0:DB:CF:2F:EA:02:F9:AB:C5:D2:F8:03:60:
        80:C9:E2:4A:D7:28:40:90:7C:BB:8B:DF:C5:AE:64:3D:34:AC:07:A4:
        7C:35:3F:31:39:D8:0E:57:40:00:0A:91:C3:4B:80:34:29:F7:48:A5:
        99:1A:3A:D0:25:22:68:ED:35:78:4D:9B:EC:1F:E3:8D:BB:A6:08:45:
        07:B3:8D:C6:8B:FE:4C:23:C7:93:B5:DF:C8:3F:72:A8:F5:84:4D:7C:
        21:A8:67:55:4C:07:0D:F4:88:B9:D3:F1:AA:06:70:19:02:BE:AF:BF:
        43:6A:75:05:FD:BC:39:45:7E:CD:E6:B8:61:55:2E:2D:8B:D4:7A:DF:
        59:88:28:21:5C:8F:AB:54:AB:F8:79:5E:B3:43:D3:4E:52:3D:90:58:
        1A:5E:4A:48:B5:67:DD:53:81:C0:24:A9:18:8F:DD:60:52:F7:38:34:
        FB:20:82:F2:34:66:88:45:A9:5B:F5:6F

On my PVE server:
Code:
# uname -a
Linux host.name.local 6.8.8-2-pve #1 SMP PREEMPT_DYNAMIC PMX 6.8.8-2 (2024-06-24T09:00Z) x86_64 GNU/Linux
Code:
# modinfo dm-crypt
filename:       /lib/modules/6.8.8-2-pve/kernel/drivers/md/dm-crypt.ko
license:        GPL
description:    device-mapper target for transparent encryption / decryption
author:         Jana Saout <jana@saout.de>
srcversion:     39F9231D1DE4512F1770154
depends:      
retpoline:      Y
intree:         Y
name:           dm_crypt
vermagic:       6.8.8-2-pve SMP preempt mod_unload modversions
sig_id:         PKCS#7
signer:         Build time autogenerated kernel key
sig_key:        7A:08:37:A9:3F:FA:67:65:CC:7B:1D:0D:CF:1A:DA:42:2D:B3:6D:08
sig_hashalgo:   sha512
signature:      87:50:CB:2D:3F:02:5C:59:58:5C:5A:31:76:C7:B3:9E:21:92:EB:3D:
        82:B2:2F:64:8E:81:5D:D0:49:73:21:D3:4C:0D:B5:FE:D6:B6:F2:F3:
        91:CE:8E:0B:1B:AB:BF:1E:11:32:7C:F0:53:2D:D1:B7:B3:A4:C5:A9:
        17:2E:E6:BE:9F:43:FB:FE:EA:CF:C1:BF:F8:5D:D2:AE:37:F6:04:B1:
        D5:5B:8F:0C:D8:5E:54:45:2E:0A:5E:81:F4:39:1F:B9:13:4C:DC:C9:
        66:28:D9:DC:C3:42:33:C7:F4:24:20:27:0E:96:24:C3:2B:41:E2:D2:
        9D:BA:B6:E9:41:A0:47:7B:26:4E:72:C7:EC:1A:A4:96:F2:8E:9D:A0:
        B3:A8:1D:5A:BA:B4:65:B8:6F:2A:89:E1:EA:B5:74:DC:F8:99:E0:BB:
        E1:24:6C:3A:AB:D0:B7:A4:40:79:69:61:63:C5:6E:62:FB:2A:26:93:
        18:9A:0D:D2:EF:C6:88:1B:B1:B0:F9:E4:33:5B:83:3B:D2:8E:6A:49:
        77:C6:05:3B:1C:A3:D6:27:89:E8:A1:39:1B:0C:6C:0D:48:63:C7:46:
        9F:71:40:62:77:B8:94:46:E7:54:9A:3D:A1:6F:ED:EC:75:25:95:B9:
        5B:7D:68:4F:FF:4C:40:4C:B1:0E:BF:41:18:A5:D5:59:94:D1:A5:BE:
        46:F9:F2:95:18:7F:2E:FA:B0:B9:08:2D:45:89:5D:2D:0A:20:F5:DB:
        10:9F:13:37:F7:4D:99:EC:52:22:B8:C5:61:E4:E1:2C:26:C4:5D:60:
        B1:F3:40:5D:18:84:D9:ED:2A:58:B8:C0:68:6F:25:3F:42:05:B4:AB:
        80:9C:5D:35:9E:C1:BC:FE:80:DC:34:56:C8:89:97:21:CE:BF:1B:02:
        D1:71:17:AE:A6:F4:52:1A:EF:CF:DA:DE:59:5F:95:0C:77:BE:27:A8:
        15:0D:8D:16:ED:FA:B4:7B:50:6A:40:57:14:F9:1F:54:DA:60:26:F3:
        2B:2C:8C:35:9E:BD:71:A7:3E:78:38:DC:A7:65:BE:B0:8A:A8:2A:83:
        65:EF:02:39:F7:4D:CD:3B:4E:D0:18:1A:21:D3:6A:51:E1:64:96:52:
        2A:D1:BE:FD:55:A1:2C:EB:4E:A1:AF:CD:B4:F4:CB:CA:17:40:1E:06:
        1D:9E:50:C2:B5:91:AB:A0:7B:5F:5C:52:B4:9E:7A:0C:FB:BA:71:44:
        0F:88:6A:2C:83:EF:51:E4:99:D4:12:07:46:33:21:CB:8C:9F:6C:E4:
        6C:11:84:DD:F3:D2:AC:B6:34:E1:23:D7:18:21:05:1B:50:B4:E5:0F:
        77:6C:87:47:94:5D:5A:AB:2C:CF:46:DE
 
@sillyquota I think I glanced here earlier and there were replies; now they are gone. One of the great joys of this forum for any new member is that the spam blocker is utterly useless. Try not to edit messages after posting; just keep adding new ones. By EU morning time (3-4 hours from now) it will hopefully get published.

I think, from what I glanced at, you tried booting a live system with a much older kernel but still could not open the container. In the meantime, I suggest you test one more thing: create a container on the server, scp it out, and see if you can open it.
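Sketched out, the test I mean looks like this (hostnames and paths are placeholders):

```shell
# On the server: create a fresh container with a throwaway passphrase
fallocate -l 24M /root/tmp4.bin
cryptsetup luksFormat /root/tmp4.bin

# From home: pull it over and check the passphrase without mapping it
scp root@hnode:/root/tmp4.bin .
cryptsetup open --test-passphrase tmp4.bin && echo "opens at home"
```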
 
A couple more ideas: can you post the cryptsetup luksDump of the dummy container from your two systems (server and home)?

Does anything show up in dmesg while you are trying to do cryptsetup open?
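Concretely, something like this (the dummy-container filename is from your earlier post):

```shell
# Keyslot metadata: PBKDF type, memory/iteration cost, cipher, digest
cryptsetup luksDump tmp3.bin

# In a second terminal, stream kernel messages while the open attempt runs
dmesg --follow
```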
 
I thought this was going to work
TBH, so did I. However, with this not working, coupled with the fact that this live environment failed even while creating a brand-new encrypted LUKS volume, I think we are left with one of two conclusions: either the memory in the server is bad, so you'll need to memtest it, or the FS/medium is bad. Maybe create a new simple Linux VM (on a different medium/disk than the original one?) and see if that one can actually encrypt/decrypt successfully.
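A quick userspace sanity check that needs no VM or extra device is cryptsetup's built-in benchmark; it exercises the crypto backend, the AES instructions, and a chunk of RAM, so failures or wildly inconsistent numbers there would also point at hardware:

```shell
# Benchmarks PBKDF and ciphers entirely in userspace
cryptsetup benchmark

# Narrow it to the LUKS2 default cipher
cryptsetup benchmark --cipher aes-xts-plain64 --key-size 512
```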
 
Happy you've (probably) found the cause.

Once you've tested that everything is sorted, maybe prefix the thread title with [SOLVED] (upper right-hand corner, under the title).
 
