Impossible to boot VM with TPM: "/usr/bin/swtpm exit with status 256"

aureb

Hello,

I'm encountering an issue when rebooting my VM with a TPM.
It fails early in startup, when running:
swtpm_setup --tpmstate file:///dev/zvol/len2023-sdd/vm-1004-disk-3 --createek --create-ek-cert --create-platform-cert --lock-nvram --config /etc/swtpm_setup.conf --runas 0 --not-overwrite --tpm2 --ecc
/usr/bin/swtpm exit with status 256:

I have seen similar old posts (due to GlusterFS or iSCSI), but I'm not in that situation, and this VM had been working for a long time.

None of my Windows guests can reboot anymore on this server...
I don't want to lose the other ones on the other servers in the cluster...

It's hard to tell which swtpm update could have caused this, because my VMs don't reboot very often...

Thanks in advance,
Best regards
Aurelien
 
Could you post the full start log and also the system log/journal covering the start? An example VM config and your storage.cfg would also be great..
 
Here is the information:

command:

qm start 1004
/bin/swtpm exit with status 256:
start failed: command 'swtpm_setup --tpmstate file:///dev/zvol/len2023-sdd/vm-1004-disk-0 --createek --create-ek-cert --create-platform-cert --lock-nvram --config /etc/swtpm_setup.conf --runas 0 --not-overwrite --tpm2 --ecc' failed: exit code 1
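For what it's worth, the "status 256" reported for /usr/bin/swtpm looks like the raw 16-bit wait status rather than the exit code itself: the upper byte is the exit code (1, matching the "exit code 1" above) and the lower bits would be the terminating signal. A minimal sketch of that decoding (the value 256 is taken from the log above):

Bash:
status=256
# upper byte = exit code, lower 7 bits = signal number
echo "exit code: $((status >> 8)), signal: $((status & 127))"    # -> exit code: 1, signal: 0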


syslog:



2024-12-12T08:21:12.655286+01:00 len2023 kernel: [17538415.533247] audit: type=1400 audit(1733988072.653:92145284): apparmor="DENIED" operation="userns_create" class="namespace" info="Userns create restricted - failed to find unprivileged_userns profile" error=-13 namespace="root//lxc-844_<-var-lib-lxc>" profile="unconfined" pid=1768546 comm="coolwsd" requested="userns_create" denied="userns_create" target="unprivileged_userns"
2024-12-12T08:21:13.436871+01:00 len2023 qm[1768555]: <root@pam> starting task UPID:len2023:001AFCC6:6889D16C:675A8EE9:qmstart:1004:root@pam:
2024-12-12T08:21:13.437826+01:00 len2023 qm[1768646]: start VM 1004: UPID:len2023:001AFCC6:6889D16C:675A8EE9:qmstart:1004:root@pam:
2024-12-12T08:21:13.626919+01:00 len2023 systemd[1]: Started 1004.scope.
2024-12-12T08:21:13.667374+01:00 len2023 systemd[1]: 1004.scope: Deactivated successfully.
2024-12-12T08:21:13.669412+01:00 len2023 qm[1768646]: start failed: command 'swtpm_setup --tpmstate file:///dev/zvol/len2023-sdd/vm-1004-disk-0 --createek --create-ek-cert --create-platform-cert --lock-nvram --config /etc/swtpm_setup.conf --runas 0 --not-overwrite --tpm2 --ecc' failed: exit code 1
2024-12-12T08:21:13.681230+01:00 len2023 qm[1768555]: <root@pam> end task UPID:len2023:001AFCC6:6889D16C:675A8EE9:qmstart:1004:root@pam: start failed: command 'swtpm_setup --tpmstate file:///dev/zvol/len2023-sdd/vm-1004-disk-0 --createek --create-ek-cert --create-platform-cert --lock-nvram --config /etc/swtpm_setup.conf --runas 0 --not-overwrite --tpm2 --ecc' failed: exit code 1


conf:

cat 1004.conf
agent: 1
bios: ovmf
boot: order=virtio0
cores: 6
cpu: qemu64
efidisk0: LEN2023-SDD:vm-1004-disk-1,efitype=4m,pre-enrolled-keys=1,size=1M
machine: pc-q35-7.1
memory: 6144
meta: creation-qemu=7.1.0,ctime=1673625902
name: Virtual04
net0: virtio=82:C0:1B:01:01:15,bridge=vmbr10,firewall=1
numa: 0
ostype: win10
scsihw: virtio-scsi-pci
smbios1: uuid=...
sockets: 1
tpmstate0: LEN2023-SDD:vm-1004-disk-0,size=4M,version=v2.0
vga: std
virtio0: LEN2023-SDD:vm-1004-disk-2,cache=writeback,size=150G
vmgenid: ....
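The tpmstate0 volume in this config is what gets handed to swtpm_setup; on ZFS storage it resolves to a zvol device node, which Proxmox then prefixes with file:// on the swtpm_setup command line. A quick way to double-check which path that is (volume name taken from the config above):

Bash:
pvesm path LEN2023-SDD:vm-1004-disk-0    # expected: /dev/zvol/len2023-sdd/vm-1004-disk-0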

storage:

cat storage.cfg
dir: local
path /var/lib/vz
content iso,backup,vztmpl

lvmthin: local-lvm
thinpool data
vgname pve
content rootdir,images

zfspool: LEN2023-SDD
pool len2023-sdd
content rootdir,images
mountpoint /len2023-sdd
nodes len2023
sparse 0

zfspool: LEN2023-DATA
pool len2023-data
content rootdir,images
mountpoint /len2023-data
nodes len2023
sparse 0

dir: len2023-backup
path /len2023-data/backup
content backup
nodes len2023
prune-backups keep-all=1
shared 0

zfspool: hp2013-data
pool hp2013-data
content images,rootdir
mountpoint /hp2013-data
nodes hp2013
sparse 0

dir: hp2013-backup
path /hp2013-data/backup
content backup
nodes hp2013
prune-backups keep-all=1
shared 0

zfspool: DELL2015-data
pool dell2015-data
content rootdir,images
mountpoint /dell2015-data
nodes dell2015
sparse 0

dir: DELL2015-Backup
path /dell2015-data/backup
content vztmpl,backup,images,iso,snippets
nodes dell2015
prune-backups keep-all=1
shared 0

zfspool: len2020-data
pool len2020-data
content images,rootdir
mountpoint /len2020-data
nodes len2020
sparse 0

dir: len2020-backup
path /len2020-data/backup
content backup,iso
nodes len2020
prune-backups keep-all=1
shared 0

zfspool: dell2015-ssd1
pool dell2015-ssd1
content images,rootdir
nodes dell2015
sparse 0

lvm: MD3400-Data
vgname MD3400_data
content rootdir,images
nodes len2023,dell2015
shared 1

lvm: MD3400_SSD1
vgname MD3400_SSD1
content rootdir,images
nodes dell2015,len2023
shared 1

lvm: MD3400_SSD2
vgname MD3400_SSD2
content images,rootdir
nodes dell2015
saferemove 0
shared 1

lvm: MD3400_SSD3
vgname MD3400_SSD3
content rootdir,images
nodes len2023,dell2015
saferemove 0
shared 1

lvm: MD3400_SSD4
vgname MD3400_SSD4
content images,rootdir
nodes dell2015,len2023
saferemove 0
shared 1

Thanks in advance
 
Could you post the output (if possible, using the forum's code tags ;)) of:

- ls -lha /dev/zvol/len2023-sdd/vm-1004-disk-0
- zfs get all len2023-sdd/vm-1004-disk-0
 
Sorry for forgetting the tags...

Bash:
ls -lha /dev/zvol/len2023-sdd/vm-1004-disk-0
lrwxrwxrwx 1 root root 10 Dec 11 16:02 /dev/zvol/len2023-sdd/vm-1004-disk-0 -> ../../zd48
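For completeness, a couple of extra checks on the resolved device could help rule out a permission or I/O problem (device path taken from the output above; the read test is only a sanity check):

Bash:
DEV=$(readlink -f /dev/zvol/len2023-sdd/vm-1004-disk-0)    # -> /dev/zd48
stat -c '%F %a %U:%G' "$DEV"                               # expect a block special device owned by root
dd if="$DEV" of=/dev/null bs=4k count=1 status=none && echo "read OK"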


Bash:
zfs get all len2023-sdd/vm-1004-disk-0
NAME                        PROPERTY              VALUE                  SOURCE
len2023-sdd/vm-1004-disk-0  type                  volume                 -
len2023-sdd/vm-1004-disk-0  creation              Fri Nov 10 16:51 2023  -
len2023-sdd/vm-1004-disk-0  used                  6M                     -
len2023-sdd/vm-1004-disk-0  available             1.19T                  -
len2023-sdd/vm-1004-disk-0  referenced            68K                    -
len2023-sdd/vm-1004-disk-0  compressratio         1.10x                  -
len2023-sdd/vm-1004-disk-0  reservation           none                   default
len2023-sdd/vm-1004-disk-0  volsize               4M                     local
len2023-sdd/vm-1004-disk-0  volblocksize          8K                     -
len2023-sdd/vm-1004-disk-0  checksum              on                     default
len2023-sdd/vm-1004-disk-0  compression           on                     default
len2023-sdd/vm-1004-disk-0  readonly              off                    default
len2023-sdd/vm-1004-disk-0  createtxg             5164211                -
len2023-sdd/vm-1004-disk-0  copies                1                      default
len2023-sdd/vm-1004-disk-0  refreservation        6M                     local
len2023-sdd/vm-1004-disk-0  guid                  6578222796289855743    -
len2023-sdd/vm-1004-disk-0  primarycache          all                    default
len2023-sdd/vm-1004-disk-0  secondarycache        all                    default
len2023-sdd/vm-1004-disk-0  usedbysnapshots       0B                     -
len2023-sdd/vm-1004-disk-0  usedbydataset         68K                    -
len2023-sdd/vm-1004-disk-0  usedbychildren        0B                     -
len2023-sdd/vm-1004-disk-0  usedbyrefreservation  5.93M                  -
len2023-sdd/vm-1004-disk-0  logbias               latency                default
len2023-sdd/vm-1004-disk-0  objsetid              81572                  -
len2023-sdd/vm-1004-disk-0  dedup                 off                    default
len2023-sdd/vm-1004-disk-0  mlslabel              none                   default
len2023-sdd/vm-1004-disk-0  sync                  standard               default
len2023-sdd/vm-1004-disk-0  refcompressratio      1.10x                  -
len2023-sdd/vm-1004-disk-0  written               68K                    -
len2023-sdd/vm-1004-disk-0  logicalused           44K                    -
len2023-sdd/vm-1004-disk-0  logicalreferenced     44K                    -
len2023-sdd/vm-1004-disk-0  volmode               default                default
len2023-sdd/vm-1004-disk-0  snapshot_limit        none                   default
len2023-sdd/vm-1004-disk-0  snapshot_count        none                   default
len2023-sdd/vm-1004-disk-0  snapdev               hidden                 default
len2023-sdd/vm-1004-disk-0  context               none                   default
len2023-sdd/vm-1004-disk-0  fscontext             none                   default
len2023-sdd/vm-1004-disk-0  defcontext            none                   default
len2023-sdd/vm-1004-disk-0  rootcontext           none                   default
len2023-sdd/vm-1004-disk-0  redundant_metadata    all                    default
len2023-sdd/vm-1004-disk-0  encryption            off                    default
len2023-sdd/vm-1004-disk-0  keylocation           none                   default
len2023-sdd/vm-1004-disk-0  keyformat             none                   default
len2023-sdd/vm-1004-disk-0  pbkdf2iters           0                      default
 
If the VM in question is not running, could you try running

"swtpm_setup --tpmstate file:///dev/zvol/len2023-sdd/vm-1004-disk-0 --createek --create-ek-cert --create-platform-cert --lock-nvram --config /etc/swtpm_setup.conf --runas 0 --not-overwrite --tpm2 --ecc"

and post any output here?
 
Not much output:


Bash:
root@len2023:/etc/pve# swtpm_setup --tpmstate file:///dev/zvol/len2023-sdd/vm-1004-disk-0 --createek --create-ek-cert --create-platform-cert --lock-nvram --config /etc/swtpm_setup.conf --runas 0 --not-overwrite --tpm2 --ecc
/usr/bin/swtpm exit with status 256:


In the meantime,
I migrated a similar VM, which was in the same situation, to another node, and it starts...

What I did:
1- moved the storage to a shared SAN (LVM instead of ZFS)
2- removed the unused disk
3- tried to start on the origin node (same issue)
4- migrated
5- start is OK on the destination


It seems to be related to swtpm on one node.
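If it really is node-specific, comparing the installed swtpm packages and binaries between the failing node and a working one could narrow it down. A rough sketch (hostnames taken from this thread, SSH between the nodes assumed):

Bash:
dpkg -l 'swtpm*'
md5sum /usr/bin/swtpm /usr/bin/swtpm_setup /usr/bin/swtpm_localca
ssh dell2015 "dpkg -l 'swtpm*'; md5sum /usr/bin/swtpm /usr/bin/swtpm_setup /usr/bin/swtpm_localca"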
 
Could you check the swtpm config file and versions on both nodes?
 
Bash:
root@len2023:/etc/pve# /usr/bin/swtpm -v
TPM emulator version 0.8.0, Copyright (c) 2014-2022 IBM Corp. and others
root@len2023:/etc/pve# cat /etc/swtpm_setup.conf
# Program invoked for creating certificates
create_certs_tool= /usr/bin/swtpm_localca
create_certs_tool_config = /etc/swtpm-localca.conf
create_certs_tool_options = /etc/swtpm-localca.options
# Comma-separated list (no spaces) of PCR banks to activate by default
active_pcr_banks = sha256

Bash:
root@dell2015:~# /usr/bin/swtpm -v
TPM emulator version 0.8.0, Copyright (c) 2014-2022 IBM Corp. and others
root@dell2015:~# cat /etc/swtpm_setup.conf
# Program invoked for creating certificates
create_certs_tool= /usr/bin/swtpm_localca
create_certs_tool_config = /etc/swtpm-localca.conf
create_certs_tool_options = /etc/swtpm-localca.options
# Comma-separated list (no spaces) of PCR banks to activate by default
active_pcr_banks = sha256
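Since the versions and swtpm_setup.conf match on both nodes, another place to compare is the local CA helper files that the config points at, which swtpm_setup uses to create the EK certificate. A quick look (the statedir path below is the usual Debian default, so treat it as an assumption):

Bash:
cat /etc/swtpm-localca.conf       # statedir / signingkey / issuercert used for EK cert creation
cat /etc/swtpm-localca.options
ls -la /var/lib/swtpm-localca/    # assumed default statedir; check it exists and is writable by root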

Thanks in advance,
 
Solved by:

Bash:
root@len2023:/etc/pve# apt install --reinstall swtpm swtpm-libs  swtpm-tools
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
0 upgraded, 0 newly installed, 3 reinstalled, 0 to remove and 0 not upgraded.
Need to get 156 kB of archives.
After this operation, 0 B of additional disk space will be used.
Get:1 http://download.proxmox.com/debian/pve bookworm/pve-no-subscription amd64 swtpm amd64 0.8.0+pve1 [20.8 kB]
Get:2 http://download.proxmox.com/debian/pve bookworm/pve-no-subscription amd64 swtpm-libs amd64 0.8.0+pve1 [36.1 kB]
Get:3 http://download.proxmox.com/debian/pve bookworm/pve-no-subscription amd64 swtpm-tools amd64 0.8.0+pve1 [98.7 kB]
Fetched 156 kB in 0s (1,285 kB/s)
(Reading database ... 82313 files and directories currently installed.)
Preparing to unpack .../swtpm_0.8.0+pve1_amd64.deb ...
Unpacking swtpm (0.8.0+pve1) over (0.8.0+pve1) ...
Preparing to unpack .../swtpm-libs_0.8.0+pve1_amd64.deb ...
Unpacking swtpm-libs:amd64 (0.8.0+pve1) over (0.8.0+pve1) ...
Preparing to unpack .../swtpm-tools_0.8.0+pve1_amd64.deb ...
Unpacking swtpm-tools (0.8.0+pve1) over (0.8.0+pve1) ...
Setting up swtpm-libs:amd64 (0.8.0+pve1) ...
Setting up swtpm (0.8.0+pve1) ...
Setting up swtpm-tools (0.8.0+pve1) ...

The VM is starting.

Quite strange...

Thank you very much for support and help!
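One plausible explanation (speculation only) is that a file shipped by one of those packages had been corrupted or had lost its permissions on that node, which a reinstall silently repairs. dpkg can verify the installed files against the package checksums:

Bash:
dpkg -V swtpm swtpm-libs swtpm-tools    # no output means all files match the package metadata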
 
strange indeed..
 
Hello, I have the same problem; however, for me reinstalling those packages does not solve it. The only output I get from running the command is this:

Code:
swtpm_setup --tpmstate file://iscsi://10.0.44.88/iqn.2005-10.org.freenas.ctl:target/1 --createek --create-ek-cert --create-platform-cert --lock-nvram --config /etc/swtpm_setup.conf --runas 0 --not-overwrite --tpm2 --ecc
/usr/bin/swtpm exit with status 256:

After removing the TPM device from the VM, it's able to start.
 
That cannot work, as swtpm can only use real files, not URLs.
 
Only if it is exposed as a regular block device, not as a file URL.
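To illustrate the distinction (the first two paths are made up, the last one is from the post above):

Code:
# works: behind file:// swtpm only understands a plain local path, e.g. a regular file or a block device node
--tpmstate file:///var/lib/vz/images/100/vm-100-disk-1.raw
--tpmstate file:///dev/mapper/vg-vm--100--disk--1
# does not work: swtpm has no iSCSI client, so a QEMU-style URL inside file:// cannot be opened
--tpmstate file://iscsi://10.0.44.88/iqn.2005-10.org.freenas.ctl:target/1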
 
