VM status: internal-error

Phocks7
New Member
Jan 26, 2026
I'm running a Supermicro CSE-829U-10 + X11DPU dual Xeon Silver 4110 system, which is otherwise very stable: it has run as a node in the cluster for several weeks of uptime with no issues. When I create a VM, it runs for between 10 seconds and 2 minutes, then the green arrow icon changes to a yellow exclamation mark and qm status reports internal-error. I've tried both Linux Mint and Ubuntu 24.04 ISOs and have the same issue. The system has 4x 32 GB DIMMs, and I've tried running with one DIMM per CPU, testing, then swapping both DIMMs and testing again, with the same result. So if it were a memory issue, multiple DIMMs would have to be bad, which seems unlikely.
I've tried with dual CPUs and NUMA enabled, and with a single CPU and no NUMA; same issue.
I'm using the same method/configuration I use for setting up VMs on my DL360 Gen10 (which is the same hardware but HPE-branded), and I've never had any issues with VMs on that machine. The only difference is that that system is installed on a SATA SSD, whereas this system uses a PCIe-to-2x-M.2 adapter with two NVMe SSDs in a ZFS RAID 1 configuration for the Proxmox install.


I don't know if my information is just out of date, but I can't find where any relevant logs would be; /var/log/syslog doesn't exist.

Summary:
  • VMs launch but stop with status: internal-error after 10 seconds to 2 minutes.
  • Rotated DIMMs to test for memory issues, which makes a memory fault seem unlikely.
  • IOMMU enabled and passthrough configured.
  • Host installed on 2x NVMe drives in ZFS RAID 1.
  • Host shows no ECC or MCE errors.
  • Secure Boot toggling had no effect.
  • Tried with SR-IOV disabled and enabled.
  • Tried with CPU in host and x86-64-v2.
  • Tried with dual and single CPU, NUMA on and off.

System info:
Code:
proxmox-ve: 9.1.0
pve-manager: 9.1.4
pve-kernel: 6.14.11-4-pve
pve-qemu-kvm: 10.1.2-5
CPU: Dual Intel Xeon Silver 4110
RAM: 128GB ECC RDIMM
VM configuration
Code:
bios: ovmf
machine: q35
cpu: host
memory: 16384
sockets: 1
cores: 5
efidisk0: local:101/vm-101-disk-0.qcow2,efitype=4m,ms-cert=2023,pre-enrolled-keys=1
scsi0: /dev/disk/by-id/nvme-INTEL_SSDPE2KE016T8...,size=1.4TB
boot: order=ide2
ide2: local:iso/linuxmint-22.3-cinnamon-64bit.iso,media=cdrom
net0: virtio,bridge=vmbr0
Error codes:
Code:
QEMU[7591]: KVM: entry failed, hardware error 0x80000021
QEMU[7591]: If you're running a guest on an Intel machine without unrestricted mode support...
...
kvm_intel: VMCS ..., last attempted VM-entry on CPU 29
kvm_intel: *** Guest State ***
...
kvm_intel: VMExit: reason=80000021 qualification=0000000000000004
 
Search for "journalctl"; the journal has replaced the classic syslog in Debian 13 / Trixie.

Use journalctl -f to watch it live.

To see the end of the previous boot's log: journalctl -b -1 -e shows the end (-e) of the previous (-1) boot. Use "-2" for the boot before that, and so on.

The most important command: man journalctl will tell you a lot of details and filtering options.
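As a rough sketch of how filtering helps here, the KVM/QEMU messages can be pulled out of the journal with a simple pattern. The sample lines below are stand-ins for real journal output; on the host you would pipe live output instead (e.g. journalctl -k -b | grep -E 'kvm_intel|KVM:'):

```shell
# Simulate journal output with sample lines, then keep only KVM-related ones.
printf '%s\n' \
  'kernel: kvm_intel: VMCS 00000000c1e43a59, last attempted VM-entry on CPU 29' \
  'kernel: e1000e: eno1 NIC Link is Up' \
  'QEMU[7591]: KVM: entry failed, hardware error 0x80000021' \
  | grep -E 'kvm_intel|KVM:'
# The e1000e line is filtered out; the two KVM lines remain.
```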
 
The first thing you could try is checking your BIOS version and updating it.
Is intel-microcode installed? If not: apt install intel-microcode
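Installing the package is not enough on its own; the new microcode only loads after a reboot, and the loaded revision can be confirmed from the kernel log. As a sketch, the snippet below parses a hypothetical dmesg line (the revision numbers are made up); on the host you would run dmesg | grep -i microcode instead:

```shell
# Hypothetical dmesg line; revisions here are illustrative only.
line='microcode: updated early: 0x2006906 -> 0x2007006, date = 2023-03-06'
# Extract the revision the CPU is now running (the value after "->").
echo "$line" | sed -E 's/.*-> (0x[0-9a-f]+).*/\1/'
# prints: 0x2007006
```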
The logs would also be helpful - refer to UdoB's comment for that.
 
I've tried with the CPU type set to host, x86-64-v2, and Skylake-Server, but it doesn't seem to make a difference. I've attempted to update the BIOS, but jumping from 4.0 to 4.7 seems to be blocked in both the BMC UI and SUM with the error: Update BIOS failed. Update package verification failed!. And Supermicro doesn't make versions 4.1 to 4.6 available for download (the way Dell does).
There's nominally a way to update the BIOS in EFI shell (but I haven't been able to get an EFI shell to work), and also another way to do it from another machine via the BMC. If/when I'm able to update the BIOS I'll try again.

Microcode is up to date:
Code:
intel-microcode is already the newest version (3.20251111.1~deb13u1)
Logs
Code:
Jan 29 07:45:14 proxmoxsuper kernel: kvm_intel: VMCS 00000000c1e43a59, last attempted VM-entry on CPU 29
Jan 29 07:45:14 proxmoxsuper kernel: kvm_intel: *** Guest State ***
Jan 29 07:45:14 proxmoxsuper kernel: kvm_intel: CR0: actual=0x0000000080050033, shadow=0x0000000080050033, gh_mask=fffffffffffefff7
Jan 29 07:45:14 proxmoxsuper kernel: kvm_intel: CR4: actual=0x00000000007726f0, shadow=0x0000000000772ef0, gh_mask=fffffffffffef871
Jan 29 07:45:14 proxmoxsuper kernel: kvm_intel: CR3 = 0x0000000112406006
Jan 29 07:45:14 proxmoxsuper kernel: kvm_intel: PDPTR0 = 0x0000000000000000  PDPTR1 = 0x0000000000000000
Jan 29 07:45:14 proxmoxsuper kernel: kvm_intel: PDPTR2 = 0x0000000000000000  PDPTR3 = 0x0000000000000000
Jan 29 07:45:14 proxmoxsuper kernel: kvm_intel: RSP = 0xffffcdbeb8e37980  RIP = 0xffffffffc1ac58b0
Jan 29 07:45:14 proxmoxsuper kernel: kvm_intel: RFLAGS=0x00000002         DR7 = 0x0000000000000400
Jan 29 07:45:14 proxmoxsuper kernel: kvm_intel: Sysenter RSP=fffffe547f89d000 CS:RIP=0010:ffffffff93401930
Jan 29 07:45:14 proxmoxsuper kernel: kvm_intel: CS:   sel=0x0010, attr=0x0a09b, limit=0xffffffff, base=0x0000000000000000
Jan 29 07:45:14 proxmoxsuper kernel: kvm_intel: DS:   sel=0x0000, attr=0x1c093, limit=0xffffffff, base=0x0000000000000000
Jan 29 07:45:14 proxmoxsuper kernel: kvm_intel: SS:   sel=0x0018, attr=0x0c093, limit=0xffffffff, base=0x0000000000000000
Jan 29 07:45:14 proxmoxsuper kernel: kvm_intel: ES:   sel=0x0000, attr=0x1c093, limit=0xffffffff, base=0x0000000000000000
Jan 29 07:45:14 proxmoxsuper kernel: kvm_intel: FS:   sel=0x0000, attr=0x1c093, limit=0xffffffff, base=0x000073297222e6c0
Jan 29 07:45:14 proxmoxsuper kernel: kvm_intel: GS:   sel=0x0000, attr=0x1c093, limit=0xffffffff, base=0xffff89e7ffc80000
Jan 29 07:45:14 proxmoxsuper kernel: kvm_intel: GDTR:                           limit=0x0000ffff, base=0xfffffe547f89b000
Jan 29 07:45:14 proxmoxsuper kernel: kvm_intel: LDTR: sel=0x0000, attr=0x1c000, limit=0xffffffff, base=0x0000000000000000
Jan 29 07:45:14 proxmoxsuper kernel: kvm_intel: IDTR:                           limit=0x0000ffff, base=0xfffffe0000000000
Jan 29 07:45:14 proxmoxsuper kernel: kvm_intel: TR:   sel=0x0040, attr=0x0008b, limit=0x00000067, base=0xfffffe547f89d000
Jan 29 07:45:14 proxmoxsuper kernel: kvm_intel: EFER= 0x0000000000000d01 (effective)
Jan 29 07:45:14 proxmoxsuper kernel: kvm_intel: PAT = 0x0407050600070106
Jan 29 07:45:14 proxmoxsuper kernel: kvm_intel: DebugCtl = 0x0000000000000000  DebugExceptions = 0x0000000000000000
Jan 29 07:45:14 proxmoxsuper kernel: kvm_intel: BndCfgS = 0x0000000000000000
Jan 29 07:45:14 proxmoxsuper kernel: kvm_intel: Interruptibility = 00000000  ActivityState = 00000000
Jan 29 07:45:14 proxmoxsuper kernel: kvm_intel: InterruptStatus = 00ec
Jan 29 07:45:14 proxmoxsuper kernel: kvm_intel: *** Host State ***
Jan 29 07:45:14 proxmoxsuper kernel: kvm_intel: RIP = 0xffffffffc1ac58b0  RSP = 0xffffcdbeb8e37980
Jan 29 07:45:14 proxmoxsuper kernel: kvm_intel: CS=0010 SS=0018 DS=0000 ES=0000 FS=0000 GS=0000 TR=0040
Jan 29 07:45:14 proxmoxsuper kernel: kvm_intel: FSBase=000073297222e6c0 GSBase=ffff89e7ffc80000 TRBase=fffffe547f89d000
Jan 29 07:45:14 proxmoxsuper kernel: kvm_intel: GDTBase=fffffe547f89b000 IDTBase=fffffe0000000000
Jan 29 07:45:14 proxmoxsuper kernel: kvm_intel: CR0=0000000080050033 CR3=0000000112406006 CR4=00000000007726f0
Jan 29 07:45:14 proxmoxsuper kernel: kvm_intel: Sysenter RSP=fffffe547f89d000 CS:RIP=0010:ffffffff93401930
Jan 29 07:45:14 proxmoxsuper kernel: kvm_intel: PAT = 0x0407050600070106
Jan 29 07:45:14 proxmoxsuper kernel: kvm_intel: *** Control State ***
Jan 29 07:45:14 proxmoxsuper kernel: kvm_intel: CPUBased=0xb5a06dfa SecondaryExec=0x021217fe TertiaryExec=0x0000000000000000
Jan 29 07:45:14 proxmoxsuper kernel: kvm_intel: PinBased=0x000000ff EntryControls=000153ff ExitControls=008befff
Jan 29 07:45:14 proxmoxsuper kernel: kvm_intel: ExceptionBitmap=00060042 PFECmask=00000000 PFECmatch=00000000
Jan 29 07:45:14 proxmoxsuper kernel: kvm_intel: VMEntry: intr_info=000000ec errcode=00000000 ilen=00000000
Jan 29 07:45:14 proxmoxsuper kernel: kvm_intel: VMExit: intr_info=00000000 errcode=00000000 ilen=00000003
Jan 29 07:45:14 proxmoxsuper kernel: kvm_intel:         reason=80000021 qualification=0000000000000004
Jan 29 07:45:14 proxmoxsuper kernel: kvm_intel: IDTVectoring: info=00000000 errcode=00000000
Jan 29 07:45:14 proxmoxsuper kernel: kvm_intel: TSC Offset = 0xfffff998c56671c2
Jan 29 07:45:14 proxmoxsuper kernel: kvm_intel: TSC Multiplier = 0x0001000000000000
Jan 29 07:45:14 proxmoxsuper kernel: kvm_intel: SVI|RVI = 00|ec TPR Threshold = 0x00
Jan 29 07:45:14 proxmoxsuper kernel: kvm_intel: virt-APIC addr = 0x000000015c27b000
Jan 29 07:45:14 proxmoxsuper kernel: kvm_intel: PostedIntrVec = 0xf2
Jan 29 07:45:14 proxmoxsuper kernel: kvm_intel: EPT pointer = 0x00000001663fb05e
Jan 29 07:45:14 proxmoxsuper kernel: kvm_intel: PLE Gap=00000080 Window=00001000
Jan 29 07:45:14 proxmoxsuper kernel: kvm_intel: Virtual processor ID = 0x0001
Jan 29 07:45:14 proxmoxsuper QEMU[14931]: KVM: entry failed, hardware error 0x80000021
Jan 29 07:45:14 proxmoxsuper QEMU[14931]: If you're running a guest on an Intel machine without unrestricted mode
Jan 29 07:45:14 proxmoxsuper QEMU[14931]: support, the failure can be most likely due to the guest entering an invalid
Jan 29 07:45:14 proxmoxsuper QEMU[14931]: state for Intel VT. For example, the guest maybe running in big real mode
Jan 29 07:45:14 proxmoxsuper QEMU[14931]: which is not supported on less recent Intel processors.
Jan 29 07:45:14 proxmoxsuper QEMU[14931]: RAX=000000000d3a957a RBX=0000000915bff914 RCX=00000000000006e0 RDX=0000000000000112
Jan 29 07:45:14 proxmoxsuper QEMU[14931]: RSI=000000000d3a957a RDI=00000000000006e0 RBP=ffffffff87603d48 RSP=ffffcdbeb8e37980
Jan 29 07:45:14 proxmoxsuper QEMU[14931]: R8 =ffff891defc25dc0 R9 =ffff891defc25dc0 R10=0000000000000000 R11=0000000000000000
Jan 29 07:45:14 proxmoxsuper QEMU[14931]: R12=ffff891defc21100 R13=0000000000000001 R14=000000000000000a R15=ffff891defc25e00
Jan 29 07:45:14 proxmoxsuper QEMU[14931]: RIP=ffffffffc1ac58b0 RFL=00000002 [-------] CPL=0 II=0 A20=1 SMM=0 HLT=0
Jan 29 07:45:14 proxmoxsuper QEMU[14931]: ES =0000 0000000000000000 ffffffff 00c01300
Jan 29 07:45:14 proxmoxsuper QEMU[14931]: CS =0010 0000000000000000 ffffffff 00a09b00 DPL=0 CS64 [-RA]
Jan 29 07:45:14 proxmoxsuper QEMU[14931]: SS =0018 0000000000000000 ffffffff 00c09300 DPL=0 DS   [-WA]
Jan 29 07:45:14 proxmoxsuper QEMU[14931]: DS =0000 0000000000000000 ffffffff 00c01300
Jan 29 07:45:14 proxmoxsuper QEMU[14931]: FS =0000 000073297222e6c0 ffffffff 00c01300
Jan 29 07:45:14 proxmoxsuper QEMU[14931]: GS =0000 ffff89e7ffc80000 ffffffff 00c01300
Jan 29 07:45:14 proxmoxsuper QEMU[14931]: LDT=0000 0000000000000000 ffffffff 00c00000
Jan 29 07:45:14 proxmoxsuper QEMU[14931]: TR =0040 fffffe547f89d000 00000067 00008b00 DPL=0 TSS64-busy
Jan 29 07:45:14 proxmoxsuper QEMU[14931]: GDT=     fffffe547f89b000 0000ffff
Jan 29 07:45:14 proxmoxsuper QEMU[14931]: IDT=     fffffe0000000000 0000ffff
Jan 29 07:45:14 proxmoxsuper QEMU[14931]: CR0=80050033 CR2=000060c3e8ebd080 CR3=0000000112406006 CR4=00772ef0
Jan 29 07:45:14 proxmoxsuper QEMU[14931]: DR0=0000000000000000 DR1=0000000000000000 DR2=0000000000000000 DR3=0000000000000000
Jan 29 07:45:14 proxmoxsuper QEMU[14931]: DR6=00000000ffff0ff0 DR7=0000000000000400
Jan 29 07:45:14 proxmoxsuper QEMU[14931]: EFER=0000000000000d01
Jan 29 07:45:14 proxmoxsuper QEMU[14931]: Code=?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? <??> ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ??
 
Can you try setting the CPU to kvm64? Also, if you have nested virtualization enabled, please try disabling it:

Code:
# check if nested virtualization is on, should return Y or N
cat /sys/module/kvm_intel/parameters/nested

Code:
# disable nested virtualization if necessary
echo "options kvm-intel nested=N" | sudo tee /etc/modprobe.d/kvm-intel.conf
# reload the module (or reboot); all VMs must be stopped before unloading
modprobe -r kvm_intel
modprobe kvm_intel
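To confirm the change actually took effect after reloading the module, re-read the same parameter; it should now print N. The fallback branch below is just defensive, for the case where kvm_intel isn't loaded at all:

```shell
# Re-check the nested parameter after the module reload.
cat /sys/module/kvm_intel/parameters/nested 2>/dev/null \
  || echo "kvm_intel not loaded"
```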
 
Same issue with nested=N.
I tried kvm64, host, and various x86-64-v* CPU types; same issue.
I don't know if updating the BIOS will change anything, but I've exhausted all other options for jumping from 4.0 to 4.7, so I'll have to wait for the EEPROM I ordered to arrive.
I might try pulling all the drives and running Ubuntu on bare metal to see if it behaves any differently. I'm really stumped.