Proxmox internal error issue

Dec 27, 2021
Hi,

I am running a FreeBSD virtual machine on Proxmox, and inside that FreeBSD guest I am running a second layer of virtualization with cbsd. When I create a virtual machine with cbsd and start it, the FreeBSD guest suddenly freezes, and Proxmox then reports an internal-error state. The syslog output is below (example cbsd commands follow the log):

Bash:
Dec 28 14:45:34 pve QEMU[3003940]: KVM internal error. Suberror: 1
Dec 28 14:45:34 pve QEMU[3003940]: emulation failure
Dec 28 14:45:34 pve QEMU[3003940]: RAX=0000000000000000 RBX=0000000000000000 RCX=0000000000000000 RDX=0000000000000f00
Dec 28 14:45:34 pve QEMU[3003940]: RSI=0000000000000000 RDI=0000000000000000 RBP=0000000000000000 RSP=fffffe0099012700
Dec 28 14:45:34 pve QEMU[3003940]: R8 =0000000000000000 R9 =0000000000000000 R10=0000000000000000 R11=0000000000000000
Dec 28 14:45:34 pve QEMU[3003940]: R12=0000000000000000 R13=0000000000000000 R14=0000000000000000 R15=0000000000000000
Dec 28 14:45:34 pve QEMU[3003940]: RIP=ffffffff8249b5d9 RFL=00000046 [---Z-P-] CPL=0 II=0 A20=1 SMM=0 HLT=0
Dec 28 14:45:34 pve QEMU[3003940]: ES =003b 0000000000000000 ffffffff 00c0f300 DPL=3 DS   [-WA]
Dec 28 14:45:34 pve QEMU[3003940]: CS =0020 0000000000000000 ffffffff 00a09b00 DPL=0 CS64 [-RA]
Dec 28 14:45:34 pve QEMU[3003940]: SS =0028 0000000000000000 ffffffff 00c09300 DPL=0 DS   [-WA]
Dec 28 14:45:34 pve QEMU[3003940]: DS =003b 0000000000000000 ffffffff 00c0f300 DPL=3 DS   [-WA]
Dec 28 14:45:34 pve QEMU[3003940]: FS =0013 0000000800b308d0 ffffffff 00c0f300 DPL=3 DS   [-WA]
Dec 28 14:45:34 pve QEMU[3003940]: GS =001b ffffffff83017000 ffffffff 00c0f300 DPL=3 DS   [-WA]
Dec 28 14:45:34 pve QEMU[3003940]: LDT=0000 0000000000000000 ffffffff 00c00000
Dec 28 14:45:34 pve QEMU[3003940]: TR =0048 ffffffff81f16078 00002068 00008b00 DPL=0 TSS64-busy
Dec 28 14:45:34 pve QEMU[3003940]: GDT=     ffffffff81f1c878 00000067
Dec 28 14:45:34 pve QEMU[3003940]: IDT=     ffffffff81f14da0 00000fff
Dec 28 14:45:34 pve QEMU[3003940]: CR0=8005003b CR2=0000000000000000 CR3=000000001ef0e64b CR4=001726e0
Dec 28 14:45:34 pve QEMU[3003940]: DR0=0000000000000000 DR1=0000000000000000 DR2=0000000000000000 DR3=0000000000000000
Dec 28 14:45:34 pve QEMU[3003940]: DR6=00000000ffff0ff0 DR7=0000000000000400
Dec 28 14:45:34 pve QEMU[3003940]: EFER=0000000000000d01
Dec 28 14:45:34 pve QEMU[3003940]: Code=50 4c 8b 67 58 4c 8b 6f 60 4c 8b 77 68 4c 8b 7f 70 48 8b 3f <0f> 01 c2 48 89 e7 b8 02 00 00 00 eb 07 b8 03 00 00 00 eb 00 41 bb 02 00 00 00 74 06 41 bb
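
For reference, the nested VM is created and started inside the FreeBSD guest roughly like this (a sketch from memory; the VM name and sizes are placeholders, not my exact values, and the bcreate parameters may differ slightly):

Bash:
# inside the FreeBSD guest: create a bhyve VM with cbsd...
cbsd bcreate jname=nestedvm vm_os_type=freebsd vm_ram=2g vm_cpus=2 imgsize=20g
# ...and start it; the freeze happens right after this step
cbsd bstart nestedvm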

I am using Proxmox VE version 7.1-8.

Here is the output of pveversion -v:

Bash:
root@pve ~ # pveversion -v
proxmox-ve: 7.1-1 (running kernel: 5.13.19-2-pve)
pve-manager: 7.1-8 (running version: 7.1-8/5b267f33)
pve-kernel-helper: 7.1-6
pve-kernel-5.13: 7.1-5
pve-kernel-5.13.19-2-pve: 5.13.19-4
ceph-fuse: 16.2.7
corosync: 3.1.5-pve2
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown: residual config
ifupdown2: 3.1.0-1+pmx3
libjs-extjs: 7.0.0-1
libknet1: 1.22-pve2
libproxmox-acme-perl: 1.4.0
libproxmox-backup-qemu0: 1.2.0-1
libpve-access-control: 7.1-5
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.0-14
libpve-guest-common-perl: 4.0-3
libpve-http-server-perl: 4.0-4
libpve-storage-perl: 7.0-15
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 4.0.11-1
lxcfs: 4.0.11-pve1
novnc-pve: 1.3.0-1
proxmox-backup-client: 2.1.2-1
proxmox-backup-file-restore: 2.1.2-1
proxmox-mini-journalreader: 1.3-1
proxmox-widget-toolkit: 3.4-4
pve-cluster: 7.1-3
pve-container: 4.1-3
pve-docs: 7.1-2
pve-edk2-firmware: 3.20210831-2
pve-firewall: 4.2-5
pve-firmware: 3.3-4
pve-ha-manager: 3.3-1
pve-i18n: 2.6-2
pve-qemu-kvm: 6.1.0-3
pve-xtermjs: 4.12.0-1
qemu-server: 7.1-4
smartmontools: 7.2-1
spiceterm: 3.2-2
swtpm: 0.7.0~rc1+2
vncterm: 1.7-1
zfsutils-linux: 2.1.1-pve3

My VM's configuration file is as follows:

Bash:
root@pve ~ # cat /etc/pve/qemu-server/1056.conf
balloon: 0
boot: order=scsi0;ide2;net0
cores: 10
cpu: host
ide2: local:iso/FreeBSD-13.0-RELEASE-amd64-dvd1.iso,media=cdrom
memory: 20000
meta: creation-qemu=6.1.0,ctime=1640700711
name: xxxx
net0: virtio=02:82:61:DF:89:78,bridge=vmbr0
numa: 1
ostype: l26
parent: sifirr
scsi0: zfs-disk-pool:vm-1056-disk-0,size=500G
scsihw: virtio-scsi-pci
smbios1: uuid=fd943340-a54c-4421-8192-62a5c60dbd61
sockets: 1
vmgenid: f3d02f88-e34e-4678-bda8-5496e928033b

[sifir]
balloon: 0
boot: order=scsi0;ide2;net0
cores: 10
cpu: host
ide2: local:iso/FreeBSD-13.0-RELEASE-amd64-dvd1.iso,media=cdrom
memory: 20000
meta: creation-qemu=6.1.0,ctime=1640700711
name: xxxx
net0: virtio=02:82:61:DF:89:78,bridge=vmbr0
numa: 1
ostype: l26
scsi0: zfs-disk-pool:vm-1056-disk-0,size=500G
scsihw: virtio-scsi-pci
smbios1: uuid=fd943340-a54c-4421-8192-62a5c60dbd61
snaptime: 1640715520
sockets: 1
vmgenid: f3d02f88-e34e-4678-bda8-5496e928033b

[sifirr]
balloon: 0
boot: order=scsi0;ide2;net0
cores: 10
cpu: host
ide2: local:iso/FreeBSD-13.0-RELEASE-amd64-dvd1.iso,media=cdrom
memory: 20000
meta: creation-qemu=6.1.0,ctime=1640700711
name: xxx
net0: virtio=02:82:61:DF:89:78,bridge=vmbr0
numa: 1
ostype: l26
parent: sifir
runningcpu: host,+kvm_pv_eoi,+kvm_pv_unhalt
runningmachine: pc-i440fx-6.1+pve0
scsi0: zfs-disk-pool:vm-1056-disk-0,size=500G
scsihw: virtio-scsi-pci
smbios1: uuid=fd943340-a54c-4421-8192-62a5c60dbd61
snaptime: 1640715531
sockets: 1
vmgenid: f3d02f88-e34e-4678-bda8-5496e928033b
vmstate: zfs-disk-pool:vm-1056-state-sifirr

Additionally, nested virtualization is enabled on my Proxmox host, since I am running another layer of virtualization inside the FreeBSD guest.
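
For reference, nesting was enabled on the Proxmox host roughly like this (assuming an Intel CPU; on AMD the module is kvm_amd and the path is /sys/module/kvm_amd/parameters/nested):

Bash:
# check whether nested virtualization is enabled (prints Y or 1)
cat /sys/module/kvm_intel/parameters/nested
# enable it persistently
echo "options kvm-intel nested=Y" > /etc/modprobe.d/kvm-intel.conf
# reload the module (all VMs must be stopped first)
modprobe -r kvm_intel && modprobe kvm_intel

The VM also uses cpu: host (see the config above), which is needed so the virtualization extensions are passed through to the FreeBSD guest.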
