Need advice with Nested Virtualization.

Guillaume Soucy

Well-Known Member
Oct 20, 2017
L'Orignal, Canada
guillaumesoucy.com
Hello,

I ran into an issue that prevents me from starting a VM within a VM, i.e. doing nested virtualization. On three of my physical hosts I have no issues at all doing so: all I had to do was set the CPU type of the VMs running on my Proxmox hosts to "host", and then I was able to run VMs inside those VMs. But I can't do the same thing on one of my hosts. When I start the nested VM in VirtualBox, it fails.

When I run this in the terminal of the VM used for nested virtualization:

Code:
 cat /proc/cpuinfo | egrep "vmx|svm"

it returns nothing. The same command in the other Proxmox VMs used for nested virtualization returns this:

“flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx pdpe1gb lm constant_tsc arch_perfmon rep_good nopl xtopology cpuid tsc_known_freq pni vmx ssse3 cx16 pdcm sse4_1 x2apic tsc_deadline_timer xsave hypervisor lahf_lm cpuid_fault pti tpr_shadow vnmi flexpriority tsc_adjust arat arch_capabilities”.

Virtualization technology is definitely enabled in the BIOS, and my CPU must be virtualization-capable if I'm able to run VMs on Proxmox, right?
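For reference, the vmx/svm check above can be exercised against any flags line; here is a minimal sketch using a made-up sample string instead of the live /proc/cpuinfo:

```shell
# Same grep the thread uses, run against a made-up sample instead of /proc/cpuinfo
sample="flags: fpu vme de pse tsc msr pae vmx ssse3 cx16"
if printf '%s\n' "$sample" | grep -E -q 'vmx|svm'; then
  echo "hardware virtualization flags present"
else
  echo "no vmx/svm flags: nested virtualization will not work in this guest"
fi
```

If the real /proc/cpuinfo in the guest shows neither flag, the hypervisor is not exposing hardware virtualization to that VM, which is exactly the symptom described.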

Thanks in advance for your help!

Guillaume
 
Hello,

I mentioned that on three of my physical hosts I have no issues at all doing this, but I just found a problem on a different host that we've worked on. I thought the host named virtualbox1-dc, running Linux Mint on Proxmox VE, had no issues, but after trying to boot a nested VM in VirtualBox it seems to freeze on the "Press F12" message.

When I run this command on the Proxmox host:
Code:
 cat /sys/module/kvm_amd/parameters/nested
it returns "1". I get the same result when running the command inside the VM running on Proxmox (virtualbox1-dc).

The CPU that Proxmox is running on is an AMD A4-4000.
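For what it's worth, the "1" above usually comes from a modprobe option on the AMD host, along these lines (a sketch; the file name is an arbitrary choice):

```
# /etc/modprobe.d/kvm-amd-nested.conf (file name is an arbitrary choice)
options kvm_amd nested=1
```

After changing this, the kvm_amd module has to be reloaded (or the host rebooted) for the parameter to take effect.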

Thank you in advance for any additional help.

Guillaume
 
Hi, nice that you made some progress.
This remaining issue could be a VirtualBox-specific issue.
But the first thing to check is whether you set the CPU type to "host" in the virtualbox1-dc VM config inside Proxmox.
Also, don't overcommit the number of cores in the VM and nested VM, as you only have 2c/2t.
I assume that cat /proc/cpuinfo | egrep "vmx|svm" gives output, just checking.
Maybe installing the AMD microcode will help fix this issue, see [1].
There is also some logging inside VirtualBox; maybe you can get a hint from there as to why it's freezing.

To be honest, I haven't run VirtualBox for many years now. When I need a graphical/desktop virtualization solution, I now always default to Virtual Machine Manager (virt-manager). You can install it in Mint too. It uses the same base as Proxmox: QEMU/KVM.

Similar questions about nested VirtualBox have been asked in this forum, e.g. [2].

Can I ask what your use case is for running a nested VM?
If it's just running an old VirtualBox image, you could try to convert and import it into Proxmox, see [3], [4], [5].
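Such a conversion typically boils down to something like the following sketch (the file names, VM ID 120, and storage name "local-lvm" are placeholders):

```
# Convert the VirtualBox disk to qcow2, then attach it to an existing Proxmox VM
qemu-img convert -f vdi -O qcow2 old-machine.vdi old-machine.qcow2
qm importdisk 120 old-machine.qcow2 local-lvm
```

The imported disk then shows up as an unused disk on VM 120 and can be attached via the hardware tab.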

[1] https://forum.proxmox.com/threads/what-is-correct-way-to-install-intel-microcode.75664/post-336793
[2] https://forum.proxmox.com/threads/nested-virtual-box.94918
[3] https://pve.proxmox.com/wiki/Migration_of_servers_to_Proxmox_VE
[4] https://pve.proxmox.com/wiki/Additional_ways_to_migrate_to_Proxmox_VE
[5] https://pve.proxmox.com/pve-docs/pve-admin-guide.html#_importing_virtual_machines_and_disk_images
 
Hello,

The CPU is set to "host"; that's the first thing I verified. :)

virt-manager sounds interesting, I will definitely take a deeper look into that.

As for the reason, yes, I can tell you: I want to run old Windows releases to study them. VirtualBox was interesting because it can give me RDP access to VMs even if the OSes are as old as the earth and don't support RDP or recent versions of VNC. I found that idea so great... I hope virt-manager can provide that as well. I will have to read. (a lot) :D
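For what it's worth, libvirt exposes console access at the hypervisor level in a similar way, so the guest OS doesn't need to support anything itself. A domain XML fragment along these lines (a sketch; the listen address is an assumption) enables VNC regardless of guest age:

```
<!-- libvirt domain XML fragment: hypervisor-level VNC console (sketch) -->
<graphics type='vnc' port='-1' autoport='yes' listen='0.0.0.0'/>
```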

Thanks!
 
Hello,

Another issue with one of the other hosts while trying to set up virt-manager.

When starting a VM I got this in the "test" VM log: KVM: entry failed, hardware error 0x7

Code:
 cat /sys/module/kvm_intel/parameters/nested
results in "Y".

In the GUI the VM gets paused right after starting it.

[Screenshot attached: Screenshot at 2022-01-04 03-54-45.png]

This is the error message in the GUI:

Error unpausing domain: internal error: unable to execute QEMU command 'cont': Resetting the Virtual Machine is required

Code:
Traceback (most recent call last):
  File "/usr/share/virt-manager/virtManager/asyncjob.py", line 75, in cb_wrapper
    callback(asyncjob, *args, **kwargs)
  File "/usr/share/virt-manager/virtManager/asyncjob.py", line 111, in tmpcb
    callback(*args, **kwargs)
  File "/usr/share/virt-manager/virtManager/object/libvirtobject.py", line 66, in newfn
    ret = fn(self, *args, **kwargs)
  File "/usr/share/virt-manager/virtManager/object/domain.py", line 1311, in resume
    self._backend.resume()
  File "/usr/lib/python3/dist-packages/libvirt.py", line 2174, in resume
    if ret == -1: raise libvirtError ('virDomainResume() failed', dom=self)
libvirt.libvirtError: internal error: unable to execute QEMU command 'cont': Resetting the Virtual Machine is required

My setup topology:

I installed Ubuntu Server 20.04 as a VM on Proxmox with the CPU set to "host", installed virt-manager in the VM with apt-get install virt-manager, and from my workstation on the same network I established a connection to the new Ubuntu host using "ssh+qemu". It connects successfully, but I can't start any nested VMs. I was able to with VirtualBox.
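The remote connection described above corresponds to a libvirt URI; a sketch of the equivalent command-line check (the hostname and user are placeholders):

```
# List domains on the remote Ubuntu host over SSH (hostname/user are placeholders)
virsh -c qemu+ssh://user@ubuntu-host/system list --all
```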

I did some searching, and everything I could find that looked like my issue were topics about full disks. The disk isn't full; there's about 98 GB free on it.

Thank you for any further help! ;-)

Guillaume
 
Most likely there is an issue passing some instructions to the next virtualization layer.
I found this bug [1], but that was with a somewhat older kernel and was fixed by a new SeaBIOS release for Ubuntu.
So it's important to be on the latest release, which would be 20.04.3 with the latest patches.
But as this bug report is rather lengthy, there could also be some helpful info for troubleshooting in it.

One thing to look at is the QEMU guest log; you can get it from inside the Ubuntu Server 20.04 VM:
Code:
 cat /var/log/libvirt/qemu/<guestname>.log
As you can see, QEMU in your nested VM is managed by libvirt, as opposed to Proxmox, which has its own management framework.

Another thing to look at is your CPU type in virt-manager: it defaults to "Copy host CPU configuration", which ends up with the best possible match.
But there you can also set the CPU model to "host-passthrough", see [2].
You can check which CPU type is recognised inside the VM via lscpu.
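In the domain XML, that setting looks roughly like this (a sketch of the relevant fragment):

```
<!-- libvirt domain XML fragment: pass the host CPU through unchanged (sketch) -->
<cpu mode='host-passthrough' check='none'/>
```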

There is also another thread here [3], but with no solution.

And lastly, you could check whether this host is on its latest BIOS release.

[1] https://bugs.launchpad.net/qemu/+bug/1866870
[2] https://qemu.readthedocs.io/en/latest/system/qemu-cpu-models.html
[3] https://forum.proxmox.com/threads/vm-ubuntu-server-random-running-issues.81483/
 