Nested Virtualization Status

finish06

I am attempting to install a Citrix host inside of my Proxmox host, just to learn more about it. Here is a list of everything I have done so far:

- modified the VM .conf to include: args: -enable-nesting
- modified "/etc/modprobe.d/kvm-intel.conf" to include "options kvm-intel nested=1"
- changed the VM CPU to "host" via the Hardware tab
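
For reference, the modprobe part looks roughly like this on the host (a sketch, assuming an Intel CPU; the module has to be reloaded with all VMs stopped before the option takes effect):
Code:
# /etc/modprobe.d/kvm-intel.conf
options kvm-intel nested=1

# reload the module so the option takes effect (stop all VMs first)
modprobe -r kvm_intel && modprobe kvm_intel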

However, I get a kernel panic when attempting to boot Citrix. Is there anything I have missed?

I have updated and upgraded via apt-get with the Proxmox VE No-Subscription repository. I believe I am running a 2.6.x kernel, not the newer 3.x from the test repo. Do I need to switch to the test repo? Is that what is necessary for nesting?

Thanks!
 
Nesting is still experimental. If you want to play with it, use our 3.10 kernel (from the pve-no-subscription repo); 2.6.32 never worked for me.
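
Installing it should be something like this (a sketch; the exact ABI suffix in the package name depends on the current release, -7 being one example):
Code:
apt-get update
apt-get install pve-kernel-3.10.0-7-pve
# reboot and select the 3.10 kernel at the boot menu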

Post your results, and also let us know which physical CPU you have.
 
Nesting worked on my 2.6.32. It's not practical, but I am even able to create a nested environment inside another nested environment.

Is it XenServer you are trying to install?
 

Yep! :)

I am attempting to get Xen installed. I recently updated to the 3.x kernel and played around a little, but still no luck. Can I ask what you did to get it working on the 2.6.x kernel?

Thanks!
 
I did not do anything special, really; the same configuration you already did. I was not even aware there was an issue with nested virtualization. I will give XenServer a try in my environment and see if it works.
 
What you did looks correct.

I've only used nesting once on an AMD Phenom II X6 and it was a long time ago.

I know you added the option to modprobe, but did it actually work?
You can check with this:
Code:
modinfo kvm_intel | grep -i nested
or maybe this:
Code:
cat /sys/module/kvm_intel/parameters/nested
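If nesting is enabled, that last one should print Y (or 1 on some kernel builds); N/0 means the option did not take effect. Something like:
Code:
# cat /sys/module/kvm_intel/parameters/nested
Y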

I've also seen directions indicating you should add "options kvm-intel nested=y" to the modprobe config.
Not sure if that makes any difference or not.
 
I was never able to get Xen to boot properly in a nested environment; however, I was able to get Proxmox to boot up with the above commands.

e100: I verified KVM was working via the following command: "lsmod | grep kvm", and it returned kvm_intel... was this not correct?
 

I've also tried with both the 2.6.x and 3.10 kernels. I had some success with 3.10, which was the only one correctly passing the vmx flag to the guest VM (by selecting "host" as the VM CPU and loading kvm-intel with nested=1 via /etc/modules).
However, I only managed to get Proxmox virtualized as a guest hypervisor. Neither Xen nor Hyper-V worked for me (they crashed at boot, as you said).
I finally used VMware Workstation, which really does a great job at this. I can run Xen and Hyper-V as nested guest VMs and also run VMs inside them with acceptable performance (anyway, this is just for testing purposes).
 
Hi all,

In order to build a training platform, I spent some time trying things.

Physical servers are IBM X3850 X5.
CPUs are Xeon E7-4807
There is more than enough RAM.

The only hypervisor I tried was Proxmox (what else?) v3.3-20.

The goal is to teach 8 students at once how to build a 2-node Proxmox cluster with DRBD, but I do not have enough hardware, so I went for "Proxmox in Proxmox".

For L0 (the physical server), in the <vmid>.conf of the L1 Proxmox VM, forget "args: -enable-nesting"; it is not allowed anymore by recent KVM versions.
I use "args: -cpu qemu64,+vmx" and it works fine: the vmx flag is visible in the L1 hypervisor. CPU type "host" may also be enough.

If L0 runs kernel 2.6.32, the vmx flag is NOT visible in the L1 hypervisor, so you lose hardware acceleration for L1.
If L0 runs kernel 3.10.0, the vmx flag IS available in the L1 hypervisor, so L1 runs fast and L2 guests are supposed to run fast.
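
An easy way to verify, from inside L1 (a generic check, nothing Proxmox-specific):
Code:
# a non-zero count means vmx is passed through to L1
grep -c vmx /proc/cpuinfo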

So kernel 3.10.0 (-7 for me) was the only option for L0.

For the L1 hypervisor, no problem with either 2.6.32 or 3.10.0: installation went OK, and all is fine from the L1 hypervisor's point of view. The vmx flag IS seen, so KVM hardware acceleration is available for L2 guests: promising!

But whatever OS I try to install (Fedora 20, CentOS 7.0, Win2012R2) in an L2 guest, random crashes lead to a "double fault 0x0" panic (Fedora and CentOS) or "needs to be restarted" (Win2012R2), with IDE or VirtIO disks, Intel e1000 or VirtIO NICs. Crashes occur sometimes after disk access, often before.

Disabling the "KVM hardware virtualization" option for the L2 guest does work: at least Fedora is installing right now; Win2012R2 refuses, but that is not a problem.
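
For reference, that corresponds to the following line in the L2 guest's config on the L1 hypervisor (the same thing as unchecking "KVM hardware virtualization" in the GUI, if I recall the conf syntax correctly):
Code:
# /etc/pve/qemu-server/<vmid>.conf on the L1 hypervisor
kvm: 0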

So, as of today:
- running a KVM-accelerated Proxmox in Proxmox (L1) is OK if the L0 kernel is 3.10.0. Kernels 3.10.0 and 2.6.32 are both OK for the L1 hypervisor.
- running a KVM-accelerated L2 guest was NOT possible for me, be it Fedora, CentOS, or Win2012R2.
- running a non-accelerated L2 guest reliably was possible, at least with Fedora and CentOS.

Has anyone had any success with KVM hardware virtualization on an L2 guest?

Christophe.
 
