Nested virtualization in PVE 4B2?

dgeist

Member
Feb 26, 2015
I'm running PVE 4 beta 2 and am trying to enable nested virtualization. I've already verified that nesting is enabled in the host kernel module:
Code:
root@psp6hypd01:/etc/pve/qemu-server# cat /sys/module/kvm_intel/parameters/nested
Y
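(For anyone who finds this thread with that parameter showing N instead: it can be enabled persistently via a modprobe option. A sketch for Intel hosts; reloading the module requires that no VMs are running:)

```shell
# Enable nested virtualization for the kvm_intel module persistently.
echo "options kvm_intel nested=Y" > /etc/modprobe.d/kvm-intel.conf

# Reload the module so the setting takes effect (stop all VMs first).
modprobe -r kvm_intel
modprobe kvm_intel

# Verify:
cat /sys/module/kvm_intel/parameters/nested
```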

in my VM config, I have:
Code:
args: -enable-kvm
(and)
cpu: SandyBridge

because the VM would not boot with "host" as the CPU type. I have other Debian (well, Ubuntu) machines that pass nested virtualization through just fine. Any idea what's going on with the PVE 4 logic?
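For reference, the relevant excerpt of my VM config looks roughly like this (the +vmx flag on the args line is something I'm experimenting with to expose the virtualization extensions, not a confirmed fix):

```shell
# /etc/pve/qemu-server/<vmid>.conf (excerpt; illustrative)
# "cpu: host" would pass the physical CPU through, but since that
# crashes here, a named model plus an explicit vmx flag is one
# possible workaround:
cpu: SandyBridge
args: -enable-kvm -cpu SandyBridge,+vmx
```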

Dan
 
I'm running a PVE 4 beta 2 nested setup here, working without any problems.

"host" is required, AFAIK, as otherwise QEMU emulates the specific processor model.

[...] the vm would not boot with "host" as the CPU type. I have other debian (well, ubuntu) machines that will pass the nested virtualization just fine. [...]

Do those others have 'host' as the CPU type?

What's the pveversion -v output of the host and the guest PVE, and what is meant by "not booting": does it freeze, or is there other error output?

What does the following command output on the guests?
Code:
egrep '(vmx|svm)' --color=always /proc/cpuinfo
 
Which physical Intel CPU do you have exactly?
 
Hi. I have only a single PVE layer. The client VM itself (a Juniper firewall) actually has a Linux hypervisor and virtio I/O, with a JunOS (BSD) client VM inside it that does the routing-specific work. Here are the version specs on PVE:

root@psp6hypd01:/etc/pve/qemu-server# pveversion -v
proxmox-ve: not correctly installed (running kernel: 4.1.3-1-pve)
pve-manager: 4.0-39 (running version: 4.0-39/ab3cc94a)
pve-kernel-3.19.8-1-pve: 3.19.8-3
pve-kernel-4.1.3-1-pve: 4.1.3-7
lvm2: 2.02.116-pve1
corosync-pve: 2.3.5-1
libqb0: 0.17.2-1
pve-cluster: 4.0-19
qemu-server: 4.0-25
pve-firmware: 1.1-7
libpve-common-perl: 4.0-24
libpve-access-control: 4.0-8
libpve-storage-perl: 4.0-23
pve-libspice-server1: 0.12.5-1
vncterm: 1.2-1
pve-qemu-kvm: 2.4-8
pve-container: 0.9-23
pve-firewall: 2.0-11
pve-ha-manager: 1.0-7
ksm-control-daemon: 1.2-1
glusterfs-client: 3.5.2-2+deb8u1
lxc-pve: 1.1.3-1
lxcfs: 0.9-pve2
cgmanager: 0.37-pve2
criu: 1.6.0-1
zfsutils: 0.6.5-pve1~jessie

...and I just noticed that somehow the "proxmox-ve" package isn't installed. I went and fixed that but still have the same result. I'm unable to see whether the client machine gets the virtualization extensions, since there's a kernel core dump immediately after it passes grub and bootstrap.

Any ideas on how I might be able to get a text console and capture the core dump? The Java-based graphical console in PVE doesn't support copy/paste.

Thanks.
Dan
 
...

...and I just noticed that somehow the "proxmox-ve" package isn't installed. Let me go fix that and see if it changes my results.

Dan

> apt-get update
> apt-get install proxmox-ve
 
> apt-get update
> apt-get install proxmox-ve

Yes, already did that before I finished the post. The behavior is no different. I think the (outer) client VM is a Wind River embedded Linux build. It's certified to run on straight-up Ubuntu or the Juniper Cloud platform (OpenStack derived). I suspect there's a simple incompatibility between the hypervisor "host" kernel definition and the embedded Linux client VM. It would be nice to be able to get the console output, though. Is there something similar to "virsh console" available on Proxmox? All I can find is the web-based stuff.

Dan
 
What are the hypervisor specs inside the guest? Is it KVM/QEMU, and if yes, which version?

Have you tried spice?

But the better option for you would be connecting to a serial console.
Look at https://pve.proxmox.com/wiki/Serial_Terminal

In summary it's:
* qm set <VMID> -serial0 socket
* when starting, edit the boot parameters to include something like "console=tty0 console=ttyS0" (press the e key in grub to edit the selected entry and look for the linux line there)
* qm terminal <VMID>
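If editing the grub entry by hand at every boot gets tedious, the console parameters can also be made persistent inside the guest. A sketch, assuming a stock Debian-style /etc/default/grub:

```shell
# Append serial console parameters to the default kernel command line.
sed -i 's/^GRUB_CMDLINE_LINUX="/&console=tty0 console=ttyS0,115200 /' /etc/default/grub

# Regenerate the grub configuration.
update-grub
```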
 
Which physical Intel CPU do you have exactly?

The hypervisor machine is a quad-socket Sandy Bridge box with E5-2640 CPUs (24 total cores).

The guest is KVM on Wind River Linux (real-time optimized for network functions): http://www.windriver.com/announces/open_virtualization/. I'm not sure of the exact kernel it's based on, but the vendor who packages it qualifies it to run on OpenStack and Ubuntu Trusty.

I bricked my grub config, so I need to drive down to the datacenter a bit later to fix the boot, and then I'll try spice.

Dan
 
Here's the command line of a successfully running instance of the same guest VM on an adjacent server (not identical, but same class) running the latest patched Trusty:

Code:
qemu-system-x86_64 -enable-kvm -name VSRX-15-2 -S -machine pc-i440fx-trusty,accel=kvm,usb=off -cpu SandyBridge,+pdpe1gb,+osxsave,+dca,+pcid,+pdcm,+xtpr,+tm2,+est,+smx,+vmx,+ds_cpl,+monitor,+dtes64,+pbe,+tm,+ht,+ss,+acpi,+ds,+vme -m 4096 -realtime mlock=off -smp 4,sockets=4,cores=1,threads=1 -uuid 5f6a9a6c-3444-7bc7-1ae6-d05edea0f0a5 -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/VSRX-15-2.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown -boot strict=on -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -drive file=/var/lib/libvirt/images/VSRX-15-2.qcow2,if=none,id=drive-ide0-0-0,format=qcow2 -device ide-hd,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0,bootindex=1 -netdev tap,fd=28,id=hostnet0 -device rtl8139,netdev=hostnet0,id=net0,mac=52:54:00:2b:e1:79,bus=pci.0,addr=0x3 -chardev pty,id=charserial0 -device isa-serial,chardev=charserial0,id=serial0 -vnc 127.0.0.1:4 -device cirrus-vga,id=video0,bus=pci.0,addr=0x2 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x4

Anyone see anything that would cause the similar setup in Proxmox not to work properly?
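For comparison, my rough attempt at expressing the same CPU setup on the Proxmox side looks like this (an abridged, illustrative subset of the flag list above; I haven't confirmed this is equivalent):

```shell
# /etc/pve/qemu-server/<vmid>.conf (excerpt; illustrative)
cpu: SandyBridge
args: -enable-kvm -cpu SandyBridge,+vmx,+monitor,+pcid,+ht
```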

Dan
 
I've been able to get the VM to boot under several different variations of the CPU type, including kvm64, etc. As long as I specify the KVM extension on the CLI (I'm doing it on the CLI vs. the GUI so I can mess with the args more easily), I see the extensions in the guest VM once it's booted.

When I specify "host" for the CPU type on the kvm command line, the VM kernel crashes almost immediately; with the others, I get virtualization extensions but incorrect startup behavior within the nested Wind River Linux VM (i.e. no launch of the sub-VM).

At this point, I suspect it's something wrong with the client VM and what it's expecting. I think Proxmox and the hypervisor functions are doing what they should be. It would be great to get a stack trace of the VM boot to see what's making the kernel core dump.

Dan
 
I am researching whether I can run Proxmox with a ZFS root nested on a cloud-style hosting service like vultr.com, which allows me to install from a custom ISO (e.g. Proxmox) instead of a canned distro they provide. Their cpuinfo flags are as follows:

Code:
Flags : fpu de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx rdtscp lm constant_tsc rep_good nopl pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx rdrand hypervisor lahf_lm fsgsbase bmi1 avx2 smep bmi2 erms invpcid

They use virtio to access their drives. What I had in mind was to use them as a disaster recovery site, in the event of an extended power outage, fire, whatever. The idea is to use pve-zsync to replicate the virtual machines of our production Proxmox server, and only start them manually on the nested Proxmox server if a disaster occurs.

Does that sound possible? I don't care if the performance is not all that good, as it will be better than zero performance in the event of a disaster! I just want to know if it will run.
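For what it's worth, my understanding of how the replication side would look (the VMID, destination address, and pool are placeholders; option names as I read them from the pve-zsync docs):

```shell
# One-off sync of VM 100's ZFS-backed disks to the remote host.
pve-zsync sync --source 100 --dest 203.0.113.10:rpool/backup --verbose

# Or create a recurring job (pve-zsync manages a cron entry for it).
pve-zsync create --source 100 --dest 203.0.113.10:rpool/backup --maxsnap 7 --name drsite
```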
 