Nested virtualization: Vagrant inside a Proxmox VM

Oct 18, 2016
Hi folks,
I'm trying to set up a VM that is used to launch other Vagrant boxes with VirtualBox as the provider. What I have done already:

  1. So my first step was to activate nested virtualization, as described in https://pve.proxmox.com/wiki/Nested_Virtualization (the commands are sketched after the log below).
    Result:
    root@proxmox5:~# cat /sys/module/kvm_intel/parameters/nested
    Y
  2. Check inside a newly created Ubuntu VM:
    user@vagrant:~$ egrep vmx /proc/cpuinfo
    flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl eagerfpu pni pclmulqdq vmx ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm rdseed adx smap xsaveopt arat

    So that seems correct as well
  3. Create the box with vagrant init minimal/trusty64
  4. Launch the box, which ends unsuccessfully with the following logs:
[ 3.402603] cgroup: new mount options do not match the existing superblock, will be ignored
[ 3.628184] vboxdrv: module verification failed: signature and/or required key missing - tainting kernel
[ 3.631918] vboxdrv: Found 4 processor cores
[ 3.632120] vboxdrv: fAsync=0 offMin=0x2a8 offMax=0x17ba
[ 3.732304] vboxdrv: TSC mode is Synchronous, tentative frequency 2197454044 Hz
[ 3.732306] vboxdrv: Successfully loaded version 5.0.24_Ubuntu (interface 0x00240000)
[ 3.737486] VBoxNetFlt: Successfully started.
[ 3.741074] VBoxNetAdp: Successfully started.
[ 3.744362] VBoxPciLinuxInit
[ 3.746688] vboxpci: IOMMU not found (not registered)
[ 27.497498] random: nonblocking pool is initialized
[ 1274.759142] capability: warning: `VirtualBox' uses 32-bit capabilities (legacy support in use)
[ 1487.099829] SUPR0GipMap: fGetGipCpu=0x3
[ 1487.142392] general protection fault: 0000 [#1] SMP
[ 1487.142429] Modules linked in: pci_stub vboxpci(OE) vboxnetadp(OE) vboxnetflt(OE) vboxdrv(OE) ppdev kvm_intel kvm irqbypass joydev input_leds serio_raw shpchp i2c_piix4 parport_pc parport 8250_fintek mac_hid ib_iser rdma_cm iw_cm ib_cm ib_sa ib_mad ib_core ib_addr iscsi_tcp libiscsi_tcp libiscsi scsi_transport_iscsi autofs4 btrfs raid10 raid456 async_raid6_recov async_memcpy async_pq async_xor async_tx xor hid_generic usbhid hid raid6_pq libcrc32c raid1 raid0 multipath linear crct10dif_pclmul crc32_pclmul cirrus aesni_intel ttm drm_kms_helper aes_x86_64 syscopyarea lrw sysfillrect gf128mul sysimgblt glue_helper fb_sys_fops ablk_helper cryptd drm psmouse pata_acpi floppy
[ 1487.142898] CPU: 1 PID: 1822 Comm: EMT Tainted: G OE 4.4.0-43-generic #63-Ubuntu
[ 1487.142940] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.9.1-0-gb3ef39f-prebuilt.qemu-project.org 04/01/2014
[ 1487.142997] task: ffff88040ec76040 ti: ffff88041394c000 task.ti: ffff88041394c000
[ 1487.143033] RIP: 0010:[<ffffffffc065b566>] [<ffffffffc065b566>] 0xffffffffc065b566
[ 1487.143075] RSP: 0018:ffff88041394fd70 EFLAGS: 00050206
[ 1487.143101] RAX: 00000000003406e0 RBX: 00000000ffffffdb RCX: 000000000000009b
[ 1487.143151] RDX: 0000000000000000 RSI: ffff88041394fd00 RDI: ffff88041394fcc8
[ 1487.143199] RBP: ffff88041394fd90 R08: 0000000000000000 R09: 00000000003406e0
[ 1487.143230] R10: 0000000000000004 R11: ffff880427c97c88 R12: 0000000000000020
[ 1487.143261] R13: 0000000000000000 R14: ffffc90001c6007c R15: ffffffffc04ee2a0
[ 1487.143292] FS: 00007ff8b955b700(0000) GS:ffff880427c80000(0000) knlGS:0000000000000000
[ 1487.143327] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 1487.143369] CR2: 00007ff8b9300000 CR3: 00000004151ab000 CR4: 00000000003406e0
[ 1487.143405] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[ 1487.143438] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
[ 1487.143471] Stack:
[ 1487.143499] 0000000000000000 ffffffff00000000 0000000000000000 0000000000000002
[ 1487.143539] ffff88041394fdb0 ffffffffc0670e11 ffffc90001c60010 ffff88040ec8fc10
[ 1487.143580] ffff88041394fe30 ffffffffc04b32f6 ffff88041394fe10 0000000000040282
[ 1487.143620] Call Trace:
[ 1487.143650] [<ffffffffc04b32f6>] ? supdrvIOCtl+0x2d36/0x3250 [vboxdrv]
[ 1487.143702] [<ffffffffc04ac5b0>] ? VBoxDrvLinuxIOCtl_5_0_24+0x150/0x250 [vboxdrv]
[ 1487.143745] [<ffffffff8122123f>] ? do_vfs_ioctl+0x29f/0x490
[ 1487.143790] [<ffffffff8106b504>] ? __do_page_fault+0x1b4/0x400
[ 1487.143834] [<ffffffff812214a9>] ? SyS_ioctl+0x79/0x90
[ 1487.143868] [<ffffffff818318b2>] ? entry_SYSCALL_64_fastpath+0x16/0x71
[ 1487.143900] Code: 88 e4 fc ff ff b9 3a 00 00 00 0f 32 48 c1 e2 20 89 c0 48 09 d0 48 89 05 99 db 0e 00 0f 20 e0 b9 9b 00 00 00 48 89 05 72 db 0e 00 <0f> 32 48 c1 e2 20 89 c0 b9 80 00 00 c0 48 09 d0 48 89 05 6b db
[ 1487.144073] RIP [<ffffffffc065b566>] 0xffffffffc065b566
[ 1487.144115] RSP <ffff88041394fd70>
[ 1487.147128] ---[ end trace 72340e61e14cdb51 ]---
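For reference, here is a rough sketch of what steps 1, 3 and 4 look like as commands. The host part follows the wiki page linked above; the VM id 100 is just a placeholder for the Ubuntu VM, and the important bit is giving the guest CPU type "host" so that vmx is passed through:

root@proxmox5:~# echo "options kvm-intel nested=Y" > /etc/modprobe.d/kvm-intel.conf
root@proxmox5:~# modprobe -r kvm_intel    # only works while no VM is running
root@proxmox5:~# modprobe kvm_intel
root@proxmox5:~# qm set 100 --cpu host    # expose the host CPU (incl. vmx) to the guest

Then inside the Ubuntu VM:

user@vagrant:~$ mkdir trusty && cd trusty
user@vagrant:~$ vagrant init minimal/trusty64
user@vagrant:~$ vagrant up --provider virtualbox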


VirtualBox hangs, the VM stays in the "starting" state, and only a reboot of the VM brings the VirtualBox VM back to a stopped state where it can be deleted.
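For anyone who wants to avoid the reboot, the usual VirtualBox cleanup inside the nested VM would look something like this (the VM name is just an example of what Vagrant generates); in my case only the reboot actually helped:

user@vagrant:~$ VBoxManage list vms                            # find the name/UUID of the stuck VM
user@vagrant:~$ VBoxManage controlvm "trusty_default" poweroff
user@vagrant:~$ VBoxManage unregistervm "trusty_default" --delete
user@vagrant:~$ vagrant destroy -f                             # let vagrant clean up its own state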

Does anybody have a clue why the setup fails, or what has to be done to get it into a working state?

Additional Info: pve-manager/4.3-3/557191d3 (running kernel: 4.4.6-1-pve)

Regards

Schnuffle
 
Thanks for the answer, but that isn't a solution.
The problem is that I get a Vagrant test environment that is based on the VirtualBox provider. Adapting it to another provider would be a major task, as the setup is quite complex and we get updates every couple of weeks.

I had hoped that nested virtualization would allow me to set up a VM that can be used as a Vagrant/VirtualBox environment. But apart from the URL I posted, I couldn't find any docs giving more info.

For the moment I have a physical host doing the job, but that shouldn't be the final solution.

Regards

Schnuffle
 
