Proxmox 4 - trouble installing Ubuntu Server 14.04

Lukeb

New Member
Oct 29, 2015
Hi,

I'm testing Proxmox 4 inside a virtual machine before using it in a production environment.

My environment is configured as follows:
  • CentOS 7.1 (kernel 3.10.0-229.14.1.el7.x86_64) on a physical machine with a Phenom CPU and 12 GB of RAM
  • Proxmox 4 (proxmox-ve_4.0-0d8559d0-17.iso) installed in a KVM VM (nested virtualization enabled)
    • 8 GB of RAM
    • 4 cores
    • 200 GB on an SSD disk (qcow2)
  • I installed pfSense without problems
  • Installing Ubuntu Server 14.04, however, gives me big trouble:
    • the installation procedure is quite long
    • as the installation goes on, the logs in /var/log/ (kern.log, messages, syslog) grow until the main partition of the virtual disk is completely full - the 3 logs end up about the same size
    • the consequence is that the installation process halts
    • here is the disk layout generated by the Proxmox installation, after the attempt to install Ubuntu 14.04:
      Code:
      root@pve:/var/log# df -h
      Filesystem            Size  Used Avail Use% Mounted on
      udev                   10M     0   10M   0% /dev
      tmpfs                 2.0G  217M  1.8G  11% /run
      /dev/dm-0              44G   44G     0 100% /
      tmpfs                 4.9G   37M  4.9G   1% /dev/shm
      tmpfs                 5.0M     0  5.0M   0% /run/lock
      tmpfs                 4.9G     0  4.9G   0% /sys/fs/cgroup
      /dev/mapper/pve-data  114G  445M  113G   1% /var/lib/vz
      /dev/fuse              30M   12K   30M   1% /etc/pve
      cgmfs                 100K     0  100K   0% /run/cgmanager/fs
    • checking these huge logs gives me no useful information (at least not to me!); unfortunately they are far too big to attach (about 14 GB each), but in kern.log I found a series of repeated messages - excerpt below, and after it a rough sketch of how the disk usage can be checked and recovered
      Code:
      Oct 28 19:35:55 pve kernel: [ 7281.584472] Modules linked in: ip_set ip6table_filter ip6_tables softdog iptable_filter ip_tables x_tables nfsd auth_rpcgss nfs_acl nfs lockd grace fscache sunrpc ib_iser rdma_cm iw_cm ib_cm ib_sa ib_mad ib_core ib_addr iscsi_tcp libiscsi_tcp libiscsi scsi_transport_iscsi nfnetlink_log nfnetlink zfs(PO) zunicode(PO) zcommon(PO) znvpair(PO) spl(O) zavl(PO) snd_hda_codec_generic ppdev snd_hda_intel kvm_amd snd_hda_codec kvm snd_hda_core snd_hwdep joydev input_leds psmouse snd_pcm serio_raw pcspkr qxl snd_timer ttm snd drm_kms_helper soundcore pvpanic drm parport_pc parport 8250_fintek syscopyarea sysfillrect sysimgblt i2c_piix4 mac_hid vhost_net vhost macvtap macvlan autofs4 hid_generic usbhid hid pata_acpi floppy
      Oct 28 19:35:55 pve kernel: [ 7281.584526] CPU: 3 PID: 2489 Comm: kvm Tainted: P        W  O    4.2.2-1-pve #1
      Oct 28 19:35:55 pve kernel: [ 7281.584528] Hardware name: Red Hat KVM, BIOS 0.5.1 01/01/2011
      Oct 28 19:35:55 pve kernel: [ 7281.584530]  ffffffffc052d79b ffff8802b0a4fbe8 ffffffff817c92f3 0000000000000007
      Oct 28 19:35:55 pve kernel: [ 7281.584533]  0000000000000000 ffff8802b0a4fc28 ffffffff8107776a ffff880200000000
      Oct 28 19:35:55 pve kernel: [ 7281.584535]  ffff8802b0b28000 ffffffff80f32aa6 ffff8802b0b28000 0000000000000000
      Oct 28 19:35:55 pve kernel: [ 7281.584537] Call Trace:
      Oct 28 19:35:55 pve kernel: [ 7281.584544]  [<ffffffff817c92f3>] dump_stack+0x45/0x57
      Oct 28 19:35:55 pve kernel: [ 7281.584548]  [<ffffffff8107776a>] warn_slowpath_common+0x8a/0xc0
      Oct 28 19:35:55 pve kernel: [ 7281.584551]  [<ffffffff8107785a>] warn_slowpath_null+0x1a/0x20
      Oct 28 19:35:55 pve kernel: [ 7281.584554]  [<ffffffffc05295c9>] skip_emulated_instruction+0xd9/0x170 [kvm_amd]
      Oct 28 19:35:55 pve kernel: [ 7281.584572]  [<ffffffffc027f9be>] kvm_emulate_halt+0x1e/0x60 [kvm]
      Oct 28 19:35:55 pve kernel: [ 7281.584575]  [<ffffffffc0525f8b>] halt_interception+0x4b/0x60 [kvm_amd]
      Oct 28 19:35:55 pve kernel: [ 7281.584578]  [<ffffffffc0526fe2>] handle_exit+0x132/0x990 [kvm_amd]
      Oct 28 19:35:55 pve kernel: [ 7281.584590]  [<ffffffffc02828fc>] ? kvm_set_cr8+0x1c/0x20 [kvm]
      Oct 28 19:35:55 pve kernel: [ 7281.584593]  [<ffffffffc05233d0>] ? nested_svm_get_tdp_cr3+0x20/0x20 [kvm_amd]
      Oct 28 19:35:55 pve kernel: [ 7281.584606]  [<ffffffffc02908c7>] kvm_arch_vcpu_ioctl_run+0x367/0x11e0 [kvm]
      Oct 28 19:35:55 pve kernel: [ 7281.584618]  [<ffffffffc028a72f>] ? kvm_arch_vcpu_load+0x15f/0x1e0 [kvm]
      Oct 28 19:35:55 pve kernel: [ 7281.584628]  [<ffffffffc027a46d>] kvm_vcpu_ioctl+0x2fd/0x570 [kvm]
      Oct 28 19:35:55 pve kernel: [ 7281.584631]  [<ffffffff810a8b86>] ? set_next_entity+0xa6/0x4d0
      Oct 28 19:35:55 pve kernel: [ 7281.584633]  [<ffffffff810aa9e5>] ? update_curr+0x75/0x150
      Oct 28 19:35:55 pve kernel: [ 7281.584636]  [<ffffffff8120172a>] do_vfs_ioctl+0x2ba/0x490
      Oct 28 19:35:55 pve kernel: [ 7281.584639]  [<ffffffff8109e074>] ? finish_task_switch+0x64/0x1c0
      Oct 28 19:35:55 pve kernel: [ 7281.584641]  [<ffffffff81201979>] SyS_ioctl+0x79/0x90
      Oct 28 19:35:55 pve kernel: [ 7281.584644]  [<ffffffff817cfd72>] entry_SYSCALL_64_fastpath+0x16/0x75
      Oct 28 19:35:55 pve kernel: [ 7281.584646] ---[ end trace af82020ad4f775e4 ]---
      Oct 28 19:35:55 pve kernel: [ 7281.587662] ------------[ cut here ]------------
      Oct 28 19:35:55 pve kernel: [ 7281.587677] WARNING: CPU: 3 PID: 2489 at arch/x86/kvm/svm.c:516 skip_emulated_instruction+0xd9/0x170 [kvm_amd]()
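
For completeness, something along these lines should confirm that the logs are what is eating the root filesystem and free it up again (a rough sketch, not the exact commands I typed; truncating the logs obviously discards their content):
Code:
# show the biggest space consumers on the root filesystem only
du -xh / 2>/dev/null | sort -rh | head -n 20

# check the three suspect logs directly
ls -lh /var/log/kern.log /var/log/messages /var/log/syslog

# empty them to make the node usable again (log content is lost)
truncate -s 0 /var/log/kern.log /var/log/messages /var/log/syslog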

What could be the problem?

Lukeb
 
I would guess it's your nested setup. Which qemu version does your physical host have?

Is your nested setup working? What's the output of:
Code:
egrep '(vmx|svm)' --color=always /proc/cpuinfo
in Proxmox VE?
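
On the CentOS host itself you can also check whether nested virtualization is actually enabled for the KVM module (kvm_amd on AMD, kvm_intel on Intel); this is a generic sysfs check, nothing Proxmox-specific:
Code:
# on the physical CentOS 7 host
cat /sys/module/kvm_amd/parameters/nested      # "1" means nesting is enabled (AMD)
# cat /sys/module/kvm_intel/parameters/nested  # "Y"/"1" on Intel hosts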

As a note: it should work in general; I quite often run nested PVE installations (4.0 on 4.0 and 3.4 on 4.0) for testing purposes and have done a few installations of Ubuntu (and various other distributions) without any problems.
 
You cannot nest qemu-2.4 on qemu-2.2. qemu-2.4 on qemu-2.3 is possible, so the newest Proxmox 4.0 (qemu-2.4) cannot be nested on top of Proxmox 3.4 (qemu-2.2).
 
I think this is the problem: I'm using qemu 2.0 on CentOS 7, while Proxmox uses qemu 2.4. The problem now is finding out how to upgrade qemu on CentOS... but I could also install Proxmox 4 on top of a Proxmox 4 installation, as suggested by t.lamprecht. I should give this configuration a try!
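
In case it is useful to someone else, comparing the qemu versions on both sides is easy (paths are how I understand CentOS 7 and PVE 4 lay them out; adjust if your install differs):
Code:
# on the CentOS 7 host (the distro ships qemu-kvm under /usr/libexec)
/usr/libexec/qemu-kvm --version

# inside the Proxmox VE 4 guest
kvm --version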

Thanks a lot for the replies!

Lukeb
 
