Can one install Proxmox in a Debian 9.7 container inside a Proxmox node?

ES2Burn

New Member
Feb 9, 2019
Hi,

I'm new to Proxmox but am trying to experiment with Ceph, with the idea of doing nested virtualization.
I tried to create multiple Proxmox nodes by manually installing Proxmox on top of a Debian 9.7 LXC container. However, I always run into an issue at the last step, where pve-manager fails to configure, and proxmox-ve then fails to configure as well because it depends on pve-manager.
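
For reference, these are roughly the steps I followed, based on the "Install Proxmox VE on Debian Stretch" wiki guide (repository and key name as used for PVE 5.x):

Code:
# inside the Debian 9.7 container, as root
echo "deb http://download.proxmox.com/debian/pve stretch pve-no-subscription" \
  > /etc/apt/sources.list.d/pve-install-repo.list
wget http://download.proxmox.com/debian/proxmox-ve-release-5.x.gpg \
  -O /etc/apt/trusted.gpg.d/proxmox-ve-release-5.x.gpg
apt update && apt full-upgrade
apt install proxmox-ve postfix open-iscsi   # fails while configuring pve-manager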

After looking into systemctl status pvestatd.service, it shows:

Code:
ipcc_send_rec[1] failed: Connection refused
Unable to load access control list: Connection refused
pvestatd.service: Control process exited, code=exited sta...
Failed to start PVE Status Daemon
pvestatd.service: Unit entered failed state.
pvestatd.service: Failed with result 'exit-code'.
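
From what I could gather, the "Connection refused" means pvestatd cannot reach pmxcfs (the pve-cluster daemon that mounts /etc/pve over FUSE), so pmxcfs probably never started. A quick way to check, if anyone hits the same wall:

Code:
systemctl status pve-cluster    # pmxcfs runs from this unit; pvestatd needs it
ls -l /dev/fuse                 # without FUSE, pmxcfs cannot mount /etc/pve
df -hT /etc/pve                 # should show a fuse filesystem when healthy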

I'm wondering if there's something I'm doing incorrectly, or whether Proxmox generally cannot be installed on top of a Debian LXC container within a Proxmox node?

Thanks in advance!
 
Very nice idea! For simplicity, however, I'd go with KVM/QEMU-based virtual machines for all experiments. Even if you could install it, I doubt that LXC allows the use of KVM-based nested virtualization, but I do not know for sure. Nested LXC inside of LXC works if set up correctly.

I only tried it once in LXC and once in a Docker container, but it was not straightforward, so I just went with a "real" VM and everything works as expected. However, in order to get fast nested KVM, you will need hardware support for it.
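
For completeness, on an Intel host you can check and enable nested KVM roughly like this (a sketch; the module can only be reloaded while no VMs are running, and AMD hosts use the kvm_amd module instead):

Code:
cat /sys/module/kvm_intel/parameters/nested       # Y means nested is enabled
echo "options kvm-intel nested=Y" > /etc/modprobe.d/kvm-intel.conf
modprobe -r kvm_intel && modprobe kvm_intel       # reload; stop all VMs first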
 
Hi LnxBil,

Thanks for your reply!

By KVM/QEMU, I'm assuming you mean the standard "VM" creation, correct?

If so, I did try this route, and it seems to work: I created a virtualized cluster of 4 nodes and successfully installed and set up Ceph from there. I will experiment more with it and see how it performs.
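
In case it helps others trying the same, the rough pveceph sequence on PVE 5.x looks like this (the network and disk device are placeholders):

Code:
pveceph install                          # on every virtual node
pveceph init --network 10.10.10.0/24     # once, on the first node
pveceph createmon                        # on each monitor node
pveceph createosd /dev/sdb               # once per data disk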

My CPU does support hardware virtualization (I believe), and I have enabled it on my physical Proxmox node.
For the virtualized cluster's 4 nodes, I used 'host' as the CPU type for each virtualized Ceph node.
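
For anyone reading along, that can also be done from the CLI (VMID 100 is just a placeholder):

Code:
qm set 100 --cpu host       # pass the host's CPU flags through to the guest
grep -c vmx /proc/cpuinfo   # run inside the guest; > 0 means VT-x is exposed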

But back to the LXC idea: the original intent is to save as much system resources as I can, and I figured going with LXC should achieve that, as I have a fairly lightweight hardware configuration for this home lab setup (an i7-3770T + 16GB of non-ECC RAM only). From initial googling, it leads to some kind of network error that I do not know how to fix yet, as my setup is different from what other people had, despite the similar error messages.

I will continue to do more googling and experimenting with the LXC idea, and will report back if I ever get it figured out.
If anyone else has any ideas, information, or feedback, your input is greatly appreciated on my experimenting journey!

Thanks in advance!
 
I will continue to do more googling and experimenting with the LXC idea, and will report back if I ever get it figured out.
If anyone else has any ideas, information, or feedback, your input is greatly appreciated on my experimenting journey!

The problem will probably also be the FUSE-based PVE cluster filesystem, so also check whether there is FUSE support in your LXC setup.
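
On PVE 5.3 and later this can be enabled per container; something like the following should do it (CT ID 101 is a placeholder, and the container needs a restart afterwards):

Code:
pct set 101 --features fuse=1,nesting=1
# or directly in the container config on the outer node:
# /etc/pve/lxc/101.conf -> features: fuse=1,nesting=1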
 
I just installed PVE on Debian in LXC without any problem, using the normal PVE-on-Stretch guide. One big pitfall is that you have to set pvelocalhost to your public IP. I also enabled FUSE and nested virtualization in the LXC options:

Code:
root@proxmox-ve:~# virt-what
lxc

root@proxmox-ve:~# df -hT /etc/pve/
Filesystem     Type  Size  Used Avail Use% Mounted on
/dev/fuse      fuse   30M   16K   30M   1% /etc/pve

root@proxmox-ve:~# ps auxf
USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
root       269  0.0  0.6  20056  3376 ?        Ss   09:22   0:00 /bin/bash
root     23458  0.0  0.5  38460  2848 ?        R+   09:30   0:00  \_ ps auxf
root         1  0.0  1.2  57296  6748 ?        Ss   09:21   0:02 /sbin/init
root        39  0.0  1.6  46100  8796 ?        Ss   09:21   0:00 /lib/systemd/systemd-journald
root        66  0.0  0.5 250144  2776 ?        Ssl  09:21   0:00 /usr/sbin/rsyslogd -n
root        67  0.0  0.4  29816  2332 ?        Ss   09:21   0:00 /usr/sbin/cron -f
message+    68  0.0  0.6  45216  3284 ?        Ss   09:21   0:00 /usr/bin/dbus-daemon --system --address=systemd: --nofork --nopidfile --systemd-activation
root        77  0.0  0.8  37988  4248 ?        Ss   09:21   0:00 /lib/systemd/systemd-logind
root       113  0.0  0.3  14316  1724 console  Ss+  09:21   0:00 /sbin/agetty --noclear --keep-baud console 115200,38400,9600 linux
root       114  0.0  0.8  69956  4396 ?        Ss   09:21   0:00 /usr/sbin/sshd -D
root       264  0.0  0.6  81192  3580 ?        Ss   09:21   0:00 /usr/lib/postfix/sbin/master -w
postfix    265  0.0  0.9  83428  5140 ?        S    09:21   0:00  \_ pickup -l -t unix -u -c
postfix    267  0.0  0.9  83484  5232 ?        S    09:21   0:00  \_ qmgr -l -t unix -u
root     11807  0.0  0.5  49876  2952 ?        Ss   09:26   0:00 /sbin/rpcbind -f -w
root     12560  0.0  0.0  24984   232 ?        Ss   09:27   0:00 /sbin/iscsid
root     12561  0.0  0.9  25488  4804 ?        S<Ls 09:27   0:00 /sbin/iscsid
root     12823  0.0  0.5  39744  3008 ?        Ss   09:27   0:00 /usr/lib/x86_64-linux-gnu/lxc/lxc-monitord --daemon
root     17731  0.0  0.5 253920  2708 ?        Ssl  09:27   0:00 /usr/bin/rrdcached -B -b /var/lib/rrdcached/db/ -j /var/lib/rrdcached/journal/ -p /var/run/rrdcached.pid -l unix:/var/run/rrdcached.s
root     18038  0.0  6.9 726976 36204 ?        Ssl  09:27   0:00 /usr/bin/pmxcfs
root     18131  0.0  1.8 546860  9748 ?        Ss   09:27   0:00 pvedaemon
root     18132  0.0  2.7 549228 14524 ?        S    09:27   0:00  \_ pvedaemon worker
root     18133  0.0  2.7 549228 14456 ?        S    09:27   0:00  \_ pvedaemon worker
root     18134  0.0  2.7 549228 14456 ?        S    09:27   0:00  \_ pvedaemon worker
root     18227  0.0  0.3  94148  1872 ?        Ssl  09:27   0:00 /usr/sbin/pvefw-logger
root     18230  0.0  9.3 511588 49088 ?        Ss   09:27   0:00 pve-firewall
root     18279  0.0  0.0   6144    88 ?        Ss   09:27   0:00 /usr/sbin/qmeventd /var/run/qmeventd.sock
www-data 18495  0.0  5.1 554760 27172 ?        Ss   09:27   0:00 pveproxy
www-data 18496  0.0  6.3 556964 33372 ?        S    09:27   0:00  \_ pveproxy worker
www-data 18497  0.0  6.3 556964 33428 ?        S    09:27   0:00  \_ pveproxy worker
www-data 18498  0.0  6.3 556964 33428 ?        S    09:27   0:00  \_ pveproxy worker
www-data 18507  0.0  4.3 283428 22584 ?        Ss   09:27   0:00 spiceproxy
www-data 18508  0.0  4.8 285840 25620 ?        S    09:27   0:00  \_ spiceproxy worker
root     18517  0.0 13.0 509644 68604 ?        Ss   09:27   0:00 pvestatd
root     23114  0.0 10.8 520416 56780 ?        Ss   09:28   0:00 pve-ha-lrm
root     23140  0.0 14.1 520784 74084 ?        Ss   09:28   0:00 pve-ha-crm

Yet I doubt that you will get Ceph running here, due to the extra kernel interaction it needs, which LXC hides.
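
To spell out the pvelocalhost pitfall: the PVE-on-Stretch guide wants /etc/hosts to resolve the node's hostname to its real IP, so inside the container it looks something like this (address and names are examples):

Code:
127.0.0.1      localhost
192.168.1.50   proxmox-ve.example.com proxmox-ve pvelocalhost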
 
Thank you for testing this, too!
I'll certainly look up pvelocalhost & FUSE to get a better understanding!
There is still a lot for me to learn, including how to properly configure them.

I'll test this on my build and see if those two settings make a difference on mine.

Appreciated!

