Problem with hostname on LXC

eugeniopacheco
New Member
Aug 27, 2015
Hi,

I know this is an old topic, but some things seem to have changed from OpenVZ to LXC. I'm running a VM on CentOS 6.7 inside Proxmox 4 and I'm having a little problem with cPanel running inside that VM. By the way, I imported that VM from OpenVZ running on Proxmox 3.4.

My configuration is as follows:

root@ovh1:/usr/share/perl5/PVE/LXC/Setup# cat /etc/pve/lxc/100.conf
arch: amd64
cpulimit: 4
cpuunits: 1024
hostname: hosting.mydomain.com
memory: 4096
net0: bridge=vmbr0,hwaddr=xx:xx:xx:xx:xx:xx,name=eth0,type=veth
onboot: 1
ostype: centos
rootfs: cbrasovz:subvol-100-disk-1,size=150G
swap: 12288

When I start my VM, here's what I get:

root@hosting [/]# cat /etc/hosts
127.0.0.1 localhost.localdomain localhost localhost4.localdomain4 localhost4
# Auto-generated hostname. Please do not remove this comment.
# 127.0.1.1 hosting
::1 localhost
127.0.1.1 hosting
x.x.x.x hosting.mydomain.com

root@hosting [/]# cat /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=hosting
GATEWAY=x.x.x.x

As said above, I'm using cPanel, which unfortunately requires the hostname to be a FQDN. Is it possible to bypass LXC's hostname handling so that it writes a FQDN hostname?
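For anyone hitting the same issue: one possible workaround (an untested sketch, assuming a CentOS 6 guest; hosting.mydomain.com stands in for your real FQDN) is to force the FQDN from inside the container at boot, so it overrides whatever the LXC setup code writes:

```shell
# Sketch: re-apply the FQDN inside the guest at boot, e.g. appended to
# /etc/rc.d/rc.local on CentOS 6. Adjust hosting.mydomain.com to your domain.
hostname hosting.mydomain.com
sed -i 's/^HOSTNAME=.*/HOSTNAME=hosting.mydomain.com/' /etc/sysconfig/network
```

This does not stop Proxmox from rewriting the files on the next start; it only corrects them again afterwards.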

Thanks in advance, and I'm sorry if this is a repeated post, but I couldn't find one for the new Proxmox.

Best regards,

Eugenio Pacheco
 
Hi,

Actually I'm not sure how cPanel works, but I do know that even hostname -f doesn't work as expected with the configuration below.

root@hosting [~]# hostname -f
hosting

root@hosting [~]# cat /etc/sysconfig/network
NETWORKING=yes
GATEWAY=x.x.x.x

HOSTNAME=hosting
DOMAINNAME=domain.com

root@hosting [~]# cat /etc/hosts
127.0.0.1 localhost.localdomain localhost localhost4.localdomain4 localhost4
# Auto-generated hostname. Please do not remove this comment.
# 127.0.1.1 hosting
::1 localhost
127.0.1.1 localhost hosting
y.y.y.y hosting.domain.com hosting
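For context (my understanding of the mechanism, not anything cPanel-specific): hostname -f looks up the short hostname and returns the canonical name, which is the first name after the address on the first matching /etc/hosts line. Line order therefore matters; with "127.0.1.1 localhost hosting" listed before the real entry, the FQDN line is never consulted. A quick illustration of the canonical-name rule, using the entry from the dump above:

```shell
# In an /etc/hosts line, the first name after the address is the canonical
# name; any further names are aliases. (Illustration only; y.y.y.y stands
# in for the container's real IP.)
line="y.y.y.y hosting.domain.com hosting"
echo "$line" | awk '{print $2}'   # -> hosting.domain.com (the canonical name)
```

So getting the FQDN entry resolved first (or removing the conflicting 127.0.1.1 alias) should be what makes hostname -f return the FQDN.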

I also noticed another problem after converting from OpenVZ to LXC. Every time, I need to start the VM twice for it to boot correctly:

root@ovh1:~# pct start 100
lxc-start: lxc_start.c: main: 344 The container failed to start.
lxc-start: lxc_start.c: main: 346 To get more details, run the container in foreground mode.
lxc-start: lxc_start.c: main: 348 Additional information can be obtained by setting the --logfile and --logpriority options.
root@ovh1:~# pct start 100
root@ovh1:~#

root@ovh1:~# lxc-start -n 100 -F
lxc-start: conf.c: instantiate_veth: 2643 failed to create veth pair (veth100i0 and veth0JO0IF): File exists
lxc-start: conf.c: lxc_create_network: 2960 failed to create netdev
lxc-start: start.c: lxc_spawn: 920 failed to create the network
lxc-start: start.c: __lxc_start: 1172 failed to spawn '100'
lxc-start: lxc_start.c: main: 344 The container failed to start.
lxc-start: lxc_start.c: main: 348 Additional information can be obtained by setting the --logfile and --logpriority options.
root@ovh1:~# lxc-start -n 100 -F
Starting udev: /sbin/start_udev: line 269: /proc/sys/kernel/hotplug: Read-only file system
[ OK ]
Just to give more information: I'm running this Proxmox server at OVH, not on vRack, so my network configuration is:

net0: bridge=vmbr0,hwaddr=xx:xx:xx:xx:xx:xx,name=eth0,type=veth

Best regards,

Eugenio Pacheco
 
root@ovh1:~# lxc-start -n 100 -F
lxc-start: conf.c: instantiate_veth: 2643 failed to create veth pair (veth100i0 and veth0JO0IF): File exists

Strange. Do those devices still exist after stopping the container? Please check with

# cat /proc/net/dev
 
Hi,

Before stopping:

root@ovh1:~# cat /proc/net/dev
Inter-| Receive | Transmit
face |bytes packets errs drop fifo frame compressed multicast|bytes packets errs drop fifo colls carrier compressed
veth100i0: 22301453 185953 0 0 0 0 0 0 85952035 867027 0 0 0 0 0 0
veth101i0: 338605015 287578 0 0 0 0 0 0 45660741 403756 0 0 0 0 0 0
veth102i0: 1091581107 540896 0 0 0 0 0 0 123210275 943243 0 0 0 0 0 0
veth103i0: 6176656 47588 0 0 0 0 0 0 83833976 1174511 0 0 0 0 0 0
eth0: 157766893703 122324997 0 2021 0 0 0 135967 3804547245 23492755 0 0 0 0 0 0
eth1: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
lo: 141017619 274509 0 0 0 0 0 0 141017619 274509 0 0 0 0 0 0
vmbr0: 150400381042 21500915 0 0 0 0 0 0 1494273131 19458085 0 0 0 0 0 0

After stopping:

root@ovh1:~# cat /proc/net/dev
Inter-| Receive | Transmit
face |bytes packets errs drop fifo frame compressed multicast|bytes packets errs drop fifo colls carrier compressed
veth100i0: 22534992 188858 0 0 0 0 0 0 86968877 880758 0 0 0 0 0 0
veth101i0: 340517174 289057 0 0 0 0 0 0 45998929 406039 0 0 0 0 0 0
veth102i0: 1095711561 542143 0 0 0 0 0 0 123456518 946055 0 0 0 0 0 0
veth103i0: 6179420 47633 0 0 0 0 0 0 83874640 1175130 0 0 0 0 0 0
eth0: 157768459778 122343301 0 2023 0 0 0 136120 3810989473 23500857 0 0 0 0 0 0
eth1: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
lo: 141017619 274509 0 0 0 0 0 0 141017619 274509 0 0 0 0 0 0
vmbr0: 150400435778 21501994 0 0 0 0 0 0 1494281289 19458142 0 0 0 0 0 0

Looks like they do still exist even after stopping the container. Do I need to assign each VM a different net device (net0, net1, net2)? If so, my mistake; I set them all to net0.
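If the host really does keep a stale veth100i0 around after the container stops, a possible manual workaround (untested sketch; the device name is taken from the error message above, adjust it for your container ID) is to delete the leftover device before restarting:

```shell
# Remove the leftover host-side veth device so lxc-start can recreate it,
# then start the container again.
ip link show veth100i0 >/dev/null 2>&1 && ip link delete veth100i0
pct start 100
```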
 
Hi,

It has been submitted: Bugzilla – Bug 772.

As to the hostname problem, any idea on how to fix it?

Best regards,

Eugenio Pacheco
 