Bonjour and virtual servers?

O.k. Did that. Started the container. Went to the Network tab. Under "Bridged Ethernet Devices", next to eth0, I selected vmbr0 and clicked Save. I get the following error:

Error: unable to apply VM settings, command failed: /usr/bin/ssh -n -o BatchMode=yes x.x.x.x /usr/bin/pvectl vzset 109 --netif ifname=eth0,bridge=vmbr0

Any suggestions?
 
There is no need to change the network settings after creation - you need to configure the network inside the container.
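
With the default venet networking, the address is normally assigned from the host rather than edited inside the container. A sketch, where the container ID 109 and the addresses are placeholders for your own values:

Code:
# On the Proxmox host (venet mode); 109 and the addresses are placeholders
vzctl set 109 --ipadd 10.0.0.2 --save
vzctl set 109 --nameserver 10.0.0.1 --save

OpenVZ then writes the matching network config inside the container itself.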
 
Bad news on disabling IPv6. I finally got a chance to do this, but it broke the Proxmox web interface (and possibly other services). The web interface complained that it couldn't locate the cluster node when IPv6 was disabled.

So if IPv6 is what's blocking Bonjour services, we may be out of luck using Bonjour in OpenVZ containers.
 
How did you disable IPv6?

- Dietmar

In /etc/modprobe.d/aliases:
Changed: alias net-pf-10 ipv6
To: alias net-pf-10 off
Added: alias ipv6 off

In /etc/modprobe.d/blacklist:
Added: blacklist ipv6

Created file: /etc/modprobe.d/00local
Added: install ipv6 /bin/true


I originally tried just editing the aliases file as mentioned, but that didn't get rid of the inet6 addresses, so I then edited the blacklist file, but that also didn't get rid of them. Combining all three steps above got the inet6 addresses to disappear.

I also edited the /etc/hosts file to comment out all IPv6-related entries for good measure.
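
After rebooting with those modprobe changes, a quick way to confirm IPv6 is really gone (a sketch; both commands should produce no output when the module is disabled):

Code:
# No inet6 addresses should be listed, and the module should not be loaded
ip -6 addr show
lsmod | grep ipv6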
 
Just create a standard centos-5-standard_5-1 container.
Log into it.
Start the message bus: /etc/init.d/messagebus start

It will not start. It will create the PID file but that is all. I have tried this on three nodes with the same effect. It does not even generate any info in the dmesg log.
Thank you,
Ben

hi,

I just tried this with a fedora9 template (veth mode):
Code:
[root@fedora /]# /etc/init.d/messagebus start          
Starting system message bus:    [  OK  ]    
[root@fedora /]#
How can I check if it's really working?
 
The node does have additional Ethernet ports. How hard would it be to enable an additional one and have the container connect directly to that port?
 
You can't connect directly, but you can create a new bridge and add the new port there - though I don't know how that would solve your problem.
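
On a Proxmox/Debian host, a second bridge is a few lines in /etc/network/interfaces. A sketch, assuming the spare port is eth1 and the addresses are placeholders:

Code:
# /etc/network/interfaces on the host; eth1 and the address are assumptions
auto vmbr1
iface vmbr1 inet static
        address 192.168.0.10
        netmask 255.255.255.0
        bridge_ports eth1
        bridge_stp off
        bridge_fd 0

A container's veth interface can then be attached to vmbr1 instead of vmbr0.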
 
O.k. I tried making a container with the veth option. In the OS I created a new file, /etc/sysconfig/network-scripts/ifcfg-eth0, with the following settings:
DEVICE=venet0
BOOTPRO=static
ONBOOT=yes
IPADDR=x.x.x.x
NETMASK=255.255.0.0
BROADCAST=x.x.x.x

I do a service network restart and for eth0 it says: RTNETLINK answers: File exists [OK]

I still get no traffic and when logged into the Web Admin window I still get:

Went to the Network tab. Under "Bridged Ethernet Devices", next to eth0, I selected vmbr0 and clicked Save. I get the following error:

Error: unable to apply VM settings, command failed: /usr/bin/ssh -n -o BatchMode=yes x.x.x.x /usr/bin/pvectl vzset 109 --netif ifname=eth0,bridge=vmbr0
 
O.k. I tried making a container with the veth option. In the OS I created a new file, /etc/sysconfig/network-scripts/ifcfg-eth0, with the following settings:
DEVICE=venet0

If you use bridged networking, you have an eth0 in your container, not venet0. Our currently published CentOS 5 template has an issue with bridged networking; please use the CentOS templates from here:

http://download.openvz.org/beta/templates/precreated/
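
For a bridged (veth) container, the ifcfg file would then look roughly like this (a sketch; the addresses are placeholders):

Code:
# Inside the container: /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
BOOTPROTO=static
ONBOOT=yes
IPADDR=192.168.0.50
NETMASK=255.255.0.0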



Looks like an already fixed bug. We are really just a few days before the release of 1.0, so maybe just wait a few days.
 
O.k. I can hold off for the new version and start again then. I have too many other projects on deck right now anyway.
Thank you as always for the support for a great product,
Ben
 
With everything else going on I know this will be a low priority, but I tried doing this again with the new prebuilt CentOS container in version 1.0. I have the same issue with dbus not wanting to start. Any thoughts?
 
With everything else going on I know this will be a low priority, but I tried doing this again with the new prebuilt CentOS container in version 1.0. I have the same issue with dbus not wanting to start. Any thoughts?

If you answer my questions I will debug.
(See my last posts, where I tried this with fedora9 and veth, and asked:

how can I check if it's really working?)
 
Try this on a clean install of the Fedora 9 template:

Code:
/etc/init.d/avahi-daemon start

You should get (or at least I do):

Code:
/etc/init.d/avahi-daemon: line 33: [: =: unary operator expected
Starting Avahi daemon...                                   [FAILED]

In /var/log/messages I get the following:

Code:
Oct 30 14:33:46 avahi-daemon[440]: Found user 'avahi' (UID 499) and group 'avahi' (GID 499).
Oct 30 14:33:46 avahi-daemon[440]: Successfully dropped root privileges.
Oct 30 14:33:46 avahi-daemon[440]: avahi-daemon 0.6.22 starting up.
Oct 30 14:33:46 avahi-daemon[440]: WARNING: No NSS support for mDNS detected, consider installing nss-mdns!
Oct 30 14:33:46 avahi-daemon[440]: dbus_bus_get_private(): Failed to connect to socket /var/run/dbus/system_bus_socket: No such file or directory
Oct 30 14:33:46 avahi-daemon[440]: WARNING: Failed to contact D-Bus daemon.

For giggles I then tried:
Code:
/etc/init.d/messagebus start
It will say it starts, but it does not. The /var/log/dmesg log is blank, and if you run:
Code:
/etc/init.d/messagebus status
It will say that it is stopped.
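
When an init script claims success but the daemon is already gone, running it in the foreground usually shows why it exits. A sketch, inside the container:

Code:
# Run the system bus in the foreground to see the actual error it dies with
dbus-daemon --system --nofork --print-pid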
 
How I can tell it does not work is simple: the server does not appear on the network. As a test I installed mt-daapd on it. When I went to start the program, it would fail because it was not able to connect to D-Bus.
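
Besides watching the network, a couple of direct checks for a live system bus (a sketch):

Code:
pidof dbus-daemon                      # should print a PID if the bus is up
ls -l /var/run/dbus/system_bus_socket  # this socket must exist for avahi to connect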
 
