Add a 2nd network on a host in a cluster

informant
Jan 31, 2012
Hi, we have the standard network on eth0 and would like to add a second network on eth1. Is it enough to set the IP address, subnet mask and gateway for eth1 in the web interface, or do we need other options while the node is online? Is there anything else to watch out for? And will the CTs and VMs be able to reach this network if we use it for backup storage (via NFS, etc.)?

regards
 

Hi,
it's not clear to me what you want.

If you only want a second network for storage access (not used by the VMs directly), simply add a separate network with an IP address (but without a gateway) on eth1.

If you want a second network that is usable by the VMs, add a second bridge (vmbr1, vmbr20, or whatever name you like) with eth1 as its NIC. This network doesn't need an IP address if only the VMs will use the bridge.
In that case, give the VMs bridged interfaces on this bridge (or on vmbr0).

A bridge works like a network hub: all NICs (physical and virtual) attached to it are connected together.
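
As a minimal sketch, the second-bridge variant could look like this in /etc/network/interfaces (vmbr1 and eth1 are just the example names from above; adapt them to your setup):

```
# second bridge for VM traffic; the host itself gets no IP on it
auto vmbr1
iface vmbr1 inet manual
        bridge_ports eth1
        bridge_stp off
        bridge_fd 0
```

VMs then simply select vmbr1 as the bridge in their NIC settings.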

Udo
 
Hi Udo,

I only mean a second network on eth1, for storage.

I want to use NFS storage on eth1 for backups. Do I have to connect all nodes to the second network via eth1, or only the cluster host?
If I add an IP address and subnet mask on eth1, do I have to enable autostart for the interface? When I add the IP 10.11.12.60 without a gateway and without autostart, I get the message: Undefined subroutine &Net::IP::ip_to_bin called at /usr/share/perl5/PVE/API2/Network.pm line 168. (500)

regards
 
Hi,
the network config differs from node to node.
But if you want to use the network on all nodes, you must also define it on all nodes. With an NFS mount you can also rely on normal routing, e.g.:
node1 has NIC eth1 on storage network 192.168.50.0/24 (IP 192.168.50.11)
node2 has NIC eth2 on storage network 192.168.50.0/24 (IP 192.168.50.12)
node3 uses the default route (vmbr0, IP 192.168.10.13 -> 192.168.10.1 -> 192.168.50.0/24)
Just as an example...


Udo
 
Hm, ok,

but what do I have to add if I get the message: Undefined subroutine &Net::IP::ip_to_bin called at /usr/share/perl5/PVE/API2/Network.pm line 168. (500)?


On the cluster node I added the IP 10.11.12.60 with its subnet mask on eth1 and nothing else; after clicking OK, the message appears.


Thanks for your example.

So with 3 nodes I would add:

cluster/node1 10.11.12.60
node2 10.11.12.61
node3 10.11.12.62

right? I only want to use this network as storage for all nodes.

eth1 says active: no. How can I set it to active (yes) in Proxmox?

regards
 
Ok:

Code:
cat /etc/network/interfaces*
auto lo
iface lo inet loopback

auto vmbr0
iface vmbr0 inet static
        address 217.*.*.67
        netmask 255.255.254.0
        gateway 217.*.*.1
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0

I haven't added the storage yet, because first I have to set up the network address for it:
Code:
 cat /etc/pve/storage.cfg
dir: local
        path /var/lib/vz
        content images,iso,vztmpl,rootdir
        maxfiles 0
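
Once the storage network is up, an NFS backup storage could be declared in /etc/pve/storage.cfg along these lines (the storage name, server IP, and export path here are illustrative assumptions, not values from this thread):

```
nfs: backup-nfs
        server 10.11.12.100
        export /export/backup
        path /mnt/pve/backup-nfs
        content backup
        maxfiles 3
```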
 
Hi,
is the netmask correct?

If you simply add eth1 like this:
Code:
auto lo
iface lo inet loopback

auto vmbr0
iface vmbr0 inet static
        address 217.*.*.67
        netmask 255.255.254.0
        gateway 217.*.*.1
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0

auto eth1
iface eth1 inet static
        address 10.11.12.60
        netmask 255.255.255.0
Udo
 
Yes, the netmask is correct.

If I add
Code:
auto eth1
iface eth1 inet static
        address 10.11.12.60
        netmask 255.255.255.0
manually, it's online in Proxmox, but I can't add it through the Proxmox web interface.

OK, then I'll add it manually so that eth1 works. Many thanks @udo.
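
For the record: after editing /etc/network/interfaces by hand, the interface still has to be brought up. On a Debian-based PVE host this is typically done with ifup (a sketch; eth1 is the interface from above):

```
# bring up the newly configured interface
ifup eth1

# or re-apply the whole interfaces file
/etc/init.d/networking restart
```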

regards
 
but what do I have to add if I get the message: Undefined subroutine &Net::IP::ip_to_bin called at /usr/share/perl5/PVE/API2/Network.pm line 168. (500)?

It looks like your system is not up to date. What is the output of

# pveversion -v
 
Hi dietmar,

# pveversion -v

Code:
pve-manager: 2.3-11 (pve-manager/2.3/bc33273b)
running kernel: 2.6.32-18-pve
proxmox-ve-2.6.32: 2.3-88
pve-kernel-2.6.32-16-pve: 2.6.32-82
pve-kernel-2.6.32-18-pve: 2.6.32-88
pve-kernel-2.6.32-17-pve: 2.6.32-83
lvm2: 2.02.95-1pve2
clvm: 2.02.95-1pve2
corosync-pve: 1.4.4-4
openais-pve: 1.1.4-2
libqb: 0.10.1-2
redhat-cluster-pve: 3.1.93-2
resource-agents-pve: 3.9.2-3
fence-agents-pve: 3.1.9-1
pve-cluster: 1.0-36
qemu-server: 2.3-17
pve-firmware: 1.0-21
libpve-common-perl: 1.0-48
libpve-access-control: 1.0-26
libpve-storage-perl: 2.3-5
vncterm: 1.0-3
vzctl: 4.0-1pve2
vzprocps: 2.0.11-2
vzquota: 3.1-1
pve-qemu-kvm: 1.4-6
ksm-control-daemon: 1.1-1

regards
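
dietmar's hint means these 2.3 packages predate the current updates. Debian's dpkg can compare version strings, which makes it easy to check whether an installed build is older than some target (the 2.3-13 comparison target below is only an illustrative value, not the actual fix release):

```shell
# dpkg --compare-versions exits 0 when the relation holds
if dpkg --compare-versions "2.3-11" lt "2.3-13"; then
    echo "pve-manager 2.3-11 is older than 2.3-13"
fi
```

The practical remedy is simply to update the node, e.g. with apt-get update && apt-get dist-upgrade, and then re-run pveversion -v.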
 
