Test : Proxmox4-ceph-network dedicated...

badji

Renowned Member
Jan 14, 2011
Hello,

I am testing the Ceph installation on my Proxmox 4.0 beta cluster.

I have a problem setting up the dedicated network for Ceph:

root@pve-ceph1:~# ifconfig
eth0 Link encap:Ethernet HWaddr 00:1f:d0:cd:17:2d
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:3020 errors:0 dropped:0 overruns:0 frame:0
TX packets:2887 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:863934 (843.6 KiB) TX bytes:682384 (666.3 KiB)

eth1 Link encap:Ethernet HWaddr 00:e0:1c:3c:15:5f
inet addr:10.10.0.1 Bcast:10.10.0.255 Mask:255.255.255.0
inet6 addr: fe80::2e0:1cff:fe3c:155f/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:10 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:0 (0.0 B) TX bytes:828 (828.0 B)

lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:425 errors:0 dropped:0 overruns:0 frame:0
TX packets:425 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:79604 (77.7 KiB) TX bytes:79604 (77.7 KiB)

vmbr0 Link encap:Ethernet HWaddr 00:1f:d0:cd:17:2d
inet addr:192.168.100.10 Bcast:192.168.100.255 Mask:255.255.255.0
inet6 addr: fe80::21f:d0ff:fecd:172d/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:3015 errors:0 dropped:0 overruns:0 frame:0
TX packets:2887 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:821282 (802.0 KiB) TX bytes:682384 (666.3 KiB)

root@pve-ceph1:~# ping 10.10.0.1
PING 10.10.0.1 (10.10.0.1) 56(84) bytes of data.
64 bytes from 10.10.0.1: icmp_seq=1 ttl=64 time=0.027 ms
64 bytes from 10.10.0.1: icmp_seq=2 ttl=64 time=0.009 ms
^C
--- 10.10.0.1 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 999ms
rtt min/avg/max/mdev = 0.009/0.018/0.027/0.009 ms

root@pve-ceph1:~# ping 10.10.0.2
PING 10.10.0.2 (10.10.0.2) 56(84) bytes of data.
64 bytes from 10.10.0.2: icmp_seq=1 ttl=64 time=0.200 ms
64 bytes from 10.10.0.2: icmp_seq=2 ttl=64 time=0.090 ms
^C
--- 10.10.0.2 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 999ms
rtt min/avg/max/mdev = 0.090/0.145/0.200/0.055 ms

root@pve-ceph1:~# ping 10.10.0.3
PING 10.10.0.3 (10.10.0.3) 56(84) bytes of data.
64 bytes from 10.10.0.3: icmp_seq=1 ttl=64 time=0.262 ms
64 bytes from 10.10.0.3: icmp_seq=2 ttl=64 time=0.162 ms
^C
--- 10.10.0.3 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 999ms
rtt min/avg/max/mdev = 0.162/0.212/0.262/0.050 ms

root@pve-ceph1:~# pveceph init --network 10.10.0.0/24

root@pve-ceph1:~# pveceph createmon
unable to find local address within network '10.10.0.0/24'
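This error means pveceph could not find any local IP address that falls inside the network given to pveceph init. As an illustration (my own sketch, not Proxmox code), the membership test it performs is essentially this check, shown here with Python's standard ipaddress module:

```python
import ipaddress

# Network passed to `pveceph init` and the address configured on eth1
net = ipaddress.ip_network("10.10.0.0/24")
addr = ipaddress.ip_address("10.10.0.1")

# pveceph needs at least one local address inside the configured network;
# if no interface holds such an address at runtime, createmon fails
print(addr in net)  # True
```

So if eth1 has no address assigned when createmon runs, the check fails even though the interfaces file mentions 10.10.0.1.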

I changed the stanza from `iface eth1 inet static` to `inet manual`, keeping:
address 10.10.0.1
netmask 255.255.255.0

I rebooted my server, and the dedicated network was gone!

I set it back to static and the network works again, but I am still blocked at the monitor creation step with createmon.
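For reference, a working stanza for a dedicated Ceph network in /etc/network/interfaces would look like this (a sketch based on the addresses above). With `inet manual`, ifupdown ignores the address/netmask lines at boot, which matches the symptom of the network disappearing after a reboot:

```
auto eth1
iface eth1 inet static
    address 10.10.0.1
    netmask 255.255.255.0
```

The `auto eth1` line matters too: without it the interface is not brought up automatically at boot.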

Thanks.
 
Post your pveversion -v (did you get the updates from yesterday)?
 
Post your pveversion -v (did you get the updates from yesterday)?

hello,

root@pve-ceph1:~# pveversion -v
proxmox-ve: 4.0-3 (running kernel: 3.19.8-1-pve)
pve-manager: 4.0-24 (running version: 4.0-24/946af136)
pve-kernel-3.19.8-1-pve: 3.19.8-3
lvm2: 2.02.116-pve1
corosync-pve: 2.3.4-2
libqb0: 0.17.1-3
pve-cluster: 4.0-14
qemu-server: 4.0-13
pve-firmware: 1.1-5
libpve-common-perl: 4.0-10
libpve-access-control: 4.0-5
libpve-storage-perl: 4.0-12
pve-libspice-server1: 0.12.5-1
vncterm: 1.2-1
pve-qemu-kvm: 2.3-6
pve-container: 0.9-3
pve-firewall: 2.0-4
pve-ha-manager: 1.0-4
ksm-control-daemon: 1.2-1
glusterfs-client: 3.5.2-2
lxc-pve: 1.1.2-1
lxcfs: 0.9-pve1
cgmanager: 0.37-pve1

Thanks.
 
Hello,

I am testing the Ceph installation on my Proxmox 4.0 beta cluster.
...

Which Ceph packages do you use? AFAIK there are no official packages for Jessie available yet. Self-compiled?
 
Hi,

Same problem here:

root@mon170:~# pveceph createmon
unable to find local address within network '192.168.101.0/24'

Versions:
----------------------------------------
root@mon170:~# pveversion -v
proxmox-ve: 4.0-3 (running kernel: 3.19.8-1-pve)
pve-manager: 4.0-24 (running version: 4.0-24/946af136)
pve-kernel-3.19.8-1-pve: 3.19.8-3
lvm2: 2.02.116-pve1
corosync-pve: 2.3.4-2
libqb0: 0.17.1-3
pve-cluster: 4.0-14
qemu-server: 4.0-13
pve-firmware: 1.1-5
libpve-common-perl: 4.0-10
libpve-access-control: 4.0-5
libpve-storage-perl: 4.0-12
pve-libspice-server1: 0.12.5-1
vncterm: 1.2-1
pve-qemu-kvm: 2.3-6
pve-container: 0.9-3
pve-firewall: 2.0-4
pve-ha-manager: 1.0-4
ksm-control-daemon: 1.2-1
glusterfs-client: 3.5.2-2
lxc-pve: 1.1.2-1
lxcfs: 0.9-pve1
cgmanager: 0.37-pve1

root@mon170:~# ceph -v
ceph version 0.80.7 (6c0127fcb58008793d3c8b62d925bc91963672a3)
(apt-get source debian.org jessie)
----------------------------------------

I tried it with all kinds of interface configs and other networks, with the same result. At the moment I have 2 nodes for testing as monitors; once the problem is solved, a 3rd node will be added as monitor.

Thanks,
Ben
 
