changing ceph public network

MasterTH

hi,

I've installed Ceph with 3 nodes. After doing some tests I figured I'd like to get the performance increase I'd see from adding a third network device.
How can I change the public network of Ceph?

Changing only the public network in ceph.conf isn't enough, is it?

kind regards
 
Hi,
I assume you are aware that you will have an interruption of the storage access.
Your mons + OSDs must be restarted with the new network settings, and the guests too, because the KVM command contains the mon addresses in use... so you must also change /etc/pve/storage.cfg.
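
For illustration, a minimal sketch of the RBD entry in /etc/pve/storage.cfg whose monhost line has to be updated (the storage name, pool and addresses are only examples):
Code:
rbd: ceph-rbd
        monhost 10.0.0.11 10.0.0.12 10.0.0.13
        pool rbd
        content images
        username admin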

I would stop the clients first.

Udo
 
Managed to change it, but it was not easy to do.

Code:
ceph mon getmap -o tmpfile                  # dump the current monmap
monmaptool --print tmpfile                  # inspect it
monmaptool --rm 0 --rm 1 --rm 2 tmpfile     # remove the old mon entries
monmaptool --add 1 10.255.247.13 --add 0 10.255.247.15 --add 2 10.255.247.16 tmpfile   # re-add them with the new IPs

then stop the monitors
and inject the monmap into each monitor (on all monitors - be sure to use the right mon-id):
Code:
ceph-mon -i 2 --inject-monmap tmpfile
(-i is the mon-id)


Change the configs ceph.conf and /etc/pve/storage.cfg.
Start the monitors and wait a few seconds; the cluster should then be OK with the new IPs.
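
Put together, a rough sketch of the full cycle on one monitor node (assuming systemd units named ceph-mon@<id>; on older installs the service names may differ):
Code:
systemctl stop ceph-mon@2                  # stop this monitor
ceph-mon -i 2 --inject-monmap tmpfile      # load the edited monmap
systemctl start ceph-mon@2                 # start it again
# repeat on the other monitor nodes with their own mon-ids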


The only issue I ran into (maybe someone can help me with it): I no longer get any performance output in the Proxmox web interface or in `ceph -s`.


btw - how can I delete an image file from the Ceph storage?
 
@kifeo I just found a draft I had created for myself some time ago:
Markdown (GitHub flavored):
# Migrating a running cluster to new IPs

## Ceph Network overview

A Ceph network overview is given [in this article][Ceph Network Configuration Reference]. Please read it before you
continue with the current page.

  ![Ceph Network](pics/ceph_network.png)

## Setup and goals
`stor0{1,2,3,4}-htz-nbg1` servers host the so-called *slow ceph storage*. The legacy setup was:
```
[global]
fsid = fd49607d-ce8f-45c7-bdc9-14edc00d72eb
public_network = 172.26.2.0/24
cluster_network = 172.26.2.0/24
mon_initial_members = stor01-htz-nbg1, stor02-htz-nbg1, stor03-htz-nbg1, stor04-htz-nbg1
mon_host = 172.26.2.251,172.26.2.250,172.26.2.249,172.26.2.248
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
mds standby replay = true
```

The goal was to migrate `public_network` from `172.26.2.0/24` to `192.168.1.0/24, 10.94.181.0/24`.
The network configuration is as follows:

| Network Interface | stor01-htz-nbg1 | stor02-htz-nbg1 | stor03-htz-nbg1 | stor04-htz-nbg1 |
| ---               | ---             | ---             | ---             | ---             |          
| `eth0`            | a.b.c.251       | a.b.c.250       | a.b.c.249       | a.b.c.248       |
| `eth1`            | 192.168.1.251   | 192.168.1.250   | 192.168.1.249   | 192.168.1.248   |
| `eth1.14`         | 172.26.2.251    | 172.26.2.250    | 172.26.2.249    | 172.26.2.248    |
| `eth1.77`         | 10.94.181.251   | 10.94.181.250   | 10.94.181.249   | 10.94.181.248   |

## Migration Process

### Generating Monmap while running legacy configuration
Do not stop/reboot/change configs while doing these steps.
```
> ceph mon getmap -o ~/MAPS/mon.getmap
```

### Printing file
```
> monmaptool --print ~/MAPS/mon.getmap

monmaptool: monmap file ~/MAPS/mon.getmap
epoch 1
fsid fd49607d-ce8f-45c7-bdc9-14edc00d72eb
last_changed 2019-11-04 20:00:40.996888
created 2019-11-04 20:00:40.996888
min_mon_release 14 (nautilus)
0: [v2:172.26.2.248:3300/0,v1:172.26.2.248:6789/0] mon.stor04-htz-nbg1
1: [v2:172.26.2.249:3300/0,v1:172.26.2.249:6789/0] mon.stor03-htz-nbg1
2: [v2:172.26.2.250:3300/0,v1:172.26.2.250:6789/0] mon.stor02-htz-nbg1
3: [v2:172.26.2.251:3300/0,v1:172.26.2.251:6789/0] mon.stor01-htz-nbg1
```

### Dropping legacy IPs
```
monmaptool --rm stor04-htz-nbg1 ~/MAPS/mon.getmap
monmaptool --rm stor03-htz-nbg1 ~/MAPS/mon.getmap
monmaptool --rm stor02-htz-nbg1 ~/MAPS/mon.getmap
monmaptool --rm stor01-htz-nbg1 ~/MAPS/mon.getmap
```

### Adding new IPs
`--addv` seems to be an undocumented parameter. It adds the monitor addresses using the new address-vector string format.

```
monmaptool --addv stor04-htz-nbg1 "[v2:192.168.1.248:3300/0,v2:10.94.181.248:3300/0,v1:192.168.1.248:6789/0,v1:10.94.181.248:6789/0]" ~/MAPS/mon.getmap
monmaptool --addv stor03-htz-nbg1 "[v2:192.168.1.249:3300/0,v2:10.94.181.249:3300/0,v1:192.168.1.249:6789/0,v1:10.94.181.249:6789/0]" ~/MAPS/mon.getmap
monmaptool --addv stor02-htz-nbg1 "[v2:192.168.1.250:3300/0,v2:10.94.181.250:3300/0,v1:192.168.1.250:6789/0,v1:10.94.181.250:6789/0]" ~/MAPS/mon.getmap
monmaptool --addv stor01-htz-nbg1 "[v2:192.168.1.251:3300/0,v2:10.94.181.251:3300/0,v1:192.168.1.251:6789/0,v1:10.94.181.251:6789/0]" ~/MAPS/mon.getmap
```

### Printing file
```
> monmaptool --print ~/MAPS/mon.getmap

monmaptool: monmap file ~/MAPS/mon.getmap
epoch 1
fsid fd49607d-ce8f-45c7-bdc9-14edc00d72eb
last_changed 2019-11-04 20:00:40.996888
created 2019-11-04 20:00:40.996888
min_mon_release 14 (nautilus)
0: [v2:192.168.1.248:3300/0,v2:10.94.181.248:3300/0,v1:192.168.1.248:6789/0,v1:10.94.181.248:6789/0] mon.stor04-htz-nbg1
1: [v2:192.168.1.249:3300/0,v2:10.94.181.249:3300/0,v1:192.168.1.249:6789/0,v1:10.94.181.249:6789/0] mon.stor03-htz-nbg1
2: [v2:192.168.1.250:3300/0,v2:10.94.181.250:3300/0,v1:192.168.1.250:6789/0,v1:10.94.181.250:6789/0] mon.stor02-htz-nbg1
3: [v2:192.168.1.251:3300/0,v2:10.94.181.251:3300/0,v1:192.168.1.251:6789/0,v1:10.94.181.251:6789/0] mon.stor01-htz-nbg1
```

### Redistribute the file across the nodes of the cluster
```bash
scp mon.getmap rsdeploy@stor01-htz-nbg1:
scp mon.getmap rsdeploy@stor02-htz-nbg1:
scp mon.getmap rsdeploy@stor03-htz-nbg1:
scp mon.getmap rsdeploy@stor04-htz-nbg1:
```

The file will then be at `/home/rsdeploy/mon.getmap`.

### Stop all the monitors
Stop the monitor and inject the new monmap on every node that hosts a monitor:

```bash
root@stor01-htz-nbg1:/home/rsdeploy# systemctl status ceph-mon@stor01-htz-nbg1
* ceph-mon@stor01-htz-nbg1.service - Ceph cluster monitor daemon
   Loaded: loaded (/lib/systemd/system/ceph-mon@.service; indirect; vendor preset: enabled)
   Active: active (running) since Tue 2019-12-03 01:08:10 CET; 16h ago
root@stor01-htz-nbg1:/home/rsdeploy# systemctl stop ceph-mon@stor01-htz-nbg1
root@stor01-htz-nbg1:/home/rsdeploy# ceph-mon --id stor01-htz-nbg1 --inject-monmap /home/rsdeploy/mon.getmap
```

Obviously, the monitor IDs differ between the nodes.
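
As an optional sanity check (a sketch; the monitor must still be stopped), the injected map can be extracted back out and printed:

```bash
# read the monmap back from the stopped monitor's store and print it
ceph-mon --id stor01-htz-nbg1 --extract-monmap /tmp/check.monmap
monmaptool --print /tmp/check.monmap
```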

### Edit `/etc/hosts`

```
172.26.2.251    stor01-htz-nbg1
172.26.2.250    stor02-htz-nbg1
172.26.2.249    stor03-htz-nbg1
172.26.2.248    stor04-htz-nbg1

192.168.1.251   stor01-htz-nbg1
192.168.1.250   stor02-htz-nbg1
192.168.1.249   stor03-htz-nbg1
192.168.1.248   stor04-htz-nbg1

10.94.181.251   stor01-htz-nbg1
10.94.181.250   stor02-htz-nbg1
10.94.181.249   stor03-htz-nbg1
10.94.181.248   stor04-htz-nbg1
```
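
Since each hostname now resolves to several addresses, it may be worth checking which address the resolver returns first (a quick sketch; on a plain glibc setup the order of the `/etc/hosts` entries decides):

```bash
getent hosts stor01-htz-nbg1
getent hosts stor04-htz-nbg1
```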

### Change the `ceph.conf` config
```bash
cat /home/rsdeploy/CEPH-RS-HDD-STOR/ceph.conf

[global]
fsid = fd49607d-ce8f-45c7-bdc9-14edc00d72eb
public_network = 192.168.1.0/24, 10.94.181.0/24
cluster_network = 172.26.2.0/24
mon_initial_members = stor01-htz-nbg1, stor02-htz-nbg1, stor03-htz-nbg1, stor04-htz-nbg1
mon_host = 192.168.1.251,192.168.1.250,192.168.1.249,192.168.1.248
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
mds standby replay = true
```

Push the config with `ceph-deploy`:
```bash
ceph-deploy --overwrite-conf config push stor0{1,2,3,4}-htz-nbg1
```
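
If `ceph-deploy` is not in use, a minimal equivalent (a sketch, assuming the default `/etc/ceph/ceph.conf` location) is to copy the file by hand:

```bash
for h in stor0{1,2,3,4}-htz-nbg1; do
  scp /home/rsdeploy/CEPH-RS-HDD-STOR/ceph.conf root@"$h":/etc/ceph/ceph.conf
done
```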

### Check that the monitor listens at the new interfaces
```bash
root@stor04-htz-nbg1:~# netstat -pantu | grep ceph-mon
tcp        0      0 10.94.181.248:3300      0.0.0.0:*               LISTEN      9155/ceph-mon
tcp        0      0 192.168.1.248:3300      0.0.0.0:*               LISTEN      9155/ceph-mon
tcp        0      0 10.94.181.248:6789      0.0.0.0:*               LISTEN      9155/ceph-mon
tcp        0      0 192.168.1.248:6789      0.0.0.0:*               LISTEN      9155/ceph-mon
tcp        0      0 172.26.2.248:51088      172.26.2.250:6882       ESTABLISHED 9155/ceph-mon
tcp        0      0 192.168.1.248:59114     192.168.1.251:3300      ESTABLISHED 9155/ceph-mon
tcp        0      0 192.168.1.248:41414     192.168.1.250:3300      ESTABLISHED 9155/ceph-mon
tcp        0      0 192.168.1.248:50592     192.168.1.249:3300      ESTABLISHED 9155/ceph-mon
```
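
It is also worth confirming that the monitors have re-formed quorum on the new addresses (a sketch; output not shown):

```bash
ceph mon stat
ceph quorum_status --format json-pretty | grep '"addr"'
```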

### Restart Ceph Managers, Metadata services and OSDs
```bash
root@stor01-htz-nbg1:/etc# systemctl restart ceph-mgr@stor01-htz-nbg1
root@stor01-htz-nbg1:/etc# systemctl restart ceph-mds@stor01-htz-nbg1
root@stor01-htz-nbg1:/etc# systemctl restart ceph-osd@{0..9}
```

```bash
root@stor02-htz-nbg1:/etc# systemctl restart ceph-mgr@stor02-htz-nbg1
root@stor02-htz-nbg1:/etc# systemctl restart ceph-mds@stor02-htz-nbg1
root@stor02-htz-nbg1:~# systemctl restart ceph-osd@{10..19}
```

```bash
root@stor03-htz-nbg1:/etc# systemctl restart ceph-mgr@stor03-htz-nbg1
root@stor03-htz-nbg1:/etc# systemctl restart ceph-mds@stor03-htz-nbg1
root@stor03-htz-nbg1:~# systemctl restart ceph-osd@{20..29}
```

```bash
root@stor04-htz-nbg1:/etc# systemctl restart ceph-mgr@stor04-htz-nbg1
root@stor04-htz-nbg1:/etc# systemctl restart ceph-mds@stor04-htz-nbg1
root@stor04-htz-nbg1:~# systemctl restart ceph-osd@{30..39}
```
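
Finally, a quick sanity check (a sketch) that the cluster is healthy again and the OSDs report the new public addresses:

```bash
ceph -s
ceph osd dump | grep addr
```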

---
[Proxmox forum]: https://forum.proxmox.com/threads/changing-ceph-public-network.33083/
[Ceph Network Configuration Reference]: https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/1.2.3/html/ceph_configuration_guide/network-configuration-reference

P.S.: In my case I injected the maps as the root user, so I had an issue with restarting the monitors. The reason was wrong file permissions in the /var/lib/ceph/mon/ceph-******/store.db folder. `chown -R ceph:ceph /var/lib/ceph/mon/` helped (on all monitor nodes).
 
