pveceph mon create mon-address rejects ipv6

Ardje

New Member
Apr 2, 2023
Hi guys,
I have the following issue: I am renumbering my Ceph network, which is easy from the Ceph side, but I decided to just remove and re-add the mons the PVE way. However, PVE gives me the following:
Code:
root@pve1:~# pveceph mon create  --mon-address fd02:c14f:6023:c::1
value does not look like a valid CIDR network
root@pve1:~# pveceph mon create  --mon-address fd02:c14f:6023:c::1/64
400 Parameter verification failed.
mon-address: invalid format - value does not look like a valid IP address

pveceph mon create  [OPTIONS]
root@pve1:~# ceph-conf --name mon.pve2 --lookup "public_network"
0::/0
root@pve1:~# ceph-conf --name mon.pve2 --lookup "cluster_network"
0::/0
root@pve1:~# pveceph mon create
value does not look like a valid CIDR network
So I am a bit surprised at the first error, which I think is already the problem: a mon address should never take a mask.
The 0::/0 is there because I was worried that PVE couldn't parse ::/0. It happens when public_network is ::/0 or 0::/1,8000::/1; in the latter case a warning about routing is issued.

The Ceph cluster was set up with PVE 6/Ceph 14 and subsequently migrated to PVE 7/Ceph 16.
All networks are reachable, and CephFS works on the new address.

Something seems to have changed in /usr/share/perl5/PVE/API2/Ceph/MON.pm regarding parameter checking between v6 and v7, and it looks like a regression.
I am trying my best to figure it out, but if I am correct, pveceph is posting this using POSIX IPC?
I have no doubt the parameters are correct; it's not my first IP address migration, but usually I take the monmap route and edit the IP in the monmap and ceph.conf.
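For reference, that monmap route goes roughly like this; it is only a sketch of the standard Ceph procedure, with pve1 and the new address used as placeholders rather than commands taken from this cluster:
Code:
# Sketch only: the classic monmap-editing procedure, placeholder mon id "pve1" and new IP.
ceph mon getmap -o /tmp/monmap                                  # grab the current monmap
monmaptool --print /tmp/monmap                                  # inspect the current entries
monmaptool --rm pve1 /tmp/monmap                                # drop the mon's old address
monmaptool --add pve1 [fd02:c14f:6023:c::1]:6789 /tmp/monmap    # re-add it with the new one
systemctl stop ceph-mon@pve1                                    # the mon must be down for injection
ceph-mon -i pve1 --inject-monmap /tmp/monmap
systemctl start ceph-mon@pve1
# ...and update the mon addresses in ceph.conf to match.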
 
OK, so Ceph never really looks at the public or cluster network the way PVE does, but maybe that's just a minor check.
In my understanding, the cluster and public networks were ACL-like things; in the PVE view they are the literal L2 networks.
So I gathered all four L2 networks, put them in the public and cluster network settings, and now it works (a sketch of the resulting ceph.conf lines follows after the output below):
Code:
vi /etc/pve/ceph.conf # edit cluster_network and public_network to list the /64 networks
root@pve1:~# pveceph mon create  --mon-address fd02:c14f:6023:c::1
Multiple Ceph public networks detected on pve1: fd02:c14f:6023:c::/64,2a10:3781:2923:c::/64,2a02:58:9a:9b0c::/64,2001:984:74c7:c::/64
Networks must be capable of routing to each other.
monmaptool: monmap file /tmp/monmap
epoch 6
fsid 54055810-d8e6-4217-a9a9-b403f96b81a8
last_changed 2023-04-01T10:13:46.186942+0000
created 2020-10-13T10:02:27.036236+0000
min_mon_release 16 (pacific)
election_strategy: 1
0: [v2:[2001:984:74c7:c::2]:3300/0,v1:[2001:984:74c7:c::2]:6789/0] mon.pve2
1: [v2:[2001:984:74c7:c::3]:3300/0,v1:[2001:984:74c7:c::3]:6789/0] mon.pve3
2: [v2:[fd02:c14f:6023:c::1]:3300/0,v1:[fd02:c14f:6023:c::1]:6789/0] mon.pve1
monmaptool: writing epoch 6 to /tmp/monmap (3 monitors)
Created symlink /etc/systemd/system/ceph-mon.target.wants/ceph-mon@pve1.service -> /lib/systemd/system/ceph-mon@.service.
root@pve1:~#

# I don't care about the public network, since it hasn't been mine for over a year ;-)
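For reference, the relevant part of /etc/pve/ceph.conf ended up looking roughly like this (a sketch built from the networks shown in the output above; the exact list will differ per setup):
Code:
# /etc/pve/ceph.conf (excerpt) -- sketch of the multi-network lines described above
[global]
        cluster_network = fd02:c14f:6023:c::/64,2a10:3781:2923:c::/64,2a02:58:9a:9b0c::/64,2001:984:74c7:c::/64
        public_network = fd02:c14f:6023:c::/64,2a10:3781:2923:c::/64,2a02:58:9a:9b0c::/64,2001:984:74c7:c::/64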

So no, it's not a regression, well, it was, in my brain, sorry.

Anyway, for a total IP change in the cluster, this is the way to do it:
have both your old and new addresses working.

For corosync: make sure you have v4 and v6 working as the first and second interface. Change one of the two IPs for a host in /etc/pve/corosync.conf and restart corosync on all nodes one by one; then go to the next node and repeat (a sketch of a node entry follows below).
Once all nodes are done, you have renumbered your PVE cluster network without a single outage.
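As a sketch, a node entry in /etc/pve/corosync.conf looks something like this; the name, node ID and addresses here are placeholders, and remember to bump config_version in the totem section when editing:
Code:
# Sketch of one node entry in /etc/pve/corosync.conf -- name, nodeid and addresses are placeholders.
# ring0_addr/ring1_addr are the "first and second interface" mentioned above; change one of them
# to the new address, then restart corosync node by node.
nodelist {
  node {
    name: pve1
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 192.0.2.11              # e.g. the existing IPv4 address
    ring1_addr: fd02:c14f:6023:c::1     # e.g. the new IPv6 address
  }
}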

For Ceph: add the new network to the cluster_network/public_network lines in ceph.conf.
Destroy a mon and create a new one specifying the new IP, then slowly go through all your mons, as sketched below.
OSDs will automatically use the new IP.
(Same for the manager and CephFS, but those will not really matter.)
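A minimal sketch of the per-mon step, reusing the mon name and address from this thread as examples:
Code:
# Sketch: recreate one monitor on its new address (mon id and address are examples from above).
pveceph mon destroy pve1
pveceph mon create --mon-address fd02:c14f:6023:c::1
ceph mon stat    # check that quorum is back before moving on to the next mon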

Once you are done, you can remove the old IP range from your network and ceph.conf.

* Note: the PVE/Proxmox view is correct.
 
Hi,
the --mon-address in the command really only takes the IP (or a list of IPs). The error you see is actually about Proxmox VE not being able to recognize 0::/0 as a CIDR. We currently assume that the prefix is at least 8 bits long, for some reason: https://git.proxmox.com/?p=pve-comm...5491471e9dc662350e610a9dec1e8154c588a5a4#l467

Code:
root@pve701 ~ # cat cidr.pm
#!/bin/perl
use PVE::JSONSchema;
eval  { PVE::JSONSchema::pve_verify_cidr("0::/0"); }; warn "0::/0 - $@" if $@;
eval  { PVE::JSONSchema::pve_verify_cidr("0::/1"); }; warn "0::/1 - $@" if $@;
eval  { PVE::JSONSchema::pve_verify_cidr("0::/8"); }; warn "0::/8 - $@" if $@;
root@pve701 ~ # perl cidr.pm
0::/0 - value does not look like a valid CIDR network
0::/1 - value does not look like a valid CIDR network

EDIT: The reason is historical: the CIDR verification was originally used for /etc/network/interfaces, where those restrictions make sense, but they don't here. I sent a patch: https://lists.proxmox.com/pipermail/pve-devel/2023-April/056493.html
 
Hi,

Thanks! I was thinking along the same lines, that verify_cidr might not be functioning correctly (in this part), but my Perl is not that good and the regex seemed fine at first glance. Testing is better than assuming, so I will try to remember this way of testing things in Perl. The last time I used Perl was 23 years ago ;-).

My next step would have been to comment out that CIDR test, but the strace did not make it clear to me whether the message came from the API or the CLI part.

As for the /0, I still think it's a valid choice, as Ceph itself is not link-level/broadcast-level aware, so using /0 and keeping everything behind a firewall and proxy prevents any problems. It practically defines any IP on the host as usable for Ceph cluster or public services.

Thanks for the fix!
 
