PVE 6 Ceph IPv6

Hi,

I am trying to make Ceph work over IPv6. The nodes can ping each other over IPv6 on their subnets, but I am stuck with Ceph...

Code:
#ipv6
    cluster_network = fd8d:7868:1cfd:6443::/64
    public_network = fd8d:7868:1cfd:6444::/64
    mon_host = [fd8d:7868:1cfd:6444::1],[fd8d:7868:1cfd:6444::2],[fd8d:7868:1cfd:6444::3]
    ms_bind_ipv4 = false
    ms_bind_ipv6 = true

Code:
root@pve-02:~# ss -l | grep 3300
tcp    LISTEN   0   128    IPv4:3300    0.0.0.0:*
root@pve-02:~# ss -l | grep 6789
tcp    LISTEN   0   128    IPv4:6789    0.0.0.0:*

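(ss with the -tlnp options would additionally show which daemon owns those IPv4 sockets, e.g.:)

Code:
# confirm which process is listening on the messenger ports 3300/6789
ss -tlnp | grep -E '3300|6789'
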
Code:
root@pve-02:~# ss -l -6
Netid   State    Recv-Q   Send-Q   Local Address:Port     Peer Address:Port
udp     UNCONN   0        0            [::]:sunrpc            [::]:*
tcp     LISTEN   0        128          [::]:19999             [::]:*
tcp     LISTEN   0        128             *:8006                  *:*
tcp     LISTEN   0        128          [::]:sunrpc            [::]:*
tcp     LISTEN   0        128          [::]:ssh               [::]:*
tcp     LISTEN   0        128             *:3128                  *:*
tcp     LISTEN   0        100         [::1]:smtp              [::]:*

Code:
2019-09-20 12:39:38.205578 mon.pve-02 (mon.1) 4 : cluster [INF] mon.pve-02 calling monitor election
2019-09-20 12:39:38.215241 mon.pve-01 (mon.0) 20 : cluster [INF] mon.pve-01 calling monitor election
2019-09-20 12:39:41.291155 mon.pve-01 (mon.0) 21 : cluster [INF] mon.pve-01 is new leader, mons pve-01,pve-02,pve-03 in quorum (ranks 0,1,2)
2019-09-20 12:39:41.294514 mon.pve-01 (mon.0) 22 : cluster [WRN] 2 clock skew 0.110716s > max 0.05s
2019-09-20 12:39:41.299182 mon.pve-01 (mon.0) 27 : cluster [WRN] Health check failed: clock skew detected on mon.pve-03 (MON_CLOCK_SKEW)
2019-09-20 12:39:41.299246 mon.pve-01 (mon.0) 28 : cluster [INF] Health check cleared: MON_DOWN (was: 1/3 mons down, quorum pve-01,pve-02)
2019-09-20 12:39:41.305228 mon.pve-01 (mon.0) 29 : cluster [WRN] message from mon.2 was stamped 0.113483s in the future, clocks not synchronized
2019-09-20 12:39:41.309553 mon.pve-01 (mon.0) 30 : cluster [WRN] overall HEALTH_WARN 2 osds down; 1 host (2 osds) down; no active mgr; Degraded data redundancy: 42211/126633 objects degraded (33.333%), 256 pgs degraded; clock skew detected on mon.pve-03
2019-09-20 12:39:47.359383 mon.pve-01 (mon.0) 31 : cluster [WRN] message from mon.2 was stamped 0.113495s in the future, clocks not synchronized
2019-09-20 12:40:13.770773 mon.pve-01 (mon.0) 32 : cluster [INF] Health check cleared: MON_CLOCK_SKEW (was: clock skew detected on mon.pve-03)
root@pve-02:/etc/ceph#

It looks as if the monitors work, but they are probably still on IPv4 and not IPv6... The OSDs don't start anymore. I tried the mon_host option in different formats with the IPs, but nothing was successful. Where is the problem?

Thanks.
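
I can also dump the monitor map to see which addresses the monitors are registered with; if I understand it correctly, the monmap keeps the addresses from the time the monitors were created, independent of what mon_host in ceph.conf says now.

Code:
# show the addresses currently stored in the monmap for each monitor
ceph mon dump
# quick quorum/address overview
ceph mon stat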
 
It looks as if the monitors work, but they are probably still on IPv4 and not IPv6... The OSDs don't start anymore. I tried the mon_host option in different formats with the IPs, but nothing was successful. Where is the problem?
Did you restart the monitors after the change?
Can you please post the complete ceph.conf?
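
Something like this should restart the monitor on a node (the systemd instance name is assumed to match the mon id, i.e. the node name):

Code:
# restart the monitor of this node
systemctl restart ceph-mon@pve-02
# or restart every monitor instance on the node
systemctl restart ceph-mon.target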
 
Yes, I restarted all cluster nodes several times.

Code:
[global]
         auth_client_required = cephx
         auth_cluster_required = cephx
         auth_service_required = cephx
         fsid = f039060a-3424-4953-be2a-57eadf0076d1
         mon_allow_pool_delete = true
         mon_cluster_log_file_level = info
         osd_pool_default_min_size = 2
         osd_pool_default_size = 2

#redhat
         bluestore_rocksdb_options = compression=kNoCompression,max_write_buffer_number=32,min_write_buffer_number_to_merge=2,recycle_log_file_num=32,compaction_style=kCompactionStyleLevel,write_buffer_size=67108864,target_file_size_base=67108864,max_background_compactions=31,level0_file_num_compaction_trigger=8,level0_slowdown_writes_trigger=32,level0_stop_writes_trigger=64,max_bytes_for_level_base=536870912,compaction_threads=32,max_bytes_for_level_multiplier=8,flusher_threads=8,compaction_readahead_size=2MB
         bluestore_cache_autotune = 0
         bluestore_cache_size_ssd = 8G
         bluestore_cache_kv_ratio = 0.2
         bluestore_cache_meta_ratio = 0.8
         osd_min_pg_log_entries = 10
         osd_max_pg_log_entries = 10
         osd_pg_log_dups_tracked = 10
         osd_pg_log_trim_min = 10

#grace period
#        osd_heartbeat_grace = 10
#        osd_heartbeat_interval = 3

#ipv6
        cluster_network = fd8d:7868:1cfd:6443::/64
        public_network = fd8d:7868:1cfd:6444::/64
        mon_host = [fd8d:7868:1cfd:6444::1],[fd8d:7868:1cfd:6444::2],[fd8d:7868:1cfd:6444::3]
        ms_bind_ipv4 = false
        ms_bind_ipv6 = true

[client]
         keyring = /etc/pve/priv/$cluster.$name.keyring

[mds]
         keyring = /var/lib/ceph/mds/ceph-$id/keyring

[osd]
         debug_filestore = 0
         debug_journal = 0
         debug_ms = 0
         debug_osd = 0

Code:
root@pve-01:~# ping6 fd8d:7868:1cfd:6443::2
PING fd8d:7868:1cfd:6443::2(fd8d:7868:1cfd:6443::2) 56 data bytes
64 bytes from fd8d:7868:1cfd:6443::2: icmp_seq=1 ttl=64 time=0.252 ms
64 bytes from fd8d:7868:1cfd:6443::2: icmp_seq=2 ttl=64 time=0.254 ms
^C
--- fd8d:7868:1cfd:6443::2 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 19ms
rtt min/avg/max/mdev = 0.252/0.253/0.254/0.001 ms

root@pve-01:~# ping6 fd8d:7868:1cfd:6444::2
PING fd8d:7868:1cfd:6444::2(fd8d:7868:1cfd:6444::2) 56 data bytes
64 bytes from fd8d:7868:1cfd:6444::2: icmp_seq=1 ttl=64 time=0.705 ms
64 bytes from fd8d:7868:1cfd:6444::2: icmp_seq=2 ttl=64 time=0.278 ms
^C
--- fd8d:7868:1cfd:6444::2 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 19ms
rtt min/avg/max/mdev = 0.278/0.491/0.705/0.214 ms

I can run any test needed; the cluster isn't in production yet.
 
How does the /etc/hosts file look?
Can the node names be resolved to the IPv6 addresses?
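
For example with getent (the node name below is just one of the nodes from this thread):

Code:
# check what the node name resolves to via /etc/hosts (or DNS)
getent hosts pve-01
# list IPv6 results explicitly
getent ahostsv6 pve-01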
 
See the code section. The PVE nodes are on IPv4 (and Ceph works with it); I am trying to move the PVE nodes to IPv6 (and Ceph too).

Code:
root@pve-01:~# cat /etc/hosts
127.0.0.1 localhost.localdomain localhost
IPv4 pve-01.subdomain.domain.tld pve-01
fd8d:7868:1cfd:6444::1 pve-01.subdomain.domain.tld pve-01

# The following lines are desirable for IPv6 capable hosts

::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts


Code:
allow-vmbr1 bond0
iface bond0 inet manual
ovs_bonds eno1 eno2
ovs_type OVSBond
ovs_bridge vmbr1
ovs_options lacp=active bond_mode=balance-tcp

auto lo
iface lo inet loopback

iface eno1 inet manual

iface eno2 inet manual

auto ens1f0
iface ens1f0 inet manual

auto ens1f1
iface ens1f1 inet manual

allow-ovs vmbr1

allow-vmbr1 pve01_mgmt
iface pve01_mgmt inet static
address IPv4/24
gateway IPv4
ovs_type OVSIntPort
ovs_bridge vmbr1
ovs_options tag=X

allow-vmbr1 pve01_mgmt
iface pve01_mgmt inet6 static
address fd8d:7868:1cfd:6444::0001/64
ovs_type OVSIntPort
ovs_bridge vmbr1
ovs_options tag=X

auto bond1
iface bond1 inet6 static
address fd8d:7868:1cfd:6443::0001/64
bond-slaves ens1f0 ens1f1
bond-miimon 100
bond-mode 802.3ad
bond-xmit-hash-policy layer2
bond-downdelay 200
bond-updelay 200

auto vmbr1
iface vmbr1 inet manual
ovs_type OVSBridge
ovs_ports bond0 pve01_mgmt
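
For completeness, the addresses configured on the relevant interfaces can be checked like this (interface names taken from the config above):

Code:
# public network address on the management OVS port
ip -6 addr show dev pve01_mgmt
# cluster network address on the bond
ip -6 addr show dev bond1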
 
In the first post I showed that I enabled ms_bind_ipv6 and disabled ms_bind_ipv4...
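
To double-check the running monitors, their effective options could be read back through the admin socket, roughly like this (assuming the mon id equals the node name):

Code:
# effective bind options of the running monitor on this node
ceph daemon mon.pve-01 config get ms_bind_ipv4
ceph daemon mon.pve-01 config get ms_bind_ipv6
# list the admin sockets if the mon id is unclear
ls /var/run/ceph/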
 
