You can use, for example, a SATA USB enclosure with a small SATA SSD inside for the OS and dedicate both of your NVMe drives to Ceph. That's what I had to do when I faced similar limitations.
I am pretty sure the Proxmox GUI would not allow you to use partitions for Ceph, even if that is possible with...
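Purely as a sketch of the CLI route (the device name below is an example, not taken from the thread), ceph-volume itself does accept a partition as the data device:
# create an OSD directly on a partition instead of a whole disk (example device)
ceph-volume lvm create --data /dev/nvme0n1p3
# check what ceph-volume ended up creating
ceph-volume lvm list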
Looks like it's working as expected to me. You have both interfaces in the same 10.10.10.x subnet, so it chooses whichever interface it likes more, in your case the first one defined, vmbr1...
Just use different IP subnets on your interfaces and enable routing... Or I think you might be able to...
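A minimal sketch of what I mean by separate subnets, in /etc/network/interfaces terms (interface names and addresses are made up for illustration):
auto vmbr0
iface vmbr0 inet static
        address 10.10.10.2/24
        gateway 10.10.10.1
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0

auto vmbr1
iface vmbr1 inet static
        address 10.10.20.2/24
        bridge-ports eno2
        bridge-stp off
        bridge-fd 0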
You can run 'ss -tunap | grep ceph-osd' and confirm that there are connections on the new cluster subnet.
Note that the subnets in cluster_network act as a filter for accepting incoming cluster connections, so if you want to change the networks non-disruptively you will need to ensure that there...
I used the configuration with multiple cluster and public networks when I needed to migrate my network to new IP subnets. My understanding is that those two settings act like an ACL, allowing connections only if the source IP is within the specified range. And I believe OSD/MON select the first...
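As an illustration only (the subnets are placeholders, not my real ones), the relevant ceph.conf section during such a migration can list both the old and the new subnets, comma-separated:
[global]
    # old subnet first, new subnet second; connections from either are accepted
    public_network  = 10.10.10.0/24, 10.20.10.0/24
    cluster_network = 10.10.11.0/24, 10.20.11.0/24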
I had this issue. If I understand the root cause correctly, some DHCP servers use the client ID, not the MAC address, to assign IP addresses. You will need to reset the machine ID before cloning. I used the commands below and it worked for me:
echo -n >/etc/machine-id
rm...
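The command list above is cut off; as a sketch, the commonly used sequence looks like the following (the dbus step is the usual companion and is included as an assumption, not as the exact commands from the post):
# empty the machine id so systemd regenerates it on the clone's first boot
echo -n > /etc/machine-id
# dbus keeps its own copy; it is usually removed and re-linked to the systemd one
rm /var/lib/dbus/machine-id
ln -s /etc/machine-id /var/lib/dbus/machine-id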
I believe those are the defaults for newly created pools. For an existing pool you need to use the 'ceph osd pool set' command.
And yes, min_size 1 with a replicated size of 2 is risky; only use it for something that you can afford to lose and re-create easily, like a lab... You can have multiple...
I think there is a min_size parameter on each pool, and according to your config it will be 2 by default. If your pool's replicated size is 2, then you need the pool's min_size to be 1 to keep I/O going while one replica is down.
You can check your pool parameters with 'ceph osd pool ls detail'.
You can set the pool...
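Putting it together as a sketch (the pool name 'vm-pool' is just a placeholder):
# show size, min_size and the other per-pool settings
ceph osd pool ls detail
# change an existing pool; the defaults only apply to pools created later
ceph osd pool set vm-pool size 2
ceph osd pool set vm-pool min_size 1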
I should have read the docs ;)... Without looking at the docs, I assumed that 'exit node local routing' meant using the local routing tables on the exit nodes to reach the rest of the network, so I thought it should be enabled...
Anyway, hard-coded IP addresses are probably not the best thing, and...
I started testing a BGP EVPN configuration and noticed that on some nodes (looks like the exit nodes) there are 10.255.255.1 and 10.255.255.2 addresses assigned. I do use the 10.255.255.0/24 subnet for one of my VLANs. Is it possible to reconfigure SDN to use something else? I can find the addresses in...
I tried to do BGP at first, but it didn't work right away and the learning curve was a little steep, so I decided to do OSPF first. It's definitely on my plate to redo my config with BGP later.
Regarding /etc/frr/frr.conf.local, can you please elaborate on how it might be helpful?
As for the frr...
I have 3 permanent nodes with 2 votes each. On rare occasions I added a fourth node with the default 1 vote. The whole cluster then has 7 votes, so you need 4 votes for quorum. That means any 2 permanent nodes should be enough.
I have actually been running this exact configuration, with each node having two votes, for quite some time. As Fabian said, it will not make any difference regarding the cluster quorum. My reason for this configuration was that sometimes I may add a temporary test node to the cluster and want to make sure that...
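For reference, the vote count lives in the nodelist section of /etc/pve/corosync.conf; a sketch with placeholder names and addresses (remember to bump config_version in the totem section when editing):
nodelist {
  node {
    name: pve1
    nodeid: 1
    quorum_votes: 2
    ring0_addr: 10.10.10.11
  }
  node {
    name: pve2
    nodeid: 2
    quorum_votes: 2
    ring0_addr: 10.10.10.12
  }
  node {
    name: pve3
    nodeid: 3
    quorum_votes: 2
    ring0_addr: 10.10.10.13
  }
}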
The heavy customization part mostly resides within FRR. It's an ECMP routed configuration with each server connecting to two L3 switches, each connection on its own subnet, using /32 addresses on a dummy interface for communication with each other and using OSPF to exchange the routing...
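A stripped-down sketch of that kind of /etc/frr/frr.conf (interface names, subnets and the /32 address are placeholders, not my actual values):
frr defaults traditional
hostname pve1
!
! ens1f0 / ens1f1 are the uplinks to the two L3 switches, each on its own /31
interface ens1f0
 ip ospf network point-to-point
!
interface ens1f1
 ip ospf network point-to-point
!
! 10.0.0.1/32 is configured on dummy0 in /etc/network/interfaces
router ospf
 ospf router-id 10.0.0.1
 network 10.0.0.1/32 area 0
 network 172.16.1.0/31 area 0
 network 172.16.2.0/31 area 0
 passive-interface dummy0
!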
I have a heavily customized networking configuration and for some reason not yet apparent to me it does not survive the ifupdown2 reload.
I made a change to my SDN configuration (changed one of the VXLAN VTEPs) and need to apply it. If I do 'Apply' from the GUI, it will do a reload on all...
I just upgraded my node to 7.2, and after the host restart one of my OSDs would not start. It was the last remaining old-style (non-LVM) simple volume OSD. The following error was reported:
2022-05-12T15:31:02.538-0400 7fcc3f134f00 1 bdev(0x560e6221c400 /var/lib/ceph/osd/ceph-7/block) open path...
I upgraded the other two nodes of the cluster, and there were no issues. But what I did differently is that I installed ifupdown2 before the upgrade. So I compared my /etc/network/interfaces files and found that the node that did not work had 'allow-vmbr0 <interface>' lines, but the working...
Just wanted to share my experience.
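To illustrate the difference (the exact working config is truncated above, so the second stanza is a reconstruction and the port names are placeholders): the old-style OVS stanza uses 'allow-vmbr0', while the ifupdown2-friendly one uses 'auto' plus 'ovs_ports' on the bridge:
# old style, which did not come up on my node after the upgrade
allow-vmbr0 bond0
iface bond0 inet manual
        ovs_type OVSBond
        ovs_bridge vmbr0
        ovs_bonds ens1f0 ens1f1

# ifupdown2-friendly style
auto bond0
iface bond0 inet manual
        ovs_type OVSBond
        ovs_bridge vmbr0
        ovs_bonds ens1f0 ens1f1

auto vmbr0
iface vmbr0 inet manual
        ovs_type OVSBridge
        ovs_ports bond0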
I also had an Open vSwitch configuration with a bond and VLANs, and it also stopped working after the upgrade. I did not know about the 'systemctl restart networking' trick, but manually creating the VLAN interfaces and assigning IP addresses with the 'ip' command worked...
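As a rough sketch of that kind of manual recovery (interface name, VLAN ID and address are placeholders, it bypasses the bond/bridge, and it is not necessarily the exact commands from the post above):
# temporary access: kernel VLAN interface directly on one physical NIC
ip link set eno1 up
ip link add link eno1 name eno1.20 type vlan id 20
ip addr add 192.168.20.2/24 dev eno1.20
ip link set eno1.20 up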
I had to use backports to install a version of Ganesha that supports RADOS. I documented the procedure for my own reference on my site: http://mykb.mife.ca/post/ceph-ganesha/
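The details are on the linked page; the gist, as a sketch (the Debian release name and package selection here are assumptions, check what applies to your version):
echo 'deb http://deb.debian.org/debian bullseye-backports main' > /etc/apt/sources.list.d/backports.list
apt update
# newer nfs-ganesha build with the Ceph/RADOS bits from backports
apt install -t bullseye-backports nfs-ganesha nfs-ganesha-ceph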