Search results

  1. Ceph Disks

    You can use, for example, a SATA USB enclosure with a small SATA SSD inside for the OS and dedicate both your NVMe drives to Ceph. That's what I had to do when I faced similar limitations. I am pretty sure the Proxmox GUI would not allow you to use partitions for Ceph, even if that is possible with...
  2. Network problem bond+lacp

    Just a suggestion to run cat /proc/net/bonding/bond0 for each bond to see if the additional information would give any hints why it's not working...
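A sketch of the suggested check, with comments on the lines worth inspecting (the bond name and field values are illustrative):

```shell
cat /proc/net/bonding/bond0
# Lines worth checking in the output:
#   Bonding Mode: IEEE 802.3ad Dynamic link aggregation   <- must say 802.3ad for LACP
#   MII Status: up                                        <- link state, per slave
#   Partner Mac Address: 00:00:00:00:00:00                <- all zeros usually means the
#                                                            switch never completed LACP
#                                                            negotiation on that port
```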
  3. Trouble Adding 2 Port 10G NIC - Only One Port Working At A Time!

    Looks like it's working as expected to me. You have both interfaces in the same 10.10.10.x subnet, so it chooses whichever interface it likes more, in your case the first one defined, vmbr1... Just use different IP subnets for your interfaces and enable routing... Or I think you might be able to...
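A minimal /etc/network/interfaces sketch of the "different subnets" suggestion (bridge names, NIC names and addresses are made up for illustration):

```
auto vmbr1
iface vmbr1 inet static
    address 10.10.10.2/24
    bridge-ports enp5s0f0
    bridge-stp off
    bridge-fd 0

auto vmbr2
iface vmbr2 inet static
    address 10.10.20.2/24
    bridge-ports enp5s0f1
    bridge-stp off
    bridge-fd 0
```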
  4. What is the best way to mount a CephFS inside LXC

    I use mp0: /mnt/cephfs,mp=/mnt/cephfs,shared=1 in the config file and it seems to work quite well for me
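In context, that line goes into the container's config file; a sketch (the VMID and paths are examples):

```
# /etc/pve/lxc/101.conf   (101 is an example container ID)
# Bind-mounts the host's CephFS mount point into the container at the same path;
# shared=1 marks the mount as shared storage so migration checks don't block it.
mp0: /mnt/cephfs,mp=/mnt/cephfs,shared=1
```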
  5. How to reconfigure the network in ceph on Proxmox VE?

    You can run 'ss -tunap | grep ceph-osd' and confirm that there are connections on the new cluster subnet. Note that the subnets in cluster_network act as a filter for accepting incoming cluster connections, so if you want to change the networks non-disruptively you will need to ensure that there...
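The verification step, sketched as commands to run on each node (a hedged outline, not the poster's exact procedure):

```shell
# Which addresses are the OSD daemons actually talking on?
ss -tunap | grep ceph-osd
# Compare the local/peer addresses in that output against the configured subnet:
ceph config get osd cluster_network
```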
  6. M

    CEPH: Can we run 2x Cluster network subnet simultanously ?

    I used the configuration with multiple cluster and public networks when I needed to migrate my network to new IP subnets. My understanding is that those two settings act like ACL by allowing connections only with the source IP within the specified range. And I believe OSD/MON select the first...
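Ceph accepts comma-separated lists for these options; a ceph.conf sketch of such a migration window, with made-up subnets:

```
[global]
    # Old subnet first, new subnet second; connections from either range
    # are accepted while the migration is in progress.
    public_network  = 192.168.1.0/24, 192.168.100.0/24
    cluster_network = 10.0.1.0/24, 10.0.100.0/24
```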
  7. Cloned VM has same IP address as original

    I had this issue. If I understand the root cause correctly, some DHCP servers use the client ID, not the MAC address, to assign IP addresses. You will need to reset the machine ID before cloning. I used the commands below and it worked for me: echo -n >/etc/machine-id rm...
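The command list above is cut off; the commonly documented sequence looks like this (the rm target is my assumption, since the original post is truncated — run inside the template VM before cloning, as root):

```shell
echo -n > /etc/machine-id                        # truncate so a new id is generated on next boot
rm -f /var/lib/dbus/machine-id                   # assumed target of the truncated 'rm'; on many
                                                 # distros this file mirrors /etc/machine-id
ln -s /etc/machine-id /var/lib/dbus/machine-id   # some guides re-link it afterwards
```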
  8. [SOLVED] Cluster with Ceph becomes degraded if I shut down or restart a node

    I believe those are the defaults for newly created pools. For an existing pool you need to use the 'ceph osd pool set' command. And yes, min_size 1 with replicated size 2 is risky; only use it for something that you can afford to lose and re-create easily, like a lab... You can have multiple...
  9. [SOLVED] Cluster with Ceph becomes degraded if I shut down or restart a node

    I think there is a min_size parameter on each pool, and according to your config it will be 2 by default. If your pool's replicated size is 2, then you need the pool's min_size to be 1 to be able to survive a downtime. You can check your pool parameters with ceph osd pool ls detail You can set the pool...
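The commands both replies refer to, sketched with a placeholder pool name:

```shell
ceph osd pool ls detail               # shows size/min_size (and more) per pool
ceph osd pool set mypool min_size 1   # 'mypool' is a placeholder; risky, lab use only
ceph osd pool set mypool size 2
```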
  10. BGP EVPN SDN and 10.255.255.x subnet

    I should have read the docs ;)... Without looking at the docs I assumed that 'exit node local routing' means using the local route tables on the exit nodes to reach the rest of the network, so I thought it should be enabled... Anyway, hard-coded IP addresses are probably not the best thing and...
  11. BGP EVPN SDN and 10.255.255.x subnet

    I started testing BGP EVPN configuration and noticed that on some nodes (looks like the exit nodes) there are 10.255.255.1 and 10.255.255.2 addresses assigned. I use the 10.255.255.0/24 subnet for one of my VLANs. Is it possible to reconfigure SDN to use something else? I can find the addresses in...
  12. Apply changes to SDN configuration on a single node

    I tried to do BGP at first, but it didn't work right away and the learning curve was a bit steep, so I decided to do OSPF first. It's definitely on my plate to redo my config with BGP later. Regarding /etc/frr/frr.conf.local, can you please elaborate on how it might be helpful? As for the frr...
  13. Extra node votes in cluster (homelab)

    I have 3 permanent nodes with 2 votes each. On rare occasions I added a fourth node with the default 1 vote. The whole cluster then has 7 votes, so you need 4 votes for quorum. That means any 2 permanent nodes should be enough.
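The vote arithmetic can be sketched as follows (quorum here is a strict majority of the expected votes, which is the corosync default behaviour):

```python
def quorum(expected_votes: int) -> int:
    # Quorum is a strict majority of the expected votes.
    return expected_votes // 2 + 1

# Three permanent nodes with 2 votes each, plus one temporary node with 1 vote:
total = 3 * 2 + 1            # 7 expected votes
assert quorum(total) == 4    # any two permanent nodes (2 + 2 = 4 votes) keep quorum
```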
  14. Extra node votes in cluster (homelab)

    I have actually run this exact configuration, with each node having two votes, for quite some time. As Fabian said, it will not make any difference regarding the cluster quorum. My reason for this configuration was that sometimes I may add a temporary test node to the cluster and want to make sure that...
  15. Apply changes to SDN configuration on a single node

    The heavy customization part mostly resides within FRR. It's an ECMP routed configuration with each server connecting to two L3 switches, each connection on its own subnet, using /32 addresses on a dummy interface for communication with each other and using OSPF to exchange the routing...
  16. Apply changes to SDN configuration on a single node

    I have a heavily customized networking configuration, and for some reason not yet apparent to me it does not survive an ifupdown2 reload. I made a change to my SDN configuration (changed one of the VXLAN VTEPs) and need to apply it. If I do 'Apply' from the GUI it will do a reload on all...
  17. OSD not starting after Proxmox upgrade to 7.2

    I just upgraded my node to 7.2, and after the host restart one of my OSDs would not start. It was the last remaining old-style (non-LVM) simple-volume OSD. The following error was reported: 2022-05-12T15:31:02.538-0400 7fcc3f134f00 1 bdev(0x560e6221c400 /var/lib/ceph/osd/ceph-7/block) open path...
  18. [SOLVED] No bond/ovs after 6 to 7 upgrade

    I upgraded the other two nodes of the cluster, and there were no issues. But what I did differently is that I installed ifupdown2 before the upgrade. So I compared my /etc/network/interfaces, and found that the node that did not work had 'allow-vmbr0 <interface>' commands, but the working...
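The difference in question, as an /etc/network/interfaces fragment (interface names are illustrative; with ifupdown2 the 'auto' form is the one that works):

```
# Old ifupdown/OVS style that broke after the upgrade:
#   allow-vmbr0 eno1
#   allow-ovs vmbr0
# ifupdown2-compatible equivalent:
auto eno1
auto vmbr0
```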
  19. [SOLVED] No bond/ovs after 6 to 7 upgrade

    Just wanted to share my experience. I also had openvswitch configuration with a bond and vlans, and it also stopped working after the upgrade. Did not know about the 'systemctl restart networking' trick, but manually creating vlan interfaces and assigning ip addresses with 'ip' command worked...
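The manual recovery described above looks roughly like this (VLAN ID, interface names, and the address are placeholders; run as root):

```shell
ip link add link bond0 name bond0.40 type vlan id 40   # create the VLAN interface
ip addr add 192.0.2.10/24 dev bond0.40                 # example address (TEST-NET-1 range)
ip link set bond0.40 up
```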
  20. NFS-Ganesha in Proxmox

    I had to use backports to install the version of ganesha that supports rados. I documented the procedure for my own reference on my site: http://mykb.mife.ca/post/ceph-ganesha/
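A sketch of installing from backports on a Debian-based Proxmox node (the release name and package names are my assumptions; the linked post documents the exact steps):

```shell
echo 'deb http://deb.debian.org/debian bullseye-backports main' \
    > /etc/apt/sources.list.d/backports.list
apt update
apt install -t bullseye-backports nfs-ganesha nfs-ganesha-ceph
```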
