Search results

  1. Tiny/Mini/Micro low-power hardware for PBS at home?

    The last GC took an hour and a half. Anyway, the local HDD seems to be quite fine as a PBS target. I see that you were talking about NFS storage backed by HDDs, and I think I would agree: a PBS datastore on an NFS mount might not be a good idea and might not perform well even if backed by SSDs (I...
  2. Ceph Disks

    I used USB sticks for the OS a few years ago, and that was working fine for about a year, never disconnected or hung up, but then one of the servers started to produce odd errors during updates (so I don't recommend USB sticks). I switched to a USB SATA enclosure then, but only for a few months, so I...
  3. Tiny/Mini/Micro low-power hardware for PBS at home?

    I looked at my logs; the last verification job took 2 hours. The last prune finished in a minute. Nothing prevents you from trying it on your local Proxmox server, and you should see if it is going to work...
  4. Tiny/Mini/Micro low-power hardware for PBS at home?

    BTW, if you want to save money, just add a disk to your existing Proxmox server, install PBS there and do the backups to it. For additional protection you can back up the PBS data to your NAS. In my experience HDDs for backups work just fine, but surely if you can afford SSDs that would be...
  5. Network problem bond+lacp

    In addition to @spirit's option, I would also double-check that it is cabled correctly to the intended ports on the switch. Assuming the partner port number parameter is the encoded port number, you probably connected to the wrong ports. I would start with one bond, confirm it's working fine, then...
  6. Tiny/Mini/Micro low-power hardware for PBS at home?

    Pretty sure any one of those servers will work. I had PBS running on an old Celeron-based mini-PC with a 2TB HDD, and it was fine. I just had to disable verification immediately after backups; it would consume too much CPU and make the backups fail.
  7. Ceph Disks

    You can use, for example, a SATA USB enclosure with a small SATA SSD inside for the OS and dedicate both your NVMe drives to Ceph. That's what I had to do when I faced similar limitations. I am pretty sure the Proxmox GUI would not allow you to use partitions for Ceph, even if that is possible with...
  8. Network problem bond+lacp

    Just a suggestion to run cat /proc/net/bonding/bond0 for each bond to see if the additional information gives any hints as to why it's not working... (a bond-status sketch follows after this list).
  9. Trouble Adding 2 Port 10G NIC - Only One Port Working At A Time!

    Working as expected, to me. You have both interfaces in the same 10.10.10.x subnet, so it chooses whichever interface it likes more, in your case the first one defined, vmbr1... Just use different IP subnets for your interfaces and enable routing... Or I think you might be able to...
  10. What is the best way to mount a CephFS inside LXC

    I use mp0: /mnt/cephfs,mp=/mnt/cephfs,shared=1 in the config file and it seems to work quite well for me (see the container mount sketch after this list).
  11. How to reconfigure the network in ceph on Proxmox VE?

    You can run 'ss -tunap | grep ceph-osd' and confirm that there are connections on the new cluster subnet. Note that the subnets in cluster_network act as a filter for accepting incoming cluster connections, so if you want to change the networks non-disruptively you will need to ensure that there... (see the network check sketch after this list).
  12. CEPH: Can we run 2x Cluster network subnet simultanously ?

    I used a configuration with multiple cluster and public networks when I needed to migrate my network to new IP subnets. My understanding is that those two settings act like an ACL, allowing connections only when the source IP is within the specified range. And I believe OSD/MON select the first...
  13. Cloned VM has same ip address as original

    I had this issue. If I understand the root cause correctly, some DHCP servers use the client ID, not the MAC address, to assign IP addresses. You will need to reset the machine ID before cloning. I used the commands below and it worked for me: echo -n >/etc/machine-id rm... (see the machine-id sketch after this list).
  14. [SOLVED] Cluster with Ceph become degraded if I shut down or restart a node

    I believe those are the defaults for newly created pools. For an existing pool you need to use the 'ceph osd pool set' command. And yes, min_size 1 with a replicated size of 2 is risky; only use it for something that you can afford to lose and re-create easily, like a lab... You can have multiple...
  15. [SOLVED] Cluster with Ceph become degraded if I shut down or restart a node

    I think there is a min_size parameter on each pool, and according to your config it will be 2 by default. If your pool's replicated size is 2, then you need the pool min_size to be 1 to be able to survive a downtime. You can check your pool parameters with ceph osd pool ls detail. You can set the pool... (see the pool replication sketch after this list).
  16. BGP EVPN SDN and 10.255.255.x subnet

    I should have read the docs ;)... Without looking at the docs I assumed that 'exit node local routing' means using the local route tables on the exit nodes to reach the rest of the network, so I thought it should be enabled... Anyway, hard-coded IP addresses are probably not the best thing, and...
  17. BGP EVPN SDN and 10.255.255.x subnet

    I started testing a BGP EVPN configuration and noticed that on some nodes (looks like the exit nodes) there are 10.255.255.1 and 10.255.255.2 addresses assigned. I use the 10.255.255.0/24 subnet for one of my VLANs. Is it possible to reconfigure SDN to use something else? I can find the addresses in...
  18. Apply changes to SDN configuration on a single node

    I tried to do BGP at first, but it didn't work right away and the learning curve was a little steep, so I decided to do OSPF first. It's definitely on my plate to redo my config with BGP later. Regarding /etc/frr/frr.conf.local, can you please elaborate on how it might be helpful? As for the frr...
  19. Extra node votes in cluster (homelab)

    I have 3 permanent nodes with 2 votes each. On rare occasions I add a fourth node with the default 1 vote. The whole cluster then has 7 votes, so you need 4 votes for quorum. That means any 2 permanent nodes should be enough (see the corosync votes sketch after this list).
  20. Extra node votes in cluster (homelab)

    I have actually been running this exact configuration, with each node having two votes, for quite some time. As Fabian said, it will not make any difference regarding cluster quorum. My reason for this configuration was that I sometimes add a temporary test node to the cluster and want to make sure that...
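
A minimal sketch of the bond check suggested in results 5 and 8, assuming 802.3ad (LACP) bonds named bond0 and bond1; adjust the names to whatever your bonds are actually called.

    # Dump the full bonding state; in 802.3ad mode this includes per-slave
    # actor/partner LACP details as reported by the switch.
    cat /proc/net/bonding/bond0
    cat /proc/net/bonding/bond1

    # Narrow it down to what the switch (the "partner") advertises, which is
    # where a wrong partner port number from miscabling would show up.
    grep -iA6 partner /proc/net/bonding/bond0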
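
For the CephFS mount point in result 10, a sketch of how the same mp0 entry could be added from the CLI, assuming the CephFS is already mounted at /mnt/cephfs on every node; the container ID 101 is a placeholder.

    # Bind-mount the host's CephFS into container 101; this writes the
    # "mp0: /mnt/cephfs,mp=/mnt/cephfs,shared=1" line into /etc/pve/lxc/101.conf.
    # shared=1 marks the mount as available on all nodes so migration is not blocked.
    pct set 101 -mp0 /mnt/cephfs,mp=/mnt/cephfs,shared=1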
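
Related to results 11 and 12, a quick sketch for confirming that the OSDs really use the new cluster network; the 10.0.20.0/24 subnet is just a placeholder.

    # Current public/cluster network settings
    grep -E 'public_network|cluster_network' /etc/ceph/ceph.conf

    # Confirm the OSD daemons have live connections on the new cluster subnet
    ss -tunap | grep ceph-osd | grep '10.0.20.'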
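
The snippet in result 13 is cut off, so the exact commands are not shown there; this is a generic sketch of the usual machine-id reset, run inside the source VM right before shutting it down and cloning.

    # Empty the machine-id so every clone generates its own on first boot
    # (and therefore presents a different DHCP client ID).
    echo -n > /etc/machine-id

    # On Debian-based guests, keep the D-Bus machine id pointing at the same file
    # so the two cannot diverge after regeneration.
    ln -sf /etc/machine-id /var/lib/dbus/machine-id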
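
For results 14 and 15, a sketch of checking and adjusting pool replication; 'mypool' is a placeholder pool name.

    # List size, min_size and the other per-pool parameters
    ceph osd pool ls detail

    # size 3 / min_size 2 keeps I/O going with one node down without allowing
    # single-copy writes; size 2 / min_size 1 works but is lab-only territory.
    ceph osd pool set mypool size 3
    ceph osd pool set mypool min_size 2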
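
For the vote arithmetic in results 19 and 20: three permanent nodes at 2 votes each plus one temporary node at 1 vote gives 7 expected votes, quorum is 4, so any two permanent nodes stay quorate. A sketch of where the per-node votes live follows; the node name and address are placeholders.

    # Show expected votes, quorum and the votes each node currently carries
    pvecm status

    # Votes are set per node in /etc/pve/corosync.conf, e.g.:
    #   node {
    #     name: pve1
    #     nodeid: 1
    #     quorum_votes: 2
    #     ring0_addr: 192.0.2.11
    #   }
    # Bump config_version in the totem section after editing so the change propagates.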