Search results

  1. gurubert

    Ceph mgr becomes unresponsive and switches to standby very frequently

    /etc/ceph/ceph.conf is a symlink to /etc/pve/ceph.conf on Proxmox nodes. /etc/pve is the FUSE-mounted clustered Proxmox config database. You should check why ceph.conf is not available. BTW: do not run with an even number of MONs. Add one (less risk) or remove one (same risk as with 4).
  2. gurubert

    Wrong CEPH Cluster remaining disk space value

    The target ratio of a pool is for the balancer and does not influence the total capacity calculation. You should lower the nearfull_ratio from 0.85 to 0.67: ceph osd set-nearfull-ratio 0.67
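    You can verify the thresholds before and after changing them; a minimal check, using the command shown above:

        # show the currently configured full/backfillfull/nearfull ratios
        ceph osd dump | grep ratio
        # lower the nearfull warning threshold
        ceph osd set-nearfull-ratio 0.67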
  3. gurubert

    Network ipv4 and ipv6

    The IPv4 packets come from the same MAC address as the IPv6 packets. Have you removed 94.130.55.108 from enp0s31f6?
  4. gurubert

    Network ipv4 and ipv6

    Is that at a hosting provider that maybe blocks unknown MAC addresses? Do you see traffic when you run "tcpdump -i enp0s31f6 -ne" on the Proxmox host?
  5. gurubert

    Network ipv4 and ipv6

    You should remove these IP addresses from the ethernet interface. Add a bridge (vmbr0) with the ethernet interface as a port and add the IPv6 address to the bridge interface. Attach the VM to the bridge and configure the IPv4 address inside the VM on its interface.
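    A minimal /etc/network/interfaces sketch of that layout; the IPv6 address and gateway are placeholders, not values from the thread:

        auto enp0s31f6
        iface enp0s31f6 inet manual

        auto vmbr0
        iface vmbr0 inet6 static
                address 2001:db8::2/64    # placeholder; use your allocated IPv6 address
                gateway fe80::1           # link-local gateway, common at hosting providers
                bridge-ports enp0s31f6
                bridge-stp off
                bridge-fd 0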
  6. gurubert

    Installing Ceph | Unable to correct problems, you have held broken packages.

    No need to remove the packages, you can downgrade them with: apt install ceph-common=18.2.4-pve3 libsqlite3-mod-ceph=18.2.4-pve3 librados2=18.2.4-pve3 and possibly any other package that is newer than Reef.
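    After downgrading, the packages can be held so a routine apt upgrade does not pull the newer versions back in; a sketch for the packages named above:

        apt-mark hold ceph-common libsqlite3-mod-ceph librados2
        # release the hold once you deliberately upgrade past Reef:
        apt-mark unhold ceph-common libsqlite3-mod-ceph librados2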
  7. gurubert

    Ceph recovery: Wiped out 3-node cluster with OSDs still intact

    You need to stop the MON and replace its store.db. Make sure that the MON runs on the same IP as in the old cluster. It's all in the Ceph documentation.
  8. gurubert

    Ceph recovery: Wiped out 3-node cluster with OSDs still intact

    As you have created a new cluster (id 9c9daac0-736e-4dc1-8380-e6a3fa7d2c23) and the OSDs on disk belong to the old cluster (id c3c25528-cbda-4f9b-a805-583d16b93e8f), you cannot just add them. Have you read through the procedure to recover the MON db from the copies stored on the OSDs...
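    A condensed sketch of that procedure from the Ceph documentation ("Recovery using OSDs"); the paths, MON name and keyring location are illustrative:

        ms=/root/mon-store; mkdir -p $ms
        # collect the cluster maps from the stopped OSDs
        for osd in /var/lib/ceph/osd/ceph-*; do
            ceph-objectstore-tool --data-path $osd --no-mon-config --op update-mon-db --mon-store-path $ms
        done
        # rebuild the MON store and move it into place on the (stopped) MON
        ceph-monstore-tool $ms rebuild -- --keyring /etc/ceph/ceph.client.admin.keyring
        mv /var/lib/ceph/mon/ceph-NAME/store.db /var/lib/ceph/mon/ceph-NAME/store.db.bak
        mv $ms/store.db /var/lib/ceph/mon/ceph-NAME/store.db
        chown -R ceph:ceph /var/lib/ceph/mon/ceph-NAME/store.db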
  9. gurubert

    move storage

    No, the NFS share is read from the node where the VM is currently located.
  10. gurubert

    Integrating a UPS via SNMP behind NAT

    NUT also has an SNMP driver; that is usually how this is done.
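    A minimal ups.conf entry for NUT's snmp-ups driver; the section name, IP and community string are placeholders:

        [rack-ups]
            driver = snmp-ups
            port = 192.168.1.50       # IP address of the UPS SNMP card
            community = public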
  11. gurubert

    Proxmox routing - BGP lab

    VyOS-1 already has a link to the Huawei router according to your network diagram. You just have to enable a BGP session between them.
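    A sketch of what such a session could look like in VyOS 1.4 syntax; the AS numbers and peer address are assumptions, not taken from the thread:

        set protocols bgp system-as 65001
        set protocols bgp neighbor 20.20.20.1 remote-as 65000
        set protocols bgp neighbor 20.20.20.1 address-family ipv4-unicast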
  12. gurubert

    How to Clone a VM and Assign a Different IP Address to the Clone?

    Either the VM uses a DHCP client, in which case the clone will get a different IP configuration from the DHCP server because it has a different MAC address, or you build a VM template that includes cloud-init and clone your VMs from that. Then you can set a static IP configuration from the Proxmox...
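    With a cloud-init enabled template, the clone's address can be set from the Proxmox CLI; the VM IDs and addresses below are placeholders:

        qm clone 9000 123 --name web-clone --full
        qm set 123 --ipconfig0 ip=192.168.1.123/24,gw=192.168.1.1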
  13. gurubert

    Help, Ceph Configuration Showing Warnings

    You have to have at least three nodes active before being able to do anything with Ceph storage.
  14. gurubert

    Proxmox routing - BGP lab

    Does the Huawei router have two interfaces both attached to the same IP network 20.20.20.0/24? That will not work.
  15. gurubert

    Where does Proxmox get the gateway IP?

    A host can only have one default gateway. If you add a second interface with its own IP subnet, just do not add a default gateway in its configuration.
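    An /etc/network/interfaces sketch for such a second interface; the name and subnet are placeholders:

        auto eth1
        iface eth1 inet static
                address 10.0.2.5/24
                # no "gateway" line here; the default gateway stays on the first interface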
  16. gurubert

    [SOLVED] Traffic separation and nics - how to plan the network?

    That's another important aspect. To establish a stable quorum, an odd number of nodes should be used.
  17. gurubert

    Empty root user password allows login with any password

    The root password field in /etc/shadow should never be empty. Put an x or an exclamation mark (anything that is not a valid hash value) in it to disable password-based logins for the account. This is nothing new: a blank password field has allowed login with any password since the dawn of Unix.
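    Instead of editing /etc/shadow by hand, passwd can lock the account; it prepends an exclamation mark to the password field:

        passwd -l root     # lock: prefixes the hash field with "!"
        passwd -S root     # show the account's password status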
  18. gurubert

    [SOLVED] Traffic separation and nics - how to plan the network?

    My 2c: Do not use the 100G network for the Ceph cluster network but instead for the public network. No need for a separate cluster network here. Use 2x 10G for Proxmox management and VM migration. Use the other 2x 10G for VM guest traffic. Use the remaining 1G ports for additional corosync...
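    In ceph.conf terms that layout means only a public_network entry on the 100G subnet; a sketch with an assumed subnet:

        [global]
            public_network = 10.10.100.0/24   # assumed 100G subnet
            # no cluster_network set, so replication also uses the public network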