Recent content by Alex_u-94

  1. Ceph issue

    I added a keyring once (it was replicated to all nodes): ceph auth get-or-create client.crash mon 'profile crash' mgr 'profile crash' -o /etc/pve/priv/ceph.client.crash.keyring. I checked the logs; an additional key was required for each node. On each node I executed the command corresponding to the...
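
    A minimal sketch of the per-node step hinted at above (the exact command is truncated), assuming the ceph-crash convention of one client.crash.$hostname key per node; the output path mirrors the one quoted above and is illustrative:

    ```sh
    # Run on each node: create a crash key tied to that node's hostname,
    # which the ceph-crash daemon can look up as client.crash.<hostname>.
    ceph auth get-or-create "client.crash.$(hostname -s)" \
        mon 'profile crash' mgr 'profile crash' \
        -o "/etc/pve/priv/ceph.client.crash.$(hostname -s).keyring"
    ```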
  2. Ceph issue

    That is, to restore a lost keyring, I need to run the following command on each node and restart the crash service? ceph auth get-or-create client.crash mon 'profile crash' mgr 'profile crash' -o /etc/pve/priv/ceph.client.crash.keyring
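
    If that is the procedure, the restart step could look like this; a sketch assuming the crash daemon runs as the standard ceph-crash systemd unit:

    ```sh
    # On each node, after the keyring is back in place:
    systemctl restart ceph-crash.service
    # Verify the daemon came up without auth errors.
    systemctl status ceph-crash.service --no-pager
    ```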
  3. Ceph issue

    I finally managed to eliminate the original cause of the Ceph failure. It was a hardware problem with network equipment (a switch plus some network cards). At the moment I have a problem with the crash service: its logs contain a keyring error: auth: unable to find a keyring on...
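
    Since the error message is truncated, a hedged first check is whether the key the daemon wants actually exists in the cluster's auth database:

    ```sh
    # Show the crash client key and its capabilities, if it exists.
    ceph auth get client.crash
    # List every crash-related entity (covers per-node client.crash.<host> keys too).
    ceph auth ls | grep crash
    ```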
  4. Strange issue with IPv6

    `ip link`, `ip addr`, `ip -6 route` won't help here. On 2 nodes I set up an OVS Bridge and assigned the IPv6 address to the bridge. With this setup it doesn't matter which physical interface is attached to the bridge; the output of those commands will be the same. `ip link`, `ip addr`, `ip -6 route` with Bridge + 1Gb interface (nodes have...
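
    A sketch of how to see that behaviour, assuming the OVS bridge is named vmbr0 (the name is illustrative): the IPv6 address and routes live on the bridge, so the kernel-side output is identical no matter which NIC is plugged into it.

    ```sh
    # OVS view: which physical ports are attached to the bridge.
    ovs-vsctl show
    # Kernel view: address and routes are bound to the bridge interface,
    # not to the physical NIC underneath it.
    ip -6 addr show dev vmbr0
    ip -6 route show dev vmbr0
    ```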
  5. Strange issue with IPv6

    Hello. I created a new topic (https://forum.proxmox.com/threads/ceph-issue.114670/) and updated the current issue. The system was not updated before the problem occurred. I am the only one responsible for the infrastructure, and no work or changes took place during the specified period (I was on...
  6. Ceph issue

    A few days ago, I had a VM hang on a cluster of 4 servers. 3 of the servers have 2 SSDs each for Ceph, 6 SSDs in total. At the time of the problem, version 7.1 was installed, with Ceph 16.2.7 running. There are 3 Ethernet interfaces on each node: two built into the motherboard (1Gb) and one 10Gb...
  7. Strange issue with IPv6

    Hello. I have a Proxmox cluster of four physical servers. The cluster has been working successfully for a long time. In addition to the Proxmox cluster, a CEPH cluster was also created. Everything worked fine until this morning. Today one of the nodes disconnected from CEPH. I ran a diagnostic...
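
    The diagnostic itself is cut off above; as an assumption about typical first-pass checks when one node drops out of Ceph (not necessarily what was run here):

    ```sh
    # Overall cluster state and which daemons are down.
    ceph -s
    ceph health detail
    # Which OSDs (and hosts) are out or down.
    ceph osd tree
    ```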
  8. ceph SSD and HDD pools

    Please tell me, can I change the rule on an already working pool without data loss? The default rule is used for the pool. All storage media are SSDs. I need to create a dedicated pool for cold data; HDDs were added for this purpose and rules were created. Unfortunately I have VMs that cannot be...
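
    A sketch of the usual device-class approach, assuming replicated pools and OSDs that already report ssd/hdd device classes; the rule and pool names are illustrative. Changing a pool's rule does not lose data, but it does trigger rebalancing:

    ```sh
    # Create per-device-class replicated rules.
    ceph osd crush rule create-replicated replicated_ssd default host ssd
    ceph osd crush rule create-replicated replicated_hdd default host hdd
    # Point the existing (SSD) pool at the SSD-only rule; Ceph migrates
    # the data in place, with no data loss expected.
    ceph osd pool set <pool-name> crush_rule replicated_ssd
    ```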