Search results

  1. [SOLVED] After the last update, nodes that are not directly connected are grey

    Here's the output of the first node: Node-1: proxmox-ve: 6.2-1 (running kernel: 5.4.44-2-pve) pve-manager: 6.2-6 (running version: 6.2-6/ee1d7754) pve-kernel-5.4: 6.2-4 pve-kernel-helper: 6.2-4 pve-kernel-5.3: 6.1-6 pve-kernel-5.0: 6.0-11 pve-kernel-5.4.44-2-pve: 5.4.44-2...
  2. [SOLVED] After the last update, nodes that are not directly connected are grey

    Hello all, after the last update of one of our 3-node CEPH clusters, the nodes that are not directly connected are grey. When I look at the storage or other VM details, I can see that the cluster is connected, and pvecm status also says the quorum for all nodes is OK. Does someone have any idea what's...
  3. Is it possible to add support for the r8168 NIC to Proxmox 6.x?

    I need to add support for a Realtek onboard NIC through the r8168-dkms kernel module. Is that possible? I already tried to add this via apt, but no linux-headers are installed by default for the latest kernel 5.3.13-3. What is the best way to get this done, or is support for this...
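    On PVE 6.x the usual route is to install the headers matching the running PVE kernel and then the DKMS package. A hedged sketch (package names assume the Debian Buster repositories, with non-free enabled for r8168-dkms):

    ```shell
    # Install headers matching the running PVE kernel so DKMS can build modules.
    apt update
    apt install pve-headers-$(uname -r)

    # Install the out-of-tree Realtek driver; DKMS rebuilds it on kernel upgrades.
    apt install r8168-dkms

    # Optionally blacklist the in-tree r8169 driver so r8168 binds to the NIC.
    echo "blacklist r8169" > /etc/modprobe.d/blacklist-r8169.conf
    update-initramfs -u
    ```

    After a reboot, `lspci -k` should show which driver is actually bound to the NIC.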
  4. Cannot create a new OSD on a new cluster member

    OK, that was it. Now I can add osd.4. Thanks a lot!
  5. Cannot create a new OSD on a new cluster member

    I cannot find anything in this directory. What text or name should I search for?
  6. Cannot create a new OSD on a new cluster member

    Hello all, I've already reinstalled a cluster member and added it first to the PMX cluster and then to the Ceph cluster. That all went well. But now I want to add a newly inserted HDD as an OSD into the pool, and I get an error: create OSD on /dev/sdc (bluestore) wipe disk/partition: /dev/sdc...
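    When OSD creation fails on a disk that held data before, a common cause is leftover partition, LVM, or Ceph signatures. A hedged sketch of cleaning the disk and retrying (the device name /dev/sdc is taken from the post; this is destructive and erases the disk):

    ```shell
    # DESTRUCTIVE: wipes /dev/sdc completely. Run only against the disk to reuse.
    ceph-volume lvm zap /dev/sdc --destroy   # remove old LVM/Ceph signatures

    # Then retry the OSD creation through Proxmox's own tooling:
    pveceph osd create /dev/sdc
    ```

    Whether this matches the error in the thread is an assumption; the truncated message only shows the wipe step failing.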
  7. CEPH OSD is still visible although it has been removed

    I found the commands ceph osd rm &lt;osd-id&gt; and ceph osd crush rm &lt;osd-id&gt;, and they do what I want. The OSD is no longer visible.
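    For context, the two commands from the post are part of a longer removal sequence. A hedged sketch of the full cleanup for an already-stopped OSD (replace &lt;osd-id&gt; with the numeric id):

    ```shell
    # Full removal sequence for a stopped OSD (sketch; ids are placeholders).
    ceph osd out <osd-id>            # stop data being placed on it, let it drain
    ceph osd crush rm osd.<osd-id>   # remove it from the CRUSH map
    ceph auth del osd.<osd-id>       # delete its authentication key
    ceph osd rm <osd-id>             # remove it from the OSD map
    ```

    Skipping the `ceph auth del` step can leave a stale keyring entry behind even when the OSD no longer shows up.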
  8. CEPH OSD is still visible although it has been removed

    Hello all, on one of my VE hosts I want to remove the OSD, and later also the complete VE host from the cluster, to reorganize it. I stopped and removed the OSD on that host, but it is still visible as an outdated OSD. I can also see the VE host in the OSD menu, but without the OSD itself...
  9. Got messages [rbd error: rbd: listing images failed: (2) No such file or directory (500)] in Ceph pool after last upgrade

    Perfect! That was it. Thanks a lot. Now I can see the whole content of the storage again. Regards, Hans-Peter
  10. Got messages [rbd error: rbd: listing images failed: (2) No such file or directory (500)] in Ceph pool after last upgrade

    The name of the storage differs from the name of the pool itself. I hope that's not relevant. Here is the output.
  11. Got messages [rbd error: rbd: listing images failed: (2) No such file or directory (500)] in Ceph pool after last upgrade

    I cannot see the content of the Ceph pool storage that is available on all Ceph cluster members, but I can see the Summary of that storage. On other clusters I can see the images of all VMs stored in such a Ceph pool storage. When I click on the Content tab in PVE, I get the described message: [rbd...
  12. Got messages [rbd error: rbd: listing images failed: (2) No such file or directory (500)] in Ceph pool after last upgrade

    I'm not sure that the failure really started with the last upgrade. I can see the overview of the Ceph pool, but I cannot see its content; I get the message: [rbd error: rbd: listing images failed: (2) No such file or directory (500)]. It's possible that this failure has existed for a longer time...
  13. Got messages [rbd error: rbd: listing images failed: (2) No such file or directory (500)] in Ceph pool after last upgrade

    Hello all, after upgrading all Ceph cluster members to the latest 6.1, I get the message [rbd error: rbd: listing images failed: (2) No such file or directory (500)] when looking into the Ceph pool. Consequently, I cannot migrate a VM to another node in the cluster, neither online nor offline...
  14. TLS on internal SMTP set to "smtpd_tls_security_level=none"

    Hello, you're right. After two rounds of updates this option no longer exists in master.cf in the section for port 26, and TLS now also runs on the internal connections. :) Thanks
  15. TLS on internal SMTP set to "smtpd_tls_security_level=none"

    Hello all, on the internal SMTP port 26, TLS is set to "none" by default (smtpd_tls_security_level=none). Is there a way to set this permanently to "may"? I've set it by hand to "may" in postfix's master.cf, but when I change something in the PMG web configuration this will be...
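    For reference, the hand-edited workaround described above would look roughly like this in postfix's master.cf (a sketch; the exact service entry for port 26 may differ between PMG versions):

    ```
    # /etc/postfix/master.cf -- internal SMTP service on port 26 (sketch)
    26      inet  n       -       -       -       -       smtpd
        -o smtpd_tls_security_level=may
    ```

    Note that PMG regenerates postfix's configuration from its own templates, so a direct edit is overwritten on the next configuration change; a persistent change would have to go through the template mechanism instead (the exact template path varies by version).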
  16. PVE won't start after a crash of a VM and host in the night

    Hello, after renaming "/var/lib/pve-cluster/config.db" to "/var/lib/pve-cluster/config.db.old", PVE starts normally and all VMs are running fine. Does someone have any idea how this could happen? Regards, Hans-Peter Straub
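    A hedged sketch of the recovery step described above, with the service stop/start around it (back up the database first; on a cluster node, pmxcfs can then resync /etc/pve from the other nodes once quorum is reached, which is why this works):

    ```shell
    # Stop the cluster filesystem before touching its backing database.
    systemctl stop pve-cluster

    # Move the (corrupted) database aside; pmxcfs recreates it on start.
    mv /var/lib/pve-cluster/config.db /var/lib/pve-cluster/config.db.old

    # Restart; on a cluster member the config state is resynced via corosync.
    systemctl start pve-cluster
    ```

    On a standalone node this would lose the local configuration, so keeping the .old file around until everything is verified is essential.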
  17. PVE won't start after a crash of a VM and host in the night

    Hello again, I found the following log entries: Jul 20 08:46:49 virtfarm-stpgp-3 pmxcfs[2147]: [database] crit: found entry with duplicate name (inode = 0000000015AA3F02, parent = 000000000000000B, name = '139.conf') Jul 20 08:46:49 virtfarm-stpgp-3 pmxcfs[2147]: [database] crit: DB load...
  18. PVE won't start after a crash of a VM and host in the night

    Hello all, after a crash of a VM in the night, the Proxmox host won't start in the cluster. What I see is that /etc/pve isn't mounted. Does anybody have an idea what I can do?
  19. Reboot of all cluster nodes after losing network/quorum

    Hello all, does a VE cluster, or all of its nodes, reboot automatically after losing the network connection on which the cluster quorum runs? I updated a router, and at exactly that moment the complete cluster, with a configured Ceph filesystem on a separate network, rebooted...