Search results

  1. gurubert

    Proxmox cluster with shared Ceph partition in VMs

    Are these two separate CephFS volumes? What does "ceph fs status" show? If that's the case, you need to specify the fs name with the fs= option to the mount command.
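
    For illustration, a kernel mount with an explicit fs name might look like this (the monitor address, client name and fs name are placeholders):

        # monitor address, client name and fs name are examples only
        mount -t ceph 192.0.2.1:6789:/ /mnt/cephfs2 -o name=admin,fs=cephfs2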
  2. gurubert

    OSD ghost

    Ceph is not able to run dual stack. You have to either use IPv4 or IPv6 on both networks.
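
    As a sketch, both networks declared with the same address family in ceph.conf (the subnets are placeholders):

        [global]
        # example subnets; use your real networks, one address family on both
        public_network  = 192.0.2.0/24
        cluster_network = 198.51.100.0/24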
  3. gurubert

    Ceph total storage/useable

    A failure domain of host will not allow two copies on the same host. All PGs of the pool will be degraded in such a situation, as Ceph is not able to find a location for the fourth copy.
  4. gurubert

    Ceph total storage/useable

    No, the size of the pool defines the number of copies it creates (when being replicated, not erasure coded). It has nothing to do with the number of nodes. A size=3 pool will distribute the three copies over four nodes randomly. If an OSD is down the missing third copy will be recreated from...
  5. gurubert

    Ceph total storage/useable

    Your datastore01 pool has a size of 4. It stores four copies of each object, hence its usable capacity is only 25% of the total capacity.
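
    A quick worked example, assuming 100 TiB of raw capacity:

        usable = raw / size = 100 TiB / 4 = 25 TiB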
  6. gurubert

    Multipath Ceph storage network

    This is Ethernet and not Fibre-Channel. You cannot have two separate interfaces in the same VLAN. Create a stack out of the two switches (or MLAG or whatever the vendor calls it) and then a link aggregation group for each Ceph node with one port from each switch. On the Ceph nodes create a...
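
    A minimal /etc/network/interfaces sketch for such a bond, assuming ports enp1s0 and enp2s0 and LACP configured on the switch side:

        # example port names; adjust to your NICs
        auto bond0
        iface bond0 inet manual
            bond-slaves enp1s0 enp2s0
            bond-mode 802.3ad
            bond-miimon 100
            bond-xmit-hash-policy layer3+4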
  7. gurubert

    Ceph RBD Storage Shrinking Over Time – From 10TB up to 8.59TB

    This is for the rule; it is not the size of the pool.
  8. gurubert

    Ceph RBD Storage Shrinking Over Time – From 10TB up to 8.59TB

    Why are there 8TB data stored in device_health_metrics? Do you store RBD data in this pool? You should create a separate pool for RBD data. https://pve.proxmox.com/wiki/Ceph_Pacific_to_Quincy#Guest_images_are_stored_on_pool_device_health_metrics
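
    Creating a dedicated RBD pool could look like this (pool name and PG count are placeholders):

        # example pool name and PG count
        ceph osd pool create rbd-data 128
        ceph osd pool application enable rbd-data rbd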
  9. gurubert

    Can I run vmware in proxmox?

    Can you ping the IP address? What does "curl -v http://ip_address" show?
  10. gurubert

    Proxmox Ceph

    This may be because this host is not part of the Proxmox cluster.
  11. gurubert

    Ceph mgr become unresponsive and switching to standby very frequently

    /etc/ceph/ceph.conf is a symlink to /etc/pve/ceph.conf on Proxmox nodes. /etc/pve is the FUSE-mounted clustered Proxmox config database. You should check why ceph.conf is not available. BTW: do not run with an even number of MONs. Add one (less risk) or remove one (equal risk as with 4).
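
    To check the symlink and whether the clustered config filesystem is mounted, something like:

        ls -l /etc/ceph/ceph.conf   # should point to /etc/pve/ceph.conf
        findmnt /etc/pve            # shows whether the pmxcfs FUSE mount is up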
  12. gurubert

    Wrong CEPH Cluster remaining disk space value

    The target ratio of a pool is for the balancer and does not influence the total capacity calculation. You should set the nearfull_ratio from 0.85 to 0.67: ceph osd set-nearfull-ratio 0.67
  13. gurubert

    Network ipv4 and ipv4

    The IPv4 packets come from the same MAC address as the IPv6 packets. Have you removed 94.130.55.108 from enp0s31f6?
  14. gurubert

    Network ipv4 and ipv4

    Is that at a hosting provider that maybe blocks unknown MAC addresses? Do you see traffic when you run "tcpdump -i enp0s31f6 -ne" on the Proxmox host?
  15. gurubert

    Network ipv4 and ipv4

    You should remove these IP addresses from the Ethernet interface. Add a bridge (vmbr0) with the Ethernet interface as a port and add the IPv6 address to the bridge interface. Attach the VM to the bridge and configure the IPv4 address inside the VM on its interface.
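
    A sketch of such a bridge in /etc/network/interfaces, with placeholder addresses:

        # example addresses (documentation prefix); substitute your own
        auto vmbr0
        iface vmbr0 inet6 static
            address 2001:db8::2/64
            gateway 2001:db8::1
            bridge-ports enp0s31f6
            bridge-stp off
            bridge-fd 0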
  16. gurubert

    Installing Ceph | Unable to correct problems, you have held broken packages.

    No need to remove the packages, you can downgrade them with: apt install ceph-common=18.2.4-pve3 libsqlite3-mod-ceph=18.2.4-pve3 librados2=18.2.4-pve3 and possibly any other package that is newer than Reef.
  17. gurubert

    Ceph recovery: Wiped out 3-node cluster with OSDs still intact

    You need to stop the MON and replace its store.db. Make sure that the MON runs on the same IP as in the old cluster. It's all in the Ceph documentation.
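
    The rough shape of that swap, assuming mon ID pve1 and a store.db recovered per the Ceph disaster-recovery docs (both are placeholders):

        systemctl stop ceph-mon@pve1                   # pve1 = example mon ID
        mv /var/lib/ceph/mon/ceph-pve1/store.db /var/lib/ceph/mon/ceph-pve1/store.db.bak
        cp -a /path/to/recovered/store.db /var/lib/ceph/mon/ceph-pve1/   # placeholder source path
        chown -R ceph:ceph /var/lib/ceph/mon/ceph-pve1
        systemctl start ceph-mon@pve1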