Search results

  1. gurubert

    Info ceph

    Some placement groups are currently deep scrubbing. This is normal and helps with data consistency.
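    To illustrate, a quick way to confirm that only routine scrubbing is going on (assuming a standard Ceph CLI on a monitor node):

        # overall cluster health; active scrubs show up in the pg status lines
        ceph -s
        # placement group summary; look for "active+clean+scrubbing+deep"
        ceph pg stat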
  2. gurubert

    No VMs with CEPH Storage will start after update to 8.3.2 and CEPH Squid

    This would define the availability zone at the root of the CRUSH map tree and will surely not work.
  3. gurubert

    Can't figure out why proxmox crashes.

    The log lines you posted from the Proxmox host contain dhclient entries, which is unusual. Can you show more details about the "crash"? My suspicion is that dhclient on the host messes up the network config, which makes the host unreachable but does not crash it.
  4. gurubert

    Can't figure out why proxmox crashes.

    OK, but then they need to run their own DHCP client, not the Proxmox host. You have not posted log lines from the actual crash. Please set up remote syslogging or take a screen capture of the crash message.
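    As a rough sketch of the remote-syslog suggestion (assuming rsyslog on the Proxmox host; the log host address 192.168.1.10 is a placeholder):

        # /etc/rsyslog.d/remote.conf on the Proxmox host: forward everything via UDP
        *.* @192.168.1.10:514
        # then reload rsyslog
        systemctl restart rsyslog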
  5. gurubert

    Can't figure out why proxmox crashes.

    Why is there a DHCP client running on your Proxmox server?
  6. gurubert

    No VMs with CEPH Storage will start after update to 8.3.2 and CEPH Squid

    Does Pool 6 still have size=3? It is really not recommended to run OSDs on just two nodes.
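    For reference, the replication settings of a pool can be checked like this (the pool name is a placeholder):

        # number of replicas, and minimum replicas required for IO
        ceph osd pool get <poolname> size
        ceph osd pool get <poolname> min_size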
  7. gurubert

    Issues creating CEPH EC pools using pveceph command

    Why do you want to create a Ceph pool (or even use Ceph at all) in a single Proxmox server?
  8. gurubert

    NICs within multiple bonds

    This will not work. A physical interface can only be in one bonding interface.
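    As a minimal sketch (interface names eno1/eno2 are placeholders), a valid layout in /etc/network/interfaces puts each NIC into exactly one bond:

        auto bond0
        iface bond0 inet manual
            bond-slaves eno1 eno2
            bond-miimon 100
            bond-mode 802.3ad

    Each physical interface may appear in the bond-slaves line of at most one bond.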
  9. gurubert

    Cloning Customized VM and LXCs: Prepping for Use (MachineID, MAC, etc.)

    1. There is an option to create new MAC addresses when cloning a VM template.
    2. Remove /etc/machine-id as a last step before creating the VM template. A new machine ID will be created at first boot.
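    A short sketch of step 2, run inside the VM right before converting it to a template (assuming a systemd-based guest):

        # remove (or empty) the machine ID; a fresh one is generated on first boot of each clone
        rm /etc/machine-id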
  10. gurubert

    [PLEASE HELP]I can't create Ceph OSD.

    Does the kernel log any IO errors at the same time for this device?
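    One way to check (replace sdX with the actual device):

        # kernel ring buffer, filtered for the device and IO errors
        dmesg -T | grep -iE 'sdX|I/O error'
        # or the persistent kernel log from the journal
        journalctl -k | grep -i sdX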
  11. gurubert

    Ceph + Cloud-Init Troubleshooting

    cloud-init configures an account and the network settings inside a newly cloned VM. What is the connection to CephFS or RBD that is not working for you?
  12. gurubert

    [TUTORIAL] FabU: can I use Ceph in a _very_ small cluster?

    Yes, because the assumed failure zone is the host. If just an OSD fails, it should be replaced. In small clusters the time to replace a failed disk is more critical than in larger clusters, where the data is more easily replicated to the remaining OSDs on the other nodes.
  13. gurubert

    [TUTORIAL] FabU: can I use Ceph in a _very_ small cluster?

    There is a neat calculator at https://florian.ca/ceph-calculator/ that will show you how to set the nearfull ratio for a specific number of disks and nodes.
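    Once the calculator has produced a value, it can be applied roughly like this (0.67 is only an example ratio):

        # set the nearfull warning threshold cluster-wide
        ceph osd set-nearfull-ratio 0.67
        # verify the current ratios
        ceph osd dump | grep ratio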
  14. gurubert

    Ceph Ruined My Christmas

    Check all interfaces to see whether the MTU has been set back to 1500. Do not mix MTUs in the same Ethernet segment.
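    A quick way to spot mismatched MTUs, for example:

        # list every interface with its current MTU; all members of one segment should match
        ip -o link | awk '{print $2, $5}'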
  15. gurubert

    Ceph min_size 1 for Elasticsearch / MySQL Clusters

    All these applications already replicate their data at the application level. They do not need a storage system that does this. Let these VMs run on local storage and you will get way better performance than Ceph with size=1.
  16. gurubert

    Using Ceph as Storage for K8S

    https://rook.io/ can also be used to integrate an external Ceph cluster into Kubernetes.
  17. gurubert

    Procedure for cycling CEPH keyrings cluster-wide

    IMHO this question should be asked on the Ceph mailing list.
  18. gurubert

    Migrating Ceph from 3 small OSDs to 1 large OSD per host?

    If you keep the smaller disks, the larger ones will get 8 times the IOPS, because disk size is one of the factors in Ceph's algorithm for distributing the data. The NVMe disks may be able to handle that.
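    The effect can be seen in the CRUSH weights, which are derived from capacity (an 8 TB OSD gets roughly eight times the weight, and therefore data and IO, of a 1 TB OSD):

        # show per-OSD weight, utilisation and PG count
        ceph osd df tree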
  19. gurubert

    Migrating Ceph from 3 small OSDs to 1 large OSD per host?

    Usually Ceph will move the data for you. Before you bring down a host, drain and remove its OSDs. After swapping the disks, create a new OSD on the new NVMe and Ceph will happily use it. After doing this on all nodes, all your data will be on the new disks.
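    A rough sketch of that per-node cycle (OSD IDs and the device path are placeholders):

        # mark the old OSDs out and let Ceph rebalance; wait for HEALTH_OK
        ceph osd out 0 1 2
        ceph -s
        # stop and remove a drained OSD (repeat per OSD)
        systemctl stop ceph-osd@0
        pveceph osd destroy 0
        # after installing the new NVMe, create the replacement OSD on it
        pveceph osd create /dev/nvme0n1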