Search results

  1. gurubert

    [TUTORIAL] FabU: can I use Ceph in a _very_ small cluster?

    It really depends. If the cluster is large enough to spread over several racks, with the rack as the failure zone, you could argue that you do not need redundant top-of-rack switches: the loss of a whole rack can then be easily tolerated. But yes, in small clusters network redundancy is a must.
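
    A minimal sketch of what such a rack-level failure zone looks like in CRUSH terms (the bucket and rule names are assumptions, not from the post):

    ```
    # create a rack bucket and move a host under it (names are examples)
    ceph osd crush add-bucket rack1 rack
    ceph osd crush move rack1 root=default
    ceph osd crush move node1 rack=rack1
    # replicated rule that places each copy in a different rack
    ceph osd crush rule create-replicated replicated_rack default rack
    ```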
  2. gurubert

    [SOLVED] SDN VXLAN requirements switch

    VXLAN packets are transported via UDP. Switches and other network hops do not need any special configuration.
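
    For illustration, a plain iproute2 VXLAN tunnel makes the UDP encapsulation visible (interface names and addresses are made up):

    ```
    # VXLAN is carried in UDP (IANA port 4789), so intermediate switches
    # only see ordinary UDP traffic; keep the underlay MTU ~50 bytes larger
    ip link add vxlan100 type vxlan id 100 dstport 4789 \
        local 10.0.0.1 remote 10.0.0.2 dev eth0
    ip link set vxlan100 up
    ```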
  3. gurubert

    Pve Ceph Cluster and Snapshots

    No, the three-fold replication refers to the pool and the objects in it. A Ceph cluster can have thousands of OSDs. The object copies are distributed evenly across them.
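
    To illustrate the point, the replication factor is a per-pool property (the pool name is an example):

    ```
    # show the replication factor of one pool
    ceph osd pool get rbd size
    # -> size: 3
    ```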
  4. gurubert

    Pve Ceph Cluster and Snapshots

    Very low latency is hard to achieve with Ceph, especially with only 3 nodes. Why not roll out OSDs on all 9 nodes? Snapshots are no problem at all.
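
    A sketch of both suggestions, assuming a spare disk /dev/sdb on each node and VM 100 (both placeholders):

    ```
    # add an OSD on every one of the 9 nodes
    pveceph osd create /dev/sdb
    # snapshots on an RBD-backed VM work as usual
    qm snapshot 100 before-upgrade
    ```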
  5. gurubert

    Discrepancy in power consumption between Server 2022 and Server 2025

    Which CPU type do the VMs have? The generic kvm64, host, or something in between?
  6. gurubert

    Slow restore speed on Ceph

    A 1G network interface has a maximum bandwidth of around 100-120 MB/s. Given that, even 60 MB/s seems fast for writes, at least to me.
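
    The back-of-the-envelope math behind that figure (the overhead estimate is approximate):

    ```
    # 1 Gbit/s = 10^9 bits/s; divide by 8 bits per byte:
    #   1,000,000,000 / 8 = 125,000,000 B/s = 125 MB/s raw
    # Ethernet/IP/TCP framing costs a few percent, leaving ~110-118 MB/s
    ```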
  7. gurubert

    Slow restore speed on Ceph

    Writing to Ceph means three network trips instead of just one for LVM. A 1G network is way too slow for anything related to shared network storage.
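
    A rough sketch of why replication multiplies the cost, assuming one shared 1 Gbit/s link and a size=3 pool:

    ```
    # client  -> primary OSD      (trip 1)
    # primary -> replica OSD 2    (trip 2)
    # primary -> replica OSD 3    (trip 3)
    # upper bound on client writes over a single shared 1G link:
    #   ~125 MB/s / 3 trips ≈ 40 MB/s, before any latency effects
    ```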
  8. gurubert

    Info ceph

    Some placement groups are currently deep scrubbing. This is normal and helps with data consistency.
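
    To watch it, something like this works (a sketch; output details vary by release):

    ```
    # scrubbing PGs show up in the cluster status
    ceph -s
    # list PGs with their current state
    ceph pg dump pgs_brief
    ```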
  9. gurubert

    No VMs with CEPH Storage will start after update to 8.3.2 and CEPH Squid

    This would define the availability zone at the root of the CRUSH map tree and will surely not work.
  10. gurubert

    Can't figure out why proxmox crashes.

    The log lines you posted from the Proxmox host contain dhclient entries, which is unusual. Can you show more details about the "crash"? My suspicion is that dhclient on the host messes up the network config, which makes the host unreachable but does not crash it.
  11. gurubert

    Can't figure out why proxmox crashes.

    OK, but then they need to run their own DHCP client, not the Proxmox host. You have not posted log lines from the actual crash. Please set up remote syslogging or take a screen capture of the crash message.
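
    A minimal remote-syslog sketch with rsyslog (the collector address 192.0.2.10 is a placeholder):

    ```
    # /etc/rsyslog.d/remote.conf on the crashing host:
    # forward everything to a remote collector over UDP (@@ would be TCP)
    *.* @192.0.2.10:514
    ```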
  12. gurubert

    Can't figure out why proxmox crashes.

    Why is there a DHCP client running on your Proxmox server?
  13. gurubert

    No VMs with CEPH Storage will start after update to 8.3.2 and CEPH Squid

    Does Pool 6 still have size=3? It is really not recommended to run OSDs on just two nodes.
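
    Checking this is quick (a sketch):

    ```
    # size and min_size are listed per pool
    ceph osd pool ls detail
    ```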
  14. gurubert

    Issues creating CEPH EC pools using pveceph command

    Why do you want to create a Ceph pool (or even use Ceph at all) in a single Proxmox server?
  15. gurubert

    NICs within multiple bonds

    This will not work. A physical interface can only be a member of one bond.
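
    For reference, this is how a bond is declared in /etc/network/interfaces on Proxmox; each NIC is listed under exactly one bond (interface names are examples):

    ```
    auto bond0
    iface bond0 inet manual
        bond-slaves eno1 eno2
        bond-miimon 100
        bond-mode 802.3ad
    ```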
  16. gurubert

    Cloning Customized VM and LXCs: Prepping for Use (MachineID, MAC, etc.)

    1. There is an option to create new MAC addresses when cloning a VM template.
    2. Remove /etc/machine-id as the last step before creating the VM template. A new machine ID will be created at first boot.
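
    A sketch of that last prep step, run inside the VM right before shutting it down for templating:

    ```
    # remove the machine ID so each clone generates a fresh one at first boot
    rm -f /etc/machine-id
    # some distributions keep a second copy for dbus; remove that too
    rm -f /var/lib/dbus/machine-id
    ```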
  17. gurubert

    [PLEASE HELP] I can't create Ceph OSD.

    Does the kernel log any IO errors for this device at the same time?
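
    To check (the device name sdb is a placeholder):

    ```
    # kernel ring buffer, filtered for the device
    dmesg | grep -i sdb
    # or via journald
    journalctl -k | grep -i 'I/O error'
    ```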
  18. gurubert

    Ceph + Cloud-Init Troubleshooting

    cloud-init configures an account and the network settings inside a newly cloned VM. What is the connection to CephFS or RBD that is not working for you?
  19. gurubert

    [TUTORIAL] FabU: can I use Ceph in a _very_ small cluster?

    Yes, because the assumed failure zone is the host. If just an OSD fails, it should be replaced. In small clusters the time to replace a failed disk is more critical than in larger clusters, where the data can more easily be replicated to the OSDs on the remaining nodes.
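
    For context, replacing a failed OSD on Proxmox looks roughly like this (OSD id 12 and /dev/sdb are placeholders):

    ```
    # take the dead OSD out of the cluster and remove it
    ceph osd out 12
    pveceph osd destroy 12
    # create the replacement on the new disk
    pveceph osd create /dev/sdb
    ```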