Search results

  1. gurubert

    New Ceph cluster advice

    There is enough RAM available for 9 SSD OSDs in each host. The usual rule of thumb is about 5 GB of RAM per OSD.
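    As a rough sketch of that rule of thumb: 9 OSDs x 5 GB = 45 GB of RAM reserved for Ceph alone. The per-OSD BlueStore memory target (default 4 GiB, the rest of the 5 GB being overhead) can be checked with:

      ceph config get osd osd_memory_target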
  2. gurubert

    CEPH unable to create VM - () TASK ERROR: unable to create VM 100 - rbd error: 'storage-CEPH-Pool-1'-locked command timed out - aborting

    Your placement groups are not active, meaning no data transfer (read or write) can take place. You seem to have only one host (pve01) with OSDs. With the default replication size of 3 and the default failure domain "host", Ceph is unable to place the second and third copies. You need to add at least...
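    To confirm this on the cluster, the pool's replication settings and the OSD distribution across hosts can be checked like this (pool name taken from the thread title):

      ceph osd pool get storage-CEPH-Pool-1 size      # default: 3 copies
      ceph osd pool get storage-CEPH-Pool-1 min_size  # default: 2 copies required for I/O
      ceph osd tree                                   # shows which hosts actually carry OSDs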
  3. gurubert

    [SOLVED] Unexpected Behavior with Multiple Network Interfaces

    The default route (gateway 192.168.1.200) goes via vmbr0, so when you pull the cable from enp87s0 the host is only reachable from within its LAN 192.168.1.0/24.
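    A quick way to verify which interface carries the default route (addresses as described in the thread):

      ip route show default
      # expected here: default via 192.168.1.200 dev vmbr0 ...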
  4. gurubert

    Was kann ich an Ceph Performance erwarten

    Max 67,000 IOPS or 3.2 GB/s doesn't look bad at all. That is the aggregate performance you can expect across all VMs combined. A single VM will rather see the result with 4K and 1 I/O thread.
  5. gurubert

    Was kann ich an Ceph Performance erwarten

    I hope there was nothing important on the VM image vm-113-disk-0… For a 4K block size and 4 threads this looks quite OK. Try setting the iodepth to 128, and after that set the bs to 4M with an iodepth of 8 or 16.
  6. gurubert

    Was kann ich an Ceph Performance erwarten

    Directly on the Proxmox node. Fio also has the ability to talk to RBDs.
  7. gurubert

    Was kann ich an Ceph Performance erwarten

    Run the test with fio directly on an RBD for comparison. And experiment with the --bs and --iodepth parameters.
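    A minimal sketch of such a test with fio's rbd ioengine (requires an fio build with RBD support; pool and image names are placeholders, the values follow the suggestions above):

      # 4K random writes with a deep queue, directly against the RBD image
      fio --name=rbd-4k --ioengine=rbd --pool=<pool> --rbdname=<image> \
          --rw=randwrite --bs=4k --iodepth=128 --direct=1 --runtime=60 --time_based

      # large sequential writes with a shallower queue
      fio --name=rbd-4m --ioengine=rbd --pool=<pool> --rbdname=<image> \
          --rw=write --bs=4M --iodepth=16 --direct=1 --runtime=60 --time_based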
  8. gurubert

    Ceph below min_size why not read-only?

    Ceph is all about data consistency. It is not guaranteed that the single remaining copy is a valid copy. This can only be assured when there is a "majority" of copies available that are all the same. It's basically the same principle as with the quorum of the MONs. BTW: It is not the number...
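    The settings in question can be inspected per pool (pool name is a placeholder):

      ceph osd pool get <pool> size      # number of replicas, default 3
      ceph osd pool get <pool> min_size  # replicas required before I/O is allowed, default 2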
  9. gurubert

    ceph active+remapped+backfill_toofull recovery problem

    This is because the data will be moved from other OSDs. Give Ceph time to do it.
  10. gurubert

    ceph active+remapped+backfill_toofull recovery problem

    Set the reweight value for the OSDs back to 1.0.
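    For example (OSD IDs are placeholders; the current values are visible in the REWEIGHT column of ceph osd tree):

      ceph osd tree
      ceph osd reweight <osd-id> 1.0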
  11. gurubert

    Proxmox + Ceph - kernel: libceph: osd3 (1)192.168.1.212:6811 bad crc/signature

    KRBD is an option in the storage configuration of Proxmox. It can be enabled or disabled on the pool and affects all VMs stored there. You would need to create a new pool in Ceph and a new storage on that pool in Proxmox without the KRBD setting. After that you could migrate the VM image to...
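    A sketch of what the additional storage entry in /etc/pve/storage.cfg could look like (storage and pool names are assumptions):

      rbd: ceph-vm-userspace
            pool ceph-vm-userspace
            content images
            krbd 0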
  12. gurubert

    Proxmox + Ceph - kernel: libceph: osd3 (1)192.168.1.212:6811 bad crc/signature

    Do you happen to use KRBD for the VMs? With LVM on top of a mapped RBD? What is the load of the machines and how saturated is the network? Have you tried to switch to qemu+rbd (userspace RBD) for this VM?
  13. gurubert

    Routing - 1 Nic - 2 Public IPs on Different Subnet

    You need to add the physical port eth0 to the bridge vmbr0. Without that there is no way for the network packets to reach the Internet.
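    In /etc/network/interfaces that typically looks like this (addresses are placeholders):

      auto vmbr0
      iface vmbr0 inet static
            address 203.0.113.10/24
            gateway 203.0.113.1
            bridge-ports eth0
            bridge-stp off
            bridge-fd 0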
  14. gurubert

    Proxmox + Ceph - kernel: libceph: osd3 (1)192.168.1.212:6811 bad crc/signature

    Have you enabled compression on the RBD pool for the "vm-compression" storage? https://docs.ceph.com/en/latest/rados/configuration/bluestore-config-ref/#inline-compression
  15. gurubert

    [SOLVED] Ubuntu 24.04 not supported??

    Why would a virtualized Ubuntu 24.04 not be supported by KVM / Proxmox? If you made a snapshot of the VM before the upgrade you could roll back to it.
  16. gurubert

    Proxmox + Ceph - kernel: libceph: osd3 (1)192.168.1.212:6811 bad crc/signature

    Weird things can happen when the MTU sizes of the involved interfaces do not match.
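    One way to check this is to compare the MTU of all involved interfaces and to send a non-fragmenting ping just below the expected MTU (9000-byte jumbo frames assumed here, 8972 = 9000 minus IP/ICMP headers; the target is the OSD host from the thread):

      ip link show | grep mtu
      ping -M do -s 8972 192.168.1.212   # fails if any hop only supports a smaller MTU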
  17. gurubert

    Unable to install ceph

    The hostname mirrors.tencentyun.com has no IP address in the global DNS. download.proxmox.com seems to be blocked by the "Great Firewall".
  18. gurubert

    Any news on lxc online migration?

    I learn something new every day. :)
  19. gurubert

    Any news on lxc online migration?

    I do not think that it is possible to live migrate a container. A container is just a set of processes running in separate namespaces on the host's kernel. You cannot freeze a process, transport it to another host, and unfreeze it there the way you can with a virtual machine.
  20. gurubert

    IPv6 link-local on SDN VLAN interface

    AFAIK the IPv6 link-local address is assigned automatically by the Linux kernel whenever a new interface goes up. As the bridge interface "Servers" should only transport Ethernet to and from the VMs and not the host, you could disable IPv6 on it entirely: echo '1' >...
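    The truncated command most likely writes to the corresponding sysctl; as a sketch (interface name "Servers" taken from the thread, the exact path is an assumption):

      sysctl -w net.ipv6.conf.Servers.disable_ipv6=1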