Search results

  1. finding ceph disk / bluestore relation

    I have 12 HDDs and 2 NVMe drives. Each 18TB HDD uses a 164G NVMe partition for DB/WAL, with one OSD daemon per HDD. To split the IOPS load, I've put 6 partitions on each NVMe; the rest of the available space is used for VMs.
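
    A layout like this is typically built with ceph-volume, pointing each HDD's OSD at its NVMe partition. A minimal sketch, assuming hypothetical device names (/dev/sdb for one HDD, /dev/nvme0n1p1 for its DB/WAL partition):

    ```shell
    # Sketch only: device names are assumptions, not taken from the post above.
    # One OSD per HDD, with its RocksDB/WAL placed on a dedicated NVMe partition.
    ceph-volume lvm create --data /dev/sdb --block.db /dev/nvme0n1p1

    # Verify which DB device each OSD ended up with:
    ceph-volume lvm list
    ```

    Repeat the create step for each HDD, cycling through the six partitions on each NVMe.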
  2. finding ceph disk / bluestore relation

    Hi, I use HDDs for Ceph OSDs and NVMe drives partitioned for the BlueStore DB/WAL. I have a disk that won't come up; I've run a destroy task (with the cleanup option) so I can add it again, but the BlueStore partition didn't get cleaned and I can't find it. Example: a server with 12 disks and 2 NVMes, where disks 1-6 use...
  3. PVE and Ceph on Mellanox infiniband parts, what is the current state of support?

    Well, 20Gbps is not that much... I mean, I use Mellanox ConnectX-3 IB PCI cards with an Arista switch in Ethernet mode and I get 25Gbps :( - I'm sure I can fine-tune that, but it's enough for my needs.
  4. Backup - libceph: read_partial_message 00000000df61d3e0 signature check failed

    For me the latest and greatest did nothing. It seems that this is a known bug in kernel 5.13 and above, so I'm back on 5.11 to see if this persists.
  5. cephfs - bad crc/signature

    Hi Mira, all HDDs have NVMe partitions set as BlueStore cache (16TB disk with a 164G NVMe BlueStore partition). I know the recommended size is 4%; others suggest that, due to how the BlueStore bucket mechanism works, 30GB will be enough... so I put 1%. For CephFS I've set replica 2, considering the data's value.
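
    The sizing trade-off above is simple arithmetic; a minimal sketch (the 16TB disk and the percentages come from the post, the helper name is mine):

    ```python
    # BlueStore DB/WAL sizing for an HDD OSD, comparing the 4% rule of
    # thumb with the ~1% the poster actually provisioned.
    def db_size_gb(disk_tb: float, percent: float) -> float:
        """DB/WAL partition size in GB for a given data-disk size and percentage."""
        return disk_tb * 1000 * percent / 100

    print(db_size_gb(16, 4))  # 640.0 GB at the recommended 4%
    print(db_size_gb(16, 1))  # 160.0 GB at 1%, close to the 164G partition used
    ```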
  6. cephfs - bad crc/signature

    Hi Mira, thank you for looking into this. The copy is done via rsync from a TrueNAS server to the PVE node, directly into the /mnt/pve/cephfs folder, on a different server. So I have members A, B, C, D, E, F in the Ceph cluster - I start the copy on server A and server F starts to throw bad crc/signature...
  7. cephfs - bad crc/signature

    Sure: # pveversion -v proxmox-ve: 7.2-1 (running kernel: 5.15.39-1-pve) pve-manager: 7.2-7 (running version: 7.2-7/d0dd0e85) pve-kernel-5.15: 7.2-6 pve-kernel-helper: 7.2-6 pve-kernel-5.15.39-1-pve: 5.15.39-1 ceph: 16.2.9-pve1 ceph-fuse: 16.2.9-pve1 corosync: 3.1.5-pve2 criu: 3.15-1+pve-1...
  8. cephfs - bad crc/signature

    Hi, while copying data to a CephFS-mounted folder, I encounter tons of messages like this: [52122.119364] libceph: osd34 (1)10.0.40.95:6881 bad crc/signature [52122.119599] libceph: osd9 (1)10.0.40.98:6801 bad crc/signature [52122.119722] libceph: read_partial_message 00000000a1a2de9f...
  9. Ceph configuration override

    Hi, I have a Ceph cluster configured via Proxmox and added object storage via the radosgw package. My cluster is unstable for the moment and I have random reboots - something I'm investigating now. My issue is that after some nodes are rebooted, the /etc/ceph/ceph.conf file is overwritten...
  10. cephadm installed along proxmox

    Hi, since Proxmox uses a custom setup for installing and managing Ceph, does it work to do other tasks with cephadm? I see in the dashboard that many features are unavailable due to a missing "orchestrator", and the choice is Rook (k8s) or cephadm. I haven't tried to install it - afraid of...
  11. PVE and Ceph on Mellanox infiniband parts, what is the current state of support?

    Bumping this up. Is anyone successfully using Proxmox with InfiniBand cards/switches?
  12. Node failed to join cluster

    Hi, I've tried to add a new node to my cluster and it failed - and now the cluster config buttons are greyed out. Node pvedell6 was supposed to join the cluster, and its PVE interface is not reachable (if I try to access it directly). Rebooted, nothing. root@pvedell1:~# pvecm status...
  13. How to migrate openstack volumes to Proxmox

    Hi, I could not find any docs about this - maybe it's not possible, but here's the situation: the OpenStack (Rocky) cluster is behaving weirdly, so I decided to try to move the VMs to Proxmox. I copied the volume to PVE, raw: no boot. Converted it to qcow: no boot. I'm able to mount the disk on...
  14. [SOLVED] unable to mount remote RBD

    It seems that pvestatd is messed up; after restarting the service the pool works almost properly - I can add disks to it, but the total available storage is not displayed.
  15. [SOLVED] unable to mount remote RBD

    So I have a small cluster with Ceph installed, all nice and dandy. A couple of weeks ago I configured a few other PVEs to use that Ceph cluster... but now I want to add new servers and nothing happens. I've read the docs and other blog posts, and I'm stuck. Steps done on working and...
  16. [SOLVED] Cannot find Hard drive

    Have you tried disabling AHCI? I know it's not related, but I've seen situations in Windows where an NVMe won't be available until AHCI is disabled in the BIOS.
  17. Errors while creating a container on a remote RBD Pool

    Hi, I have added a remote RBD pool (Ceph cluster) to a Proxmox machine that is not in that cluster (added via the GUI, then copied the keyring to /etc/pve/priv/ceph/xxxx.keyring). Things I noticed: after the RBD pool is added, I need to restart pvestatd.service so that the GUI shows the pool...
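
    For reference, the CLI equivalent of adding such an external RBD pool is pvesm add rbd; a sketch where the storage ID, pool name, and monitor addresses are all assumptions:

    ```shell
    # Sketch: storage ID (remote-rbd), pool name, and monitor IPs are assumptions.
    # The keyring file must be named after the storage ID, not the pool.
    mkdir -p /etc/pve/priv/ceph
    scp root@ceph-mon:/etc/ceph/ceph.client.admin.keyring \
        /etc/pve/priv/ceph/remote-rbd.keyring

    pvesm add rbd remote-rbd \
        --pool vm-pool \
        --monhost "10.0.40.91;10.0.40.92;10.0.40.93" \
        --username admin \
        --content images,rootdir
    ```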
  18. CEPH pool without replication?

    Bumping this up. I'm in a situation where, on a Proxmox cluster with small local storage, I need to map several 8TB block devices that will be used for a Hadoop cluster. Having a working Ceph on the same LAN, I'm considering creating a pool with min_size 1. The data from that pool will be...
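
    A replica-1 pool as described can be created from the Ceph CLI; a sketch with an assumed pool name, and the usual caveat that size 1 means any single OSD failure loses that pool's data:

    ```shell
    # Sketch: pool and image names are assumptions. size=1/min_size=1 gives
    # no redundancy; acceptable only for data that can be regenerated.
    ceph osd pool create hadoop-scratch 128 128 replicated
    ceph osd pool set hadoop-scratch size 1 --yes-i-really-mean-it
    ceph osd pool set hadoop-scratch min_size 1
    ceph osd pool application enable hadoop-scratch rbd

    # One of the 8TB block devices to be mapped on the Proxmox side:
    rbd create hadoop-scratch/vol1 --size 8T
    ```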
  19. [SOLVED] rbd error: rbd: listing images failed: (2) No such file or directory (500)

    Just a quick follow-up: I've encountered this error when moving a disk from local storage to Ceph storage. All tasks finished without errors, but if you already have a disk with the same name on the pool, Proxmox will rename your new image and update the VM config. This happens when you...