Search results

  1. Ceph with multipath

    Read through all logs and didn't see anything special. Maybe this is because of a mixed use of PVE and native Ceph tools?
  2. Ceph with multipath

    You're right, there is not much support from Ceph, but it's still possible. Ceph doesn't say "Don't do this"; they just say they don't know about the multipath configuration and hence refuse to handle it. But in the end it's just as simple as creating the pv/vg/lv (a sketch follows this list). First step would be to identify...
  3. Ceph with multipath

    I have had a look at the PVE code, more precisely at Diskmanage.pm->get_disks, and now it's clear why multipath devices will never show up (see the lsblk sketch after this list): only whitelisted device-name patterns and special types like ZFS, Ceph and LVM are handled, and the individual disks show up because they match the pattern...
  4. Ceph with multipath

    Well, that's actually not the case: multipath is working fine and configured correctly. multipath -l lists mpathe mpathd mpathc mpathb mpathn mpatha mpathm mpathl mpathk mpathj mpathi mpathh mpathg mpathf. But the Disks section shows the individual disks only. It doesn't matter if...
  5. Ceph with multipath

    In general I agree with you, but throwing away FC storage systems that cost a lot is not an option. To be honest, though, this has nothing to do with the setup pain. The same problems will arise with any multipath-enabled system, like a simple SAS shelf with two HBAs. So IMHO we should talk about...
  6. Ceph with multipath

    This makes me think that you assume the FC storage does redundancy on its own by doing RAID. Am I right? That's not the case: the FC storage runs in JBOD mode, so the data redundancy is done by Ceph only. FC is in this context really just the transmission channel. The only added redundancy is...
  7. Ceph with multipath

    Maybe it would make sense to explain why you don't recommend using Ceph with direct-attached storage and multipath.
  8. Ceph with multipath

    Not really. The FC storage is directly connected to a single host through its HBAs. Multipath is for hardware redundancy (failing paths to the disks), and Ceph distributes the data across the directly attached storage of all 3 nodes. The same would happen with a SAS storage or whatever...
  9. Ceph with multipath

    Hi everyone. I'm struggling with Ceph using multipath and the PVE-provided tools. My setup is a 3-node cluster, with each node having a dedicated FC storage. Each server has 2 HBAs to achieve hardware redundancy (a minimal multipath.conf sketch follows this list). Multipath is configured and running, and the device-mapper block devices are present...
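
A minimal sketch of the manual pv/vg/lv approach mentioned in result 2, assuming one multipath-backed disk at /dev/mapper/mpatha and that the OSD is created with ceph-volume directly; the VG and LV names here are made up for illustration:

    # Put LVM on the multipath map, not on the individual sd* paths
    pvcreate /dev/mapper/mpatha
    vgcreate ceph-mpatha /dev/mapper/mpatha
    lvcreate -l 100%FREE -n osd-mpatha ceph-mpatha
    # Hand the finished LV to ceph-volume as vg/lv
    ceph-volume lvm create --data ceph-mpatha/osd-mpatha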
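
A quick way to see the behaviour described in results 3 and 4, assuming two paths (sdb, sdc) to the same LUN; the output shown is illustrative, not taken from the thread:

    lsblk -o NAME,TYPE,SIZE
    # NAME        TYPE   SIZE
    # sdb         disk     2T
    # └─mpatha    mpath    2T
    # sdc         disk     2T
    # └─mpatha    mpath    2T

Only the plain "disk" rows (sdb, sdc) match the device-name whitelist in Diskmanage.pm->get_disks; the device-mapper "mpath" node does not, which is why the Disks section lists the individual paths instead of the multipath device.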
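
For the setup described in result 9, a minimal /etc/multipath.conf sketch; the options shown are common defaults and an assumption here, not the poster's actual configuration:

    # /etc/multipath.conf
    defaults {
        user_friendly_names yes   # mpatha-style names instead of raw WWIDs
        find_multipaths     yes   # only build maps for disks with more than one path
    }

After editing, restart multipathd (systemctl restart multipathd) and verify the maps with multipath -ll.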
