[SOLVED] Can't create Ceph OSD on a Fusion-IO ioDrive2

kantalization

New Member
Jan 12, 2019
Dear all,

I am running Proxmox 5.1 in a two-node cluster. I want to create Ceph storage on /dev/fioa, the Fusion-IO ioDrive2 I installed along with its drivers. Running pveceph createosd /dev/fioa fails with "unable to get device info for 'fioa'". I previously tried ceph-disk zap /dev/fioa and ceph-disk prepare /dev/fioa; both operations seemed to finish fine, but it made no difference and I still get "unable to get device info for 'fioa'".
Any ideas to fix this?
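For reference, the failing command and a few checks that help narrow down whether the kernel exposes the card in a way ceph-disk understands (a diagnostic sketch, assuming the standard Fusion-IO driver naming /dev/fioa from the post above):

```shell
# Reproduce the error:
pveceph createosd /dev/fioa
# -> unable to get device info for 'fioa'

# Check how the kernel exposes the Fusion-IO card. ceph-disk relies on
# standard block-device metadata under /sys/block, which the Fusion-IO
# driver may not provide in the layout ceph-disk expects:
ls -l /dev/fioa
lsblk /dev/fioa
ls /sys/block/fioa/ 2>/dev/null || echo "no sysfs entry for fioa"
```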

Many thanks,


 
Yes, everything is OK except creating the Ceph OSD. Previously I mounted the Fusion-IO device on a directory and used it for disk images; that worked fine.
 
That is the strangest thing ever. Can you send me the logs for ceph-deploy? I am using Fusion ioDrives with Ceph, but on an external cluster, and they work fine.
 
I got the error at this step: pveceph createosd /dev/fioa fails with "unable to get device info for 'fioa'". I can't move forward until this step is finished.
 
I got the error at this step: pveceph createosd /dev/fioa fails with "unable to get device info for 'fioa'". I can't move forward until this step is finished.
Could you please share the solution with us? Maybe there is something where our pveceph can be improved.
 
I think it's a hardcoded disk naming assumption in Proxmox; fioa, the Fusion-IO device name, isn't recognized. For now I'm trying DRBD for storage instead, but it takes a long time to sync and I don't know how stable it is for a cluster node.
 
unable to get device info for 'fioa'.
If this message comes from ceph-disk, i.e. it can't handle the fioa device, then it is not likely an issue in the PVE code. As an alternative, ceph-volume can create OSDs; its handling is not integrated into our tooling yet, but it should still work.
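Assuming ceph-volume is available on the node (it ships with Ceph Luminous), a minimal sketch of creating a BlueStore OSD on the Fusion-IO device with ceph-volume instead of ceph-disk — the device path comes from the thread; this must be run on a node with a working Ceph cluster:

```shell
# Wipe any leftover ceph-disk metadata first (destroys all data on the device):
ceph-volume lvm zap /dev/fioa

# Create a BlueStore OSD on the device. ceph-volume wraps the device in
# LVM, which sidesteps ceph-disk's block-device naming assumptions:
ceph-volume lvm create --bluestore --data /dev/fioa

# Verify the OSD was created and is up:
ceph-volume lvm list
ceph osd tree
```

Note that OSDs created this way will not show up in the PVE tooling in exactly the same way as ceph-disk OSDs, but Ceph itself treats them identically.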
 
Does that mean the pveceph tool works on top of the ceph-disk utility, and when I get this error I should try ceph-volume instead? Can it create a BlueStore OSD to share the disk for HA in Proxmox? Please advise, thank you.
 
Does that mean the pveceph tool works on top of the ceph-disk utility, and when I get this error I should try ceph-volume instead?
As an alternative means, yes.

Can it create a BlueStore OSD to share the disk for HA in Proxmox?
Ceph is a distributed, shared storage. You need at least three nodes for Ceph and HA.
 
Thanks, Alwin, for the explanation. So it can't work with two nodes? When I try with two nodes I get "no quorum" when one node is down. On the other hand, I run Proxmox 3 with two nodes and fencing, and it has been running fine until now. Can you explain again? Thanks.
 
When I try with two nodes I get "no quorum" when one node is down. On the other hand, I run Proxmox 3 with two nodes and fencing, and it has been running fine until now. Can you explain again?
Come again? I don't quite understand. For HA you need at least three nodes. Only two of them need to run OSDs, of course.
 
Hi Alwin,
Previously I had an HA Proxmox (version 3.1) cluster with two nodes, fencing, and an iSCSI disk mounted on it, and it has run well until now. Why must Proxmox 5 have at least three nodes? When I tried a two-node cluster with an iSCSI disk mounted (without fencing, admittedly) and tested HA, I got a "no quorum" error. Can you explain again why, and what the difference is?
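On the quorum question: with two nodes, each node holds one vote, and a majority of two votes requires both nodes to be up, so losing either node loses quorum. Corosync's votequorum does offer a two_node mode for exactly this case; a config fragment sketch (the quorum section as it would appear in corosync.conf — this is a standard corosync option, not Proxmox-specific, and it is generally discouraged for HA because without reliable fencing a split brain is possible):

```
# corosync.conf fragment -- votequorum with the two_node flag,
# which lets a 2-node cluster remain quorate when one node fails.
# two_node: 1 implicitly enables wait_for_all.
quorum {
  provider: corosync_votequorum
  two_node: 1
}
```

Proxmox 3 used a different stack (cman/rgmanager), where the two-node-with-fencing setup was an explicitly supported configuration, which is why the old cluster behaves differently.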
 