Hi, I'm testing Proxmox 5 (latest updates) and its Ceph (Luminous).
I have 3 VMs with Proxmox installed: 2 as storage nodes with a 32 GB virtual disk each, and the 3rd with only local storage, acting as the 3rd monitor.
I've followed the instructions at https://pve.proxmox.com/wiki/Ceph_Server and done some googling around.
I have the following questions/problems:
a) pveceph createosd: is there a way to make it create a BlueStore OSD? I can't tell whether that's the default. With fdisk I see 2 partitions:
Code:
Device     Start     End       Sectors   Size  Type
/dev/sdb1  10487808  67108830  56621023  27G   Ceph OSD
/dev/sdb2  2048      10487807  10485760  5G    Ceph Journal

Is it BlueStore? How can I tell?
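From what I've gathered (and I may well be wrong), each OSD reports its object store type in its metadata, and pveceph seems to have a BlueStore flag; the flag below is my assumption, not something I've verified:

Code:
# show the object store type of osd.0 (adjust the id to yours)
ceph osd metadata 0 | grep osd_objectstore

# what I *think* forces BlueStore at creation time (unverified flag)
pveceph createosd /dev/sdb -bluestore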
b) Being just a newbie with Ceph on Proxmox: after OSD and monitor creation, since I have only 2 nodes with OSDs (1 on each of the 2 storage servers) and the cluster was not in a healthy state, I issued:
Code:
ceph osd pool set rbd size 2
Now the status is OK, but the free space reported (by ceph -s or the Proxmox GUI) is 55 GB instead of 32 (maybe that's the raw capacity of the two ~27 GB OSD partitions combined?).
My question is: what else should I do to get a real "replica 2", a sort of RAID 1 between the 2 storage servers?
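In case it's relevant, this is how I've been checking the pool's replication settings (assuming rbd is my only pool):

Code:
# replica count and minimum replicas needed to serve I/O
ceph osd pool get rbd size
ceph osd pool get rbd min_size

# raw vs. per-pool usage, to make sense of the 55 GB figure
ceph df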
c) The 3rd node is a monitor only, but I've read that monitors do heavy I/O too (really?). Since it has no OSD and no storage other than local storage, I'd like to know: is it writing to local storage (and if so, how many GB does it need?), is it not functioning properly, or is it writing over the LAN to the 2 storage nodes?
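To see whether it actually writes locally, I've been keeping an eye on the monitor's data directory on the 3rd node (assuming /var/lib/ceph/mon is the default location):

Code:
# size of the monitor's local store
du -sh /var/lib/ceph/mon/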
Thanks in advance
(BTW, I'm trying to move away from my previous DRBD9 setup with 2 storage nodes plus a 3rd quorum node, to a solution with similar costs.)