Hello guys, I have a few questions.
I have a 2-node Proxmox cluster (latest PVE 3.2) with no external storage, just free available space on each node as /dev/mapper/pve-data.
0. Is Ceph the best way to do shared storage on the drives that are in node1 and node2, with no external NAS at all?
Someone told me about GFS2, but why not use Ceph if it is already implemented and I finally know how to use it?
1. Creating an OSD
Can I create an OSD from the available free space in my /dev/mapper/pve-data, or does it need to be a raw disk like /dev/sdb with no partitions at all?
I would like to use the free space from the current /dev/mapper/pve-data; how can I do that?
Can I alternatively use another LVM volume for the OSD? I imagine something like the sketch below.
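Here is what I have in mind (an untested sketch, assuming the 'pve' volume group still has unallocated extents; the space inside the mounted pve-data LV itself would first need the filesystem and LV shrunk):

    vgs pve                           # check for free extents in the volume group
    lvcreate -L 50G -n ceph-osd pve   # carve a new LV out of the free space
    # then point Ceph at /dev/pve/ceph-osd, e.g. with ceph-disk prepare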
2. RAID in Ceph?
To test Ceph I added a second disk, /dev/vdb with 10 GB of space, on each node.
I created an OSD and the journal took 5 GB, so only 5 GB of space is left; that's normal and OK.
http://scr.hu/2o2c/somli
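(I assume the 5 GB is just the default journal size, i.e. the equivalent of this in ceph.conf; correct me if I'm wrong:)

    [osd]
    osd journal size = 5120    # size in MB, which matches the 5 GB I see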
I added this RBD shared drive as storage:
http://scr.hu/2o2c/08qzp
And I see that the available space is 10 GB. I think it should be 5 GB, as this should act like RAID1, right?
So I have two 5 GB OSDs, and now I have 10 GB of space?
I think it should be 5 GB, so that if one node burns down, my data will persist like in RAID1, right?
Here is a screenshot of my pool.
Tell me, what am I doing wrong?
I want RAID1-like behavior on this Ceph storage, so that each node holds the FULL data:
http://scr.hu/2o2c/3suqd
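From what I've read, replication is configured per pool, so I would expect something like this to give RAID1-like behavior (assuming the pool is simply named 'rbd'; please correct me):

    ceph osd pool get rbd size        # show the current replica count
    ceph osd pool set rbd size 2      # keep 2 copies, one per node
    ceph osd pool set rbd min_size 1  # still serve I/O with one surviving replica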
Another question: should I create LVM on top of the created RBD? Why?
And how can I access the files created on the RBD resource, in case I want to upload a VM image manually?
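Is a manual upload done with the rbd tool, roughly like this? (The pool name 'rbd' and the image name are just my guesses at the Proxmox naming convention.)

    rbd -p rbd ls                              # list the images in the pool
    rbd import my-image.raw rbd/vm-100-disk-1  # push a raw image into the pool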
And is this normal? http://scr.hu/2o2c/a3qm7
512 pgs degraded, stuck unclean, 50% of objects degraded?
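This is what I used to check the status (just the standard commands, as far as I know):

    ceph -s               # overall cluster health
    ceph health detail    # which PGs are degraded and why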
3. Manual HA
I made a 2-node Proxmox cluster with Ceph shared storage, and as I understand it, HA is not recommended on such a setup. So I just want the option of manually starting the VMs on node B if node A goes down.
The scenario: all VMs are running on node A and none on node B; node A goes down, I notice it and want to migrate manually, but I can't, because node A is unavailable ("no route").
How can I do that?
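From what I've read elsewhere, the manual recovery would look roughly like this (nodeA/nodeB and VMID 100 are placeholders, and I'm not sure how safe this is, so please confirm):

    # on the surviving node B: a 2-node cluster loses quorum, and /etc/pve
    # goes read-only without it, so lower the expected votes first
    pvecm expected 1
    # move the VM config from the dead node's directory to node B
    mv /etc/pve/nodes/nodeA/qemu-server/100.conf /etc/pve/nodes/nodeB/qemu-server/
    qm start 100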
Thanks for any answers!