A few questions on how to work with Ceph on Proxmox.

jim.bond.9862

OK, hello everyone.
I am entertaining the idea of using Ceph on Proxmox in a single-node setup.
I understand that it is not optimal and not 100% safe, but it seems to be a good option for my needs.
My ultimate goal is to build out a Proxmox/file-server setup on a single physical machine.
First of all, this is a home-based setup.
I have one (1) oldish Supermicro AMD server:
Chassis: Supermicro SC846 24 Bay chassis
Motherboard:
Supermicro H8DME-2 BIOS v3.5 (latest)
CPU: 2x AMD Opteron 2431 hex-core @ 2.4GHz, for a total of 12 cores
RAM: 49GB DDR2 PC-5300F @ 667MHz ECC
4x 1Gb NICs.
The storage is SATA:
4 ports on the motherboard and 3x 8-port PCI-X HBAs going directly to the non-expander backplane.

Disks: 2x 120GB SSD for OS
2x 1TB
3 or 4x 2TB
3 or 4x 3TB

I will set up Proxmox on the 2 SSDs using ZFS RAID-1;
everything else is for storage.


My issues and ideas are:
#1. I do not like ZFS for storage, so I want to use some other file system. Currently all my data disks are BTRFS in various RAID-1 pools.

#2. As Proxmox does not support BTRFS directly, I want a setup that lets me expose all my BTRFS pools to the network while still controlling access to them. I would entertain an OMV VM with pass-through, but I am having issues getting that set up so it works, hence I am looking for alternative options.

Now to the gist of the matter.

I am playing with Proxmox + Ceph in a single-node setup using several how-tos I found.
I am using the latest ISO for v5.

Everything is installed and running just fine as far as I can see.
I am using a nested VM setup:
Hyper-V on Windows 10 Pro.
The Proxmox VM is Gen 1 with 2 CPUs, 5GB RAM, 2 NICs,
and 5 hard drives:
2x 127GB for OS
2x 200GB data
3x 100GB data


This is my Ceph config; the "osd crush chooseleaf type = 0" line is the one added during setup to allow single-node mode, as per several how-tos.


Code:
[global]
     auth client required = cephx
     auth cluster required = cephx
     auth service required = cephx
     cluster network = 192.168.1.0/24
     fsid = c48799a3-920b-4b38-bb7a-cfa91d5e6700
     keyring = /etc/pve/priv/$cluster.$name.keyring
     mon allow pool delete = true
     osd crush chooseleaf type = 0
     osd journal size = 5120
     osd pool default min size = 2
     osd pool default size = 3
     public network = 192.168.1.0/24

[osd]
     keyring = /var/lib/ceph/osd/ceph-$id/keyring

[mon.pve]
     host = pve
     mon addr = 192.168.1.69:6789

The OSD setup is below (this is the latest test out of many I did before), using one 200GB drive and one 100GB drive.

[screenshot: upload_2017-9-12_9-55-11.png]


And here is the disk list:

[screenshot: upload_2017-9-12_9-55-54.png]

The issue here is that no matter how I create the OSDs, via the UI or the CLI,
I get at least one disk of the same size set up as bare partitions,
and I only get one Ceph OSD for the two disks.
As per the images, I created OSD.0 using the UI, choosing bluestore and sdc with nothing (default) for the journal.
Then I tried to create a second OSD with the second 200GB drive, and that is what I got:
a drive set up as partitions and no OSD for it.
Same story with OSD.1 (after I tried the 200GB) using the CLI.
I ran "pveceph createosd /dev/sdd -bluestore 1"
and got the Ceph OSD.
I ran "pveceph createosd /dev/sde -bluestore 1"
and got only the partitions.
Same with sdf; after a reboot I got a Ceph OSD on it, osd.2.
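
In case it matters, this is roughly the sequence I think should work based on the how-tos; whether the zap step is actually required is part of what I am unsure about (it is destructive to the disk, and /dev/sde is just the example from above):

Code:
# wipe leftover partition tables / Ceph signatures from a previously used disk
ceph-disk zap /dev/sde

# recreate the bluestore OSD
pveceph createosd /dev/sde -bluestore 1

# check that the new OSD shows up and is up/in
ceph osd tree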

Am I doing anything wrong? Or maybe I do not understand something about Ceph.

Now, the second issue I have is that I do not understand how to use the space.
I create a pool but I cannot get to it. How do I use it?
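
From the how-tos it looks like the pool has to be registered as an RBD storage before VMs can use it; this is roughly what I have pieced together, where the pool and storage names are just placeholders I made up:

Code:
# create a replicated pool; with a single node the 3/2 defaults probably
# need lowering, otherwise the pool never becomes healthy
pveceph createpool vm-pool -size 2 -min_size 1

# register the pool as an RBD storage so Proxmox can put VM disks on it
# (the storage may also need the admin keyring copied to
#  /etc/pve/priv/ceph/ceph-vm.keyring)
pvesm add rbd ceph-vm --pool vm-pool --monhost 192.168.1.69 --content images,rootdir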

If anyone has a link to a good tutorial, I would greatly appreciate the help.

Thanks, Vlad
 

While it is possible to run a single Ceph node, it is strongly not recommended. Besides, Ceph Luminous is still a technology preview in PVE 5.0.

If that single monitor fails, you will very likely not be able to get to your data. Data distribution might be an issue too, as you have different-sized disks, so they are hit differently while writing data into Ceph. You also introduce higher complexity and a bigger performance hit compared to having a couple of RAIDs set up.

Still, even though you don't like ZFS, I would recommend going with it, as you will have a filesystem and a volume manager at your disposal. You can create your VMs on zvols or as files if you like. More info here.
 
Thanks Alwin,
yes, I do understand that at this moment Ceph Luminous is not marked as stable.
I am just trying to get a feel for any alternative setups that might work for my needs.

I also plan to have ZFS in my setup, but for server use only; I do not want to use it for my data.
Ideally, I plan to have a setup like this
(based on my existing hardware):
a ZFS RAID-1 using the 2x 120GB SSDs for the Proxmox install; these are connected to the motherboard SATA ports.
a ZFS RAID-1 using the 2x 1TB drives for all system-related storage, i.e. all VMs, ISOs, images etc.; also connected to motherboard SATA (a rough CLI sketch of what I mean is below).
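
If I understand the ZFS docs correctly, that second mirror would look something like this; the disk paths and storage names here are placeholders, not my actual devices:

Code:
# mirror the two 1TB drives into a pool for VM storage
# (stable /dev/disk/by-id/ paths are preferred over sdX names)
zpool create -o ashift=12 vmdata mirror /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2

# register it with Proxmox so VM disks are created as zvols on it
pvesm add zfspool vmdata --pool vmdata --content images,rootdir

# ISOs and backups want a directory storage, e.g. on a plain dataset
zfs create vmdata/iso
pvesm add dir vmdata-iso --path /vmdata/iso --content iso,backup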

Now, all the rest of my drives, a mix of 2TB and 3TB, I want connected to my HBAs and exposed to the network in some way that makes it easy to manage the data on them. Frankly, I am a bit confused about how ZFS works and is used.
The limitations of ZFS with mixed-size drives also make my head spin when I sit down and try to lay out my storage.
My setup is not overly large and my needs are not extensive by far; I just want an easy, low-maintenance setup for home use.
The Proxmox free edition would work best for me as it is based on Debian, which I am already using on some of my other PCs; I also use Linux Mint (Ubuntu-based), so I can work with it OK.
And Proxmox supports BTRFS, kind of, which I also like.
But the initial setup and layout is maddening; I can't figure it all out. :)

The best setup would be to not use pass-through, but rather have the host (Proxmox) act as both hypervisor and file server; so far, though, I cannot find a good how-to for setting this up.
 
You could use an NFS/SMB server on the PVE host to share your data across devices, with whatever filesystem you want to use underneath.
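
For example, a minimal Samba setup on the host could look roughly like this; the share path and user name are placeholders, and the same idea works for NFS via /etc/exports:

Code:
# PVE is plain Debian underneath, so:
apt install samba

# then add a share to /etc/samba/smb.conf, e.g.:
# [data]
#     path = /mnt/btrfs/data
#     read only = no
#     valid users = vlad

# create the Samba user and restart the service
smbpasswd -a vlad
systemctl restart smbd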
 
Yes, I am leaning that way more and more.
I want to try getting the file server container going too.
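
From what I have read, a container could see the host's BTRFS pools through bind mounts, something like this; the container ID and paths are made up for the example:

Code:
# bind-mount a host directory into container 101 so the container's
# file server (Samba/NFS) can share it
pct set 101 -mp0 /mnt/btrfs/data,mp=/srv/data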
 
#1. I do not like ZFS for storage, so I want to use some other file system. Currently all my data disks are BTRFS in various RAID-1 pools.

#2. As Proxmox does not support BTRFS directly, I want a setup that lets me expose all my BTRFS pools to the network while still controlling access to them.
You can. The Proxmox storage manager allows locally mounted locations, so this should work without issue; an example is below. Passing extents to VMs would have to be done manually, but it can be done. Not sure it's worth the effort vs ZFS; I would suggest you rethink your aversion to it...
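
For instance, a BTRFS mount point can be registered as a plain directory storage; the path and storage ID here are just illustrative:

Code:
# assumes /mnt/btrfs/data is already mounted (e.g. via fstab)
pvesm add dir btrfs-data --path /mnt/btrfs/data --content images,iso,backup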

I am playing with Proxmox + Ceph in a single-node setup using several how-tos I found.
What for?! Ceph doesn't make any sense on one node. If you want to play with it just to see how it works, create 3 VMs and set up a proper cluster. Come to think of it, since you're all virtual anyway on top of Hyper-V, you may as well set up a 3-node Proxmox cluster as well.
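
A rough sketch of what that would look like once three PVE VMs are installed; the cluster name and the placeholder IP are made up:

Code:
# on the first PVE VM: create the cluster
pvecm create testcluster

# on the other two PVE VMs: join it
pvecm add <ip-of-first-node>

# install the Ceph packages on all three nodes
pveceph install

# initialise Ceph once (on one node), then create a monitor on each node
pveceph init --network 192.168.1.0/24
pveceph createmon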
 
