OK, hello everyone.
I am entertaining the idea of using Ceph on Proxmox in a single-node setup.
I understand that it is not optimal and not 100% safe, but it seems to be a good option for my needs.
My ultimate goal is to build a Proxmox/file server setup on a single physical machine.
First of all, this is a home-based setup.
I have one (1) oldish Supermicro AMD server:
Chassis: Supermicro SC846 24-bay chassis
Motherboard: Supermicro H8DME-2, BIOS v3.5 (latest)
CPU: 2x AMD Opteron 2431 hex-core @ 2.4 GHz, for a total of 12 cores
RAM: 49GB DDR2 PC2-5300F @ 667 MHz ECC
4x 1Gb NICs.
The storage is SATA:
4 ports on the motherboard, plus 3x 8-port PCI-X HBAs connected directly to the non-expander backplane.
Disks: 2x 120GB SSD for the OS
2x 1TB
3 or 4x 2TB
3 or 4x 3TB
I will set up Proxmox on the two SSDs using ZFS RAID1;
everything else is for storage.
My issue(s) and ideas are:
#1. I do not like ZFS for storage, so I want to use some other file system; currently all my data disks are Btrfs in various RAID-1 pools.
#2. As Proxmox does not support Btrfs directly, I want a setup that exposes all my Btrfs pools to the network while letting me control access to them. I would entertain an OMV VM with disk pass-through, but I am having trouble getting that to work, hence I am looking for alternative options.
Now to the gist of the matter.
I am playing with Proxmox + Ceph in a single-node setup, following several how-tos I found.
I am using the latest ISO for v5.
Everything is installed and running just fine, as far as I can see.
I am using a nested VM setup:
Hyper-V on Windows 10 Pro.
The Proxmox VM is Gen 1 with 2 CPUs, 5GB RAM, 2 NICs,
and 5 hard drives:
2x 127GB for the OS
2x 200GB for data
3x 100GB for data
This is my Ceph config; the line added during setup to allow single-node mode, as per several how-tos, is "osd crush chooseleaf type = 0".
Code:
[global]
auth client required = cephx
auth cluster required = cephx
auth service required = cephx
cluster network = 192.168.1.0/24
fsid = c48799a3-920b-4b38-bb7a-cfa91d5e6700
keyring = /etc/pve/priv/$cluster.$name.keyring
mon allow pool delete = true
osd crush chooseleaf type = 0
osd journal size = 5120
osd pool default min size = 2
osd pool default size = 3
public network = 192.168.1.0/24
[osd]
keyring = /var/lib/ceph/osd/ceph-$id/keyring
[mon.pve]
host = pve
mon addr = 192.168.1.69:6789
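For reference, these are the commands I have been using to check whether the single-node settings actually took effect (assuming the Luminous-era CLI that PVE 5 ships; "ceph-pool" is just a placeholder pool name):

```shell
# Show the CRUSH rules; with "osd crush chooseleaf type = 0" the
# replicated rule should choose individual OSDs, not hosts.
ceph osd crush rule dump

# Check the replication settings on an existing pool
# ("ceph-pool" is a placeholder name):
ceph osd pool get ceph-pool size
ceph osd pool get ceph-pool min_size

# Overall cluster state; on a single node with too few OSDs
# this may well sit in HEALTH_WARN.
ceph -s
```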
The OSD setup below is the latest of many tests I did before, using one 200GB drive and one 100GB drive.
And here is the disk list.
The issue is that no matter how I create the OSDs, via the UI or the CLI,
at least one disk of the same size ends up merely set up as partitions,
and I get only one Ceph OSD for the two disks.
As per the images, I created osd.0 using the UI, choosing BlueStore and sdc with nothing (the default) for the journal.
Then I tried to create a second OSD with the second 200GB drive, and that is what I got:
a drive set up as partitions and no OSD for it.
Same story with osd.1 (after I tried the 200GB drive) using the CLI.
I ran "pveceph createosd /dev/sdd -bluestore 1"
and got the Ceph OSD.
I ran "pveceph createosd /dev/sde -bluestore 1"
and got only the partitions.
Same with sdf: after a reboot, Ceph showed up on it as osd.2.
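In case it matters, this is the cleanup sequence I have been using between attempts (assuming the Luminous ceph-disk tool that PVE 5 uses; /dev/sde stands in for whichever disk got stuck as partitions):

```shell
# List how ceph-disk sees each device (data, journal, or unknown):
ceph-disk list

# Wipe the leftover GPT partitions from the failed attempt
# (this destroys everything on the disk):
ceph-disk zap /dev/sde

# Retry OSD creation on the now-clean disk:
pveceph createosd /dev/sde -bluestore 1

# Confirm whether the new OSD joined the CRUSH tree:
ceph osd tree
```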
Am I doing anything wrong, or maybe I do not understand something about Ceph?
Now, the second issue I have is that I do not understand how to use the space.
I create a pool, but I cannot get to it; how do I use it?
If anyone has a link to a good tutorial, I would greatly appreciate the help.
Thanks, Vlad
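P.S. For context, my current understanding from the how-tos is that the pool is meant to be consumed as RBD storage for VM disks, roughly like this (the storage ID "ceph-rbd", the pool name, and the pg_num are placeholders/guesses on my part); what I cannot figure out is how to get from here to space I can actually use:

```shell
# Create a pool (64 PGs is just a small-test guess):
pveceph createpool ceph-pool --pg_num 64

# Register the pool as RBD storage so VM disks can live on it
# ("ceph-rbd" is a placeholder storage ID):
pvesm add rbd ceph-rbd --pool ceph-pool --content images,rootdir --krbd 1
```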