Local Storage and Ceph Storage

Dec 8, 2022
Just a post to try to figure out if what I am doing is correct, or if I am trying to force something that is weird. I have 3 identical nodes, each with a 500 GB hard disk, which I will later switch to SSDs. I would like to install Proxmox and the local storage on a 100 GB section of the disk and use the other 400 GB for Ceph storage. When I did the installation, I chose EXT4 and put 100 GB into the box for local. Now I am trying to set up Ceph: where do I go to select the other 400 GB of space that is unclaimed on the disk?
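For reference, a quick way to confirm that the remaining ~400 GB is simply unallocated space (a rough sketch, assuming the install disk is /dev/sda - adjust the device name for your system):

# list disks, partitions and what they are used for
lsblk -o NAME,SIZE,TYPE,FSTYPE,MOUNTPOINT

# print the GPT partition table; the output includes a line with the total free space on the disk
sgdisk -p /dev/sda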

This is not supported - I would recommend picking separate devices for the OS. Best would be two devices as a ZFS mirror, so you can lose one disk without losing all the OS configuration and services (like the Ceph services, etc.).
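If you do go the ZFS mirror route, the installer can set that up for you (choose ZFS RAID1 and select the two OS disks in the target disk options). A minimal sketch of how to check it afterwards, assuming the default pool name rpool:

# both OS disks should show up under a "mirror" vdev
zpool status rpool

# size, usage and health of the pool
zpool list rpool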

When you set up a disk as an OSD in Proxmox, you need to wipe it and initialize it with GPT, then create the OSD via the Ceph > OSD menu in the web UI. It is recommended to have at least 4 OSD devices per node.

https://pve.proxmox.com/wiki/Deploy_Hyper-Converged_Ceph_Cluster
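If you prefer the CLI over the web UI, a rough sketch of those steps (assuming the OSD disk is /dev/sdb - double-check the device name, this destroys everything on it):

# remove old filesystem signatures and any existing partition table
wipefs -a /dev/sdb
sgdisk --zap-all /dev/sdb

# write a fresh, empty GPT partition table
sgdisk -o /dev/sdb

# create the OSD on that disk (same as Ceph > OSD > Create in the web UI)
pveceph osd create /dev/sdb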
Hmm, I will have to see if I have the available space/power to put another device inside each node.

So the recommended way is for each node to have a spinning disk for the OS install, and then another device for Ceph storage?

Ceph with one disk per node is not really useful. What are your reasons for choosing Ceph as storage? The MINIMUM setup is one disk for the OS (better: two) and 4 OSDs per node (technically possible with less, but not useful or good).
The main reason I would like Ceph for storage is so I can easily/automatically migrate VMs and LXCs from any host to any host as needed. I don't want to put the disks for the containers on my NFS host, as I don't really want to cause that much network traffic just to run VMs/LXCs. That, and learning about it.
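For what it's worth, once the disks live on shared Ceph storage the migration itself is a one-liner per guest - a sketch with made-up IDs and node names:

# live-migrate a VM whose disks are on Ceph (no disk copy needed)
qm migrate 100 pve-node2 --online

# containers cannot live-migrate, but a restart migration is quick on shared storage
pct migrate 101 pve-node2 --restart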