Assistance with Proxmox Ceph (Reef or Quincy) install

polarbear

New Member
Apr 18, 2025
Hi guys, first post here. I have the latest Proxmox (8.4.1) up and running and I'm impressed with it. However, I cannot get all of Ceph installed. The monitors get set up okay, but I can't get ceph-mgr to install or configure. I tried installing it manually and it's asking for so many Python dependencies. I am in an air-gapped environment. Is there a way to get through this, and a smoother way down the road every time I have an update? Thanks so much
 
Hi,

how did you try to install Ceph?
Did you configure the correct apt repositories?
Did you try using our tooling, i.e. through the web UI under Datacenter -> <node> -> Ceph, or the pveceph CLI?
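For reference, on a node with repository access the tooling boils down to roughly the following (a sketch, assuming Ceph Reef on PVE 8 / Debian Bookworm with the no-subscription repository):

pveceph install --repository no-subscription
# which is roughly equivalent to:
echo "deb http://download.proxmox.com/debian/ceph-reef bookworm no-subscription" > /etc/apt/sources.list.d/ceph.list
apt update
apt install ceph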
 
Hi, thanks for your response. What I did was mount the full Debian DVD and the Proxmox VE ISO, both of them. They are in sources.list as deb [trusted=yes] file:///storage/mnt1 and file:///storage/mnt2.
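(Spelled out, the entries look something like this; the suite and component names here are my best guess for a mounted Bookworm DVD:)

deb [trusted=yes] file:///storage/mnt1 bookworm main contrib
deb [trusted=yes] file:///storage/mnt2 bookworm main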

Then I did apt update
apt upgrade.

Since I am air-gapped, I used PowerShell and wget to download the full contents of the Ceph Reef repo and got all the .deb files.
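(On the connected machine, the grab was basically a recursive fetch of the repo tree, something like this in GNU wget terms; /srv/mirror is just an example destination:)

wget --mirror --no-parent -P /srv/mirror http://download.proxmox.com/debian/ceph-reef/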

I copied the .deb files into a directory locally on one of the nodes.
Then did dpkg -i *.deb and ran an apt --fix-broken install too.

That got me into the Ceph panel in the GUI, and it said to configure. I was like, oh cool, so I went all the way through to the monitors, and they were working and healthy. Then, when I tried to "create manager", this is where I got stuck.

So now I am in constant loops because of dependencies, after manually installing all the Python .debs and trying to grab all of their dependencies. I have a bunch of ceph-mgrxxxxx.deb files, but when I try to install them it's like I'm going down a rabbit hole over and over, stuck at the point where I try dpkg -i libpython3.13-stdlib and it says the package libpython3.13-stdlib is not installed.

I wish there was a repo I could just point at to install whatever is needed for ceph-mgr, or whatever else comes after that.

Thanks again
 
So I am at my desk now and I can create the manager. But when I try to start it, I get this task failed message:
command /bin/systemctl start ceph-mgr@proxmox failed exit code 1
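(If anyone wants the details, the underlying error should show up via the standard systemd commands:)

systemctl status ceph-mgr@proxmox
journalctl -u ceph-mgr@proxmox --no-pager -n 50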
 
Hi again,
So if I use this repo, grab everything, and set up my sources.list and ceph.list files accordingly, it should work and I should get past the Python and dependency issues?
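(This is what I'm planning to put in ceph.list once the mirror is local; /srv/mirror/ceph-reef is just an example path for wherever the mirror ends up:)

deb [trusted=yes] file:///srv/mirror/ceph-reef bookworm no-subscription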

We have shared LUN storage, a bunch of big ones: about 15 of them at 16 TB each. I'm trying to put something cluster-aware on the Proxmox cluster. It looks like Ceph is the right choice, but it's complex. I did try ZFS, did the whole multipath thing, and got my nodes to see the LUN just fine, but it seems I have to replicate the VMs to each LUN to migrate them. I was hoping the VM would live on one LUN and be seen by all nodes. I even tried LVM-thin; it seems I have to replicate that too.

Just trying to architect this correctly; this is the last thing I can't get going.

Thanks
 
Sorry, I missed out some stuff.
The repo I meant is this one: http://download.proxmox.com/debian/

Also, to explain further: I attached a 4 TB LUN and it was seen by all 3 nodes. When I moved the VM on node1 that was sitting on the 4 TB LUN over to node2's 4 TB LUN (the same LUN), it copied the entire 100 GB of Windows. But I guess ZFS and LVM-thin are not cluster-aware, only Ceph. Thanks again
 
Hi again, I got around the offline mirror issue and got Ceph Reef installed (what finally worked is sketched at the end of this post). Now that I have it installed, can I present, say, a 10 TB LUN to all my Proxmox nodes so they all share the same LUN, and moving VMs from node to node should be easy? I just wanted to make sure it's not the case that if I have 5 nodes I have to provide 5x10 TB, one LUN per node, with replication between them. It's actually shared storage, if I am correct? Thanks
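(For anyone else stuck on the offline install: instead of chasing dpkg -i by hand, I turned the directory of downloaded .debs into a local apt repo. A rough sketch, with /opt/ceph-debs as an example path; dpkg-scanpackages comes from the dpkg-dev package:)

cd /opt/ceph-debs
dpkg-scanpackages . /dev/null | gzip -9c > Packages.gz
echo "deb [trusted=yes] file:///opt/ceph-debs ./" > /etc/apt/sources.list.d/ceph-local.list
apt update
apt install ceph ceph-mgr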
 
Oh wow - I thought the comparison matrix with all the filesystems that was posted said Ceph is a cluster-aware file system? It's over FC. I already did the multipath setup and can see the LUN we presented to it.

So, for example, you are saying that I can't use CephFS with one 5 TB LUN attached to all nodes? That I can't float a VM from node to node even if this type of file system is cluster-aware?

I've been working hard at this. I wanted to avoid replication and only move the memory state of the VM to another node on the same LUN, if that makes sense. Just like VMware VMFS.
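(That's the behavior I'm after. As far as I understand, with the disk on shared storage an online migration only transfers the RAM/device state, something like this, where 100 and node2 are just example values:)

qm migrate 100 node2 --online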

Thanks again
 
So after reading a bit more, I guess I'm still going to lose storage space, because Ceph replicates across the other OSDs/pools.

What's the best way here, with many VMs, to avoid losing so much storage to replication? (I mean, for example, if I have 5 nodes, will there be 5 copies of every virtual machine?) Is there a dedup / thin provisioning option to offset the design of having a copy of each VM on every LUN presented to all the Proxmox nodes?
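(From what I've read since, the overhead is set per pool, not by the node count: a replicated pool with size 3 keeps 3 copies total no matter how many nodes there are, and an erasure-coded pool can cut the overhead further. Rough examples, assuming the pool options in PVE 8's pveceph; k=3,m=2 would need at least 5 hosts:)

pveceph pool create vmpool --size 3 --min_size 2
pveceph pool create ecpool --erasure-coding k=3,m=2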

Thanks