Pardon my less-than-intelligent question, but is there a way to install Proxmox on a Ceph cluster?

alpha754293

Pardon my less-than-intelligent question, but is there a way to install Proxmox on a Ceph cluster such that Proxmox boots off of a Ceph cluster? Or is this not possible?
 
The usual boot process uses the BIOS/UEFI firmware to read the very first blocks of the operating system. This happens before even the "initrd"/"initramfs" is available, i.e. "pre-boot".

While boot devices can be local hardware or network devices using one of several well-established network-boot protocols, I have never seen Ceph offered here. It would probably need part of a Ceph stack residing in the EFI area.

Just my (very) limited understanding...
 
The background to my question is rooted in the fact that right now, for any given Proxmox host, I don't really have a particularly great way of backing up said Proxmox host.

Thus, one idea would be to nest Proxmox inside another Proxmox install (not an ideal situation, because then you have the Matryoshka-doll syndrome), whilst this idea is to install Proxmox on Ceph, since Ceph is self-healing.

So I was thinking that if I had a system, or a cluster of systems, serving up Ceph over the network (whether it's GbE or my 100 Gbps IB interconnect), then after installing ceph-mgr-dashboard I might also be able to configure iSCSI targets that Proxmox would then be able to use.

That's the idea of it.

So I'm just checking to see a) how stupid an idea this is, and b) how feasible it would be to implement, and how to go about it.
 
iSCSI is deprecated in the Ceph project and should not be used any more.

And there is no need to back up a single Proxmox node (if you have a cluster).

You may want to back up the VM config files, but everything else is really not that important.
If you want to lower the time needed to bring up a new Proxmox host, write some Ansible playbooks for the basic configuration a node needs before it can join the cluster.

Usually a RAID1 on some smaller SSDs is enough for the Proxmox operating system. Remember it is based on Debian and can be automated similarly.
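For the VM config files, a minimal sketch of what such a backup could look like (the paths are the standard Proxmox locations; the target mount is a placeholder, adjust to your setup):

Code:
#!/bin/bash
# Copy this node's VM/CT configs (plus a couple of host files) off the node.
# /mnt/backup is a hypothetical mount point, not something Proxmox provides.
TARGET="/mnt/backup/$(hostname)"
mkdir -p "$TARGET"
cp -a /etc/pve/qemu-server "$TARGET/"   # VM configs of this node
cp -a /etc/pve/lxc "$TARGET/"           # container configs of this node
cp -a /etc/network/interfaces /etc/default/grub "$TARGET/"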
 
iSCSI is deprecated in the Ceph project and should not be used any more.
Oh...I didn't realise this. Thank you.

You may want to back up the VM config files, but everything else is really not that important.
I would imagine that I would want to back up, for example, /etc/default/grub and the storage (and possibly network) configuration settings, no?

(As grub has the kernel boot parameters necessary for PCIe passthrough)
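For reference, the passthrough-relevant part of /etc/default/grub is typically a single line (Intel example shown; the exact options depend on your hardware), which is then applied with update-grub or proxmox-boot-tool refresh:

Code:
# Excerpt from /etc/default/grub -- Intel example; on AMD boards the IOMMU is
# usually enabled by default, so only iommu=pt tends to be needed.
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"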
 
I would imagine that I would want to back up, for example, /etc/default/grub and the storage (and possibly network) configuration settings, no?

(As grub has the kernel boot parameters necessary for PCIe passthrough)
Think about this the other way around: if you have some automation helping you to set up a Proxmox host, you do not need to back up these settings.
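As an illustration of "reapply instead of restore", a minimal sketch that sets the passthrough options on a freshly installed, GRUB-booted node (on systemd-boot installs the equivalent would go through /etc/kernel/cmdline and proxmox-boot-tool refresh instead):

Code:
#!/bin/bash
# Re-apply the PCIe passthrough settings on a new node instead of restoring them.
sed -i 's/^GRUB_CMDLINE_LINUX_DEFAULT=.*/GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"/' /etc/default/grub
update-grub
# Load the usual VFIO modules at boot.
printf 'vfio\nvfio_iommu_type1\nvfio_pci\n' >> /etc/modules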
 
Think about this the other way around: if you have some automation helping you to set up a Proxmox host, you do not need to back up these settings.
This would be true if you have systems like Ansible and/or Terraform deployed (and know how to use them).

As a homelabber, I have yet to learn these systems/platforms.

Right now, all of my deployment notes are in OneNote.

There are a few scripts/threads on backing up hosts to PBS, like this.
Thank you.

Sorry -- I must be missing something as the thread that is referenced doesn't actually contain any scripts.
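For what it's worth, a host's /etc can be pushed to a PBS datastore directly with proxmox-backup-client; a minimal sketch (the repository string is a placeholder, and the client prompts for the password unless PBS_PASSWORD is set):

Code:
# Back up the host's /etc as a pxar archive to a PBS datastore.
export PBS_REPOSITORY='root@pam@pbs.example.lan:store1'
proxmox-backup-client backup etc.pxar:/etc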
 
I don't really have a particularly great way of backing up said Proxmox host.
In a cluster you don't need or even want to back up a host. Everything important lives in /etc/pve, which exists on all nodes. If you DID back up a host, you'd open the possibility of restoring a node that has been removed from the cluster and causing untold damage when turning it on.
Or you use a PXE network boot where the initrd contains everything necessary to continue with a Ceph RBD as the root device.
This is the way for headless deployment, although I'd probably not use RBD here, as it's simpler and more manageable to use NFS instead. And DO NOT use the storage served by the nodes themselves for this purpose, or you won't be able to actually power on the cluster.
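For the NFS-root variant, the boot entry ends up looking roughly like this (a sketch with placeholder server, path and file names; the initramfs has to be built with NFS support, e.g. BOOT=nfs for initramfs-tools):

Code:
# Hypothetical pxelinux.cfg/default entry. The NFS export has to live outside
# the cluster it boots, or the cluster can never be powered on from cold.
LABEL pve-nfsroot
  KERNEL pve/vmlinuz
  INITRD pve/initrd.img
  APPEND root=/dev/nfs nfsroot=10.0.0.5:/srv/pve-root rw ip=dhcp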