Use Disk Passthrough or RAID-0 for PVE with Ceph (Test System)

Martin.B.

Dec 12, 2025
I know that there are several threads and resources about not using a RAID HBA for PVE with Ceph.

I have some OLD systems to do some testing on and, of course, will not get any budget for new test hardware.
Those systems have an Areca 1883LP controller. All disks (Seagate Nytro SSDs, 2 TB) sit in the front enclosure, which is connected directly to the Areca controller.

I have the option of creating passthrough disks in the controller, but in that case no SMART data is shown in PVE.
Alternatively, I can create a single-disk RAID-0 for every physical disk; then PVE does show SMART data, but it is the controller's synthetic data rather than the real SMART data of the disk, and it does not contain much information.
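For what it's worth, smartmontools can often query the real drives through an Areca controller even when they are hidden behind single-disk RAID-0 sets. A sketch, assuming the controller exposes a SCSI generic device as /dev/sg0 (the device path and slot numbers are assumptions and will differ on your system):

```shell
# Query the real SMART data of the drive in Areca slot 1
# (/dev/sg0 and the slot number are assumptions):
smartctl -a -d areca,1 /dev/sg0

# If the drives sit behind an Areca SAS expander, the target is
# given as drive,enclosure:
smartctl -a -d areca,1/1 /dev/sg0
```

This only works from the shell on the PVE host; the PVE GUI will still show the controller's synthetic data, but at least the real drive health can be monitored.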

Which would be the better way to get PVE with Ceph running, just for testing? Data loss does not matter in this case; I only need the basic functionality.

As a workaround: could I install multiple PVE+Ceph nodes as VMs on a PVE server?
 

Attachments

  • Zwischenablage-1.jpg (37.6 KB)
  • Zwischenablage-2.jpg (18 KB)
As a workaround: could I install multiple PVE+Ceph nodes as VMs on a PVE server?
Sure! For teaching/learning/debugging this works great.

Example: in my homelab I have one specific cluster member with 64 GiB RAM and 2 TB of local storage for VMs.

I was able to create six virtual PVE nodes with 8 GiB RAM each, form a cluster, add another two virtual 32 GB disks to each of them, and set up a Ceph installation. That gave me 12 OSDs plus multiple MONs and MGRs.
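In case anyone wants to reproduce this, the extra OSD disks can be attached from the physical host with qm. A sketch, where the VMIDs 101-106 and the storage name local-lvm are my assumptions:

```shell
# On the physical PVE host: give each of the six virtual nodes two
# extra 32 GB disks to use as Ceph OSDs (VMIDs and storage name are
# examples - adjust to your setup).
for vmid in 101 102 103 104 105 106; do
    qm set "$vmid" --scsi1 local-lvm:32 --scsi2 local-lvm:32
done

# Then, inside each virtual PVE node (after "pveceph install"),
# turn the new disks into OSDs:
pveceph osd create /dev/sdb
pveceph osd create /dev/sdc
```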

It worked surprisingly well. I was able to use different Ceph pools to run several (small) test VMs, and it let me test Ceph failure behavior and auto-repair, and experiment with erasure coding.
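As one concrete example of the erasure-coding experiments: a 4+2 profile fits a six-node cluster with failure domain "host". The profile and pool names below are made up:

```shell
# k=4 data chunks + m=2 coding chunks needs at least six hosts with
# crush-failure-domain=host - exactly the six virtual nodes here.
ceph osd erasure-code-profile set ec-test k=4 m=2 crush-failure-domain=host
ceph osd pool create ecpool erasure ec-test
```

Note that RBD images on an erasure-coded pool still need a replicated pool for their metadata; the EC pool is only used as the data pool.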

Pro tip: before injecting artificial failures, take a snapshot - all the "hosts" are virtual :-)
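That snapshot trick is easy to script from the physical host; a sketch with hypothetical VMIDs 101-106:

```shell
# Snapshot all virtual "hosts" before injecting a failure
# (VMIDs are placeholders):
for vmid in 101 102 103 104 105 106; do
    qm snapshot "$vmid" pre-failure-test
done

# ...break things, watch Ceph repair itself, then roll everything back:
for vmid in 101 102 103 104 105 106; do
    qm rollback "$vmid" pre-failure-test
done
```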


Disclaimer: I am not using any Ceph currently. My story is here: https://forum.proxmox.com/threads/fabu-can-i-use-ceph-in-a-_very_-small-cluster.159671/

Also: https://pve.proxmox.com/wiki/Nested_Virtualization
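The short version of that wiki page, for an Intel host (an AMD host uses kvm-amd and nested=1 instead; the VMID below is a placeholder):

```shell
# Enable nested virtualization persistently and reload the module:
echo "options kvm-intel nested=Y" > /etc/modprobe.d/kvm-intel.conf
modprobe -r kvm_intel && modprobe kvm_intel
cat /sys/module/kvm_intel/parameters/nested   # should report Y

# The virtual PVE nodes must use the host CPU type to see VMX/SVM:
qm set 101 --cpu host
```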
 
As a workaround: could I install multiple PVE+Ceph nodes as VMs on a PVE server?
This is what we use for many internal tests, and also in the hands-on labs for our training courses.
Good for functionality and behavior tests, as long as performance is not a factor!
 
Thanks for your fast answers. It looks like I will go this way: use only two of the hardware machines and build the Ceph cluster virtually. The machines only have 32 GB RAM and an older six-core CPU, so using two hardware machines should work better. I am sure I will notice when the hardware reaches its limits :)
If I use all disks as RAID-5, I should have ~10 TB free per node for VM usage.
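A quick sanity check of that estimate: RAID-5 across n disks loses one disk's worth of capacity to parity, so ~10 TB per node implies six of the 2 TB drives (the per-node disk count is my assumption):

```shell
# RAID-5 usable capacity = (n_disks - 1) * disk_size_tb; the 2 TB
# size is from the thread, the disk count of six is assumed.
disk_tb=2
n_disks=6
usable_tb=$(( (n_disks - 1) * disk_tb ))
echo "${usable_tb} TB"   # 10 TB
```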

It is just for testing and learning, and for comparing it to the "non-Ceph" environment with "diskless" servers and redundant iSCSI storage as the backend.

@UdoB : Thanks for the story and the hint with the snapshots.
 
It is just for testing and learning, and for comparing it to the "non-Ceph" environment with "diskless" servers and redundant iSCSI storage as the backend.
Good for functionality and behavior tests as long as performance is not a factor!
This is a major point. Using Ceph in this sort of PoC will not teach you how to actually use Ceph in production, since you won't be able to put the kind of load on it that would be meaningful. Is your intention to use this knowledge to put together a production cluster, or is it just for fun?