Moving a Linux home server to Proxmox VE - what's state-of-the-art storage management?

teamPlayer

Oct 12, 2021
I used to run a home server with Ubuntu Linux. The main purpose was a file server. The files were simply put onto EXT4 partitions on the disks. Now I want to run some more services in VMs, so I installed Proxmox VE.

1. Should I use qcow2 files?
As I am now reorganizing the server, I wonder how best to store the file server's files.

Wouldn't it be better to store the files inside qcow2 / qcow3 images? Maybe one qcow2 file for each home directory and a separate qcow2 file for my public data archive. This seems more flexible to me than putting the data directly on classical EXT4 partitions.

As I don't want to risk losing my data, I would like to ask people with more experience for feedback on whether this is a good idea or not.

Of course I am the only one who is responsible for the safety of my data.

2. ZFS or Ceph
I am considering building the file server on either ZFS or Ceph. I know Ceph seems to be overkill for a single server, but I have read parts of the documentation and it doesn't look much more complicated than ZFS. I am as new to ZFS as I am to Ceph, so I have to learn a new system anyway.

Am I right in saying that I could use Ceph's RADOS Block Device (RBD) to store my file server's data as images, instead of storing qcow2 files on a classical file system (e.g. ZFS)?

As far as I can see, I would need to put a CRUSH rule in place that mirrors the data between my local disks instead of between cluster nodes. I do not need replication between nodes, as there is only one at this point. Maybe this comes in handy in the future.
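For illustration, something like this is roughly what I have in mind (the rule and pool names are made up, and I haven't tested any of it):

Code:
# Replicated rule that uses individual OSDs (disks) as the failure domain instead of hosts
ceph osd crush rule create-replicated replicate-by-osd default osd
# Pool for the file server data using that rule, with 2 replicas spread over the local disks
ceph osd pool create fileserver 64 64 replicated replicate-by-osd
ceph osd pool set fileserver size 2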

Is my expectation correct?

The server's hardware:
Intel(R) Xeon(R) CPU E3-1245 v3 @ 3.40GHz (8 cores)
32GB ECC RAM
2x 2TB HDDs with the data
2x 16TB HDDs empty

Thanks in advance.
 
Some misconceptions here, let me try to clear them up:

This seems more flexible to me than putting the data directly on classical EXT4 partitions.
You're comparing apples to oranges. A qcow2 file does not contain files, it contains a disk image. That is, you can attach it to a VM and the VM will see it as a hard disk, not as file storage. So your "classical EXT4 partition" would actually live inside the qcow2 file, and the files on top of that. What FS you put the qcow2 image on at the PVE level is irrelevant.
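To illustrate (the path, VM ID and size below are just examples): from the hypervisor's point of view you only create an empty disk image, and the filesystem is created later, inside the guest:

Code:
# On the PVE host: create an empty 100 GiB qcow2 disk image
qemu-img create -f qcow2 /var/lib/vz/images/100/vm-100-disk-1.qcow2 100G
# Inside the VM the image shows up as a blank disk, e.g. /dev/vdb,
# and only there do you create the EXT4 filesystem:
#   mkfs.ext4 /dev/vdb && mount /dev/vdb /srv/data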

Also, as a side note: qcow3 is just an internal extension of qcow2. You will always use the qcow3 feature set with qcow2 images, but it's generally not referred to as such.
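If you're curious, qemu-img info shows the feature level of an image (path taken from the example above); a current qcow2 image reports compat 1.1 under "Format specific information", which is the "qcow3" layout:

Code:
qemu-img info /var/lib/vz/images/100/vm-100-disk-1.qcow2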

In practice, a qcow2 image is exactly as safe a place to put data as the underlying FS on the hypervisor.

I know Ceph seems to be overkill for a single server, but I read parts of the documentation and it doesn't look much more complicated than ZFS.
If you're starting out and you only have a single node, I'd highly recommend ZFS. It's made specifically for this scenario, as opposed to Ceph, and you can always add more disks and use those for Ceph later if you're expanding/learning. Also, you can use ZFS as the root filesystem for PVE, but not Ceph.
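A minimal sketch for your two empty 16 TB disks (the device names are placeholders; look up the stable /dev/disk/by-id/ paths of your actual drives):

Code:
# Mirrored pool across the two 16 TB disks
zpool create -o ashift=12 tank mirror /dev/disk/by-id/ata-DISK-A /dev/disk/by-id/ata-DISK-B
# Register it as a PVE storage for VM and container disks
pvesm add zfspool tank-vm --pool tank --content images,rootdir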

Am I right in saying that I could use Ceph's RADOS Block Device (RBD) to store my file server's data as images, instead of storing qcow2 files on a classical file system (e.g. ZFS)?
You're right up until the "e.g.", since ZFS in particular will just as well store disk images as zvols instead of qcow2/raw files. These then support snapshots, dedup, checksumming, etc... all the ZFS goodness. Check our docs for a more thorough comparison as well.
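PVE creates the zvols for you when a VM disk is placed on a zfspool-type storage, but just to illustrate what that looks like (the names and size are examples):

Code:
# A 100 GiB zvol, i.e. a block device backed by the pool
zfs create -V 100G tank/vm-100-disk-0
# Snapshot it before risky changes, and list the snapshots
zfs snapshot tank/vm-100-disk-0@before-upgrade
zfs list -t snapshot tank/vm-100-disk-0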
 
