Backup and Storage advice

anewlime

New Member
Nov 10, 2024
Hey,

I'm just getting started with my first homelab. I'm new to PVE and all things Linux. Looking for some help on storage and backup options.

I've got a single node, an old SFF desktop with 1× 1TB NVMe SSD and 3× 1TB SATA HDDs.

I want to run PBS to benefit from incremental backups.

What are my options for storage and backup config?

I was thinking:
PVE on the NVMe as ext4.
PBS as a VM.
Other VMs and LXCs on the NVMe.
3× HDD in either RAID1 or RAIDZ-1 for backup storage with redundancy.

I know PBS would ideally be on bare metal, but I only have one PC. I'm guessing a VM would be better than installing PBS on top of PVE, to keep PVE clean?

Is RAID1 or RAIDZ-1 the best option for the HDDs? I believe RAID1 (a three-way mirror) would give me 1TB of storage with the ability to lose two drives, whereas RAIDZ-1 would give me 2TB of storage and the ability to lose one drive. Are there other factors to consider when choosing between these two?

Thanks in advance for any help.
 
I personally do not use PBS, but I read that a lot of people run PBS as a VM.

The only reason not to run PBS in a VM is that Proxmox VE needs to be working before you can access the VM and PBS. (That defeats the whole reason for having PBS if you do not run it on multiple machines, i.e. a cluster with redundancy.)
And if you want to run PBS inside a VM, I would recommend passing the disk through directly to the VM instead of using a VM disk.
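For reference, passing a whole physical disk through to a VM is roughly this; the VM ID 100, the scsi1 slot and the by-id path are placeholders, so adjust them to your setup:

# find the stable by-id path of the backup disk
ls -l /dev/disk/by-id/
# attach the whole physical disk to VM 100 as an extra SCSI disk
qm set 100 -scsi1 /dev/disk/by-id/ata-MODEL_SERIAL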

My general advice is to always run your host OS on a ZFS mirror / hardware RAID1 (on identical disks, if possible) so that your host OS is redundant: if one of the two disks fails, the host OS can still boot and work.

In an ideal world you would install Proxmox VE on two NVMe drives in RAID1 / a ZFS mirror (since NVMe disks are way faster than HDDs), but since you only have one, I would test whether it is possible to install Proxmox VE on the HDDs with a ZFS mirror / hardware RAID1 and accept the slower boot times. It is not a question of if a disk will fail, but a matter of when.

Then just use your NVMe drive for all your VM disks and use the leftover HDD for PBS or to store backup files.
I personally just use a backup disk with the built-in backup function of Proxmox VE. It may not give you incremental backups, but the compression is really good out of the box and definitely worth a look.
This way your VM disk is copy 1, the backup file / PBS is copy 2, and you then only need one more copy stored externally for a full backup plan.
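A manual run of the built-in backup could look something like this; the VM ID 100 and a storage named backups are assumptions for the example:

# back up VM 100 to the storage "backups" with zstd compression, without downtime
vzdump 100 --storage backups --compress zstd --mode snapshot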
 
an old SFF desktop
You didn't tell us which CPU and especially how much RAM you have.

PBS as a VM.
The overhead of a complete VM is larger than the overhead of a container. If you have limited RAM then opt for a container.

I believe RAID1 would give me 1TB of storage with the ability to lose 2 drives. Whereas RAIDZ-1 would give me 2TB of storage and the ability to lose 1 drive.
Correct. And you have to decide which aspect is more important. Also keep in mind that only ~80% of the pool capacity should be used.

Are there other factors to consider when choosing between these two?
Well... a triple mirror can read data three times as fast as a RAIDZ; writing is identically slow. IOPS are important because PBS does its magic with several hundred thousand (or millions of) chunks. Actually, I would not implement it with HDDs without having a redundant SSD/NVMe attached as a "Special Device".
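To make the two layouts concrete, pool creation would look roughly like this; the pool name and the by-id device paths are placeholders:

# option 1: three-way mirror, ~1TB usable, survives two disk failures
zpool create backuppool mirror /dev/disk/by-id/hdd1 /dev/disk/by-id/hdd2 /dev/disk/by-id/hdd3
# option 2: RAIDZ-1, ~2TB usable, survives one disk failure
zpool create backuppool raidz1 /dev/disk/by-id/hdd1 /dev/disk/by-id/hdd2 /dev/disk/by-id/hdd3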

Possibly (but not recommended) two mirrored USB SSDs with a small capacity (0.3% of the pool capacity is enough!) may help drastically. Note that USB is never recommended. That said... I do use some in my homelab...
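Attaching such a pair as a special device is a one-liner; a sketch with hypothetical device names (the special vdev holds the pool's metadata, so losing it loses the pool, hence the mirror):

# add a mirrored special device for metadata to the existing pool
zpool add backuppool special mirror /dev/disk/by-id/ssd1 /dev/disk/by-id/ssd2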

Anecdote: one of my first production PBS instances was rotating rust only, implemented as a VM. It worked fine, for several weeks. After filling up some (single-digit) TB of backups the access times got so slow that the GUI was not able to list the content. Only a second attempt was successful, as the first, failing attempt had filled the ARC with half of the metadata of my request. The solution was to add a dual-use write-cache + metadata device. (This was not ZFS.) My point here is: even if it works at first glance, it might not be a good choice in the long run...

Sorry, your limited hardware will probably not deliver good performance. On the other hand "good" depends on the user, so if you are fine with this... go for it.

With your constraints I would not separate the OS from PVE- and PBS-data. You just don't have enough devices for this luxury.

Note that I didn't mention your NVMe. That is because I always go for redundancy, especially on a single node. (In a cluster that's not so critical.)

Have fun! :)
 
It depends; it can be if you run everything from the OS disk.
Boot times will be way slower compared to the NVMe disk.
But if you run just Proxmox VE on the HDD and use the rest for ISOs, I do not see a big reason why it should impact performance much.

But if you plan on using the HDD for the VMs as well, then you will see a big performance hit. (And that is why I would recommend storing the VM disks on the NVMe and not on the host OS disk.)
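If a VM disk already ended up on the wrong storage, it can be moved live on recent Proxmox VE versions; a sketch, where the VM ID, the disk slot and the target storage name are all placeholders:

# move disk scsi0 of VM 100 to the storage "nvme-vms" and remove the old copy
qm move-disk 100 scsi0 nvme-vms --delete 1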
 
And in general I recommend only using the Proxmox VE OS disk for the Proxmox VE OS itself and ISOs, as filling the disk up too much and/or reading/writing too much to one location at the same time will cause a bottleneck. (And this is true even if you use SSDs and/or NVMe drives.)
 
Thanks @UdoB .

I've got an i7-9700 and 64GB RAM.

I could potentially upgrade the storage if that's a good route; the motherboard only has a single M.2 PCIe slot, but it also has PCIe x4 and x16 slots, which are currently unused. I could add another NVMe SSD (or even two) with an adaptor card. I could also upgrade the SATA HDDs to SSDs. Just not sure which would be the best option in terms of performance, redundancy and cost.
 
To give some context, my configuration is as follows:
1x AMD EPYC 7282 Rome 16 cores
256GB ECC RAM
HPE smart array e208i-a sr gen10
8x Samsung 2TB SSD 870

I have configured the disks that are attached to the HPE smart array e208i-a sr gen10 as HBA disks.

Then I configured 2× 2TB SSDs as a ZFS mirror for Proxmox VE.
The local storage got changed to only allow ISOs, snippets and container templates, and local-zfs to only allow disk images and containers. (I also store my cloud-init templates on the Proxmox VE OS disk and thus need to allow VM disks to be stored on it, but I never put VMs on there that are going to be running.)
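For the record, that content restriction is a one-liner per storage; a sketch of the equivalent pvesm calls, using the storage names above:

# limit "local" to ISOs, snippets and container templates
pvesm set local --content iso,snippets,vztmpl
# limit "local-zfs" to VM disk images and container volumes
pvesm set local-zfs --content images,rootdir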

Then I configured 2× 2TB SSDs as a ZFS mirror for the VMs and called it VM-disk.
VM-disk only allows disk images and containers.

And lastly I configured 2× 2TB SSDs as a ZFS mirror for backups.
I then mount it as a directory called Backups via a workaround, since it needs to be a directory storage so that I can store backup files on it.
Backups is configured to only allow VZDump backup files.
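The workaround boils down to creating a ZFS dataset and registering its mountpoint as a directory storage; a sketch, with the pool and dataset names as assumptions:

# create a dataset on the backup mirror, mounted at /Backups
zfs create -o mountpoint=/Backups backuppool/vzdump
# register the mountpoint as a directory storage that only accepts backups
pvesm add dir Backups --path /Backups --content backup --is_mountpoint yes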

This configuration allows me to separate all the disks, so that if one disk gets really busy the rest of the system is not affected.
 
I've got an i7-9700 and 64GB RAM. I could potentially upgrade the storage if that's a good route ...
If possible I would:

Try to get a second NVMe disk that is the same as the one you already have, plus the PCIe adapter it needs for the extra slot.
Then do something like I did, but a bit different:

2× 1TB HDDs for the Proxmox VE OS as a ZFS mirror, with the ARC capped at either 8GB or 16GB of RAM. (This can be done via the Proxmox VE installer; the installer caps the ARC size by default to a maximum of 16GB of RAM.)
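If you want to change the ARC cap after installation, it is a ZFS module parameter; a sketch for capping it at 16GiB (the value is in bytes, 16 × 1024³):

# persist a 16GiB ARC limit and rebuild the initramfs
echo "options zfs zfs_arc_max=17179869184" > /etc/modprobe.d/zfs.conf
update-initramfs -u
# apply it immediately without a reboot
echo 17179869184 > /sys/module/zfs/parameters/zfs_arc_max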

2× 1TB NVMe for VM disks as a ZFS mirror. (Use the fastest disks for the VM disks, as it makes a big difference when you run a couple of VMs on the same disk, and it allows the Proxmox VE OS to use most to all of the ARC for itself to speed up access to its own frequently used files.)

Then just create a directory on your last 1TB HDD and use it for backups. (And if your VM data compresses really well, I would just stick to the built-in backup system and not try to get PBS running as a VM, as it may cause more issues than it solves: backups are for disaster recovery, but PBS as a VM requires a working Proxmox VE OS and an undamaged local disk, since all the VM configs are ONLY stored on the local disk and in the backup files.)
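Setting up that backup directory could look like this; the device name and mountpoint are placeholders (in /etc/fstab you would normally use the UUID instead of the raw device name):

# format the spare HDD, mount it permanently, and register it for backups
mkfs.ext4 /dev/sdc
mkdir -p /mnt/backup-hdd
echo '/dev/sdc /mnt/backup-hdd ext4 defaults 0 2' >> /etc/fstab
mount /mnt/backup-hdd
pvesm add dir backup-hdd --path /mnt/backup-hdd --content backup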

I think this would really help you out on performance while not costing that much extra (depending on the price of an extra NVMe disk that matches the one you already have). (But that is also up to you.)
 
You could also upgrade to all SSDs/NVMe drives, but to me it seems a bit of a waste of money.
Yes, SSDs are way faster than HDDs, but if you separate the OS, VMs and backup files correctly then they should not bottleneck each other, and keeping the HDDs is way cheaper since you already have them. (And when they eventually fail you can always consider upgrading to SSDs at a later date.)
 
If possible I would: Try to get a second NVMe disk that is the same as the one you already have ...
Thanks so much for your advice. I'm going to try to get a pair of NVMe SSDs. How important is it for the NVMe SSDs to be the same? I'm struggling to find one for sale. I've got an SK hynix PC611 NVMe 1TB, which is proving difficult to source. I've also got a Kioxia EXCERIA 1TB (currently in my laptop, but I can swap things around), which I've found for sale, but only used.
 
Come to think of it, if the drives are in a mirrored pair and one fails but a matching replacement can't be found, is the advice to replace both, one at a time, to end up with a new matching pair?
 
Generally it is advised to use a matching pair.
It is possible to use mismatched disks for ZFS mirrors / ZFS RAID1, but it is a real pain to get them to play nice.
https://serverfault.com/questions/1...ed-pair-be-a-different-size-than-another-pair

And note that if there is a speed difference between the drives then you are going to be limited by the slowest drive.

If possible use matching drives; if not, mixing drives is possible, but it is a pain to set up and maintain…
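As background for the replacement question above: a mirror member is swapped with zpool replace; a sketch with hypothetical pool and device names is below. A replacement smaller than the remaining disk will be rejected, which is the usual pain with mismatched drives, and replacing both disks one at a time (waiting for each resilver to finish) is indeed a common way to end up with a matching pair again.

# swap the failed disk for the new one and let ZFS resilver
zpool replace rpool /dev/disk/by-id/old-disk /dev/disk/by-id/new-disk
# watch resilver progress before trusting the pool again
zpool status rpool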
 
I would advise against using ZFS on consumer-grade SSDs, as it will destroy your SSDs in no time!
Proxmox writes an insane amount of logging data to the disk (10 to 15 GB per day), which apparently can't be reduced and kills your SSD in no time...
Search for threads about SSD TBW data.
Good luck!
 
OK. I've ordered a drive to match my existing one; it's used, but hopefully it will be fine. Going to order a PCIe adaptor card and set things up as you suggested. Thanks again!
 
10GB to 15GB a day is fine on consumer SSDs. (Windows 10/11 does just as much a day.)
The only thing that can happen is that your drive will report as not healthy once it eventually reaches its TBW. (Since it's running 24/7 the writes just creep up slowly, but I am willing to bet that many SSDs in PCs are also way over their TBW. No one cares, since it is not easily visible.)

And even if it hits its TBW, you can keep using it just fine.
You only need to monitor the wear-out and bad-sector detection a bit more frequently.
The TBW is just a guarantee from the manufacturer that the drive will do that many TB of writes before you may see bad sectors.
I have personally seen drives reach 30× their TBW before they started to fail.
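Checking those wear figures is easy with smartmontools; a sketch, where the device names are placeholders and the exact attribute names vary by vendor:

# NVMe drives report "Percentage Used" and "Data Units Written"
smartctl -a /dev/nvme0
# SATA SSDs typically expose Wear_Leveling_Count / Total_LBAs_Written
smartctl -a /dev/sda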
 
I find 10 to 15 GB a day excessive for a system only running 2 to 3 LXCs/VMs which are mostly idle!
Moreover, there is apparently no way to reduce this and set log levels to a more reasonable output.
Not good, IMO!
 
