SSD recommendation & raidz vs mirror

Miktash

I want to test PBS to back up 18 TB of VMs (on Proxmox VE). What kind of SSDs would you recommend?

Buying enterprise SSDs is quite expensive, and maybe consumer-grade SSDs are fit for this kind of backup? After all, it's only sending incremental changes, right? What do you think?

I also wonder whether we need ZFS mirrors or if raidz is fine too? With SSDs, maybe raidz1 (or better, raidz2?) would be okay?

Thanks
 
Don't just look at the price per GB of storage when buying SSDs. Look at the price per TB of write endurance (TBW). You will see that enterprise SSDs are way cheaper in the long run.

With an SSD-only pool a raidz might be fine. Your IOPS performance will be limited to that of a single SSD, but maybe that's still fast enough for your needs.
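To make that concrete, here is one way to compare drives on price per TB of endurance instead of price per TB of capacity. The prices and TBW figures below are made-up placeholders, not datasheet values for real models:

```bash
# Hypothetical numbers for illustration only; plug in real datasheet values.
consumer_price=100    # EUR for a ~1 TB consumer SSD
consumer_tbw=600      # TB written over the warranty period
enterprise_price=350  # EUR for a ~960 GB enterprise SSD
enterprise_tbw=5000   # TB written over the warranty period

# Price per TB of endurance (lower is better)
echo "consumer:   $(echo "scale=3; $consumer_price / $consumer_tbw" | bc) EUR/TBW"
echo "enterprise: $(echo "scale=3; $enterprise_price / $enterprise_tbw" | bc) EUR/TBW"
```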
 
Use regular HDDs for data and enterprise SSDs as a ZFS special device.
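For reference, a pool with HDDs for data and a mirrored SSD special vdev could be created roughly like this. This is only a sketch with placeholder device names; adjust it to your hardware, and keep the special vdev redundant, because losing it means losing the pool:

```bash
# HDD data vdev as raidz2, plus a mirrored SSD special vdev for metadata
zpool create backup \
  raidz2 /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf \
  special mirror /dev/nvme0n1 /dev/nvme1n1
```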

The PBS manual says that we should use SSD-only storage. Is performance going to be OK when using a special device with HDDs?

And how big should the special device be?

The server already has 2x 120 GB SM863 as a boot device (I had those SSDs lying around…). Maybe I can repartition those and use them as a special device?
 
SSD-only is recommended for several reasons:
- downtime costs your company money, so you want the fastest storage (at least fast enough to saturate your network) to speed up restores and lower the downtime
- SSDs have no mechanical parts that could fail
- even with SSDs as special metadata vdevs, an HDD-based datastore might not be fast enough to finish jobs like a (re-)verify in time
- resilvering finishes faster, so there is less chance to lose all the backups

PBS stores everything as chunk files of at most 4 MB (about 1.7 MB on average here). So when you back up your 18 TB of VMs and your average chunk size is, say, 2 MB, this results in roughly 9 million chunk files. And these chunk files are deduplicated, so you can't read/write them sequentially.
HDDs have terrible seek times/IOPS, and that won't help in an emergency where you need to read 9 million files randomly spread across the disks to restore all your VMs because your complete PVE server's storage failed.
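As a quick sanity check of that number, this is just the arithmetic from the paragraph above:

```bash
# 18 TB of backed-up data at ~2 MB per chunk
echo $(( 18 * 1024 * 1024 / 2 ))   # => 9437184, i.e. roughly 9 million chunk files
```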

So it basically comes down to something like this:
Let's say a PVE node fails and your whole IT won't work anymore. Most people can't work, you can't serve customers, so you only have costs but no income. Is it worth spending more on SSDs if your IT is then back up again after some hours instead of some days? Only for really small businesses, non-profit organizations or homelabs might it be cheaper to use HDDs.
 
OK. So I'd better buy some enterprise SSDs and go for SSD-only storage :)

Other question: if it's writing chunks of changes, then how does it handle removing old backups? Does it need to send the whole VMs again (like it has to do initially) from time to time?
 
Other question: if it's writing chunks of changes, then how does it handle removing old backups?
It will only remove chunks that are no longer referenced, and only when you run a GC job. And you need to run prune jobs (or use backup retention settings so PVE will run prune jobs in the background) at least 24 hours and 5 minutes earlier.
Does it need to send the whole VMs again (like it has to do initially) from time to time?
Your backups will be split into chunks on the PVE node, and PVE will only upload chunks to the PBS that don't already exist there, because with deduplication no chunk has to be stored twice.
PBS doesn't use differential backups like Veeam, where you would need to store a new full backup from time to time. Every PBS backup is a full backup, but you still only need to upload the data that isn't already stored on your PBS. So if your guests aren't changing much, PBS backups should be really fast.
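If you want to drive that by hand instead of via scheduled jobs, the commands look roughly like this; the datastore, repository and backup-group names are placeholders:

```bash
# Prune old snapshots of one backup group (here: keep the newest 7 daily backups)
proxmox-backup-client prune vm/100 \
  --repository root@pam@localhost:mydatastore \
  --keep-daily 7

# GC is what actually frees chunks that are no longer referenced
# (and only chunks older than ~24h5m get removed)
proxmox-backup-manager garbage-collection start mydatastore
```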
 
Ok. Thanks

I'm still a bit confused about how VE knows what to send. Does it keep some kind of snapshot to track all changes since the last run? And if so, isn't that hurting VM performance?

I’m trying to understand how it all works before I start :)
 
First it has to read the whole virtual disk of your guest and chop it into chunks. If it's a VM and not an LXC, and that VM hasn't been stopped since the last backup, it will make use of dirty bitmaps to track which parts of the virtual disk changed, so it can skip reading the chunks it knows didn't change anyway. Then PVE hashes each chunk and asks the PBS whether a chunk with that hash is already stored there. If that chunk already exists on the PBS datastore, it doesn't need to be sent, as no chunk needs to be stored twice because of deduplication. So it only sends the new chunks, and before sending it compresses and optionally encrypts them.

So every backup is what other backup solutions would call a "full backup", but it doesn't need to read everything (thanks to dirty bitmaps) and only needs to send the difference since the last backup (thanks to deduplication, not because it is a differential backup, which it is not).
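As a toy illustration of that content-addressed idea (not the actual PBS wire protocol or on-disk format), deduplicated chunk storage boils down to naming each chunk after its hash and only writing it if that name doesn't exist yet:

```bash
# Toy sketch only: store a chunk file under its SHA-256 digest, skip duplicates.
# PBS does something conceptually similar in its .chunks/ directory.
store_chunk() {
  local chunk_file="$1" store_dir="$2"
  local digest
  digest=$(sha256sum "$chunk_file" | awk '{print $1}')
  if [ -e "$store_dir/$digest" ]; then
    echo "chunk $digest already stored, skipping"
  else
    cp "$chunk_file" "$store_dir/$digest"
    echo "chunk $digest stored"
  fi
}
```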
 
With our zfs send/recv solution, which we are using right now, the backup server is actually a Proxmox VE host with no VMs running on it. It's used as a backup server directly and was added to the cluster.

By doing so we can simply clone a ZFS zvol (a backup) on that server, launch a VM using that clone as its disk, and have almost instant access to a backed-up VM in case we need to restore some files from inside it, for example…

I suppose this cannot be done with PBS because there is no direct access to a backup image from VE. So we need to wait for a restore (copying) job to finish first. Or is there a way to do this differently?

Maybe by running VE and PBS on the same server? But probably not, because there is no image on the disk, only chunks?
 
If the filesystem the zvol is formatted with is supported, you can restore individual files from the PBS web UI without needing to restore the full guest first.

And you could install PVE+PBS bare metal on the same server and share the same ZFS pool for both. With that it might be quite fast to restore a guest using another VMID from local PBS to local PVE if you want to test something or access some files.
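For the "almost instant access" use case it might also be worth looking at mapping a VM disk archive from a snapshot as a local block device and mounting it read-only, without a full restore. A rough sketch; the repository, snapshot, archive and device names are placeholders:

```bash
# Map a VM disk image from a backup snapshot to a local loop device
proxmox-backup-client map vm/100/2024-01-15T02:00:00Z drive-scsi0.img \
  --repository root@pam@localhost:mydatastore

# Mount the mapped device (or one of its partitions) read-only to grab files
mount -o ro /dev/loop0p1 /mnt/restore

# Clean up afterwards
umount /mnt/restore
proxmox-backup-client unmap /dev/loop0
```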
 
Ok. I think we can work something out like that :) . Thanks for the tip.


I was just thinking about whether we should go the raidz1 or raidz2 route or straight for ZFS mirrors like we do with HDDs. But raidz is going to limit the speed to that of a single SSD, so maybe we'd better go for mirrors. It's just a bit sad that so much space is lost that way, because enterprise SSDs are expensive.
 
Throughput will scale with the number of disks in the raidz, just not IOPS. So you would need to test whether the IOPS performance is enough to saturate the network or the PVE pools.
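To make the layouts concrete, this is roughly how a pool of striped mirrors versus a pool of striped raidz1 vdevs would be created. Placeholder device names, and only a sketch showing two vdevs each:

```bash
# Striped mirrors: IOPS scales with the number of mirror vdevs
zpool create pbs-store \
  mirror /dev/sda /dev/sdb \
  mirror /dev/sdc /dev/sdd

# Striped raidz1 (3 disks per vdev): more usable space,
# but IOPS scales roughly per vdev, not per disk
zpool create pbs-store \
  raidz1 /dev/sde /dev/sdf /dev/sdg \
  raidz1 /dev/sdh /dev/sdi /dev/sdj
```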
 
For testing PBS I have a server with 24x 960 GB SSDs.

For the best performance I should go for 12 mirror sets, right?
This way I will get roughly 11 TB of usable storage.

Does it make sense to go for a configuration like 8x raidz1?
This way I will still stripe over 8 raidz1 vdevs (with 3 disks each).
I will get roughly 14 TB of usable storage.

I can even go for a configuration like 4x raidz1 (with 6 disks each).
This would give me roughly 17.7 TB of usable storage.

Even the last option gives me the performance of 4 SSDs, right? Shouldn't that be fast enough for PBS?
 
Yup.

You have to test and benchmark it. Do some test backups and restores and see where the bottleneck is.
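Besides real test backups and restores, the built-in client benchmark can be a quick first data point (the repository below is a placeholder):

```bash
# Rough TLS upload, compression and hashing throughput between client and PBS
proxmox-backup-client benchmark --repository root@pam@pbs.example.com:mydatastore
```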

The problem is that I will only start to notice performance problems after it has written many backups, right? Because the number of chunks written will increase and it may become a performance problem after a while?
 
For GC and verify jobs, yes. The more chunks you have, the longer a re-verify or GC will take. But backup and restore speeds should be fine to test. They can of course also get slower later (less SLC cache available the fuller the SSDs get), but it shouldn't be that bad. For GC and re-verify you can extrapolate how long they should take: let's say you fill your datastore to 8% and do a GC or full re-verify; it shouldn't take longer than about 10x that when the pool is 80% full.
Keep in mind that 20% of the pool should always be kept free for best performance, so the copy-on-write nature of ZFS can do its job.
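A simple way to keep an eye on that and do the extrapolation; the 30 minutes below is just a placeholder measurement:

```bash
# Check how full the pool is; try to stay below ~80% capacity
zpool list -o name,size,allocated,free,capacity

# Naive extrapolation: if GC took 30 minutes at 8% full,
# expect on the order of 10x that at 80% full
echo $(( 30 * 80 / 8 ))   # => 300 minutes
```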
 
