Hardware to buy (SSD)

Gregor

New Member
Nov 4, 2021
Hello,

I want to buy SSDs for the Proxmox Root-FS, the VMs/CTs, and a ZFS special device for two 6TB WD Red HDDs.

I found this model online for $115 (used): Seagate Nytro 1551 480GB 6G SATA Mainstream Endurance

Would you recommend it? I am running Nextcloud and some very small CTs (e.g. a Unifi Controller, an SMB server, WordPress).

If not - what would you recommend to buy?

Thanks!

Best regards
Gregor
 
I've got no experience with that drive, but on paper it looks fine. Keep in mind to buy at least 2 or 3 of them and put them in a mirror so it matches the reliability of your HDD pool, because if you lose them, all data on those HDDs will be lost too.
 
Hi Dunuin,

Thanks. I want to buy two and put them in RAID1. Somebody told me not to put the Root-FS, the VMs/CTs and the ZFS special device on one pool, but I cannot figure out why. Of course the read/write load on that pool is higher, but as I would be using enterprise-grade SSDs (2 in RAID1), it shouldn't be that big of a problem?

Furthermore, to mitigate the risk of losing data, I will buy another 6TB WD Red for remote backups, maybe with a QNAP and a virtualised Proxmox Backup Server. As I would not be using an SSD there, I would use LVM-thin. That should be it - what do you think?

I just found an HPE 400GB 6G SATA Write Intensive SSD for $115 which seems to fit, too. This is also a used item, but from a big seller.
 
Thanks. I want to buy two and put them in RAID1. Somebody told me not to put the Root-FS, the VMs/CTs and the ZFS special device on one pool, but I cannot figure out why. Of course the read/write load on that pool is higher, but as I would be using enterprise-grade SSDs (2 in RAID1), it shouldn't be that big of a problem?
There are several reasons why dedicated drives would be preferable. For example, with everything on one shared pool:
- A high load from the VMs can slow down the host itself. If the host has to wait, it can't do its job as a hypervisor, which affects all guests.
- You can't easily destroy and recreate your VM storage (if you want to test better ZFS options or need to expand because you are running out of space) without reinstalling and setting up your complete host OS.
- You can't easily back up your host OS (by writing the complete system disk to an image file at block level) without wasting a lot of space, because that image would also include all the VMs, which you don't want to back up that way since PBS does a better job.
- Once added to a pool, a special device can never be removed without destroying the complete pool. So if you use a partition of those SSDs as a special device for your HDD pool, you will never be able to remove one of the SSDs without destroying all the data on all the HDDs (see the sketch after this list).
- You get less performance because more stuff needs to share the same SSD.
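
For illustration, that shared layout would be created roughly like this (pool name and device names are just examples, not a recommendation):

zpool create tank mirror /dev/sdc /dev/sdd            # mirror of the two 6TB WD Reds
zpool add tank special mirror /dev/sda4 /dev/sdb4     # one partition of each SSD as a mirrored special device - from now on the SSDs are an integral part of the HDD pool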
Furthermore, to mitigate the risk of losing data, I will buy another 6TB WD Red for remote backups, maybe with a QNAP and a virtualised Proxmox Backup Server. As I would not be using an SSD there, I would use LVM-thin. That should be it - what do you think?
Sounds like a good idea. Just keep in mind that PBS with HDDs is very slow, because it chops all the big virtual disks into millions of small chunks. When doing the weekly GC or verify jobs, it will need to read millions of small files, and that is a workload HDDs are really bad at. It works fine on a small home server if you don't care that these jobs take hours, but PBS was designed with SSDs as backup storage in mind.
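
If you ever want to trigger such a job manually on the PBS host, it should be something like this (the datastore name "backup" is just an example):

proxmox-backup-manager garbage-collection start backup
proxmox-backup-manager garbage-collection status backup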
 
- Once added to a pool, a special device can never be removed without destroying the complete pool. So if you use a partition of those SSDs as a special device for your HDD pool, you will never be able to remove one of the SSDs without destroying all the data on all the HDDs.
I understand the first part. But why can't I remove one of the SSDs from the RAID1 without destroying all the data? Isn't this exactly the point of having RAID1?
Sounds like a good idea. Just keep in mind that PBS with HDDs is very slow, because it chops all the big virtual disks into millions of small chunks. When doing the weekly GC or verify jobs, it will need to read millions of small files, and that is a workload HDDs are really bad at. It works fine on a small home server if you don't care that these jobs take hours, but PBS was designed with SSDs as backup storage in mind.
I did this with 200GB of data, which took a long time, but I don't mind that (right now). Do you think that it will significantly lower the lifespan of the HDD? Or is it just... slow?

I am thinking about buying a PCIe card with 4 extra SATA ports. This way, I can buy two smaller (32GB), cheap SSDs for the PVE Root FS (RAID1). I would then have two RAID1 pools with 2 SSDs each, one for the PVE Root FS and one for the CTs/VMs and special devices. It wouldn't hurt me if one of the Root FS SSDs fails... and even if both fail, setting up PVE again won't be a problem, right? I mean, the ZFS configuration (HDD pool + special devices) is stored on the Root FS, but it can be configured again during setup without losing the actual data?
Having enterprise-grade SSDs for the CTs/VMs + special devices would be enough security for me, I think.
 
I understand the first part. But why can't I remove one of the SSDs from the RAID1 without destroying all the data? Isn't this exactly the point of having RAID1?
You can replace one at a time. Remove both and all data on the HDDs is lost, because when used as a special device, those SSDs are an integral part of that HDD pool too. You can add or remove a SLOG or L2ARC as you like, but that doesn't work with special devices. They can only be added or replaced, not removed.
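
So swapping a worn or failed SSD would look roughly like this, one disk at a time (pool and device names are just examples):

zpool replace tank /dev/sda4 /dev/sde4
zpool status tank    # wait until the resilver has finished before touching the second SSD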
I did this with 200GB of data, which took a long time, but I don't mind that (right now). Do you think that it will significantly lower the lifespan of the HDD? Or is it just... slow?
Every IO will reduce the life expectancy of an HDD because of mechanical wear. But I have no idea whether that will actually make a big difference if you don't run these GC/verify jobs on a daily basis.
I am thinking about buying a PCIe card with 4 extra SATA ports. This way, I can buy two smaller (32GB), cheap SSDs for the PVE Root FS (RAID1). I would then have two RAID1 pools with 2 SSDs each, one for the PVE Root FS and one for the CTs/VMs and special devices.
Using USB SSDs/HDDs might also work.
It wouldn't hurt me if one of the Root FS SSDs fails... and even if both fail, setting up PVE again won't be a problem, right? I mean, the ZFS configuration (HDD pool + special devices) is stored on the Root FS, but it can be configured again during setup without losing the actual data?
Importing ZFS pools isn't a problem. But setting up a new PVE might get very annoying depending on how much you modify it. PVE is not an appliance and you can use it as a normal Linux server. Some people install a desktop environment with a lot of programs. You need to set up a mail server if you want to be able to send alert emails in case a backup or a pool fails. It's also useful to install logging and monitoring tools. Sometimes you need to install an SMB server to be able to use your drives inside a VM, because bind-mounting won't work there. You might want to edit a lot of config files to optimize your kernel and services, edit your bootloader to be able to pass through stuff, or write your own scripts to automate things... it would take me many hours of work to install and set up PVE again.
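
For reference, getting the pool back after a reinstall is roughly this (the pool name is just an example):

zpool import            # lists pools found on the attached disks
zpool import -f tank    # -f may be needed because the pool was last used by the old installation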
 
You can replace one at a time. Remove both and all data on the HDDs is lost, because when used as a special device, those SSDs are an integral part of that HDD pool too.
Just to be clear: you are not saying that disconnecting both SSDs for a (short) period of time will result in losing all the data, even if both are working fine and are reconnected? Because I would guess that the unavailability of the special devices would lead to a notification that the whole pool cannot and will not be used until all integral parts are available.
Reading this brings me to the conclusion that I should not think about using ZFS for my HDDs in the first place? Maybe it is better to configure them as LVM(-thin)?

Using USB SSDs/HDDs might also work.
I was thinking about M.2 SSDs with a SATA adapter, or smaller SATA SSDs, which do not cost that much in the consumer segment.

Importing ZFS pools isn't a problem.
Good to hear :)

But setting up a new PVE might get very annoying depending on how much you modify it. PVE is not an appliance and you can use it as a normal Linux server. Some people install a desktop environment with a lot of programs. You need to set up a mail server if you want to be able to send alert emails in case a backup or a pool fails. It's also useful to install logging and monitoring tools. Sometimes you need to install an SMB server to be able to use your drives inside a VM, because bind-mounting won't work there. You might want to edit a lot of config files to optimize your kernel and services, edit your bootloader to be able to pass through stuff, or write your own scripts to automate things... it would take me many hours of work to install and set up PVE again.
I see, you are a very professional PVE user. For me it won't be that much work: import the pools, set up the network configuration, import the CTs/VMs. I must say that the main purpose of this machine is the Nextcloud server. The other CTs are only used for smaller use cases (Unifi Controller, SMB share for the scanner, a small website).

Really appreciate your time and help!
 
Just to be clear: you are not saying that disconnecting both SSDs for a (short) period of time will result in losing all the data, even if both are working fine and are reconnected? Because I would guess that the unavailability of the special devices would lead to a notification that the whole pool cannot and will not be used until all integral parts are available.
If both special devices aren't available for a short time, the complete pool will degrade and won't be usable until you add at least one of the special devices again and manually clear the status of the pool. As long as no data is lost on the SSDs, it should be possible to bring the pool online again later. But I wouldn't try to do that. If you want to do some work on the SSDs, you should export the complete HDD pool so it isn't running at all and doesn't get into a degraded state in the first place.
But when using a partition on those SSDs as special devices, you can't decide later that you need more storage for VMs and remove the special devices.
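
A rough sketch of that maintenance procedure (the pool name is just an example):

zpool export tank    # cleanly stop the pool before touching the SSDs
# ...do the maintenance and reconnect everything...
zpool import tank
zpool clear tank     # only needed if the pool ended up in a degraded/error state anyway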

Reading this brings me to the conclusion that I should not think about using ZFS for my HDDs in the first place? Maybe it is better to configure them as LVM(-thin)?
That depends on what you want to do with it and how important data integrity is to you.
I was thinking about M.2 SSDs with a SATA adapter, or smaller SATA SSDs, which do not cost that much in the consumer segment.
That would be a cheap option if you've got enough free PCIe slots. But you should check two things:
1.) Is your mainboard able to use bifurcation? For such a dual-M.2 card in a single PCIe x8 slot you need a mainboard whose UEFI offers options to switch the configuration of that PCIe slot. If that slot only works in the default "x8" mode, only one of the two M.2 slots would be usable. If you want both M.2 slots to be usable, you need bifurcation to set that PCIe slot to "4x4" mode.
2.) Check if your UEFI can boot from NVMe SSDs. Especially with older mainboards, or mainboards running in CSM mode, that isn't always possible.
 
If both special devices aren't available for a short time, the complete pool will degrade and won't be usable until you add at least one of the special devices again and manually clear the status of the pool. As long as no data is lost on the SSDs, it should be possible to bring the pool online again later. But I wouldn't try to do that. If you want to do some work on the SSDs, you should export the complete HDD pool so it isn't running at all and doesn't get into a degraded state in the first place.

That depends on what you want to do with it and how important data integrity is to you.
What do you mean by "what you want to do with it"? The HDD RAID will be used as data storage for Nextcloud, so mostly photos, videos, documents and such. Thus, integrity is quite important. So I conclude I should use ZFS then?
Regarding the first quote: would you recommend using ZFS for the HDDs if I do not add a special device to that pool?

That would be a cheap option if you've got enough free PCIe slots. But you should check two things:
1.) Is your mainboard able to use bifurcation? For such a dual-M.2 card in a single PCIe x8 slot you need a mainboard whose UEFI offers options to switch the configuration of that PCIe slot. If that slot only works in the default "x8" mode, only one of the two M.2 slots would be usable. If you want both M.2 slots to be usable, you need bifurcation to set that PCIe slot to "4x4" mode.
2.) Check if your UEFI can boot from NVMe SSDs. Especially with older mainboards, or mainboards running in CSM mode, that isn't always possible.
Actually, I was thinking about a PCIe card with an ASM1061 and four SATA ports. The ASM1061 is already used by my mainboard (J5040-ITX), so I think it should work. If M.2 turns out to be a problem, I could easily buy (consumer) SSDs, as they are only for PVE.
 
What do you mean by "what you want to do with it"? The HDD RAID will be used as data storage for Nextcloud, so mostly photos, videos, documents and such. Thus, integrity is quite important. So I conclude I should use ZFS then?
Regarding the first quote: would you recommend using ZFS for the HDDs if I do not add a special device to that pool?
You can create a ZFS pool using partitions on the SSDs as special devices for the metadata and the HDDs for the data. I just wanted to point out that this should be a well-considered decision, because unlike a SLOG or L2ARC, this can't be changed after creation without destroying the complete pool with all the data on it. If you are fine with that, there should be no problem.
For ZFS you should read about its features. You don't use ZFS because you want RAID. If you just want RAID, there are faster alternatives like mdadm or HW RAID that consume less RAM and CPU, are more flexible to expand and are better for SSD wear. You use ZFS if you want its features like replication, deduplication, block-level compression, bit rot protection, encryption and so on. With its checksumming and CoW concept, ZFS adds an additional layer of data integrity compared to, for example, a normal SW RAID or HW RAID, but that also comes at the cost of additional requirements and more overhead.
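
A few of those features as rough examples (pool/dataset names are just placeholders):

zfs set compression=lz4 tank                 # transparent block-level compression
zpool scrub tank                             # read everything and verify checksums (bit rot check)
zfs create -o encryption=on -o keyformat=passphrase tank/private   # natively encrypted dataset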
Actually, I was thinking about a PCIe card with an ASM1061 and four SATA ports. The ASM1061 is already used by my mainboard (J5040-ITX), so I think it should work. If M.2 turns out to be a problem, I could easily buy (consumer) SSDs, as they are only for PVE.
That PCIe slot is only "PCIe 2.0 x1", so it can't handle more than a total of about 400MB/s. Don't expect the SSDs to be fast if all 4 SATA ports need to share those 400MB/s. And it looks like your mainboard only supports a maximum of 16GB RAM. For PVE itself you want 2GB, 4GB for a Nextcloud VM, and ZFS needs a lot of RAM for caching too. The rule of thumb for best ZFS performance would be 4GB + 1GB of RAM per 1TB of raw storage, or 4GB + 5GB of RAM per 1TB of raw storage if you want to use deduplication. So with your drives, around 17GB of RAM just for ZFS would be good to have. It will work with less RAM, but the less RAM you allow ZFS to use, the less responsive it gets. I would give ZFS at least 4-8GB of RAM. That would be 10 to 14GB of RAM just for PVE and a single Nextcloud VM, and only 2 to 6GB of RAM would be available for all other VMs like your SMB server and so on.
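
If RAM gets tight you can cap the ARC. On PVE that is usually done with a module option, roughly like this for an 8GiB limit (the value is just an example, in bytes):

echo "options zfs zfs_arc_max=8589934592" > /etc/modprobe.d/zfs.conf
update-initramfs -u    # then reboot so the new limit takes effect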
 