Upgrade storage, how to proceed?

reynierpm

New Member
May 19, 2023
I have a Dell server where I am running PVE. It has one 512GB SSD and one 2TB HDD. PVE is installed on the SSD, and I am using the HDD for VM storage. Now I am planning to remove that HDD and add bigger ones (4 x 10TB HDDs in RAID5). How should I proceed with this? Meaning, how do I tell PVE there is new storage available? Will PVE see it right away?
 
If you have to remove the original HDD to install the bigger ones, won't you lose all your VM data, or do you have a backup available?

The drives should be picked up right away, and you should be able to create a ZFS pool on them from the GUI. You can then create ZFS datasets and add them as Proxmox storage for VMs.
 
I do have backups already, and I copy them over to my laptop to restore later. Thanks!
 
Also keep in mind that a 4x HDD raidz1 will make terrible VM storage. You only get the IOPS performance of a single HDD (so ~100 IOPS), and you would be forced to use a block size of at least 16K, or better 64K; otherwise you will lose half of the raw capacity to parity+padding overhead (so only ~16-18TB actually usable).
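For reference, the block size mentioned here is the ZFS volblocksize, which PVE sets per storage (the "Block Size" field in the storage settings). A hedged sketch of what creating a zvol with a larger block size looks like by hand (pool and volume names are examples; normally PVE creates these for you):

```shell
# Example only: a 32G zvol with 16K volblocksize on a hypothetical pool.
# In PVE you would usually set this via the storage's "Block Size" field
# rather than creating zvols manually.
zfs create -V 32G -o volblocksize=16k POOLNAME/vm-100-disk-0
```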
 
Hmmm, I only have interfaces for 4 HDDs, and I was thinking of RAID5 even if I lose 10TB, just to keep the data safe in case an HDD gets damaged. What would be your recommendation in this scenario?
 
Questions:
1. How much space do you actually need?
2. Do you have at least the same amount of storage for your backups?
3. Depending on 1 and 2, there are several ways to use your 4 disks:
3.1 RaidZ1 (RAID5 equivalent) with 4 disks, 30TB usable
3.2 Mirror with 2 disks, the other two for backup, 10TB usable
3.3 Striped mirror with 2x2 disks, 20TB usable, faster than RaidZ1

You can add further storage to a mirror setup by adding another mirror of 2 additional disks, but if you use RaidZ1 you would have to add another RaidZ1 of another 4 disks.

I strongly recommend 3.2 / 3.3; if you have questions, just ask.


You can accelerate your setup later by adding an SSD as a cache device.
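The two expansion paths above can be sketched as follows (device names are examples; `cache` adds an L2ARC read cache, which does not need redundancy):

```shell
# Grow a mirror-based pool two disks at a time: the new mirror vdev
# stripes with the existing one, adding capacity and performance.
zpool add POOLNAME mirror /dev/sde /dev/sdf

# Add a single SSD as an L2ARC read cache:
zpool add POOLNAME cache /dev/sdg
```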
 
If you need the 30 TB of storage, then option 3.1: RaidZ1 with 3 data disks and one redundancy disk.
 
@ubu let me put you in context. Those 4 HDDs were installed in a NAS, but the NAS stopped working, and it is a pain to send it back for warranty replacement. I have about 24 TB of movies/TV shows/personal-work information stored on them (of course I am creating a backup before removing the disks from the NAS and adding them to the server where I am using PVE). Having said that, here we go:

1. At least those 24TB, so I can move all that data back, and I'm pretty sure I will need more space.
2. Yes, I do; I am backing everything up to a brand new external hard drive.
3.1 I am aware that I am losing 10TB, but isn't RAID5 the best RAID to keep your data safe?
3.2/3.3 Not sure what you mean by these; they're new to me.

Can you help me to understand this? I am a newbie, not an expert.

Interesting, how much SSD would I need?

My only concern is that the server by default only accepts 4 disks. I could extend that by using a PCI Express card, but I have no idea where to put those HDDs inside the case.
 
But do you really need 30 TB if up until now you had 2 TB?

Also, compression on ZFS will give you 20-30% more space, depending on your data.
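Compression can be enabled per pool or per dataset; a minimal sketch (using the POOLNAME placeholder from below):

```shell
# lz4 is cheap on CPU and the usual choice:
zfs set compression=lz4 POOLNAME
# After writing some data, check how well it actually compresses:
zfs get compressratio POOLNAME
```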
 
You would need 2 SSDs for redundancy; they do not need to be especially big, it is just a read/write cache.

For Data Safety:

RAID1 (ZFS calls this a mirror) means all your data is written to 2 disks which are exact copies.
If one fails, you can replace it and the data will be copied from the other one.
If 011011010101 is your data:

Disk 1       | Disk 2
011011010101 | 011011010101


RAID5 (ZFS calls this RaidZ1) means you have multiple (N) data disks and 1 redundancy disk, so N+1 disks, in your case 3+1.
So here your data is written to disks 1, 2 and 3, and the calculated checksums are on disk 4.
If ONE of your disks fails, you can replace it and the missing data can be restored from the rest of the data and the checksums (I simplified a bit).

Disk1 | Disk2 | Disk3 | Disk4
0110  | 1101  | 0101  | 1110

In BOTH cases ONE drive can fail.

There is also RAID6 (ZFS RaidZ2), in which two disks can fail, and even RaidZ3, in which up to 3 disks can fail without data loss.


If you need that 24 TB, RaidZ1 is the solution for you (POOLNAME is the name you want to give the pool).

If your disks are sda, sdb, sdc, sdd: create one big ZFS partition on each with fdisk, then create the pool POOLNAME as follows:

zpool create -o ashift=12 POOLNAME raidz1 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1

zpool status
  pool: POOLNAME
 state: ONLINE
  scan: none requested
config:

        NAME          STATE     READ WRITE CKSUM
        POOLNAME      ONLINE       0     0     0
          raidz1-0    ONLINE       0     0     0
            sda1      ONLINE       0     0     0
            sdb1      ONLINE       0     0     0
            sdc1      ONLINE       0     0     0
            sdd1      ONLINE       0     0     0

I believe you can also do that from the Webgui.

Then add it as storage in your Proxmox web GUI.
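If you prefer the CLI, the same step can be done with pvesm (the storage ID chosen here is arbitrary):

```shell
# Register the pool as PVE storage for VM disks and container volumes:
pvesm add zfspool POOLNAME --pool POOLNAME --content images,rootdir
```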
 
Then I don't see a good solution without buying new hardware. 4x 10TB disks only let you store ~27TB of data, as a ZFS pool shouldn't be filled more than 80-90%: otherwise it gets slow, and it fragments faster when filled past 90% (and you can't defragment it, because it's copy-on-write). So you are already at the limit with your 24TB of data, and you will probably need to buy new hardware anyway if you don't want to get rid of old media. I at least would buy a new case that can fit 8x 3.5" disks, plus an LSI HBA card. Then you could attach up to 8 SATA HDDs to that HBA card and PCI-passthrough the card into a NAS VM to act as your new NAS. And I would buy 2 enterprise SATA SSDs, connect them to the onboard SATA ports, and create a ZFS mirror to use as PVE system disk and VM storage. Then you could use your existing 4x 10TB HDDs as a raidz1 just for your data, and when space runs out you could later buy 4 more 10TB HDDs, create another raidz1 with them, and stripe it to double the capacity (60TB) and performance.
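The ~27TB figure follows from simple arithmetic, sketched here (sizes are the thread's hypothetical 4x 10TB):

```shell
# Back-of-the-envelope check of the ~27TB figure:
disks=4; size_tb=10; parity=1
after_parity=$(( (disks - parity) * size_tb ))   # 30 TB after parity
# keep the pool below ~90% full to avoid slowdown and fragmentation:
practical=$(awk -v t="$after_parity" 'BEGIN { printf "%.0f", t * 0.9 }')
echo "after_parity=${after_parity}TB practical=${practical}TB"
# → after_parity=30TB practical=27TB
```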
From most to least secure:
raidz3 > 3-disk mirror (three disk raid1) > raidz2 (raid6) > striped mirror (raid10) > mirror (raid1) > raidz1 (raid5) > single disks (no raid) > stripe (raid0)
 
Does your hardware support ZFS? I.e. does the disk controller run in HBA or IT mode?

Do you just want to replace your NAS or are you thinking about proxmox as a NAS (either hosted or native)?

My advice would be to buy a SATA card and just use double-sided adhesive tape to stick the SSD anywhere there is a bit of space for it.

Then install TrueNAS if all you want is a NAS. Or, if you're running Proxmox, try to buy a bigger-capacity SSD if you can and install it as a boot/OS drive, but only use 60GB or so; then format the rest as LVM and use that to run your VMs.
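The "format the rest as LVM" step could look roughly like this (the partition, volume group, and storage names here are made up for the example):

```shell
# Turn the unused tail of the boot SSD into an LVM-thin pool for VM disks.
pvcreate /dev/sda4                      # the leftover partition (example name)
vgcreate vmdata /dev/sda4               # volume group on it
lvcreate -l 100%FREE -T vmdata/vmstore  # thin pool using all free space
# register it as PVE storage:
pvesm add lvmthin vmstore --vgname vmdata --thinpool vmstore
```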

Install the 4 x 10TB drives and use them to store your media and backups. As mentioned already, their performance will be no good as storage to run VMs from (so run the VMs from the SSD), but there are scenarios where your VMs will be able to access the data on them, e.g. a Plex server.

As your SSD is now a single point of failure, you should make sure you back up your VMs on a regular basis. Also, be sure to set up SMART monitoring on your disks so the system emails you if there are any issues with disk health, as you are only protected against a single disk failure.
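smartd (from smartmontools, shipped with PVE) handles that monitoring; a hypothetical /etc/smartd.conf line (untested config fragment — adjust the mail address and test schedule to taste):

```shell
# /etc/smartd.conf: monitor all disks (-a), enable offline testing (-o) and
# attribute autosave (-S), run a short self-test daily at 02:00, and mail
# root on any SMART trouble:
DEVICESCAN -a -o on -S on -s (S/../.././02) -m root
```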
 
Okay, I would need your help choosing the right LSI HBA card and SSD disks. Here are a few links; just let me know which one you think is the right one to buy:

LSI HBA card
SATA SSD:
So far I cannot find an enclosure for 8+ HDDs; if you could leave me some links, that would be amazing.

Again thanks everyone for the help provided!
 
To familiarize yourself with ZFS you can use image files (run this in the directory the paths point to, /home/USER here):

for i in 1 2 3 4 5 6 7 8 9; do dd if=/dev/zero bs=1M count=1000 of=disk$i.img; done

zpool create pool1 raidz1 /home/USER/disk1.img /home/USER/disk2.img /home/USER/disk3.img /home/USER/disk4.img
zpool status

zfs create pool1/testdir
zfs list

# clean up when done:
zpool destroy pool1
rm disk*.img
 
Neither of those is great for server workloads. Better would be an entry-level enterprise SSD: something like a Samsung PM893, Solidigm D3-S4510/D3-S4520, Kingston DC500M/DC600M, Micron 5300 PRO/5400 PRO, ...

So far I cannot find an enclosure for 8+ HDDs, if you could leave me some links that would be amazing.
Hard to say without any information on your existing hardware: motherboard form factor, PSU form factor, CPU cooler height, space requirements, how much you care about noise, rackable or not, ...

You probably want one with internal ports, unless you want to buy an external disk shelf. And one that either comes with IT-mode firmware or can easily be flashed from IR mode to IT mode. You don't want a RAID card but a dumb HBA for ZFS.
I personally use these, and you can get them for as cheap as $35 when cross-flashing them yourself: https://www.ebay.com/itm/304423364396 . The pre-flashed ones are a bit more expensive.
 
Right, apologies for not adding that information from the very beginning; this is the "server" I have right now.
An HBA with internal ports was exactly what I was thinking. I'm just not sure which one, since I don't know where to add extra disks inside the case, even though it has plenty of space. I am open to recommendations on this matter.
 
What about an external disk shelf? Would that be an option? I am not thinking of swapping the existing case, but of somehow adding the disks. I know my options are limited because this server only has space for four HDDs, but there must be a way to add extra disks to it, or not?
 
Those DAS aren't cheap. It would probably be cheaper to get an old computer, install something like TrueNAS on it, set up NFS/SMB/iSCSI, and access those from your PVE workstation over the network. 2x 10Gbit SFP+ NICs and a DAC aren't that expensive either. You would then have 4 empty slots in your PVE workstation for SSDs.
 