Newbie question about storage options

chun02160

New Member
Oct 16, 2022
Hello

I am very new to Proxmox and trying to configure/install it properly before I start putting more on it.
I have a Dell OptiPlex 3050 Micro with 1x 1TB NVMe and 1x 1TB SATA SSD and 32GB RAM. My intention is to initially set up a few home management services, including Home Assistant and a UniFi Controller as immediate needs, but later also Pi-hole, a VPN, maybe a remote Linux server, and so on. Since I plan to control IoT devices through Home Assistant, I think my server needs to be highly available.

Anyway, I have been watching and reading some tutorials, but I am still not familiar with a lot of this. I saw some people install ZFS RAID1 on small SSDs and then create an additional ZFS pool after installation. But in my case I only have two drives, so I would have to select ZFS RAID1 with the full 1TB allocated, and that's it; I cannot create more disks after installation. I also tried ZFS RAID1 with a smaller size, like 200GB, but after installation, when I run fdisk, the rest of the unpartitioned space does not show up. I also heard ZFS RAID1 may cause high RAM usage?

Anyway, I chose ZFS because I saw many people suggest it, but I am not sure what's good for my case. I saw XFS and all sorts of options. Can someone recommend a good setup for this? I am stuck at the installation stage because of it.

Thank you so much
 
I am by no means a ZFS or Proxmox master, but with only two storage devices you are somewhat limited right now. ZFS does use some RAM, and the things it can do are really cool, but in your situation I can't see much benefit over simply using ext4 with LVM storage locally. You definitely want the mirroring of a RAID1, though. One thing to note is that adding additional storage in Proxmox is very easy and quite flexible: NFS and iSCSI, among others. If you are just getting started, don't overthink it. There is a lot to learn, so keep things basic for now. You can host all of your VMs on local storage, alongside Proxmox itself. I had read somewhere that the Proxmox install took up the entire allocated storage, but I don't find that to be the case.

Without a doubt, additional storage is phenomenally better, but it's not required at first. The good thing is that once you get additional storage, it is SO simple to move the VMs to it as desired.
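For example, a minimal sketch of moving a guest's disk later from the CLI (the VM ID 100, the disk name scsi0, and the storage name "newstorage" are all just placeholders):

# see which disks the VM has (hypothetical VM ID 100)
qm config 100
# move its disk to the new storage and drop the old copy
qm move_disk 100 scsi0 newstorage --delete 1

The same thing is available in the web UI on the VM's Hardware tab, so no reinstall needed.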
 
Thank you for the reply! The hardware I have can only take two drives (unless I use USB) due to the form factor.

So to keep it simple, which of these would be better?
1. Use a 128GB NVMe (I can swap my old one in) with ext4 to install Proxmox, then the 1TB SSD for directory and VM storage?
2. Use the 1TB NVMe with ext4 for the installation, and the 1TB SSD the same as in 1
3. 1TB RAID1
Or something else? For 1 and 2, I guess I am giving up the automatic backup (mirroring) feature.

Also, ext4 vs XFS if I go that route?
 
I would create a ZFS mirror with the 1TB NVMe and the 1TB SATA SSD and use the full 1TB. The PVE system and the VM/LXC storage can then dynamically share the whole 1TB (maybe keep a few GBs unpartitioned to add a swap partition later). But that way the whole pool will be slowed down to the performance of the SATA SSD, so there is no speed benefit from the NVMe.
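Regarding the RAM usage you mentioned: on Linux, ZFS's ARC cache will by default grow to about half the RAM, but it can be capped. A sketch (the 8 GiB value is only an example, tune it to your workload):

# limit the ZFS ARC to 8 GiB (value in bytes: 8 * 1024^3)
echo "options zfs zfs_arc_max=8589934592" >> /etc/modprobe.d/zfs.conf
update-initramfs -u
# check after the next reboot:
arc_summary | grep -i "arc size"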
For backups I would add a USB HDD/SSD.
 
Thank you so much

A few follow-up questions. With a ZFS mirror, I thought these 1TB drives would mirror each other and act as a sort of backup of each other, so do I need another drive for backup?

Also, you mentioned a slowdown. Is it due to the mirroring? Is it noticeably slow, or not bad?

Thanks
 
A few follow-up questions. With a ZFS mirror, I thought these 1TB drives would mirror each other and act as a sort of backup of each other, so do I need another drive for backup?
Yes, it's redundant, so one disk may fail without losing data and without downtime. But this will only protect your data when a disk fails. You could still lose data because...:
1.) you get a power outage, hardware failure or kernel crash, so your RAM-cached writes are lost, which corrupts data on both mirrored disks at the same time
2.) you delete something by accident, which deletes it from both disks at the same time
3.) someone steals your server, the server burns down, a lightning strike fries it, water damage, ...
4.) you are hit by ransomware
These are all cases where a raid won't help you and data will be lost. Only a real backup helps here. It would be even better to have two USB HDDs for backups and rotate them, so one is always offsite.
Also, you mentioned a slowdown. Is it due to the mirroring? Is it noticeably slow, or not bad?
You have a fast NVMe SSD and a slow SATA SSD. With a mirror, every write has to go to both SSDs in parallel. The fast NVMe will be finished first, but it has to wait for the slow SATA SSD to complete the write too before it can continue with the next one. So using an NVMe SSD + SATA SSD in a mirror is as slow as just using two SATA SSDs when it comes to write performance.
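If you want to see the effect yourself, a quick sync-write test with fio works. A sketch (the test file path and sizes are just examples; delete the file afterwards):

# 4K sync writes; the mirror can only finish each write when the slower disk does
fio --name=mirrortest --filename=/rpool/testfile --size=1G --rw=write --bs=4k --ioengine=sync --fsync=1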
 
Thank you so much, this makes a lot of sense. One more question though. I cannot add more drives to this motherboard (it can only take 1 NVMe and 1 SATA), so a USB SATA drive seems like the only way to do backups, but I'd like to hear if there is an alternative?

Thank you again!
 
No, then you will have to use USB disks. That's the problem with these small thin clients. They are power efficient precisely because they lack a lot of the features and expandability that would drive idle power consumption up. So you have to choose between a big power-hungry machine that offers everything you might want and a small but efficient box where you have to cut corners.

But just for backups, USB disks are fine. Especially when rotating multiple backup disks, you want them external or hot-swappable anyway.
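A sketch of the setup, assuming the disk appears as /dev/sdb with a single partition (device, paths and storage name are placeholders, adjust to your system):

# format and mount the USB disk
mkfs.ext4 /dev/sdb1
mkdir -p /mnt/usb-backup
mount /dev/sdb1 /mnt/usb-backup
# register it in PVE as a directory storage for backups
pvesm add dir usb-backup --path /mnt/usb-backup --content backup
# manual backup of one guest (hypothetical VM ID 100)
vzdump 100 --storage usb-backup --mode snapshot --compress zstd

Add an /etc/fstab entry or a systemd mount unit so the disk is mounted again after a reboot.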
 
@chun02160, I am also a newbie and have a very simple installation. I moved Home Assistant from VirtualBox on a Linux laptop to a very small NUC (8 GB RAM, 256 GB SSD). With my first setup I left everything on the NUC, including backups. Later I added a 128 GB USB stick for the backups. Installing the additional disk and moving the backups to the USB stick was not so difficult. However, the USB stick interferes with the ZigBee stick (for Home Assistant), and I had to move both away from the NUC (with USB cables).

In the meantime I have also installed Pi-hole. That's it. I have a large Home Assistant installation which consumes almost 6 GB RAM (not always); Pi-hole consumes less than 500 MB RAM. It all works fine. I also tested a cluster (with a second old laptop), but that is another story. I use simple LVM and ext4.

Regarding backups, I am thinking of regularly copying the backups from the USB disk to another storage, maybe Dropbox, in the future (I do this with my other data, and that is how I share data between all my laptops; encrypted, of course). In that case I should be able to easily restart Home Assistant if something goes wrong with my NUC. But this is also another (future) story.
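If I do that, something like rclone would probably fit. A sketch, assuming a Dropbox remote named "dropbox" was already set up with rclone config (the local path is just an example):

# copy the local backup folder to Dropbox
rclone copy /mnt/usb-backup dropbox:pve-backups --progress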
 
@Dunuin

Thanks all for the reply.

I actually have a few more follow-up questions. After digging more, it seems I can also install Debian 11 and then install Proxmox on top.
I've been thinking about a few options:
Option 1:
- Partition 128GB on both the NVMe and the SSD and install the Debian root with RAID1.
- Partition 32GB on both the NVMe and the SSD for Debian swap
- 100MB on either the NVMe or the SSD for boot
- The remaining space on both the NVMe and the SSD for a ZFS pool, directory storage, etc.
- A USB SATA SSD for backup; though as @OberLeo said, I am a bit worried about USB interference with my Zigbee USB stick

But I am curious if this setup makes sense and whether it would make a difference compared to installing Proxmox directly with the 1TB NVMe and SSD in ZFS RAID1? Technically it all lives on the same disks, just partitioned differently, so I am not sure if each partition would get different performance?

Alternatively, I was thinking:
- A USB SATA dock with 2x 128GB SSDs for the Proxmox boot in ZFS RAID1
- The internal NVMe and SSD for directory storage and the ZFS pool
- Another USB SATA drive (or another drive in the SATA dock) for backup
But with so many drives relying on USB, I am not sure if this is good?

I know I am somewhat overthinking this, but I really want to avoid a reinstall-and-restore stage in the future if I can. I appreciate any feedback!!
Thank you so much
 
Option 1:
- Partition 128GB on both the NVMe and the SSD and install the Debian root with RAID1.
Keep in mind that the Debian installer only supports mdadm as software raid. You can do that (it has worked fine here for years) but it isn't officially supported, and there are known problems where mdadm raid1 can cause data corruption, so the staff usually recommends ZFS instead.
And in my opinion 128GB is too much, unless you want to store a bunch of ISOs, LXC templates or other files. Without taking those extra files into account, a 32GB root filesystem would be plenty of space for just the Debian/PVE system alone.
- Partition 32GB on both the NVMe and the SSD for Debian swap
- 100MB on either the NVMe or the SSD for boot
I would also mirror the boot partition and make it at least 512MB.
- The remaining space on both the NVMe and the SSD for a ZFS pool, directory storage, etc.
Then you will have to set up these mirrored ZFS partitions after you have installed the proxmox-ve package, as Debian doesn't support ZFS out of the box.
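Roughly like this, as a sketch (the by-id paths, partition numbers and pool name are placeholders for your actual leftover partitions):

# create the mirrored pool on the two leftover partitions, using stable by-id names
zpool create -o ashift=12 tank mirror /dev/disk/by-id/nvme-XXXX-part4 /dev/disk/by-id/ata-XXXX-part4
# tell PVE to use it for VM/LXC disks
pvesm add zfspool tank-vm --pool tank --content images,rootdir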
But I am curious if this setup makes sense and whether it would make a difference compared to installing Proxmox directly with the 1TB NVMe and SSD in ZFS RAID1? Technically it all lives on the same disks, just partitioned differently, so I am not sure if each partition would get different performance?
I don't see the point of mixing mdadm raid and ZFS raid. If you want ZFS for raid, do a normal PVE installation.
If you want software raid but don't want ZFS because of its massive overhead and resource consumption, do a Debian install and skip ZFS entirely.
Alternatively, I was thinking:
- A USB SATA dock with 2x 128GB SSDs for the Proxmox boot in ZFS RAID1
- The internal NVMe and SSD for directory storage and the ZFS pool
- Another USB SATA drive (or another drive in the SATA dock) for backup
But with so many drives relying on USB, I am not sure if this is good?
I would avoid USB as much as possible. You usually run into fewer stability, performance and data integrity problems when sticking to internal SATA, SAS or NVMe.

If USB really is a problem for a backup disk, you could always buy some cheap NAS box and do the backups to an NFS/SMB share.
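That is a one-liner on the PVE side. A sketch with a made-up server IP and export path:

# add an NFS export from the NAS as backup storage
pvesm add nfs nas-backup --server 192.168.1.50 --export /backups --content backup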
 
Thank you so much again

I understand it better now and will keep the USB advice in mind!
I decided NOT to use mdadm after reading your comment and other posts. Then I was thinking about:
1. Install Debian: 32GB root (ext4), 16GB swap, and a 512MB boot partition on the NVMe
2. Install Proxmox from Debian (following the Proxmox docs)
3. After installation, in the Proxmox environment, partition the SSD with ZFS into three parts: 32GB root, 16GB swap, and 512MB boot
4. Move/migrate from 1 to 3
5. Update to boot from the SSD
6. In Proxmox, format/clean the NVMe and repartition it like in 3
7. Now set up the pool!
Then use the rest of the space to partition for VMs and directory storage.

Is this going to work? If so, I cannot seem to find a way to do the migration (starting at step 4). Can you help me with the command line or tell me what I need to do?
Also, would this make a difference compared to using 1TB ZFS RAID1 for the Proxmox boot (i.e. installing Proxmox directly without Debian)?

I know I am overthinking it; I just really want to separate the VMs from the root filesystem.

If this is not going to work, then I guess I will install ZFS RAID1 on the 1TB NVMe and SSD and put the VMs in the same pool, as a last(?) resort.
Thanks for the help again!
 
I don't get why you want to install Debian in the first place. The Debian installer only supports mdadm raid; the PVE installer only supports ZFS raid. Why not just install PVE directly from the PVE ISO with a ZFS raid1? What is the point of installing Debian first? You usually only want to do that if you want an mdadm raid or a LUKS-encrypted PVE installation, or if your hardware isn't supported by the PVE installer so you are forced to use the Debian installer.

And I don't think it's possible (at least not easy) to migrate from a Debian installation to ZFS.
 
Ah, OK. Maybe I'm overthinking it as a beginner. Thanks for all the help! I am going to install ZFS RAID1 from the Proxmox ISO!
 
