Considering a server swap

inxsible

I currently have Proxmox 6.4 installed on a 1U server (Supermicro X9) with 4x 500GB drives in hardware RAID6. It runs 15 LXC containers and one PCoIP VM (Arch Linux). This is a standalone node; no clusters, HA, or anything of that sort.

I want to move to another 2U server based on a Supermicro X10 board. The primary reason for the move is that I want to run TrueNAS as a VM alongside my existing containers and VM, and the 2U chassis takes 12 disks, which I can use in a RAIDZ2 configuration for TrueNAS. It also has an HBA that I want to pass through to TrueNAS. I plan to use Proxmox's local storage (lvm-thin) for VMs and containers so that they are fast, since I intend to use 2.5" SSDs (250GB or 500GB) for the Proxmox OS install. I also intend to install another VM with Proxmox Backup Server, which would then back up all the containers to the TrueNAS storage.
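For the passthrough I'm assuming something along these lines on the Proxmox side (the VMID and PCI address are placeholders, and IOMMU has to be enabled first):

    # find the HBA's PCI address first (the address below is just an example)
    lspci | grep -i -e lsi -e sas

    # pass the whole HBA through to the TrueNAS VM (VMID 100 as a placeholder)
    qm set 100 -hostpci0 0000:03:00.0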

After the move, the 1U server will serve as a backup TrueNAS server, and I'll remove the hardware RAID card and replace it with either 4 SATA cables or another HBA.

Since there is also an update to Proxmox 7, I thought I could just install Proxmox 7 on the 2U server and then restore the backups of each container and VM on the new server. My VM/container backups will be saved on a NAS. Is that a sound strategy? Will I have any problems trying to restore 6.4 VMs/containers on Proxmox 7?
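The path I have in mind is roughly the following, with nas-backup as a placeholder storage name:

    # on the old 6.4 host: back up a container to the NAS storage
    vzdump 101 --storage nas-backup --mode snapshot --compress zstd

    # on the new 7.x host: restore the container onto local-lvm
    pct restore 101 /mnt/pve/nas-backup/dump/vzdump-lxc-101-<timestamp>.tar.zst --storage local-lvm

    # VMs are restored with qmrestore instead
    qmrestore /mnt/pve/nas-backup/dump/vzdump-qemu-102-<timestamp>.vma.zst 102 --storage local-lvm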

Another question I have is: should I be using LVM or ZFS for the Proxmox OS? I am leaning towards LVM, only because
  1. it's a single node
  2. the 2U server only has 2 drive bays at the back for the OS, so the only ZFS options would be RAID0 (stripe) or RAID1 (mirror)
  3. currently I am utilizing 125GB out of the 1TB of my Proxmox local storage
With RAID1 (RAID0 wouldn't give the advantage of redundancy) I would only get a max of 250GB of space (minus the OS partition and overhead), and I still need to install the TrueNAS VM and the PBS VM, which would take additional space. With LVM, I can use the 2x 250GB drives and get 500GB of space versus only 250GB with ZFS. I am concerned about more than 50% of the drive being full from the get-go (in the ZFS case).

The numbers simply double with 500GB drives; LVM still gives me almost twice the space of ZFS, albeit without the redundancy.
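For reference, the back-of-the-envelope numbers behind this, as a trivial shell sketch (ignoring the OS partition and filesystem overhead):

    drive=250
    echo "LVM across both drives: $((2 * drive)) GB"
    echo "ZFS mirror:             $drive GB"
    echo "ZFS mirror, 80% rule:   $((drive * 80 / 100)) GB"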

Thoughts or opinions ?
 
I also intend to install another VM with Proxmox Backup Server, which would then back up all the containers to the TrueNAS storage.
After the move, the 1U server will serve as a backup TrueNAS server, and I'll remove the hardware RAID card and replace it with either 4 SATA cables or another HBA.
I hope you mean to run the PBS and the TrueNAS that stores your PBS datastore on the 1U server. Otherwise this sounds like a bad idea: if you back up your VMs/LXCs to a PBS VM that stores its data on a NAS VM, then when your VM storage dies you won't be able to access your backups, because your PBS and TrueNAS VMs are lost too.
Since there is also an update to Proxmox 7, I thought I could just install Proxmox 7 on the 2U server and then restore the backups of each container and VM on the new server. My VM/container backups will be saved on a NAS. Is that a sound strategy? Will I have any problems trying to restore 6.4 VMs/containers on Proxmox 7?
That should work fine. Just remember to also back up your "/etc" folder, especially "/etc/pve", if you don't want to redo your firewall rules and so on from scratch.
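Something like this would do it, assuming the NAS is mounted at /mnt/pve/nas-backup (adjust the path to yours):

    # archive the host config; /etc/pve is a view of the config database,
    # so tar simply captures its current contents
    tar czf /mnt/pve/nas-backup/pve-etc-$(hostname)-$(date +%F).tar.gz /etc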
Another question I have is: should I be using LVM or ZFS for the Proxmox OS? I am leaning towards LVM, only because
  1. it's a single node
  2. the 2U server only has 2 drive bays at the back for the OS, so the only ZFS options would be RAID0 (stripe) or RAID1 (mirror)
  3. currently I am utilizing 125GB out of the 1TB of my Proxmox local storage
With RAID1 (RAID0 wouldn't give the advantage of redundancy) I would only get a max of 250GB of space (minus the OS partition and overhead), and I still need to install the TrueNAS VM and the PBS VM, which would take additional space. With LVM, I can use the 2x 250GB drives and get 500GB of space versus only 250GB with ZFS. I am concerned about more than 50% of the drive being full from the get-go (in the ZFS case).
Also remember that a ZFS pool should not be filled more than 80%, so you only get 200 GB (or 186 GiB) of usable capacity. I would always use some kind of parity. Your drives will fail sooner or later, and then you won't be able to access the data on your TrueNAS VM and so on for days, until you've bought a new drive, installed your PVE again, and restored your VMs/LXCs. With parity everything keeps working just fine: you buy a new drive and hot-swap it without even shutting down the server. ZFS RAID is, by the way, not only about saving your data when a drive dies. Without ZFS your data will corrupt over time (google "bit rot"), and ZFS can only repair that corruption if parity data is available. So a ZFS raid1 will keep your data healthy even while your SSDs are working fine. ZFS is like ECC for your disks, and not using a ZFS raid1 is like not using ECC RAM: it will work fine most of the time, but you can never really trust your data.
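Note the self-healing only works when ZFS has a redundant copy to repair from, and it's the regular scrub that actually walks the pool and fixes silently corrupted blocks. Roughly (rpool is the default Proxmox pool name):

    # read every block, verify checksums and repair from the mirror copy
    zpool scrub rpool

    # see progress and whether anything was repaired
    zpool status rpool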
 
I hope you mean to run the PBS and the TrueNAS that stores your PBS datastore on the 1U server. Otherwise this sounds like a bad idea: if you back up your VMs/LXCs to a PBS VM that stores its data on a NAS VM, then when your VM storage dies you won't be able to access your backups, because your PBS and TrueNAS VMs are lost too.
Sorry if I wasn't clear earlier. My aim is to have Proxmox on the 2U server with TrueNAS as a VM. My existing NAS (TrueNAS on bare metal) will be moved to the 1U server (the current Proxmox server). I intend to set up ZFS replication between these two TrueNAS servers (VM and bare metal) so that the data is replicated (onsite backup).

The PBS VM would then create backups on the main TrueNAS server (I still haven't decided whether the VM or the bare-metal box will be my main NAS). These would then be replicated by that NAS to the backup NAS, say nightly.
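As far as I understand, the replication itself would be driven from the TrueNAS UI, but underneath it boils down to incremental zfs send/receive, roughly like this (pool, dataset, and host names are all placeholders):

    # initial full replication of a snapshot to the backup NAS
    zfs snapshot tank/data@repl-1
    zfs send tank/data@repl-1 | ssh backup-nas zfs recv -F backuppool/data

    # nightly incremental: only the delta between the last two snapshots travels
    zfs snapshot tank/data@repl-2
    zfs send -i tank/data@repl-1 tank/data@repl-2 | ssh backup-nas zfs recv backuppool/data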


That should work fine. Just remember to also back up your "/etc" folder, especially "/etc/pve", if you don't want to redo your firewall rules and so on from scratch.
Good to know. I will make sure to back those folders up as well.
Also remember that a ZFS pool should not be filled more than 80%, so you only get 200 GB (or 186 GiB) of usable capacity. I would always use some kind of parity. Your drives will fail sooner or later, and then you won't be able to access the data on your TrueNAS VM and so on for days, until you've bought a new drive, installed your PVE again, and restored your VMs/LXCs. With parity everything keeps working just fine: you buy a new drive and hot-swap it without even shutting down the server. ZFS RAID is, by the way, not only about saving your data when a drive dies. Without ZFS your data will corrupt over time (google "bit rot"), and ZFS can only repair that corruption if parity data is available. So a ZFS raid1 will keep your data healthy even while your SSDs are working fine. ZFS is like ECC for your disks, and not using a ZFS raid1 is like not using ECC RAM: it will work fine most of the time, but you can never really trust your data.
True. 250GB drives may just not work with ZFS because of the 80% rule, given the number of containers/VMs I have or intend to have. But it does seem to me that you are recommending ZFS over LVM for the OS and also for the VM disks.

Would the Proxmox ZFS compete with the TrueNAS ZFS for memory, though, since I plan to install the TrueNAS VM with ZFS RAIDZ2? I have only 32GB of RAM (maxed out) on the server.

If I do go with ZFS RAID1 for the Proxmox OS, how much RAM should I dedicate to Proxmox so that I can maximize the RAM I can give to the TrueNAS VM without degrading the performance of all the containers?
 
Sorry if I wasn't clear earlier. My aim is to have Proxmox on the 2U server with TrueNAS as a VM. My existing NAS (TrueNAS on bare metal) will be moved to the 1U server (the current Proxmox server). I intend to set up ZFS replication between these two TrueNAS servers (VM and bare metal) so that the data is replicated (onsite backup).

The PBS VM would then create backups on the main TrueNAS server (I still haven't decided whether the VM or the bare-metal box will be my main NAS). These would then be replicated by that NAS to the backup NAS, say nightly.
But then you still have the problem that you lose your PBS at the same time you lose all your other VMs, so there is nothing to restore the VMs from. You would need to create a new PBS LXC, set it up again, and use the backup TrueNAS as a datastore (which should be read-only, because replication only goes in one direction, so every change made on the backup TrueNAS would be overwritten by the main TrueNAS at the next replication). Not sure if PBS can use read-only datastores.
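If you go that route, you can at least let ZFS itself enforce the read-only side between replications (dataset name is just an example):

    # keep the replicated copy read-only so nothing diverges between runs
    zfs set readonly=on backuppool/data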
Good to know. I will make sure to back those folders up as well.

True. 250GB drives may just not work with ZFS because of the 80% rule, given the number of containers/VMs I have or intend to have. But it does seem to me that you are recommending ZFS over LVM for the OS and also for the VM disks.
I've lost too much data to failing drives, failing backup drives, failing RAID arrays, and silent data corruption. So I'm fine with paying the extra money for enterprise SSDs, ECC RAM, and a lot of extra drives for backup and parity if that means my data is safer.
Right now I'm running 2x bare-metal TrueNAS servers and one PVE server. But you really need a lot of drives. I bought 8x 8TB drives, but of those 64 TB only around 12 TiB are actually usable for real data: 4x 8 TB as raidz1 in my main TrueNAS and 4x 8 TB in my backup TrueNAS, with everything replicated once per week. So only 32 TB is usable by the main TrueNAS. Of those 32 TB I lose 8 TB to raidz1 parity, so 24 TB are left. Those 24 TB can only be filled to 80%, so 19 TB are left. And all your SMB/NFS shares are mounted on clients that have write access to those shares, so if any of your clients is hit by ransomware, that ransomware will delete/encrypt all your stuff on the NAS. So you want to use snapshots with a long retention (I use 2 months to 1 year, for example) so you can undo changes caused by ransomware. But that also means no data can actually be freed for at least 2 months. If I download a 100GB zip, extract it, delete the zip, and move the unzipped files to another dataset, there is only 100GB of data, but ZFS needs 300GB to store it, and the 200GB of deleted data only gets freed once those 2 months or 1 year are over. This can really add up over the months... so I want at least 33% of the space reserved for snapshots, which brings those 19 TB down to 12.5 TB of usable storage. In the end I lose 80% of the storage I buy to parity, backups, and snapshots.
And if you use replication you must use snapshots, because it is the snapshots that get replicated. If you do weekly replications, for example, you want a snapshot retention of at least 2, or better 3, weeks: if snapshots get deleted on the main TrueNAS before they are replicated to the backup TrueNAS, replication can't be done incrementally anymore, and TrueNAS has to delete everything on your backup TrueNAS and copy everything over again. And a backup TrueNAS won't help you much against ransomware either if your retentions are too short. Let's say you only keep snapshots for 2 weeks. Ransomware infects one of your clients and secretly encrypts your data in the background without you noticing. It waits 1 month, then locks all the encrypted data and blackmails you with a popup saying you need to pay $1000 to get your data back. Meanwhile, every healthy file on your backup TrueNAS has been overwritten by encrypted data from your main TrueNAS, and because you can only roll back up to two weeks, there is no way to undo it.
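TrueNAS manages snapshot schedules and retention from its UI; as a plain-ZFS sketch of the idea, something like this run daily from cron (dataset name is an example):

    # take a dated snapshot
    zfs snapshot tank/data@daily-$(date +%F)

    # prune snapshots older than ~60 days; zfs destroy is what frees their space
    # (GNU date syntax; FreeBSD/TrueNAS Core would use: date -v-60d +%F)
    cutoff=$(date -d '60 days ago' +%F)
    zfs list -H -t snapshot -o name tank/data | while read -r snap; do
        day=${snap#*@daily-}
        [ "$day" != "$snap" ] && [ "$day" \< "$cutoff" ] && zfs destroy "$snap"
    done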
Would the Proxmox ZFS compete with the TrueNAS ZFS for memory, though, since I plan to install the TrueNAS VM with ZFS RAIDZ2? I have only 32GB of RAM (maxed out) on the server.
Yep. You would need RAM for the ARC on the host and RAM for the ARC inside your VM. VMs are completely isolated and don't share anything with the host. 32GB really isn't much for such a setup. The rule of thumb for ZFS's ARC is 4GB + 1GB of RAM per 1 TB of raw storage, or 4GB + 5GB of RAM per 1 TB of raw storage if you want to use deduplication. So 12x 4TB HDDs would mean 52 to 244 GB of RAM for your TrueNAS VM. It will run with less, but the less RAM you give ZFS, the slower it will get.
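What you can do on the PVE side is cap the host's ARC so it doesn't compete with the TrueNAS VM for RAM, for example limiting it to 4 GiB:

    # cap the host ARC at 4 GiB (value in bytes)
    echo "options zfs zfs_arc_max=4294967296" > /etc/modprobe.d/zfs.conf
    update-initramfs -u   # so the limit also applies at boot

    # or apply it right away without rebooting
    echo 4294967296 > /sys/module/zfs/parameters/zfs_arc_max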
If I do go with ZFS RAID1 for the Proxmox OS, how much RAM should I dedicate to Proxmox so that I can maximize the RAM I can give to the TrueNAS VM without degrading the performance of all the containers?
I think PVE + the host's ZFS will be fine with 6GB of RAM. PBS runs here in a VM with 3 GB of RAM; if you run PBS in an LXC it should need even less. So maybe 8 GB for PVE/PBS/ZFS + X GB for your other LXCs + Y GB for TrueNAS. That might leave you 16GB of RAM for TrueNAS, in which case it gets hard to make use of all 12 drive bays.
 
But then you still have the problem that you lose your PBS at the same time you lose all your other VMs, so there is nothing to restore the VMs from. You would need to create a new PBS LXC, set it up again, and use the backup TrueNAS as a datastore (which should be read-only, because replication only goes in one direction, so every change made on the backup TrueNAS would be overwritten by the main TrueNAS at the next replication). Not sure if PBS can use read-only datastores.
I do have an 11-year-old Core 2 Duo desktop that I can use for PBS instead of a VM. I just need to replace the PSU, since it died. I can throw in a 1TB drive, which might be sufficient to hold the VM/container backups.
I've lost too much data to failing drives, ...
Fair enough. The redundancy does sound appealing, although at a somewhat higher cost.
Yep. You would need RAM for the ARC on the host and RAM for the ARC inside your VM. VMs are completely isolated and don't share anything with the host. 32GB really isn't much for such a setup. The rule of thumb for ZFS's ARC is 4GB + 1GB of RAM per 1 TB of raw storage, or 4GB + 5GB of RAM per 1 TB of raw storage if you want to use deduplication. So 12x 4TB HDDs would mean 52 to 244 GB of RAM for your TrueNAS VM. It will run with less, but the less RAM you give ZFS, the slower it will get.
This right here has me wondering whether this is even a good idea (the whole TrueNAS-as-a-VM-on-Proxmox plan) given the server that I have. I have no intention of buying new hardware for it, so maybe I have to rethink this, because I don't want to end up in a state where the performance is unacceptable.
I think PVE + the host's ZFS will be fine with 6GB of RAM. PBS runs here in a VM with 3 GB of RAM; if you run PBS in an LXC it should need even less. So maybe 8 GB for PVE/PBS/ZFS + X GB for your other LXCs + Y GB for TrueNAS. That might leave you 16GB of RAM for TrueNAS, in which case it gets hard to make use of all 12 drive bays.
This part is moot if I don't end up doing this due to lack of available RAM.

The idea behind this whole thing was that I would have 2 TrueNAS servers, which would allow me to have an online backup rather than my current strategy of copying the data to external drives every now and then.

Much to think about...

Thanks for your responses.
 
Like I said, ZFS will work with less RAM; just don't expect it to be super responsive if you don't give it enough.
My backup TrueNAS server, for example, only has 16GB of RAM for its 4x 8TB HDDs, and at least for big sequential writes this works fine at the full 118MB/s (Gbit).
But HDDs are really bad at small random writes, and ZFS does a lot of those because of all the overhead (metadata is stored with 3 copies and so on). So you want at least enough RAM to fit all the metadata in RAM. ZFS also does a lot of expensive calculations (parity, checksums, and compression for every block), which need a lot of CPU power and some RAM. The more files you have, the more RAM you need to cache all the metadata. Whatever RAM is left over, ZFS uses for caching data, so files you open often are already in RAM instead of being read from the slow disks. This is especially important because ZFS is a copy-on-write filesystem: it tends to fragment, so big files are not stored in one place (where they could be read as one big sequential read) but spread across the disks (so reading them looks more like random reads, which HDDs are bad at). ZFS also has a prediction algorithm: it guesses which files you will open next and preloads them into RAM so you can access them faster.
The less RAM you allow ZFS to use, the less data can be cached. You can run arc_summary to see how good or bad your hit rates are. If there isn't enough RAM, the hit rates will drop; I try to keep them above 95%.
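If the arc_summary output format differs between versions, the overall hit rate can also be computed straight from the kernel counters:

    # overall ARC hit rate from the raw counters in /proc
    awk '/^hits/ {h=$3} /^misses/ {m=$3} END {printf "hit rate: %.1f%%\n", 100*h/(h+m)}' /proc/spl/kstat/zfs/arcstats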

On my main TrueNAS, the 4x 8TB HDDs + 5x 400GB SSDs + 3x 200GB SSDs share 32GB of RAM, and I would have liked to upgrade to 64GB, but 32GB is the max for that hardware too. Performance is OK most of the time, though; sequential reads are between 200 and 400MB/s for the HDD pool.

On my PVE host I have 64GB of RAM, and I wish I could upgrade to 128GB, because 64GB isn't enough to run all VMs/LXCs at the same time, but all RAM slots are already populated. That server has 6x 200GB SSDs, 2x 100GB SSDs, and 2x 3TB HDDs, but only 5 of the 200GB SSDs use ZFS, and I set my ARC to 8GB because I want my VM-storage SSDs to be fast.

So you really need to check what works best for you.
 
