iSCSI or NFS

Jeroen Visser

Oct 12, 2016
Hello all,

I'm new to Proxmox and True/FreeNAS, and can't find the info I'm after using Google or the forum search.

I hope to draw some expert advice here.

We are a small but growing company, and our current one-server mess just doesn't cut it any more. To prevent future headaches I decided to virtualize the server end, so we can have multiple low-cost servers on a storage system that will both carry the load and take care of data integrity. We're an 11-user company, but we need to run 4 different servers and 10 or so Windows VMs, and given the stability of non-Windows servers we opted to go as open source as possible.

We decided to go with Proxmox and FreeNAS, and I'm currently very happy with the performance of the iSCSI LUNs. But I quickly found I won't be able to use snapshots, create ISO storage, or assign a backup LUN.

This is a bit of a no-go, unless I decide to export a LUN per VM and snapshot the filesystem on the FreeNAS end, which would be less than ideal, I think.
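For reference, the LUN-per-VM workaround would look roughly like this on the FreeNAS shell (a sketch; the pool and zvol names below are made up, not from this thread):

```shell
# Snapshot the zvol backing one VM's LUN before risky changes.
zfs snapshot tank/vm-101-disk-0@pre-upgrade

# Verify the snapshot exists.
zfs list -t snapshot -r tank/vm-101-disk-0

# Revert the whole LUN to that point in time if needed
# (the VM must be shut down first, or it will see the disk change underneath it).
zfs rollback tank/vm-101-disk-0@pre-upgrade
```

The downside, as noted, is that the snapshots live outside Proxmox's view, so you lose the integration with its snapshot and backup UI.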

I've now created a new VG on the Proxmox end, put ext4 on it and mounted it on the local filesystem. I can store qcow2 files, but I have an extra filesystem layer in the stack, and it comes with a performance hit. I also can't share it, as I can't mount a non-cluster-aware filesystem twice without inviting disaster.
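A rough reconstruction of that workaround, assuming the iSCSI LUN shows up as /dev/sdb on the host (device path and names are illustrative):

```shell
# Turn the iSCSI LUN into an LVM volume group with one big logical volume.
pvcreate /dev/sdb
vgcreate vg_iscsi /dev/sdb
lvcreate -l 100%FREE -n lv_images vg_iscsi

# Put ext4 on it and mount it locally.
mkfs.ext4 /dev/vg_iscsi/lv_images
mkdir -p /mnt/iscsi-images
mount /dev/vg_iscsi/lv_images /mnt/iscsi-images

# /mnt/iscsi-images can then be added as "Directory" storage in Proxmox
# to hold qcow2 images and ISOs.
```

As the post says, ext4 is not cluster-aware, so this mount must only ever exist on one host at a time.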

Am I the first to run into this? Or does everybody opt for an NFS-exported filesystem on the NAS end?

We will be connecting a 20-core Xeon server to a TrueNAS Z20, using SFP+ ports (2 in the NAS, 2 in the server, 2 in the switch) as well as regular 1 GbE ports (2, 2, many). The idea was to use multipath, which doesn't work with NFS as far as I'm aware.
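With two SFP+ paths, the iSCSI multipath setup would look something like this on the Proxmox host (a sketch only; the IQN and portal addresses are placeholders):

```shell
# Discover the target on both storage networks.
iscsiadm -m discovery -t sendtargets -p 10.0.10.1
iscsiadm -m discovery -t sendtargets -p 10.0.20.1

# Log in over each path.
iscsiadm -m node -T iqn.2016-10.lan.freenas:vmstore -p 10.0.10.1 --login
iscsiadm -m node -T iqn.2016-10.lan.freenas:vmstore -p 10.0.20.1 --login

# With multipath-tools installed, both sessions should appear as
# two paths under a single /dev/mapper device.
multipath -ll
```

NFS indeed has no equivalent of this in NFSv3; the usual answer on that side is LACP bonding rather than multipath.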

Our backup strategy never leaves spinning rust, but we will be replicating the data to two FreeNAS Mini XLs.

I never realized the limitations of raw-format VMs, and had kind of banked on features that require qcow2.

Is there a best practice for this use case? Is there a good reason not to go to production with this setup: a filesystem on the exported FreeNAS zvol, mounted on the Proxmox host, used to store qcow2 disk images?

Thanks for your time!
 
There's a lot to unpack here.

Let's begin by ignoring all the technology you intend to bring to bear. Sysadmins often say "I have this square peg, so I'm going to redefine the hole until it's square too." Let's concentrate on your use case.

1. Why do you need Proxmox at all? What are your applications?
2. With a single Proxmox node, you have no fault tolerance if the server is down. Is this important?
3. You mentioned that you have an 11-user company, but you "need to run 4 different servers and 10 or so Windows VM's." Why? What are those servers doing? How many resources do they need? I would suggest you redefine your needs based on applications, not "servers" or "VMs." It's conceivable your design is insufficiently efficient, performant, and/or fault tolerant on a per-application basis. Also, in case you have not procured the requisite Windows licenses, that many Windows VMs have a cost associated.
4. Unless you're planning to attach additional nodes, you don't really need the FreeNAS device. It serves no purpose, and its connectivity is an additional point of failure; you're better off running on local storage. Are you planning additional Proxmox nodes?
5. NFS is ideal for D2D backups.
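On point 5, wiring an NFS export into Proxmox for backups is a one-liner; a sketch, assuming a hypothetical hostname and export path:

```shell
# Register the NAS's NFS export as a backup storage in Proxmox
# ("freenas-backup", "freenas.lan" and the export path are assumptions).
pvesm add nfs freenas-backup \
    --server freenas.lan \
    --export /mnt/tank/pve-backup \
    --content backup

# Back up VM 101 to it.
vzdump 101 --storage freenas-backup --mode snapshot --compress lzo
```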
 
We are a software company. At the moment we have one server hosting everything we have, and maintenance is a pain in the ass. Rebooting the server means all our internal services are down for as long as it takes Exchange to do its thing.

Our servers should already have been separated, so that a security compromise of one doesn't affect the other services.

I don't want the source code on the same server as the FTP or client-accessible webserver, and since Linux servers hardly waste or use resources in our scenario, it's a lot easier to just use a system per task and spread the risks.

We are forced to use many VPN clients, and instead of everybody maintaining their own set of VMs like we do now, I want to host a maximum of 10 Windows VMs running Win7 to connect to these VPNs and serve as thin clients for connecting to our clients.

When we expand further, a second server will be added, and I don't expect that to take 3 years. The NAS has a lot of added value given this. It would also tax the server a lot if it had to manage both ZFS and the VMs, I would guess.

Our local work is compiling and working with semi-large databases (convert/pack/unpack, heavy-I/O stuff).

Given security best practices and ease of maintenance, this isn't that odd a setup, I'd say?
 
We are a software company. At the moment we have one server hosting everything we have, and maintenance is a pain in the ass. Rebooting the server means all our internal services are down for as long as it takes Exchange to do its thing.
You still have this problem with a single hypervisor node.

Our servers should already have been separated, so that a security compromise of one doesn't affect the other services.
Granted, but you still haven't defined what those services are.

I don't want the source code on the same server as the FTP or client-accessible webserver, and since Linux servers hardly waste or use resources in our scenario, it's a lot easier to just use a system per task and spread the risks.
That's simple enough to address with two separate containers. Actually, if you insist on keeping a NAS, both of these functions should be performed in jails on the NAS. In that case, since the data resides on the NAS anyway, there is no benefit in adding a "server" node.
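As an illustration of the jails suggestion, on a FreeNAS version with iocage it would look roughly like this (a hedged sketch; the jail name, release, interface, address and package are all assumptions, and older FreeNAS 9.x used warden jails via the GUI instead):

```shell
# Create a jail with its own IP for the FTP role.
iocage create -r 11.1-RELEASE -n ftpjail ip4_addr="em0|192.168.1.50/24"
iocage start ftpjail

# Install the FTP daemon inside the jail.
iocage pkg ftpjail install -y proftpd
```

Each service gets its own jail, so a compromise stays contained without needing a separate hypervisor node.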

We are forced to use many VPN clients, and instead of everybody maintaining their own set of VMs like we do now, I want to host a maximum of 10 Windows VMs running Win7 to connect to these VPNs and serve as thin clients for connecting to our clients.
This is possible, of course, but I'm not sure it will actually solve the problems. For starters, it does not remove the necessity for VPN; it also doesn't replace your workstations, since they're still necessary to initiate the connection. I'm assuming your end nodes run Windows, which means you'll still need to support them. I guess I don't understand what you are trying to solve.

It would also tax the server a lot if it had to manage both ZFS and the VMs, I would guess.
Don't guess, check :) With your use case, the storage processing load, even with ZFS, will likely not even register in your performance tests; all you'd need is a bit of RAM.

Given security best practices and ease of maintenance, this isn't that odd a setup, I'd say?
It's not a matter of odd or not. You have 20 cores allocated to a single machine, with a NAS as a backing store. You haven't measured your CPU load, RAM load, usable storage, storage fault tolerance, minimum acceptable performance, service fault tolerance, failure recovery, or serviceability. I'm trying to point you in a direction to ask the right questions.
 
I'll add a bit of relevant info and explain the choices made. I also never mentioned the total size of the storage, which is 10 TiB, and that isn't really feasible in a single server.

Given our housing, a backup unit has no place to go, and currently we don't even have a good backup; we currently work from emails. We are also the market leader in our segment. The client base grew; for a long time the company didn't.

We expect to grow to 20 employees pretty fast, and I don't want to migrate twice in two years. Hence the NAS, so I can simply slam more computing power in there and add a bunch of VMs.

I am replacing desktops with VMs, as we don't have hardware deals that let me order the same workstation twice within a month, and having all sorts of workstations and driver sets is a pain.

We do not have proper systems management; it is a side job.

I need a flexible, easy-to-restore and hassle-free way to store and replicate data.

I am aware that replication is not backup, but given our budget, the services we run, how we maintain them, and the size of the data, we opted for a SAN storage backend to both protect and efficiently handle the data. Given our growth estimate, a single server or multiple servers would quickly become a pain to manage. The NAS will be made HA if we increase the load.

I will update in a bit. Still on phone ;)
 
We use both: iSCSI for KVM, NFS for files.

With ZFS, use a zfs send/receive backup tool like napp-it's replicate [affordable and excellent support] or one of many other great ZFS tools.
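Under the hood, those tools all drive the same mechanism; a minimal sketch, assuming hypothetical pool/dataset names and a reachable backup host:

```shell
# Initial full replication of the VM dataset to a second box.
zfs snapshot -r tank/vms@daily-1
zfs send -R tank/vms@daily-1 | ssh backup-nas zfs receive -Fdu backuppool

# Subsequent runs only send the delta between snapshots.
zfs snapshot -r tank/vms@daily-2
zfs send -R -i tank/vms@daily-1 tank/vms@daily-2 | ssh backup-nas zfs receive -Fdu backuppool
```

The replication tools add scheduling, snapshot rotation and retry handling on top of this.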
 
We are forced to use many VPN clients, and instead of everybody maintaining their own set of VMs like we do now, I want to host a maximum of 10 Windows VMs running Win7 to connect to these VPNs and serve as thin clients for connecting to our clients.

We use a similar setup with more clients and even more VPN vendors. It works great; you do have to be aware of the Windows licenses, as @alexskysilk pointed out. We try to use Linux as much as possible to save on Windows costs.

I'm from a similar sized company background, so this is my opinion and experience for that:
I think that a NAS/SAN solution is the way to go, as it enables you to grow more dynamically (add another shelf, etc.). I'd always go with a Fibre Channel based SAN, but NAS also works. Your idea to use "cheaper" hardware is the way to go. I recommend buying used servers, because you do not need the full power of new machines, and multiple compute nodes will level out failures. For example, DL360/DL380 G5/G6 are very cheap (e.g. 1k euro for 64 GB RAM, 2x quad- or hexacore + HT, FC HBA), and the most expensive part will be the 10 GbE for your iSCSI.

I'd buy 3 or 5 nodes, depending on your need, but 3 is a very good starting point. (I would not go for less if you want to use HA, and you want HA!) For HA it is necessary that you also have 2 switches and 2 controllers in your SAN/NAS. Then, and only then, does the used-hardware approach make sense.

We have disks for our servers and our SAN in stock, so if something breaks, we can replace it very fast. Our full stack is HA, so we have at least 2 paths to everything, yet we do not have a "real" UPS, so no generator or anything like that; just a simple battery-backed UPS.

Oddly, the windows licensing costs would buy at least one compute node, if not two.
 
