Can I have a hardware recommendation? Around 20 Windows 11 VMs

cybermod

Hello everyone, I would like to hear your views on a Proxmox node running about 20 Windows 11 VMs accessed via RDP.
The clients are mostly used for office work (Office, Google Workspace and some web portals).
We are trying to spec out the physical server for this project. Any suggestions are welcome.

A very smart colleague of mine at work pointed out that even the graphics card can become a problem on a server running that many machines.
 
Of course, both options are possible ;)
Terminal services just requires a TS license. vGPU requires supported NVIDIA hardware and a license; it's worth noting that the latter option is quite a bit more expensive.

Edit: the latter option also requires sufficient Enterprise or Education Windows 11 licenses as well, whereas the TS option just requires CALs.
 
You need to maintain 20 VMs, plus vGPU [0] for all of them. Wouldn't a terminal server solution be the simpler option? Only one server to maintain. Would there be any disadvantages in your case?

This would also allow you to pass one larger graphics card through to the VM [1].

Of course, both options are possible ;)

[0] https://pve.proxmox.com/wiki/NVIDIA_vGPU_on_Proxmox_VE
[1] https://pve.proxmox.com/wiki/PCI(e)_Passthrough
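
For what it's worth, passing a whole GPU through to a single VM is only a couple of commands on the host once IOMMU is set up as described in [1]. A minimal sketch; the PCI address and VM ID below are placeholders you would replace with your own:

Code:
    # Find the GPU's PCI address on the host
    lspci -nn | grep -i vga

    # Attach it to VM 101 as a PCIe device (requires the q35 machine type)
    qm set 101 --hostpci0 0000:01:00.0,pcie=1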
Good morning everyone, sorry for the delay, but I was on holiday and completely switched off.

In the meantime, thank you for your replies and interest.



As for the terminal server itself, I wouldn't mind, but we have some constraints: these VMs need to run specific software that is not compatible with Terminal Server. We asked the software houses and we also ran some tests, but unfortunately nothing worked. They told us that in the future they will try to make the software compatible with Terminal Server, but for now there's nothing we can do. :(



As for licences, we have Proxmox licences (I can't remember if they're basic or standard, I need to take another look at the project).

For Windows, we are still looking into the correct licensing.


Thank you for your time, and I look forward to hearing your thoughts!

I would also like to discuss networking with you. I hope I am posting in the right section.

The server is equipped with eight 1 Gbit network ports and two 10 Gbit fibre ports.

There are also two HP switches, each with 48 1 Gbit ports and four 10 Gbit LACP-capable ports, as well as a Synology NAS with two 1 Gbit network ports.

I did some redundancy work with VMware years ago, but I don't have much experience with Proxmox.
My goal is to ensure the NAS is always available for backups in case one of the switches fails. Could you also recommend the most intelligent way to set up the network? Would you recommend LACP?
I remember that aggregating network cards offers advantages such as fault tolerance as well as more bandwidth.

All the best to you all — you are legendary!
 
Terminal services just requires a TS license. vGPU requires supported NVIDIA hardware and a license; it's worth noting that the latter option is quite a bit more expensive.

Edit: the latter option also requires sufficient Enterprise or Education Windows 11 licenses as well, whereas the TS option just requires CALs.
Thank you very much to you too.
 
As for the terminal server itself, I wouldn't mind, but we have some constraints: these VMs need to run specific software that is not compatible with Terminal Server. We asked the software houses and we also ran some tests, but unfortunately nothing worked. They told us that in the future they will try to make the software compatible with Terminal Server, but for now there's nothing we can do. :(
Yes, that sounds very familiar...


Would you recommend LACP?
Yes, that is almost always a good fit [0]. If you combine multiple links into an LACP bond (802.3ad), you will not automatically get the full aggregate bandwidth for a single data stream, but multiple parallel streams can together utilize the total bandwidth.


[0] https://pve.proxmox.com/pve-docs/pve-admin-guide.html#sysadmin_network_bond
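
For reference, an LACP bond on Proxmox VE is configured in /etc/network/interfaces roughly like this; a sketch following the admin guide [0], assuming two placeholder port names eno1 and eno2 and a placeholder address:

Code:
    auto bond0
    iface bond0 inet manual
        bond-slaves eno1 eno2
        bond-miimon 100
        bond-mode 802.3ad
        bond-xmit-hash-policy layer2+3

    auto vmbr0
    iface vmbr0 inet static
        address 192.168.1.10/24
        gateway 192.168.1.1
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0

The switch ports must be configured as an LACP (802.3ad) trunk as well, otherwise the bond will not come up properly.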
 
Thank you: you make me feel a little less alone in this digital world eheheheeh

I have read up on and done some testing regarding bonds on Proxmox, but I have doubts about the smartest way to structure my network.

Here is a list of my hardware:

HP server with 8 × 1 Gbit network ports + 2 × 10 Gbit SFP+ fibre ports
2 × Aruba 1930 48G switches, each with 4 × 10 Gbit SFP+ fibre ports
1 × Synology DS923+ (I run Proxmox Backup Server in an LXC container, which uses NFS storage on the Synology)

In your opinion, what is the best way to set up the network?

Should I use the two Proxmox fibre cards in active-backup mode or set up LACP?

The switches are not, in theory, compatible with stacking or MLAG (I was totally unfamiliar with MLAG), so I have to choose between connecting the switches in series with bandwidth aggregation or having some sort of fault tolerance... I admit I'm a little confused.

Any assessment, advice or insight would be appreciated.


Sorry for the bad English, I'm using a translation tool.
 
If you have just the one server, there is no need to worry about clustering traffic, so that's one variable out.

Do you have multiple switches? If not, you probably don't need to worry too much about creating LAGs at all.

LACP is convenient but requires upstream switch support. If you have the necessary L2 functionality available in your switch and the means to build the config, you can certainly make a LAG with 8 members (subject to switch support).

The simplest approach is to create 8 bridges, each with a single physical interface as uplink. That way, you can distribute your machines more or less evenly across the uplinks, and it will work with any switch configuration.
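
A sketch of that layout, with placeholder NIC names eno1 through eno8; each bridge carries no IP of its own and simply uplinks one port:

Code:
    auto vmbr1
    iface vmbr1 inet manual
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0

    auto vmbr2
    iface vmbr2 inet manual
        bridge-ports eno2
        bridge-stp off
        bridge-fd 0

    # ...repeat through vmbr8/eno8, then spread the VMs'
    # NICs across the bridges.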
 
HP server with 8 × 1 Gbit network ports + 2 × 10 Gbit SFP+ fibre ports
2 × Aruba 1930 48G switches, each with 4 × 10 Gbit SFP+ fibre ports
1 × Synology DS923+ (I run Proxmox Backup Server in an LXC container, which uses NFS storage on the Synology)

In your opinion, what is the best way to set up the network?

Should I use the two Proxmox fibre cards in active-backup mode or set up LACP?

The switches are not, in theory, compatible with stacking or MLAG (I was totally unfamiliar with MLAG), so I have to choose between connecting the switches in series with bandwidth aggregation or having some sort of fault tolerance... I admit I'm a little confused.

You have two switches and one server. You could connect both switches to the server. If you need additional load balancing, choose MLAG across the two switches (if supported) and LACP on Proxmox VE. Otherwise, for failover, choose a simple active-backup bond for the two 10 Gbit interfaces (VM traffic).

For management, I would recommend using two of the Gigabit interfaces.
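
A minimal sketch of that split, assuming placeholder names ens1f0/ens1f1 for the two SFP+ ports (one cabled to each switch) and eno1/eno2 for two of the gigabit ports:

Code:
    # VM traffic: active-backup over the two 10 Gbit ports; this mode
    # needs no special switch support, and it survives a switch failure
    # if each port goes to a different switch
    auto bond0
    iface bond0 inet manual
        bond-slaves ens1f0 ens1f1
        bond-miimon 100
        bond-mode active-backup
        bond-primary ens1f0

    auto vmbr1
    iface vmbr1 inet manual
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0

    # Management: same idea on two of the gigabit ports
    auto bond1
    iface bond1 inet manual
        bond-slaves eno1 eno2
        bond-miimon 100
        bond-mode active-backup

    auto vmbr0
    iface vmbr0 inet static
        address 192.168.1.10/24
        gateway 192.168.1.1
        bridge-ports bond1
        bridge-stp off
        bridge-fd 0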

And the Synology is only for backups, right?

The simplest approach is to create 8 bridges, each with a single physical interface as uplink. That way, you can distribute your machines more or less evenly across the uplinks, and it will work with any switch configuration.

Yes, that would also be a possibility for distributing network load. But with 20 VMs, you would ideally want 20 × 1 Gbit interfaces (one per VM). Unfortunately, this approach also has the disadvantage that if one interface fails, you have to manually switch the affected VM to another one.
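
At least that manual switch is a one-liner per VM; a sketch, where the VM ID, MAC address and bridge names are placeholders:

Code:
    # Check the VM's current NIC definition, including its MAC address
    qm config 101 | grep net0
    #   net0: virtio=BC:24:11:2A:3B:4C,bridge=vmbr3

    # Re-attach the NIC on a surviving bridge, keeping the same MAC
    qm set 101 --net0 virtio=BC:24:11:2A:3B:4C,bridge=vmbr2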

The question is, of course, how much traffic the VMs actually generate in productive operation.
 
Hi everyone,
In the end, I opted for the simplest solution: the fibre network cards in a bond in active-backup mode.
I did some testing and, given the current state of the infrastructure, I would say that it is more than sufficient.
Thank you very much for your support and advice. I will definitely be back with more beginner questions.

I also use AI a little to do some research, but I believe that the field experience of users (like you) is priceless!!!

Thanks again

a.
 