Installation on Datacenter

Matteo Lanzoni

Apr 9, 2016
Hi, this is my first post on this forum. I have been running Proxmox on my servers since version 2.x and I think this software is very powerful. Currently I have 3 clusters with DRBD (6 servers) plus some standalone installations, for a total of 22 Proxmox installations.

I'm planning to install a Proxmox cluster on a Datacenter with the following characteristics:

-Storage will be on a SAN connected via iSCSI (NetApp), starting with 10 TB of all-flash SSD

-I will have at least 3 nodes with dual 8-core CPUs and 384 or 512 GB RAM (Dell PowerEdge R630 with iDRAC)

-there will be a 10 Gb/s switch on the storage side and a 1 Gb/s switch on the "internet side"

-every server will have redundant Ethernet cards (and redundant switches)

-there will be a firewall between the internet and the management network (Proxmox web interface and iDRAC)

-I have done another installation with a cluster and DRBD, but now I need a more flexible environment.


I have some questions about the server installation:
-SAS or SSD on the hosts? No VM will be on local storage

-is there a way to install Proxmox without local LVM?

-jumbo frames on 10 Gb/s?

-can I force corosync/cluster traffic onto the 10 Gb/s switch even if the Proxmox web interface is on the 1 Gb/s switch?

-some VMs will be configured to create a "local network" (for example 1 pfSense firewall with a public IP and 2 servers behind it); how can I keep it working if they end up active on different nodes after live migration? I can configure VLANs on the switch too.

-datacenter: I've found only 1 provider that can rent me a customized solution like this (Aruba); do you have experience with others in Europe? I'd like to compare prices.
 
Hi,
-SAS or SSD on the hosts? No VM will be on local storage
I would prefer small enterprise SSDs.

-is there a way to install Proxmox without local LVM?
Yes, with a PVE-on-Debian installation. See
https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_Jessie
Or you can install with ZFS as the root filesystem.
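On a plain Debian Jessie install the sequence is roughly the following (only a sketch; the repository line and package list can change, so follow the wiki for the exact current steps):

# add the Proxmox VE repository and its key (Jessie example)
echo "deb http://download.proxmox.com/debian jessie pve-no-subscription" > /etc/apt/sources.list.d/pve-install-repo.list
wget -O- "http://download.proxmox.com/debian/key.asc" | apt-key add -
# install Proxmox VE on top of the existing Debian partitioning (no local LVM needed)
apt-get update && apt-get dist-upgrade
apt-get install proxmox-ve ssh postfix ksm-control-daemon open-iscsi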

-jumbo frames on 10 Gb/s?
Generally yes, but I have no experience with NetApp.
So I would try it and test it.
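If the NetApp and the switch support it, enabling jumbo frames on the storage NIC is just an MTU setting in /etc/network/interfaces, and you can verify the path end to end with a non-fragmenting ping (8972 = 9000 minus 28 bytes of IP/ICMP headers; 10.0.2.1 only stands in for the filer's iSCSI address):

# /etc/network/interfaces - storage interface with jumbo frames
auto eth1
iface eth1 inet static
    address 10.0.2.11
    netmask 255.255.255.0
    mtu 9000

# check that 9000-byte frames really reach the target without fragmentation
ping -M do -s 8972 10.0.2.1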

-can I force corosync/cluster traffic onto the 10 Gb/s switch even if the Proxmox web interface is on the 1 Gb/s switch?
The web interface is available on all IPs assigned to the host, so yes, you can tell the cluster which network to use.
The recommendation is to use a dedicated network for the cluster, because storage traffic can make cluster communication slow and unreliable. Corosync works properly with a maximum latency of 2 ms.
See https://pve.proxmox.com/wiki/Separate_Cluster_Network#Setup_at_Cluster_Creation
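On PVE 4.x you select the cluster network when creating the cluster, roughly like this (a sketch only; "cluster1", the node names and the 10.0.2.x addresses are placeholders, and the exact option names are on the wiki page above):

# on the first node: bind corosync to the chosen cluster network
pvecm create cluster1 -bindnet0_addr 10.0.2.11 -ring0_addr node1
# on every other node: join over that same network
pvecm add 10.0.2.11 -ring0_addr node2

The ring0 names must resolve to addresses on that network (e.g. via /etc/hosts) on every node.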

-some VMs will be configured to create a "local network" (for example 1 pfSense firewall with a public IP and 2 servers behind it); how can I keep it working if they end up active on different nodes after live migration? I can configure VLANs on the switch too.
Yes, this is possible. You have to use VLANs and additional bridges, and these have to exist on all nodes with the same naming scheme.
vmbr1 for the public uplink network <-> nic0 pfSense nic1 <-> vmbr2 (VLAN X) <-> VM nic0
see https://pve.proxmox.com/wiki/Network_Model#Configuring_VLAN_in_a_cluster
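A minimal /etc/network/interfaces sketch of such a layout (eth3 as the uplink and VLAN 30 are only placeholders; it assumes the vlan package is installed):

# public uplink bridge for the pfSense WAN side
auto vmbr1
iface vmbr1 inet manual
    bridge_ports eth3
    bridge_stp off
    bridge_fd 0

# per-VLAN bridge for the private network behind pfSense
auto eth3.30
iface eth3.30 inet manual

auto vmbr2
iface vmbr2 inet manual
    bridge_ports eth3.30
    bridge_stp off
    bridge_fd 0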

-datacenter: I've found only 1 provider that can rent me a customized solution like this (Aruba); do you have experience with others in Europe? I'd like to compare prices.
We have good experience with https://www.first-colo.net
 
Thanks for the answer.
Yes, this is possible. You have to use VLANs and additional bridges, and these have to exist on all nodes with the same naming scheme.
vmbr1 for the public uplink network <-> nic0 pfSense nic1 <-> vmbr2 (VLAN X) <-> VM nic0
see https://pve.proxmox.com/wiki/Network_Model#Configuring_VLAN_in_a_cluster

I'm doing some tests and I see that everything works with "bridge_vlan_aware yes" and all VLANs created on my "real" switch.
With the following configuration, if I want a private network between two VMs I only need to specify the same VLAN (it must exist on the switch) directly on the VM NIC. I've tested migration too and everything seems to work perfectly. With this configuration I don't need to change the Proxmox network configuration when I need a new "private network"; I only need to create a new VLAN on the switch connected to vmbr1.
I'd like to ask whether you think I could run into problems in a production environment with the following configuration and at least 50 VLANs.

Node 1 interfaces:

auto lo
iface lo inet loopback

iface eth0 inet manual

### Interface for storage (10 Gb)
auto eth1
iface eth1 inet static
    address 10.0.2.11
    netmask 255.255.255.0

### Interface for backup
auto eth2
iface eth2 inet static
    address 10.0.3.11
    netmask 255.255.255.0

iface eth3 inet manual

### Interface for web interface/cluster
auto vmbr0
iface vmbr0 inet static
    address 10.0.4.11
    netmask 255.255.255.0
    gateway 10.0.4.2
    bridge_ports eth0
    bridge_stp off
    bridge_fd 0

### Interface for VMs
auto vmbr1
iface vmbr1 inet manual
    bridge_ports eth3
    bridge_stp off
    bridge_fd 0
    bridge_vlan_aware yes


VM 1 conf:
bootdisk: virtio0
cores: 2
memory: 1024
name: Test
net0: bridge=vmbr1,e1000=32:65:37:30:30:66,tag=2
numa: 0
ostype: l26
smbios1: uuid=22a7bc75-fc94-4e65-8207-a9b39394b52f
sockets: 1
virtio0: IscsiLvm:vm-101-disk-1,size=4G

VM 2 conf (this is a firewall with a public IP on net1 and a private IP on net0):
bootdisk: virtio0
cores: 1
ide2: Bkp:iso/pfSense-CE-2.3.2-RELEASE-amd64.iso,media=cdrom
memory: 256
name: uscita
net0: bridge=vmbr1,e1000=3A:38:38:37:65:61,tag=2
net1: bridge=vmbr1,e1000=36:34:33:35:30:62
numa: 0
ostype: other
smbios1: uuid=f230d6f7-fb20-4a2f-accd-4b7cf3e8ca5e
sockets: 1
virtio0: IscsiLvm:vm-102-disk-1,cache=writethrough,size=4G
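For reference, this is how I check which VLANs actually end up on the bridge ports once the VMs are running (standard bridge-utils/iproute2 commands; the output obviously depends on the running guests):

# list the bridge and the tap interfaces attached to it
brctl show vmbr1
# show per-port VLAN membership on the vlan-aware bridge
bridge vlan show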
 
-jumbo frames on 10 Gb/s?

Always! And please read the NetApp guide for iSCSI on Linux. I ran into serious problems without it on 10 GbE iSCSI over a multipathed NetApp with SSDs: the IO delay looked good at around 1 ms at most, yet throughput was terrible at only 120 MB/s.
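A quick sanity check after following the guide, to confirm that the MTU, the iSCSI sessions and the paths look as expected (a sketch; interface and device names will differ on your setup):

# confirm jumbo frames are active on the storage NIC
ip link show eth1 | grep mtu
# list the iSCSI sessions and the portals/interfaces they use
iscsiadm -m session -P 3
# check that all paths to the NetApp LUNs are up
multipath -ll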
 
You should use virtio NICs because the performance is better and they have less overhead.
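For example, the net0 line of your test VM would then look something like this (same MAC and VLAN tag, only the model changed from e1000 to virtio):

net0: virtio=32:65:37:30:30:66,bridge=vmbr1,tag=2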

You can also use the second NIC (### Interface for VMs) as a backup path for the cluster communication.
 
