I have been running a Proxmox cluster with a virtualized PFsense for over a year.
My PFsense VM moves between nodes, since I have central storage set up on Freenas via ISCSI
Currently PFsense is on proxmox node 3:
- I use OVS
- vlan 1 = LAN network (through vmbr0)
- vmbr1 = DMZ
- vmbr5 = WAN...
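In case it helps anyone replicating this, here is a rough sketch of what the OVS side of that can look like in /etc/network/interfaces on the host (eno1, the IntPort name and the address are placeholders, not my actual values):

```
# physical port that carries the VLANs into the bridge
allow-vmbr0 eno1
iface eno1 inet manual
    ovs_bridge vmbr0
    ovs_type OVSPort

auto vmbr0
iface vmbr0 inet manual
    ovs_type OVSBridge
    ovs_ports eno1 vlan1

# internal port so the host itself sits on VLAN 1 (the LAN)
allow-vmbr0 vlan1
iface vlan1 inet static
    ovs_type OVSIntPort
    ovs_bridge vmbr0
    ovs_options tag=1
    address 192.168.1.10
    netmask 255.255.255.0
```

The VMs' LAN vNICs then just attach to vmbr0 with the matching VLAN tag in the GUI.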
I have also had the same issue when updating the firmware through DELL.
The way I resolved this (not the only way) was to set up the following:
A VM running Dell OpenManage Enterprise - I use this to monitor all my Dell servers
On a Windows 10 VM I have Dell EMC Repository Manager running...
I have not had issues with the R620/R720 built-in NIC cards. I have purchased 3 x C63DV 0C63DV DELL X520/I350 DAUGHTER CARD from eBay with no issues.
1) I would double-check that it is plugged in correctly ..... sounds silly, but from experience I have been burnt by silly things like this more...
What I found is that if the card is compatible with the Dell server and works well with Debian .... you should not have problems with Proxmox
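If you want to rule out the card itself, a few generic commands on the host (nothing Proxmox-specific, and the interface name below is just an example) will tell you whether it is being detected at all:

```
lspci | grep -i ethernet            # does the card show up on the PCIe bus?
dmesg | grep -iE 'ixgbe|mlx4|igb'   # did the driver load cleanly?
ip -br link                         # are the interfaces present?
ethtool eno1                        # link state / negotiated speed
```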
My entire setup is using Dell servers:
R710, R620 and R620 as the proxmox cluster (running about 15 VMs & 7 containers)
R720 as my Freenas head unit /...
I am using 3 different cards between my R710, R620 and R720 servers:
1) Built-in card for the Dell R620 (C63DV 0C63DV DELL X520/I350 DAUGHTER CARD 10GBE NETWORK)
2) Add-in Intel card: Intel X520-DA2 10Gb 10Gbe 10 Gigabit Network Adapter
3) Add-in Mellanox card: Mellanox ConnectX-3 EN CX312A Dual...
I like the features of ZFS over ISCSI in that it automatically creates ZVOLs, which I can easily snapshot. However, again, my performance is slower vs regular ISCSI.
Just a quick example: using proxmox host 2 (Dell R620) - Freenas 11.2-u4 (8x1TB enterprise SSD vdev mirrors); Freenas for ISCSI...
I run MPIO ISCSI as well (Freenas ZVOL ---> 3-node proxmox cluster) and it works really well from a reliability and performance standpoint. I can saturate my 10gb links. I found the following forum post useful for setting it up: https://forum.proxmox.com/threads/multipath-iscsi-lvm-and-cluster.11938/...
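I won't repeat the whole thread, but the rough shape of the MPIO side on each node is along these lines (the portal IPs are placeholders for the two storage networks; the multipath.conf details are in the linked post):

```
apt install multipath-tools
# log into the same target over both storage networks
iscsiadm -m discovery -t sendtargets -p 10.10.10.10
iscsiadm -m discovery -t sendtargets -p 10.10.20.10
iscsiadm -m node --login
# one multipath device with two active paths should show up
multipath -ll
```

From there LVM goes on top of the resulting /dev/mapper device and gets added to the cluster as shared storage.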
You can complete the setup with just two NICs
- In most cases you would set the ISP modem/router to bridge mode, so that your PFsense interface (assigned to WAN) would obtain an IP address directly from the ISP's DHCP server (not from the modem/router)
ISP-->ISP Modem (Bridge Mode)-->Proxmox...
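On the Proxmox side it really is just two vNICs on the PFsense VM, one per bridge; as a sketch (VMID 100 is only an example):

```
qm set 100 --net0 virtio,bridge=vmbr0   # LAN side
qm set 100 --net1 virtio,bridge=vmbr5   # WAN side, bridged straight through to the modem
```

The WAN bridge should not have an IP on the host itself; PFsense does the DHCP request to the ISP.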
I was thinking about it some more last night:
I have made the following modifications to the base proxmox install:
vm.swappiness = 1 (I found that the default value in proxmox caused me some sluggishness over time)
The two below .... I am not sure I have seen much difference with them implemented...
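If anyone wants to apply the swappiness change the same way, one way to make it persistent is a sysctl drop-in (the file name is arbitrary):

```
echo 'vm.swappiness = 1' > /etc/sysctl.d/99-swappiness.conf
sysctl --system                  # reload without a reboot
cat /proc/sys/vm/swappiness      # confirm the running value
```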
So I ran a couple of tests on one of my proxmox nodes, which is running the latest 5.4 (fully updated)
Dell R620 - dual E5-2643V2 (24 cores - 3.5GHz), 128GB of RAM; 8x400GB Intel S3610 SSDs in RAID 10 (PERC H710P 1GB controller) - EXT4 file system from the proxmox installer
I have 5 VMs running on this...
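For anyone wanting to run comparable numbers on their own node, the usual quick checks are Proxmox's built-in pveperf and fio; something along these lines (parameters are just an example):

```
pveperf /                          # CPU + fsyncs/second on the root filesystem
fio --name=randrw --ioengine=libaio --rw=randrw --bs=4k --iodepth=32 \
    --numjobs=4 --size=4G --runtime=60 --time_based --direct=1 --group_reporting
```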
What hardware are you running? Processors, memory, disk, etc.
I have not noticed anything on my setup, but will try to recreate it by loading up other VMs while using the Windows VM
- My test setup: Dell R620 with 128GB memory; dual E5-2680V2 or dual E5-2663V2 (depends on node) and either all...