I like the features of ZFS over iSCSI in that it automatically creates ZVOLs, which I can easily snapshot. However, again, my performance is slower than with regular iSCSI.
Just a quick example: using Proxmox host 2 (Dell R620) - FreeNAS 11.2-U4 (8x1TB enterprise SSDs in mirrored vdevs); FreeNAS for iSCSI...
I run MPIO iSCSI (FreeNAS ZVOL ---> 3-node Proxmox cluster) and it works really well from a reliability and performance standpoint. I can saturate my 10Gb links. I found the following forum post useful for setting it up: https://forum.proxmox.com/threads/multipath-iscsi-lvm-and-cluster.11938/...
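For anyone wiring this up, a minimal multipath.conf sketch (the WWID and alias are placeholders - pull the real WWID for your LUN with /lib/udev/scsi_id):

```
# /etc/multipath.conf (sketch; WWID and alias are placeholders)
defaults {
    polling_interval     2
    path_selector        "round-robin 0"
    path_grouping_policy multibus
    find_multipaths      yes
}

multipaths {
    multipath {
        wwid  36589cfc000000xxxxxxxxxxxxxxxxx   # placeholder - use your LUN's WWID
        alias freenas-zvol-1
    }
}
```

After editing, `multipath -r` reloads the maps and `multipath -ll` should show both paths.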
You can complete the setup with just two NICs.
- In most cases you would set the ISP modem/router to bridge mode, so that your pfSense WAN interface obtains an IP address directly from the ISP's DHCP server (not from the modem/router)
ISP-->ISP Modem (Bridge Mode)-->Proxmox...
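On the Proxmox side, the two-NIC layout can be two plain Linux bridges in /etc/network/interfaces - one for LAN, one dedicated to the WAN NIC - with the pfSense VM attached to both. A sketch (NIC names and addresses are examples, not from the original post):

```
# /etc/network/interfaces (sketch; NIC names/IPs are examples)
auto lo
iface lo inet loopback

iface eno1 inet manual    # LAN NIC
iface eno2 inet manual    # WAN NIC, cabled to the ISP modem in bridge mode

auto vmbr0                # LAN bridge: Proxmox host + pfSense LAN interface
iface vmbr0 inet static
    address 192.168.1.10/24
    gateway 192.168.1.1   # pfSense LAN address
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0

auto vmbr1                # WAN bridge: pfSense WAN only, no host IP
iface vmbr1 inet manual
    bridge-ports eno2
    bridge-stp off
    bridge-fd 0
```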
When you restart the VM on the same node, do the disks return?
- Are they not visible in the VM hardware configuration when the VM stops?
- Are they not visible under storage (Local-Storage, or whatever you have it named)?
I was thinking about it some more last night:
I have made the following modifications to the base Proxmox install:
vm.swappiness = 1 (I found that the Proxmox default of 60 caused me some sluggishness over time - see the sketch below for making it persistent)
The two settings below... I am not sure I have seen much difference with them implemented...
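To make the swappiness change survive a reboot, a drop-in file works (the file name here is just a convention):

```
# /etc/sysctl.d/99-swappiness.conf
vm.swappiness = 1
```

Apply it without rebooting via `sysctl -p /etc/sysctl.d/99-swappiness.conf` (or `sysctl --system`).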
So I ran a couple of tests on one of my Proxmox nodes, which is running the latest 5.4 (fully updated).
Dell R620 - dual E5-2643 v2 (12 cores / 24 threads @ 3.5GHz), 128GB of RAM; 8x400GB Intel S3610 SSDs in RAID 10 (PERC H710P, 1GB cache controller) - EXT4 file system from the Proxmox installer
I have 5 VMs running on this...
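The post doesn't name the exact benchmark, but a typical way to reproduce this kind of test is fio; an illustrative run (all parameters are examples, not the exact ones used):

```
# mixed 4k random read/write against the EXT4 volume (illustrative settings)
fio --name=randrw --filename=/root/fio-test --size=4G \
    --rw=randrw --rwmixread=70 --bs=4k --iodepth=32 \
    --ioengine=libaio --direct=1 --runtime=60 --time_based \
    --group_reporting
```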
What hardware are you running? Processors, memory, disk, etc.
I have not noticed anything on my setup, but I will try to recreate it by loading up other VMs while using the Windows VM.
- My test setup: Dell R620 with 128GB memory; dual E5-2680V2 or dual E5-2663V2 (depends on the node) and either all...
Yes, I tried with just one IP address and restarted syslog-ng.
However, I am still seeing the "connection lost" message for that connection.
I even tried changing the IP to one of the other connections, but with the same outcome.
I tried adding the code above into the syslog-ng.conf on my FreeNAS, but I am still seeing the errors...
For me, all of these errors are a real problem since I have 3 Proxmox servers with two iSCSI links each - that is an error message for every connection.
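For context, the kind of filter being attempted looks roughly like this in syslog-ng (the snippet referenced above isn't reproduced here; the statement names and match pattern are hypothetical):

```
# syslog-ng.conf fragment (sketch; names and pattern are hypothetical)
filter f_drop_conn_lost {
    not match("connection lost" value("MESSAGE"));
};

log {
    source(src);                 # assuming the default local source is named src
    filter(f_drop_conn_lost);
    destination(messages);
};
```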
From what I have learned so far in my own setup (if I wanted an all-in-one setup):
If possible, I would have separate storage for my VM images (separate disks for Proxmox) and pass through an HBA to my NAS operating system (see the sketch below the list). This way the NAS operating system has direct access to the disks. I...
In my environment I have 4 different NAS systems running.
1 - Dedicated FreeNAS box used for iSCSI, CIFS, etc. (dedicated SAN/NAS) [excellent storage performance - full 10Gb network saturation]
1 - OMV installed as a VM running in Proxmox with an HBA passed directly to the VM (16 x 2TB...
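Passing the HBA through is a one-liner once IOMMU is enabled; the VM ID and PCI address below are examples:

```
# find the HBA's PCI address
lspci | grep -i -e lsi -e sas

# attach it to VM 100 (example ID/address; IOMMU must be enabled in BIOS and on the kernel cmdline)
qm set 100 -hostpci0 01:00.0
```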
This may have to do with:
- time to recognize that the node is down
- time & number of attempts to restart the VMs on the given node
  - once the restart timeout/attempts are exhausted, VMs are moved over to the next node
On my system it takes a couple of minutes before the VMs are moved over and fully...
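Those timings map onto the per-resource HA settings; a sketch of tuning the restart/relocate attempts (VM ID and values are examples):

```
# allow 2 restart attempts on the failed node, then 1 relocation to another node
ha-manager set vm:100 --max_restart 2 --max_relocate 1

# check the current HA state of all resources
ha-manager status
```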
I have validated with iperf3
However, the real tests have been through my Windows 10 VM, and I also have an Ubuntu VM running the Phoronix Test Suite.
The Windows VM is running locally on R620 Proxmox node-3 (local storage, LVM-thin, RAID 5, 8 x Intel S3610 SSDs) and I am moving large files over SMB/CIFS to...
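The iperf3 validation itself is straightforward (host address and flags here are illustrative):

```
# on the FreeNAS box (server side)
iperf3 -s

# from a Proxmox node (client side), 4 parallel streams for 30 seconds
iperf3 -c 10.0.10.5 -P 4 -t 30
```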
I have the UniFi XG-16 with three Proxmox servers + one FreeNAS server connected through SFP+ DAC.
I have VLANs configured and am getting full 10Gb speeds.
In Proxmox I am using Open vSwitch (see the sketch after this list):
- Physical SFP+ link 1 set with MTU 9000
- 3 VLANs on the bridge - corosync ring 1, iSCSI link 1, LAN
- Second...
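A sketch of what the first link looks like with OVS in /etc/network/interfaces (interface names, VLAN tags, and addresses are hypothetical; the LAN VLAN follows the same pattern):

```
# /etc/network/interfaces fragment (sketch; names/tags/IPs are examples)
auto vmbr0
iface vmbr0 inet manual
    ovs_type OVSBridge
    ovs_ports enp3s0f0 vlan10 vlan20
    mtu 9000

auto enp3s0f0                  # physical SFP+ link 1
iface enp3s0f0 inet manual
    ovs_type OVSPort
    ovs_bridge vmbr0
    mtu 9000

auto vlan10                    # corosync ring 1
iface vlan10 inet static
    address 10.0.10.2/24
    ovs_type OVSIntPort
    ovs_bridge vmbr0
    ovs_options tag=10
    mtu 9000

auto vlan20                    # iSCSI link 1
iface vlan20 inet static
    address 10.0.20.2/24
    ovs_type OVSIntPort
    ovs_bridge vmbr0
    ovs_options tag=20
    mtu 9000
```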
Ok great thanks for the explanation!!
I decided to remove the extra node from the cluster as I use it rarely.
I can increase the available resources on the existing nodes by adding more memory and rebalancing the VMs.
Again thank you for the explanation!!! I will mark this thread as resolved :)
Interesting .... thank you for the help !!!
I wonder if it has to do with the number of expected votes, and if it does, maybe I can decrease the number of votes so it looks as if I have three nodes instead of 4, 5, 6, etc...
I only turn on the 4th node when I need the extra resources. When I don't need the extra resources I power down the 4th node, thus saving some electricity. I do the same thing on my storage side to save some power = money
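For what it's worth, the expected-votes count can be lowered at runtime with pvecm (the number is an example for a 4-node cluster with one node powered down):

```
# tell corosync to expect only 3 votes while the 4th node is off
pvecm expected 3

# verify quorum state
pvecm status
```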
What is interesting is if I move all the HA VMs to one node within the...