FreeNAS iSCSI issues with Proxmox Cluster

Marek Šiller

New Member
Apr 4, 2016
Hi,

we have set up a 3-node Proxmox cluster (4.1) and configured two storage backends for VMs: NFS to our FreeNAS and iSCSI to our FreeNAS. iSCSI was configured via the Proxmox web UI (https://pve.proxmox.com/wiki/Storage_Model#LVM_Groups_with_Network_Backing).
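For context, our /etc/pve/storage.cfg entries look roughly like this (portal IP, target IQN, and base volume are anonymized placeholders, not our real identifiers):

```
# /etc/pve/storage.cfg -- illustrative sketch, identifiers are placeholders
iscsi: freenas-iscsi
        portal 10.10.11.1
        target iqn.2005-10.org.freenas.ctl:proxmox
        content none

lvm: freenas-lvm
        vgname vg-freenas
        base freenas-iscsi:0.0.0.0.lun0
        shared 1
        content images
```

So the LVM volume group sits on top of the iSCSI LUN and is marked shared across all three nodes, as described in the wiki article linked above.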

Both NFS and iSCSI use a dedicated 10G Ethernet connection between the Proxmox nodes and the FreeNAS storage.

With this setup, we are experiencing regular filesystem corruption in our VMs. On some VMs it occurs one or more times a day, on some a few times a week, and on some VMs filesystem corruption does not occur at all (all of them are on the same storage).

Unfortunately, there aren't any verbose messages in the logs besides the following occasional message: "blk_update_request: I/O error, dev sdc, sector 0". /dev/sdc is the iSCSI device exported from FreeNAS.

On FreeNAS, the only thing I could find in the logs were messages concerning the storage exports (a live ticker from Proxmox):

Jul 22 12:28:50 hwdeka5 mountd[12604]: export request succeeded from 10.10.11.67
Jul 22 12:28:50 hwdeka5 mountd[12604]: export request succeeded from 10.10.11.67
Jul 22 12:28:50 hwdeka5 ctld[98321]: 10.10.11.67: read: connection lost
Jul 22 12:28:50 hwdeka5 ctld[25109]: child process 98321 terminated with exit status 1
Jul 22 12:28:50 hwdeka5 mountd[12604]: export request succeeded from 10.10.11.67
Jul 22 12:28:51 hwdeka5 ctld[98322]: 10.10.11.40: read: connection lost
Jul 22 12:28:51 hwdeka5 ctld[25109]: child process 98322 terminated with exit status 1
Jul 22 12:28:52 hwdeka5 mountd[12604]: export request succeeded from 10.10.11.40
Jul 22 12:28:52 hwdeka5 mountd[12604]: export request succeeded from 10.10.11.40
Jul 22 12:28:52 hwdeka5 mountd[12604]: export request succeeded from 10.10.11.40
Jul 22 12:28:53 hwdeka5 mountd[12604]: export request succeeded from 10.10.11.16
Jul 22 12:28:53 hwdeka5 ctld[98324]: 10.10.11.16: read: connection lost
Jul 22 12:28:53 hwdeka5 ctld[25109]: child process 98324 terminated with exit status 1
Jul 22 12:28:53 hwdeka5 mountd[12604]: export request succeeded from 10.10.11.16
Jul 22 12:28:53 hwdeka5 mountd[12604]: export request succeeded from 10.10.11.16

And the following message:

sonewconn: pcb 0xfffff804da355000: Listen queue overflow: 193 already in queue awaiting acceptance (33 occurrences)
sonewconn: pcb 0xfffff804da355000: Listen queue overflow: 193 already in queue awaiting acceptance (36 occurrences)
sonewconn: pcb 0xfffff804da355000: Listen queue overflow: 193 already in queue awaiting acceptance (33 occurrences)

At the moment, I cannot tie the "Listen queue overflow" message to any particular process or connection.
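On FreeBSD-based systems, "netstat -Lan" lists each listening socket together with its queue depth, so that should at least reveal which daemon (ctld, mountd, nfsd, ...) is the one overflowing. If it turns out to be one of those, one thing I'm considering is raising the accept-queue limit via a tunable (the value below is a guess, not a tested recommendation; on FreeBSD releases before 10 the knob is named kern.ipc.somaxconn instead):

```
# /etc/sysctl.conf on the FreeNAS box -- tentative tuning, value is a guess
# The default accept-queue limit is 128; the log shows ~193 connections
# already queued, so something is connecting far faster than it is accepted.
kern.ipc.soacceptqueue=1024
```

On FreeNAS this would normally be set as a sysctl tunable through the web GUI rather than by editing the file directly.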

Did anyone experience similar problems with iSCSI, FreeNAS and Proxmox?

Is it possible to use Proxmox Cluster with FreeNAS iSCSI and LVM for reliable production usage?

Is it necessary to set up iSCSI multipath (at the moment we have only one dedicated SAN path)?

Thank you very much for your help!

Best Regards,
Marek