iSCSI / LVM Storage

lweidig

So with my NFS issues behind me (http://forum.proxmox.com/threads/7563-ISO-and-Templates) I decided to move on and set up some image/container storage using our iSCSI OpenFiler storage. I created the iSCSI storage device with no issues, other than the following messages getting logged continually to syslog:

sd 8:0:0:0: [sdc] Very big device. Trying to use READ CAPACITY(16).
sdc: detected capacity change from 0 to 2199023255552

This may be related to the bug where, when we select the base volume, it shows the size as 2.00GB instead of the actual 2.00TB:
[Screenshot: LVM-SizeWrong.png]
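For what it's worth, 2199023255552 bytes is exactly 2TB, so the kernel seems to be reporting the right number and only the GUI label is wrong. A quick sanity check from the shell on the node (assuming the LUN is still /dev/sdc as in the log above):

blockdev --getsize64 /dev/sdc   # size in bytes, should print 2199023255552
cat /sys/block/sdc/size         # same size in 512-byte sectors (4294967296)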

Following that we assign a VG name, select Shared and click Create as shown below. The little processing indicator is displayed for about 30 seconds and then we are just brought back to the same window. Clicking Create again does the same thing:

[Screenshot: LVM-Create.png]

We cannot seem to get past this step and create the LVM storage. The only option is to X out of the window.
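For reference, as far as I understand the GUI step boils down to roughly the following, so I may try it by hand next to see where it actually hangs (the VG name here is just an example, and sdc is the iSCSI LUN from the log above):

pvcreate /dev/sdc          # initialise the LUN as an LVM physical volume
vgcreate test_vg /dev/sdc  # create the volume group the storage entry would use
vgs                        # if even this hangs, the problem is below the GUI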
 
Does it work with a smaller LUN? Also be aware of this bug:
 
Created a 50GB LUN (which was reported as 50MB, so I think you are always off by one "unit") and had the same results. It sat at "Please Wait" for about 30 seconds and then returned with nothing created.
 
Rebooting both nodes in the cluster apparently cleared up the issues. I was able to create LVM groups on both. The size display bug still exists, and of course it should not be necessary to reboot just to add storage.
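For the record, next time I will probably try rescanning the iSCSI sessions instead of a full reboot; as far as I know that is normally enough to pick up new or resized LUNs (assuming open-iscsi, which these nodes use):

iscsiadm -m session           # list the active sessions to the OpenFiler target
iscsiadm -m session --rescan  # rescan them so new/resized LUNs show up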
 
Unfortunately this only leads to the next error. While I was able to create the LVM storage and it shows up under the web interface, the logs are now loaded with:

dlm: closing connection to node 2
INFO: task clvmd:2196 blocked for more than 120 seconds.
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
clvmd D ffff88033cd12140 0 2196 1 0x00000000
ffff88033c249c70 0000000000000086 0000000000000000 0000000000000000
0000000000000000 0000000000000000 0000000000000000 00000000fffddaef
ffff88033cd12708 ffff88033c249fd8 000000000000f788 ffff88033cd12708
Call Trace:
[<ffffffff814e6f15>] rwsem_down_failed_common+0x95/0x1d0
[<ffffffff814e70a6>] rwsem_down_read_failed+0x26/0x30
[<ffffffff8125ff94>] call_rwsem_down_read_failed+0x14/0x30
[<ffffffff814e6594>] ? down_read+0x24/0x30
[<ffffffffa0617497>] dlm_user_request+0x47/0x240 [dlm]
[<ffffffff811716ed>] ? cache_alloc_refill+0x14d/0x230
[<ffffffffa0625329>] device_write+0x5f9/0x7d0 [dlm]
[<ffffffff8112dd70>] ? __free_pages+0x60/0x90
[<ffffffff811859e8>] vfs_write+0xb8/0x1a0
[<ffffffff810818a5>] ? sigprocmask+0x75/0x110
[<ffffffff81186421>] sys_write+0x51/0x90
[<ffffffff814e7ace>] ? do_device_not_available+0xe/0x10
[<ffffffff8100b202>] system_call_fastpath+0x16/0x1b

These errors appear on both nodes in the cluster. Now, when you go to create a VM (or container, even though it does not use the LVM) you get errors that the nodes are not available.
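In case it points anywhere useful, this is roughly what I plan to check next from the shell (assuming the stock cman/dlm stack on these nodes):

pvecm status   # quorum and membership as Proxmox sees it
dlm_tool ls    # lockspaces known to the dlm; clvmd should have one listed
vgs            # whether plain LVM commands respond or hang like clvmd does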
 
