ZFS iSCSI: can't create more than 11 LUNs?

rahman

Hi,

I am trying to add VM disks on ZFS storage. But after creating 11 disks I can't create any more. I get this error:

File exists. at /usr/share/perl5/PVE/Storage/LunCmd/Iet.pm line 376. (500)

If I delete one of the disks, I can add a new one, but I get this error every time I try to create the 12th disk.

Here is the zfs list output:

NAME USED AVAIL REFER MOUNTPOINT
elastics/vm-101-disk-1 4.16G 5.80T 4.16G -
elastics/vm-101-disk-2 72K 5.80T 72K -
elastics/vm-107-disk-1 4.30G 5.80T 4.30G -
elastics/vm-107-disk-2 72K 5.80T 72K -
elastics/vm-107-disk-3 72K 5.80T 72K -
elastics/vm-107-disk-4 72K 5.80T 72K -
elastics/vm-111-disk-1 4.13G 5.80T 4.13G -
elastics/vm-111-disk-2 72K 5.80T 72K -
elastics/vm-111-disk-3 72K 5.80T 72K -
elastics/vm-111-disk-4 72K 5.80T 72K -
elastics/vm-127-disk-1 76.8G 5.80T 76.8G -


and here are the ietd LUNs:
cat /proc/net/iet/volume
tid:1 name:iqn.2001-04.tr.edu.artvin:elastics
lun:0 state:0 iotype:blockio iomode:wt blocks:545259520 blocksize:512 path:/dev/elastics/vm-127-disk-1
lun:1 state:0 iotype:blockio iomode:wt blocks:306184192 blocksize:512 path:/dev/elastics/vm-107-disk-1
lun:2 state:0 iotype:blockio iomode:wt blocks:306184192 blocksize:512 path:/dev/elastics/vm-111-disk-1
lun:3 state:0 iotype:blockio iomode:wt blocks:306184192 blocksize:512 path:/dev/elastics/vm-101-disk-1
lun:4 state:0 iotype:blockio iomode:wt blocks:629145600 blocksize:512 path:/dev/elastics/vm-111-disk-2
lun:5 state:0 iotype:blockio iomode:wt blocks:629145600 blocksize:512 path:/dev/elastics/vm-111-disk-3
lun:6 state:0 iotype:blockio iomode:wt blocks:629145600 blocksize:512 path:/dev/elastics/vm-111-disk-4
lun:8 state:0 iotype:blockio iomode:wt blocks:629145600 blocksize:512 path:/dev/elastics/vm-107-disk-2
lun:10 state:0 iotype:blockio iomode:wt blocks:629145600 blocksize:512 path:/dev/elastics/vm-107-disk-3
lun:9 state:0 iotype:blockio iomode:wt blocks:629145600 blocksize:512 path:/dev/elastics/vm-107-disk-4
lun:7 state:0 iotype:blockio iomode:wt blocks:629145600 blocksize:512 path:/dev/elastics/vm-101-disk-2
 
This is how the Proxmox ZFS plugin works: it creates a zvol and shares it via an iSCSI LUN.
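As a rough sketch (the tid matches the output above, but the LUN number and zvol path are hypothetical examples, not taken from the plugin source), creating one of those LUNs by hand would look something like:

# hypothetical next free LUN number and zvol path, for illustration only
ietadm --op new --tid=1 --lun=11 --params Path=/dev/elastics/vm-101-disk-3,Type=blockio

Each new disk created in the GUI therefore consumes one more LUN number on the target.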
 
Then I'm sorry, I can't help you more.

At a glance, it seems silly to allocate a LUN for each VM disk, but there is quite surely some technical reason to do it that way. Maybe you're hitting a technical limit (on the minor device number, perhaps?), but I'm only guessing.

I have only used ZFS on FreeNAS to export a single LUN to Proxmox, which in turn uses LVM to allocate space for the different VMs.
 
It is a Debian 7.8 ZFS-on-Linux (ZoL) install. Is there any way I can see what commands the PVE ZFS plugin sends? Then I could try them manually and see whether they work.
 
After line 371 in /usr/share/perl5/PVE/Storage/LunCmd/Iet.pm, add the following line (if the module does not already load Data::Dumper, also add a use Data::Dumper; near the top):
print STDERR Dumper($ietadm, @params);

Then do the following:
1) sudo service pvedaemon stop
2) sudo /usr/bin/pvedaemon --debug
3) sudo service pveproxy restart

When you trigger the creation of a new disk in the GUI, something similar to the output below should appear on the console:
$VAR1 = '/usr/sbin/ietadm';
$VAR2 = '--op';
$VAR3 = 'new';
$VAR4 = '--tid=1';
$VAR5 = '--lun=test1';
$VAR6 = '--params';
$VAR7 = '/dev/test';
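When you are done debugging, stop the foreground pvedaemon with Ctrl+C, remove the Dumper line again, and bring the services back up the normal way:

sudo service pvedaemon start
sudo service pveproxy restart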
 
File exists. at /usr/share/perl5/PVE/Storage/LunCmd/Iet.pm line 376. (500)


I know this is an older thread, but I ran into the exact same issue tonight and found this thread, so I'm updating it in case it helps somebody else.

I had this exact same error tonight using ZFS over iSCSI. My storage server is Ubuntu 14.04 based with iet installed. The debugging steps posted earlier in this thread helped me work through it and figure out the cause.

It seems that when a disk was added previously, it was created with the ietadm command and the LUN did exist, but for some reason the ietd.conf file in /etc/iet/ietd.conf did not get updated to include the new LUN. I'm not sure where exactly that takes place in the code, or whether it's a bug in Proxmox or in IET itself. Once I hand-edited ietd.conf to add the last LUN that had been created (as verified in /proc/net/iet/volume) and saved the file, I was able to continue adding disks. I did not restart IET (no need, since the LUN was already active), but I believe Proxmox examines that file to find the next available LUN number when it creates a new disk, so IET bombs out because that LUN number is already in use.
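For reference, using the last LUN from the /proc/net/iet/volume output earlier in this thread (your target name, LUN number, and zvol path will differ, so treat this only as a sketch), the hand-added Lun line under the existing Target section in /etc/iet/ietd.conf would look something like:

# example only: match the LUN number and path to your /proc/net/iet/volume output
Target iqn.2001-04.tr.edu.artvin:elastics
        Lun 10 Path=/dev/elastics/vm-107-disk-3,Type=blockio

A quick way to spot the mismatch is to compare the LUN numbers in both places:

grep -o 'lun:[0-9]*' /proc/net/iet/volume
grep -o 'Lun [0-9]*' /etc/iet/ietd.conf

Any LUN that shows up in /proc/net/iet/volume but not in ietd.conf is a candidate for the stale entry described above.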

Just wanted to provide this extra info in case it helps somebody else.

