Disabling iSCSI automatic configuration overwrite

Phlesher

New Member
Feb 9, 2022
Hoping someone can help me with this one.

I'm using a NAS that exposes multiple iSCSI targets. I need to connect Proxmox VMs to these targets, each of which uses mutual CHAP authentication. That part is non-negotiable at the moment.

Since Proxmox has no facility for configuring CHAP (or any other advanced) settings for iSCSI portals/nodes/targets, I've had to do this configuration via iscsiadm in the shell. This part went off without a hitch -- all connections are good. I can get Proxmox to use these connections by subsequently enabling them in the GUI (but only after I've established the correct configuration via iscsiadm, as explained below...).
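For reference, the per-target setup done in the shell looks roughly like this (the IQN, portal address, and credentials below are placeholders, not my real values):

  # discover the targets on the portal, which creates the node records
  iscsiadm -m discovery -t sendtargets -p 192.0.2.10:3260
  # set mutual CHAP on the node record for one target
  iscsiadm -m node -T iqn.2000-01.com.example:target1 -p 192.0.2.10:3260 -o update -n node.session.auth.authmethod -v CHAP
  iscsiadm -m node -T iqn.2000-01.com.example:target1 -p 192.0.2.10:3260 -o update -n node.session.auth.username -v initiator-user
  iscsiadm -m node -T iqn.2000-01.com.example:target1 -p 192.0.2.10:3260 -o update -n node.session.auth.password -v initiator-secret
  iscsiadm -m node -T iqn.2000-01.com.example:target1 -p 192.0.2.10:3260 -o update -n node.session.auth.username_in -v target-user
  iscsiadm -m node -T iqn.2000-01.com.example:target1 -p 192.0.2.10:3260 -o update -n node.session.auth.password_in -v target-secret
  # log in with the updated settings
  iscsiadm -m node -T iqn.2000-01.com.example:target1 -p 192.0.2.10:3260 --login

(The username_in/password_in values are the reverse-direction credentials for mutual CHAP, i.e. what the target uses to authenticate back to the initiator.)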

The problem is that Proxmox refuses to leave the configuration files alone. It seems that whenever the iSCSI targets are enabled in the Proxmox GUI, Proxmox actively monitors connection status and then, in the case of a failure, overwrites my working configuration with a default one (thus killing per-session settings like CHAP). I have not found any way to keep the targets visible and usable by the VMs in Proxmox while turning off Proxmox's active overwriting of the open-iscsi configuration files.

So, there are multiple questions here:
  1. Am I correct that Proxmox is doing this overwriting of configuration as part of some attempt to re-establish connection to broken iSCSI targets?
  2. If so, is there a way to disable that overwriting/active management of iSCSI configuration, such that it will only use whatever has been configured already in the config files and not touch them?
  3. If not, is there any other solution out there to get around this issue? The only other solution I can imagine is writing a cron job or similar that monitors the config files being overwritten, disables Proxmox iSCSI targets, rewrites the correct configuration, re-establishes all iSCSI sessions, and then re-enables Proxmox iSCSI targets. This would be a lot of work for something that seems like it should be an easy feature disablement in Proxmox, but if it's all I've got, I'll take it...
Thanks in advance for any help!
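
(For what it's worth, the watchdog in point 3 would probably amount to something like this rough, untested sketch run from cron -- the IQN/portal are placeholders, and the re-apply step would just repeat the iscsiadm commands from above:)

  #!/bin/sh
  TGT="iqn.2000-01.com.example:target1"
  PORTAL="192.0.2.10:3260"
  # if Proxmox has reset the node record, the CHAP setting disappears from it
  if ! iscsiadm -m node -T "$TGT" -p "$PORTAL" | grep -q 'authmethod = CHAP'; then
      logger "iscsi-watchdog: CHAP settings lost for $TGT, re-applying"
      # ... re-run the 'iscsiadm -o update' commands shown earlier ...
      iscsiadm -m node -T "$TGT" -p "$PORTAL" --login
  fi

But I'd much rather not have to babysit the config files like this in the first place.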
 
A new thought as an alternative to my solution posted in bullet #3:

Could I set up the iSCSI connections using iscsiadm as before, completely remove the iSCSI targets from Proxmox's interface, and then add the mounted iSCSI drives in Proxmox's interface as plain disks instead? This would make Proxmox see the drives as standard disks in the OS rather than as iSCSI targets, and so stop trying to manage them as such. It would then be up to the host to ensure iSCSI stays up and is configured correctly.

Is this maybe the best solution?
 
I think the last approach you listed is probably your best option. However, it also depends on the type of volume manager/filesystem you plan to use on top of iSCSI and/or how you plan to pass the LUNs through. For example, if you have the ability to create iSCSI LUNs on a per-PVE-volume basis, you can pass them as direct LUNs.

In the end, PVE's support for the iSCSI storage backend is limited and omits things like CHAP. If you let PVE manage your iSCSI connections, it expects to be the only thing doing so.
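
For context, a PVE-managed iSCSI storage definition in /etc/pve/storage.cfg looks roughly like this (names and addresses are just examples):

  iscsi: nas-target1
      portal 192.0.2.10
      target iqn.2000-01.com.example:target1
      content images

With content set to images the LUNs can be attached to guests directly; with content none the storage only serves as a base for an LVM layer on top. Either way this is the PVE-managed path, so it carries the limitation above: no CHAP, and PVE owns the connection.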


Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
How do you accomplish passing them as "direct LUNs"? Not sure what that means exactly. I assume something using the CLI, not the GUI?

What I've done so far, and it seems to be working: I've established the sessions via iscsiadm, which creates the device nodes at the OS level (e.g., /dev/disk/by-path/ip-blah-blah-blah-lun-0), and then used the qm command to set this path directly as a VM's sata0 or scsi0 hard disk. It's working fine for a couple of the VMs, but the third one was unfortunately used in Proxmox to create an LVM group on top of it, which then had a logical volume inside. I'm not clear whether there's a way for me to pull that volume out of the LVM cleanly.
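
For anyone finding this later, the attach step is just along these lines (the VM ID and path are illustrative):

  qm set 101 -scsi0 /dev/disk/by-path/ip-192.0.2.10:3260-iscsi-iqn.2000-01.com.example:target1-lun-0

which hands the whole block device to VM 101 as its scsi0 disk.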
 
There is an option somewhere to use the LUN directly, but what you are doing is effectively the same. Good catch on using a disk path that will be consistent across nodes.
As for the third LUN: if you are confident it's not used by anything on the system (storage/VM/etc.), you can just use lvremove/pvremove and similar commands to clean it up.
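Something along these lines, assuming the volume group on that LUN is otherwise unused (all names below are placeholders):

  lvremove /dev/vg-example/vm-103-disk-0
  vgremove vg-example
  pvremove /dev/disk/by-path/ip-192.0.2.10:3260-iscsi-iqn.2000-01.com.example:target3-lun-0

Check with lvs/vgs/pvs first that nothing on the node still references that volume group.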


 
I only saw an option to use LUNs directly when the iSCSI integration within Proxmox is being used -- which, again, would present the same problem I've already encountered, with Proxmox fiddling with the configuration.

Actually, to restate the problem with that third VM -- it's not that I just want to clean it up. What I'd really like to do is somehow migrate the image contained in the logical volume inside that LVM group back out to a raw image. This could be a multi-step process; I'm just not sure what that process is yet. I will probably have to go investigate it! Otherwise, I have to do some more manual migration that will just be a pain.
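
One possible path, assuming the VM is shut down and the logical volume can still be activated (names below are placeholders):

  lvchange -ay vg-example/vm-103-disk-0
  qemu-img convert -f raw -O raw /dev/vg-example/vm-103-disk-0 /root/vm-103-disk-0.raw

i.e. copy the volume's contents out into a plain raw image, then tear down the LVM layer with lvremove/pvremove as suggested above and attach either the raw image or the bare LUN back to the VM. Still needs verification on my end.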
 
