[SOLVED] ISCSI with chap

MrJake

New Member
Mar 12, 2025
I have my PBS connected to a switch that links to a Dell EqualLogic PS6100, which I want to use over iSCSI as a datastore for my PBS backups. All four 1 Gb ports on the SAN are configured in the same network as the group IP, and the PBS server has a quad-port NIC with all four ports connected to the switch, each with an IP in the same network as the SAN.
I now need to connect the PBS to the SAN via iSCSI with CHAP enabled, but I haven't found much documentation about that.
Can someone help please?
 
For anybody following the bread crumbs ... it's always good to refer to the vendor's docs.

Sure, follow that post. But read this article first.

https://pve.proxmox.com/wiki/Multipath
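For the CHAP part specifically, on a plain Debian-based PBS/PVE host with open-iscsi, it's roughly the sketch below. The credentials, portal IP, and target IQN are all placeholders; use whatever CHAP account is configured on the EqualLogic group.

```shell
# Install the iSCSI initiator on the host (Debian-based)
apt install open-iscsi

# /etc/iscsi/iscsid.conf -- enable CHAP for both discovery and sessions
# (placeholder credentials shown as comments here):
#   discovery.sendtargets.auth.authmethod = CHAP
#   discovery.sendtargets.auth.username = chapuser
#   discovery.sendtargets.auth.password = chapsecret
#   node.session.auth.authmethod = CHAP
#   node.session.auth.username = chapuser
#   node.session.auth.password = chapsecret

# Discover targets via the EqualLogic group IP (example address)
iscsiadm -m discovery -t sendtargets -p 192.168.50.10

# Log in to the discovered target (example IQN)
iscsiadm -m node -T iqn.2001-05.com.equallogic:example-vol -p 192.168.50.10 --login

# Make the session come back automatically after a reboot
iscsiadm -m node -T iqn.2001-05.com.equallogic:example-vol -p 192.168.50.10 \
    -o update -n node.startup -v automatic
```

With multiple NICs in the same subnet as the group IP you'll likely see the same LUN several times, which is where the Multipath article above comes in.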

What is the benefit of shifting the initiator down to the host level instead of just mounting in the VM? I've seen you mention this in (I think) at least 2 other places, so I figured I would ask. I'm still researching options, so I'd be very interested in your notes.

Also, just a quick disclaimer: I know neither option is exactly ideal, but there's obviously a need/desire for people to avoid adding another box, so here we all are :)

-Newb
 
YMMV.
iSCSI is an unwieldy process all around that I did not enjoy. I took the whole thing on strictly because I had hardware that required it.
Host mounts worked for me. I had real issues getting the PBS virtual machine to properly mount iSCSI.

I found it easier to build and manage from the host level.
It was also easier to document. I had some delusion that other people would be able to administer it. No, they look at it and just page me.
I've had to recover it a couple of times. I was able to just point a new VM at that huge virtual disk on the storage and mount it. So it's proven functional, if not resilient.
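For anyone wondering what a "host mount" looks like in practice, here's a minimal single-path sketch, assuming the iSCSI login already happened and using example device paths (yours will differ):

```shell
# Find the block device the iSCSI login created (the by-path name is an example)
DEV=/dev/disk/by-path/ip-192.168.50.10:3260-iscsi-iqn.2001-05.com.equallogic:example-vol-lun-0

# One-time: put a filesystem on the LUN (this destroys any existing data!)
mkfs.ext4 "$DEV"

# Mount it somewhere PBS can reach
mkdir -p /mnt/equallogic
mount "$DEV" /mnt/equallogic

# Persist in fstab; _netdev delays the mount until networking/iSCSI is up
echo "$DEV /mnt/equallogic ext4 defaults,_netdev 0 2" >> /etc/fstab
```

If you're using multipath, mount the `/dev/mapper/` device instead of the raw path device.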
 

Yeah, I can definitely see the appeal of doing it that way from an ease-of-management and setup perspective, since it sidesteps a lot of command-line work. Were you able to do any performance comparisons? If so, what were the general outcomes? Vague answers like "slightly better," "slightly worse," "negligible," etc. are enough for me. I don't expect you to quantify anything.

I'll probably end up testing both options either way just to see for myself, but I expect some future reader might appreciate the additional bits of information.

-Newb
 
Sorry, it's been some time, but I had sufficient difficulties with CLI setup that I didn't get any sort of performance impression.

Um ... I recall what set me out here.
The device I'm mapping to is an old EMC that doesn't support enough files per folder for the .chunks folder structure.
You could get the map to work, but the chunks wouldn't create.
By mapping it as a datastore and putting the virtual disk there, I got around that limit.

I had some other really dumb ideas.
I was going to use TrueNAS as a front end for the iSCSI mount. Yeah, don't try modifying the back end in TrueNAS.

So this method from the manual seemed perfect to me. And it's worked.
Sorry for the lack of detail; it's been a couple of years and many disasters in between.
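For completeness, once a filesystem is mounted (whether on bare-metal PBS or inside the PBS VM), registering it as a datastore is a one-liner; the store name and path below are examples. PBS builds the `.chunks` tree under the path on creation, which is exactly the step that failed on the file-count-limited EMC filesystem above.

```shell
# Register the mounted path as a PBS datastore (name and path are examples)
proxmox-backup-manager datastore create equallogic-store /mnt/equallogic

# Verify it shows up
proxmox-backup-manager datastore list
```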
 
I wound up running the initiator on the hosts as recommended, but I passed the LUN directly through to the PBS VM instead of adding the LVM layer. This approach isn't what the Proxmox team recommends for most scenarios, but they do support it, and it's definitely easier to configure than running the initiator in the VM. It also has certain advantages for me. I think it makes the restore process slightly easier, and I can take better advantage of a thinly-provisioned volume on the backend. Performance wound up being less important than storage efficiency for me.
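In case it helps a future reader, the passthrough step looks roughly like this on the PVE host; the VM ID and the by-id path are placeholders for your own values:

```shell
# Attach the raw LUN to the PBS VM as an extra SCSI disk.
# Use a stable /dev/disk/by-id name, not /dev/sdX, so it survives reboots.
# (VM ID 100 and the by-id name below are examples.)
qm set 100 -scsi1 /dev/disk/by-id/scsi-360a98000example0001

# Inside the PBS VM the LUN then appears as an ordinary disk (e.g. /dev/sdb):
# format it, mount it, and register it as a datastore from there.
```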

I didn't end up taking advantage of multipathing (though not because I didn't want to).

I'm working on a writeup at newbadmin.com if any passers-by are curious.