PBS: How to use multipathed iSCSI-target(s) for backups?

Seabob
Jan 22, 2024
Dear all,
as far as I can see, you can add nearly any locally mounted folder as a backup directory. So, why not create an LVM group on a multipathed iSCSI target and mount that? (As opposed to trying to install PBS directly on a NAS box.)
To implement my plan, I installed open-iscsi and multipath-tools in PBS 3. Two additional NICs for SAN-A (VLAN9) and SAN-B (VLAN10) were also prepared and running. With the help of this article: How-To-Setup-iSCSI-with-Multisession-and-MPIO-in-Linux I was even able to "see" my MP device under "/dev/mapper/myNAS_LUN1".
But rebooting the system ended with six occurrences of "Failed to start proxmox-backup-proxy.service - Proxmox Backup API Proxy Server.", and only after about 1-2 minutes could I log in to the server.
Before drilling into my misguided configuration in detail, I'd like to ask: is there a better, official guideline available from Proxmox?
 
Maybe I should clarify my request for help a bit:

#1 How to attach iSCSI targets to PBS? - There is no GUI for this as there is in PVE.
#2 How to get multipathing running?

Currently, I'm testing with the PBS 3.1 distro and, in parallel, with PBS 3.1 installed on top of Kubuntu.
 
Well, after a lot of testing and reinstalling I finally got it working. In the absence of a comprehensive guideline I decided to put my experience down here, so someone else may benefit from it. Many of the hints presented here are actually taken from other manuals; I have merely compiled them.
The task was to get this environment up and running:
(See the attached network diagram: PVE_big-picture_US-en.png)
I have to admit the switches do not support MLAG or stacking, so I had to achieve fault-tolerant connections by other means.
Another thing I asked myself was: "how do I back up the PBS itself?" - I haven't found a guide for importing an existing datastore in case the PBS needs to be restored for whatever reason, so I installed it as another VM. I'm aware of the impacts discussed in other threads.
Consequently, I had to decide whether to attach the LUN for backups to the host and create a vmdisk for PBS on it, or to attach the LUN directly to the VM, so the VM can access it almost independently of the host. I chose to connect the LUN directly to the VM.

Installing PBS as a VM brought the convenience of using the VLAN tagging provided by the vmbrs in PVE, so I didn't need to configure VLANs for SAN-A and SAN-B inside the VM.
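For illustration, the relevant part of the VM config (/etc/pve/qemu-server/<vmid>.conf, or the Hardware tab in the GUI) looked roughly like this on my side - bridge names and MACs are placeholders from my setup, the tags are the VLAN9/VLAN10 mentioned above:

net0: virtio=<MAC>,bridge=vmbr0 <= LAN / management
net1: virtio=<MAC>,bridge=vmbr1,tag=9 <= SAN-A, VLAN 9
net2: virtio=<MAC>,bridge=vmbr1,tag=10 <= SAN-B, VLAN 10

Inside the VM the interfaces then stay untagged: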

nano /etc/network/interfaces:

auto lo
iface lo inet loopback

auto ens18
iface ens18 inet static
address 192.168.2.43/23
gateway 192.168.2.254
#LAN, Management

auto ens19
iface ens19 inet static
address 192.168.0.133/26
#SAN-A

auto ens20
iface ens20 inet static
address 192.168.0.197/26
#SAN-B

#1 Make sure you can ping targets on all networks, including the internet.

#2 download necessary tools and updates
apt update
apt upgrade
apt install -y open-iscsi multipath-tools
optional, maybe useful as well: apt install -y mc htop net-tools multipath-tools-boot

#3 create interfaces and bindings for iSCSI
iscsiadm -m iface -I NAS0002_A --op new
iscsiadm -m iface -I NAS0002_B --op new
iscsiadm -m iface -I NAS0002_A --op update -n iface.net_ifacename -v ens19
iscsiadm -m iface -I NAS0002_B --op update -n iface.net_ifacename -v ens20
These bindings are stored under /etc/iscsi/ifaces and may be checked there.
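A quick way to double-check the bindings before moving on (file names match the iface names chosen above):
iscsiadm -m iface <= lists all defined ifaces
cat /etc/iscsi/ifaces/NAS0002_A <= iface.net_ifacename should now read ens19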

#4 Edit /etc/iscsi/iscsid.conf
node.startup = automatic
node.session.timeo.replacement_timeout = 15
node.session.iscsi.MaxBurstLength = 262144 <= this value depends on your storage devices

#5 activate and start iSCSI-services
systemctl enable iscsid open-iscsi multipathd
systemctl start iscsid open-iscsi multipathd

#6 discover your storage-targets
iscsiadm -m discovery -t sendtargets -p <storage_IP>:3260 -I NAS0002_A
a potential result could be:
192.168.0.129:3260,1 iqn.2000-01.com.synology:CAYDEBSNAS0002.default-target.70b15ac062e <= this IQN will be needed in the next step
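If needed, the same discovery can be repeated for the second path against the SAN-B portal through its bound iface (portal IP as used in the login step below):
iscsiadm -m discovery -t sendtargets -p 192.168.0.193:3260 -I NAS0002_B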

#7 login to target
iscsiadm -m node -T iqn.2000-01.com.synology:CAYDEBSNAS0002.default-target.70b15ac062e -p 192.168.0.129 -l
iscsiadm -m node -T iqn.2000-01.com.synology:CAYDEBSNAS0002.default-target.70b15ac062e -p 192.168.0.193 -l
I needed to log in via both paths; a discovery on both paths might even be necessary.
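Before looking at the block devices, the active sessions can be listed as a sanity check; two sessions should show up, one per portal/iface:
iscsiadm -m session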
- check result using "lsblk"
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
sda 8:0 0 40G 0 disk
├─sda1 8:1 0 1007K 0 part
├─sda2 8:2 0 512M 0 part
└─sda3 8:3 0 39.5G 0 part
sdb 8:16 0 5T 0 disk
└─NAS002_LUN1 252:0 0 5T 0 mpath
sdc 8:32 0 5T 0 disk
└─NAS002_LUN1 252:0 0 5T 0 mpath
sr0 11:0 1 964M 0 rom
We can see a /dev/sdX for each path; in this case these are sdb and sdc.
Now we use this information to determine the WWID of the target. As there are two paths but only one target, you can pick either path:
/lib/udev/scsi_id -g -u -d /dev/sdb

The following information is mostly taken from the Proxmox-Wiki regarding multipath iSCSI.

#8 Edit /etc/multipath.conf (example)
blacklist {
    wwid .*
}

blacklist_exceptions {
    wwid "360014054fc1c251db21ed4387dac22d3"
    wwid "3600144f028f88a0000005037a95d0002"
}

multipaths {
    multipath {
        wwid "360014054fc1c251db21ed4387dac22d3"
        alias NAS002_LUN1
    }
    multipath {
        wwid "3600144f028f88a0000005037a95d0002"
        alias NAS003_LUN0
    }
}

defaults {
    polling_interval 2
    path_selector "round-robin 0"
    path_grouping_policy multibus
    uid_attribute ID_SERIAL
    rr_min_io 100
    failback immediate
    no_path_retry queue
    user_friendly_names yes
}
In addition to this, add the WWID to /etc/multipath/wwids with the command "multipath -a <your_wwid>".
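With the WWIDs from the example config above, that would be something like:
multipath -a 360014054fc1c251db21ed4387dac22d3
multipath -r <= reloads the multipath maps; alternatively, the service restart in step #9 also picks up the change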

#9 Restart and check multipathing with edited configuration
systemctl restart multipath-tools.service
multipath -ll
the latter command should return something like:

NAS002_LUN1 (36001405f5bbfebfddd2dd448cda62fd5) dm-0 SYNOLOGY,Storage
size=1.0T features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
`-+- policy='round-robin 0' prio=50 status=active
|- 3:0:0:1 sdb 8:16 active ready running
`- 4:0:0:1 sdc 8:32 active ready running
Both paths to the LUN should show up like above.

At this point one could think the job is mostly done: just create a partition or LVM volume on the device, format it, mount it and finally create the datastore. But it got tricky for me. If there is somebody out there with better advice, I'm sure the community and I will welcome it.

#10 determine the device-path for the LUN
"ls /dev/mapper" reveals:
control NAS002_LUN1

To my understanding, /dev/mapper/NAS002_LUN1 should be a mountable path, but it didn't work.

#11 creating the partition
In order to do so, I used "kpartx -av /dev/mapper/NAS002_LUN1" to create the device mapping. "ls /dev/mapper" now shows:
control NAS002_LUN1 NAS002_LUN1-part1
As I found out, I could now partition and format /dev/mapper/NAS002_LUN1-part1 using "gdisk" and "mkfs".
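For reference, the rough sequence as I understand it (a sketch only - adapt device names and the label to your own setup, and see #12 below for the filesystem choice):
gdisk /dev/mapper/NAS002_LUN1 <= create a single partition spanning the LUN
kpartx -av /dev/mapper/NAS002_LUN1 <= (re)creates the mapping NAS002_LUN1-part1
mkfs -t xfs -L NAS002_LUN1 /dev/mapper/NAS002_LUN1-part1 <= format the partition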

#12 mounting with surprises
#12a attempting to use ext4:
I formatted the device with default options: "mkfs -t ext4 -L NAS002_LUN1 /dev/mapper/NAS002_LUN1",
but initialization as a datastore for backups failed because not all required folders could be created. This was a surprise. I'd be interested to know which options I should have passed to mkfs to get the datastore initialized.
#12b using xfs:
Formatting, mounting manually and initializing as a datastore worked well, so I wanted to automount this store on startup via /etc/fstab.
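For completeness, this is roughly how the mount and the datastore creation looked on my side (datastore name and mount point are my own choices; the datastore can of course also be created in the GUI):
mkdir -p /mnt/NAS0002_LUN1
mount /dev/mapper/NAS002_LUN1-part1 /mnt/NAS0002_LUN1
proxmox-backup-manager datastore create NAS0002_LUN1 /mnt/NAS0002_LUN1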

#13 automounting on startup
My first several attempts left the server taking minutes to boot and ending up in maintenance mode.
Maintenance mode was entered because the LUN wasn't ready when fsck tried to check it on boot, so I gave this line in fstab a try:
/dev/mapper/NAS002_LUN1-part1 /mnt/NAS0002_LUN1 xfs auto,nofail,rw,defaults 0 2
"nofail" did the trick in my case.
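A variant I came across but haven't tested myself is to additionally mark the filesystem as network-dependent with "_netdev", so systemd orders the mount after the network (and thus iSCSI) is up:
/dev/mapper/NAS002_LUN1-part1 /mnt/NAS0002_LUN1 xfs _netdev,nofail,rw,defaults 0 2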

But booting still took awfully long. The next error I figured out by checking the system logs: iscsi tried to log in to an IP address on SAN-A using the iface on SAN-B. That was bound to fail.
Checking /etc/iscsi/nodes in detail revealed an "additional" faulty record for the portal on SAN-A via NAS0002_B, which I deleted.
Now the boot completes within a minute or less.
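Instead of editing the files under /etc/iscsi/nodes by hand, such a stale record can also be removed with iscsiadm (IQN and portal below are placeholders for the faulty SAN-A portal reached via the SAN-B iface):
iscsiadm -m node -T <target_iqn> -p <SAN-A_portal_IP>:3260 -I NAS0002_B --op delete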

I connected the backup store to PVE and ran a test. It turned out that backup performance is limited only by the 1 GBit NICs in the NAS device, so I'm fine in the end.
 
