[SOLVED] using iSCSI as Backup Storage

peterG

Hi all,
Posting this to save anyone else in the future all the digging it took to get iSCSI backup storage configured and working.
Please revise and suggest corrections and shortcuts where possible.

The existing wiki docs either don't go far enough or aren't clear enough for newcomers to Linux. In short: to use iSCSI as backup storage, you need to use LVM, then drop to the command line (CLI) to create a logical volume inside the volume group you created via the web GUI, format that logical volume with a filesystem, and THEN, back in the GUI, add the backup storage as a "Directory" so it can be used for backups of VMs. (I hope this summary is correct; please edit and revise as needed.)

Working with pveversion 3.2-4, kernel 2.6.32-29-pve.
This PVE is a single node at a nonprofit organization with a very small IT budget (it has been working great since PVE 3, hosting W2003R2 VMs).

The iSCSI targets are Win7 physical desktops that happen to have larger HDs installed in them, so we installed the free StarWind iSCSI server software (a single-instance free or complimentary license per physical machine), which serves up an iSCSI target carved out of unused space on the machine's native NTFS disk. We also looked at the free Kernsafe iStorage Server single-instance license but chose to go with the StarWind product; even though the installer complained that we weren't installing on a server platform, it installs and works just fine on Win7 64-bit machines.

So: the iSCSI target is created, then mounted as iSCSI in the PVE web GUI. Then, following the recommendations of the Storage Model wiki page, create an LVM storage on top of it and give the volume group a name; that is also explained on that page. Then you have to bring up the CLI, create a logical volume on the volume group, and format the logical volume with a filesystem, before going back to the web GUI and adding this now-formatted logical volume as a Directory to be used as backup storage (you would be in the Storage tab of the Datacenter section of the web GUI at this point). If you want to start backing up VMs right away, click on each VM and choose "Backup now"; the backup storage now shows up as the directory you just designated and mounted in the last step.

Here's the procedure, with links:

First start at the Storage Model wiki page: here

Then, before going on, take some time to look at how logical volumes work in Linux. Review physical volumes, volume groups, and logical volumes in these three tutorials; they will help!

how to create & work

then this one over at CentOS

and finally use the great PNG cheat-sheet image, which really clears up the nice, simple commands you need to deal with LVM, here
or, to get just the PNG LVM commands cheat sheet, here

So, putting it all together: at this point you've created the VG in the web GUI and just brought up the CLI (either at the console or through an SSH session) to continue issuing these commands.
(At this point Udo's directions drove the point home, from this post here.)
Quoting just the relevant portions of his post, what helped was this:

"simply create a logical volume on your LVM for backup - this has the benefit that you can also play with VM disks on LVM, because you don't need to use all the space for backup in the beginning - logical volumes can grow without trouble."

lvcreate -L 1T -n backup name_of_VG # create a logical volume of 1 TB on the volume group name_of_VG

(Or alternatively, as stated in the CentOS tutorial cited in the link above: https://www.centos.org/docs/5/html/Cluster_Logical_Volume_Manager/LV_create.html

lvcreate -l 100%FREE -n your-lv-name testvg # "testvg" being the volume group name you assigned in the web-GUI portion of the PVE config
The above command creates a logical volume called "your-lv-name" that uses all of the unallocated space in the volume group "testvg".

then
vgdisplay )

Udo continues here:

#have a look:
vgdisplay
lvdisplay

mkfs.ext4 /dev/name_of_VG/backup
mkdir /backup
echo "/dev/name_of_VG/backup /backup ext4 defaults 0 2" >> /etc/fstab # appends this line to fstab
mount /backup

After that, define the directory /backup as backup storage.

To expand the logical volume see "man lvm", "man lvextend" and "man resize2fs".
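As a concrete sketch of that expansion (names follow Udo's example above; the extra 100 GB is an arbitrary amount picked for illustration):

```shell
# grow the backup LV by 100 GB, then grow the ext4 filesystem to match
lvextend -L +100G /dev/name_of_VG/backup
resize2fs /dev/name_of_VG/backup
df -h /backup   # compare before and after to confirm the new size
```

resize2fs can grow a mounted ext4 filesystem online, so the backup directory can stay mounted while this runs.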

Udo


Have you tried my example before?


lvcreate -l 270000 -n backup /dev/second-lvm
ls -l /dev/mapper
mkfs.ext4 /dev/second-lvm/backup

Udo

Hi,
with "-l" you give the number of logical extents - in your case one extent is 4 MB in size.
Look with "vgdisplay" to see how many you have free. You can use "lvextend -l +XXXX" for extents, or "-L +800M" for MB.
After that, simply use resize2fs.
Do a "df -k" before and after.
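A quick sanity check of Udo's extent math (one extent = 4 MiB, as he notes above):

```shell
# 270000 extents x 4 MiB per extent, expressed in GiB
echo "$((270000 * 4 / 1024)) GiB"   # prints "1054 GiB", i.e. just over 1 TiB
```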

Udo

-------------------------------------------------------------------------------------------------------

So Udo's repeated persistence and patience helped a lot. Finally, here's what was done in our case, following the directions above: after creating the VG in the web GUI, set the GUI aside and bring up the CLI:

vgdisplay shows LR1 as the name of the VG we created in the web GUI.

lvcreate -l 100%FREE -n LV1 LR1 # we just created a logical volume called LV1 inside VG LR1, taking 100% of the free space on the mounted iSCSI block device; now we'll format it with a filesystem. This is very similar to working with fdisk and partitioning, or just plain partitioning back in the ancient DOS days.

mkfs.ext4 /dev/LR1/LV1 # we are now laying out an ext4 filesystem on the LV1 logical volume we created inside VG LR1. Be patient and let the format finish; the bigger the space being formatted, the longer it takes (a few minutes or so, I'd say, for a 1 TB iSCSI block device on a GigE LAN, with, say, a Core 2 Duo machine with 4 GB or more of memory hosting the iSCSI target).

Now, say I want to mount this under /mnt: I make a directory inside /mnt and then mount it like this:

mkdir /mnt/LV1-backup-here # for clarity, I am naming this directory "LV1-backup-here"

mount /dev/LR1/LV1 /mnt/LV1-backup-here # OK, whew! We just mounted the formatted logical volume at the mount point "LV1-backup-here".

Now back into the web GUI: go to Datacenter (on the left side of the GUI), then to the Storage tab, and add the local directory /mnt/LV1-backup-here as your backup destination.
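For reference, that GUI step ends up writing a directory-storage entry into /etc/pve/storage.cfg roughly like the following (the storage ID "LV1-backup" is a name made up for illustration; check your own file rather than copying this verbatim):

```
dir: LV1-backup
        path /mnt/LV1-backup-here
        content backup
```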

Now, say you want to manually back up a VM to this new destination right away: click the VM in the left pane of the GUI, choose the Backup tab, and inside that tab just click "Backup now" (the storage drop-down in this VM's backup pane should already be populated with the newly created backup storage name you chose).
If you want better compression and smaller backups, choose the gzip compression option to save space, though the backups will certainly take more time.
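The same manual backup can also be started from the CLI with vzdump; a sketch, assuming a VMID of 100 and the storage name "LV1-backup" (both placeholders for your own values):

```shell
# gzip-compressed snapshot-mode backup of VM 100 to the new directory storage
vzdump 100 --storage LV1-backup --compress gzip --mode snapshot
```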

Now, after these backups complete, and because we are connected to actual Win7 physical machine(s) which may not be up all the time, we tear down the connection; the image files inside the iSCSI "capsule" on the Win7 NTFS disk keep the backup.
Still have to figure out the best way to tear down this connection.
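One hedged way to do that teardown, assuming open-iscsi is managing the session; the target IQN and portal IP below are placeholders, so substitute your own values (shown by `iscsiadm -m session`):

```shell
umount /mnt/LV1-backup-here        # unmount the backup filesystem first
vgchange -an LR1                   # deactivate the VG so LVM releases the device
iscsiadm -m node -T iqn.2008-08.com.starwindsoftware:target1 -p 192.168.1.50 --logout
```

Note that if the iSCSI storage is still defined in PVE, it may try to reconnect; disabling the storage entry first avoids that.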

Hope this clarifies and helps, because the digging can be frustrating. Anyone and everyone, please feel free to revise, clarify, correct, and improve.
 
Good story! Just curious about the StarWind config... A single-node instance should not have the RAM cache set to write-back mode. Did you turn the cache OFF completely or move to the WT option? Also, why didn't you cluster StarWind into an HA config? (StarWind recently started giving away two-node versions without any limitations; you need to be an MCP, MVP, or MCT, or just ask kindly LOL.) That would give the option of running redundant backup storage with replication between nodes rather than RAID configured on a single node. For now we use a bunch of old Netgear and newer Synology NAS boxes as backup (also in iSCSI mode), but as we don't want to swap the 1TB disks for 2TB or 3TB ones, we're looking for a replacement: a single or dual (see the question above) Dell R710 (we're replacing them with R730 units) packed with 1-2-3 WD RE4 drives we could add "on requirement," running free Hyper-V and free StarWind, single or clustered. Single vs. clustered is actually my question :) Thank you!

Sorry to resurrect an old thread, but did you ever manage to get the LVM volume to auto-mount to the folder, or was it necessary to mount manually from the CLI? I'm experimenting with StarWind Virtual SAN on Proxmox and would like to have it fairly automated, in the same way that's possible with both Hyper-V and ESX.
 
Response to @plastilin on the other Proxmox forum thread that was referred back to here:

Yes, we got this working. HOWEVER, the speed of doing backups (live snapshots etc.) was excruciatingly SLOW, even though this was within a GigE LAN, so we gave it up and never used it again. We opted instead to image the running VMs from within themselves.

I.e., this is what we do, though it can be done many different ways without using iSCSI over the LAN: we use Paragon within the running instances of the MS platforms, whether server or workstation, and image the boot "disc" portion to other storage (whether outside the Proxmox chassis on the LAN or inside it) for backup or offsite backup.
For the data volumes we do continuous sync in addition to nightly offsite backups, all done within the VMs.
 

I want to use StarWind as a SAN for storing VMs, not for backup.
 
This 9-year-old post worked for me. I want to use my Drobo Elite (iSCSI) for backups with Proxmox Backup Server, and I couldn't figure out the next steps for getting a volume group created. Thanks, PeterG!
 
The problem with PBS using iSCSI directly via the PBS disk-management interface (last time we looked), without the LVM-and-filesystem dance, is that the developers decided to specifically exclude iSCSI-based disks:
Code:
if let Ok(target) = std::fs::read_link(&sys_path) {
    if let Some(target) = target.to_str() {
        if ISCSI_PATH_REGEX.is_match(target) { continue; } // skip iSCSI devices
    }
}
https://github.com/proxmox/proxmox-...dc70b2/src/tools/disks/mod.rs#L966C25-L966C25

Whether it's the right decision is up for discussion, I guess.



Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
