Can iSCSI + LVM storage be used as a destination for vzdump backups?

vcp_ai
Renowned Member
Jul 28, 2010
Valencia, Spain
After reading this post
http://forum.proxmox.com/threads/56...s-with-iSCSI-storage?highlight=lvm+as+backups

I still do not know if it can be used or not:

I've configured iSCSI + LVM according to the instructions at
http://pve.proxmox.com/wiki/Storage_Model#LVM_Groups_with_Network_Backing

and it only offers "Virtual Disks" as available Contents.

I've successfully installed some virtual machines there, but I also need to use the iSCSI target as the destination for vzdump backups.

Can I use it for both purposes?
If, as I assume, it cannot be done, how should I define an iSCSI target as a destination for vzdump backups?
 
Ah, dietmar, thanks, that makes sense. I had wondered that too. NFS is its own filesystem, so that 'just works'...
 
You need a filesystem to store backup files.
Thanks for your answer. That was what I was thinking...

I've tried the following workaround to use iSCSI as the destination for vzdump backups, and it works on a test machine.
It basically attaches an iSCSI target and mounts it as a local directory:

This is the procedure I used:
=============================================================
We create a new iSCSI target, checking "Use LUNs directly".

After saving, we can see in /var/log/messages:

Mar 3 18:27:10 proxmox kernel: scsi3 : iSCSI Initiator over TCP/IP
Mar 3 18:27:10 proxmox kernel: scsi 3:0:0:0: Direct-Access OPNFILER VIRTUAL-DISK 0 PQ: 0 ANSI: 4
Mar 3 18:27:10 proxmox kernel: sd 3:0:0:0: Attached scsi generic sg2 type 0
Mar 3 18:27:10 proxmox kernel: sd 3:0:0:0: [sdb] 63438848 512-byte logical blocks: (32.4 GB/30.2 GiB)
Mar 3 18:27:10 proxmox kernel: sd 3:0:0:0: [sdb] Write Protect is off
Mar 3 18:27:10 proxmox kernel: sd 3:0:0:0: [sdb] Write cache: disabled, read cache: disabled, doesn't support DPO or FUA
Mar 3 18:27:10 proxmox kernel: sdb: unknown partition table
Mar 3 18:27:10 proxmox kernel: sd 3:0:0:0: [sdb] Attached SCSI disk
Mar 3 18:27:10 proxmox kernel: sdb: detected capacity change from 0 to 32480690176

i.e., it has detected a new device and named it sdb.


==============================================================
In an SSH terminal session:
proxmox:~# fdisk -l

Disk /dev/sda: 250.0 GB, 250059350016 bytes
255 heads, 63 sectors/track, 30401 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x00000000

Device Boot Start End Blocks Id System
/dev/sda1 * 1 66 524288 83 Linux
Partition 1 does not end on cylinder boundary.
/dev/sda2 66 30401 243671712 8e Linux LVM

Disk /dev/dm-0: 4294 MB, 4294967296 bytes
255 heads, 63 sectors/track, 522 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x00000000

... ... ... .. . Part Removed .........

Disk /dev/sdb: 32.4 GB, 32480690176 bytes
64 heads, 32 sectors/track, 30976 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
Disk identifier: 0x00000000

Disk /dev/sdb doesn't contain a valid partition table
proxmox:~#

i.e., /dev/sdb is present, with no valid partition table.

==========================================================
So we create one:
proxmox:~# fdisk /dev/sdb
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel with disk identifier 0x2aae58ae.
Changes will remain in memory only, until you decide to write them.
After that, of course, the previous content won't be recoverable.


The number of cylinders for this disk is set to 30976.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
(e.g., DOS FDISK, OS/2 FDISK)
Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
proxmox:~#

=========================================
Format with ext4 (the whole device here, since no partition was created):
proxmox:~# mkfs.ext4 /dev/sdb
mke2fs 1.41.3 (12-Oct-2008)
/dev/sdb is entire device, not just one partition!
Proceed anyway? (y,n) y
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
1982464 inodes, 7929856 blocks
396492 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
242 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000

Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 35 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.
proxmox:~#
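As the mke2fs message above notes, the periodic checks (every 35 mounts / 180 days) can be disabled with tune2fs, which on a backup volume avoids surprise fsck delays at mount time. A minimal sketch, rehearsed on a scratch image file (a made-up path; on the real host the target would be /dev/sdb, or /dev/sdb1 once a partition exists):

```shell
# Rehearse on a scratch image (hypothetical path); substitute the real
# device on the host.
truncate -s 64M /tmp/ext4-rehearsal.img
mkfs.ext4 -F -q /tmp/ext4-rehearsal.img

# Disable the mount-count and time-interval automatic checks,
# as the mke2fs output suggests (tune2fs -c / -i).
tune2fs -c 0 -i 0 /tmp/ext4-rehearsal.img

# Inspect the "Maximum mount count" and "Check interval" fields.
tune2fs -l /tmp/ext4-rehearsal.img | grep -E 'Maximum mount count|Check interval'
```

Whether to disable the checks is a judgment call; for an externally attached backup volume the scheduled fsck is often more nuisance than protection.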

====================================

Mount the filesystem:
# mkdir /mnt/iscsi1
# mount /dev/sdb /mnt/iscsi1

====================================
Now, on the Proxmox Storage page, we can add a new Directory:

Name: Directorio-Backups
Directory: /mnt/iscsi1

Shared: YES
Type: Vzdump Backups
=====================================

And it works as a destination for vzdump backups.


Please revise it and correct any mistakes, or suggest any improvements you can find, before I install it on a production machine.

Thanks in advance,

Vicente
 
Don't forget that in such a case it is impossible to share it between different servers... the filesystem wouldn't appreciate it... :)
 
Don't forget that in such a case it is impossible to share it between different servers... the filesystem wouldn't appreciate it... :)
Thanks for your interest. Yes, I know that, but the owner of the SAN server says it cannot provide NFS shares...
So it is the only way (that I've found) to obtain the needed storage for vzdump backups while keeping it external to the Proxmox server.

By the way, do you find my procedure correct, or can it be improved in some way?
 
Thanks for your interest. Yes, I know that, but the owner of the SAN server says it cannot provide NFS shares...
So it is the only way (that I've found) to obtain the needed storage for vzdump backups while keeping it external to the Proxmox server.

By the way, do you find my procedure correct, or can it be improved in some way?
Hi,
Two improvements: mount the filesystem by UUID (in /etc/fstab), because if you insert a second internal drive, that drive will become sdb!
To see the UUID, in your case run "blkid /dev/sdb".
Normally you should use a partition table; a disk without a partition table and without LVM info may be taken for a blank disk and destroyed by an inattentive person. With a partition table (sdb1) you can easily see that the disk is in use.

Udo
 
Thanks to both dswart & udo.

Fortunately I was just about to start the process.

Investigating how to create a partition; it seems cfdisk will help me...

I will also try to understand the /etc/fstab terminology, as it does not look very clear to me...

I will report the results.

Best Regards.

Vicente
 
Well, I've done the whole process according to your recommendations and now have a directory mounted on /mnt/iscsi1, i.e.:

Created the partition /dev/sdb1 with cfdisk, using all the available space.
Formatted it with mkfs.ext4 /dev/sdb1.
Then mkdir /mnt/iscsi1 and finally mount /dev/sdb1 /mnt/iscsi1.

The directory has been added to Proxmox and defined for vzdump backups. And it works!
Results of blkid are:
proxmox:/# blkid /dev/sdb1
/dev/sdb1: UUID="93e9e1d4-e574-48eb-ab96-bf0a24d2fcdf" TYPE="ext4"
proxmox:/#
So, after reading fstab(5) and looking at some examples, I plan to add this line to /etc/fstab:

UUID=93e9e1d4-e574-48eb-ab96-bf0a24d2fcdf /mnt/iscsi1 ext4 defaults 0 1

Could you please confirm that these parameters are correct? (I cannot reboot the machine right now, and I would like to be sure that when it happens, it will start...)

Vicente


 
Hi,
looks good. You can try it without a reboot:
Code:
umount /mnt/iscsi1

mount -a      # this mounts all entries which are normally mounted during system start

# or
mount /mnt/iscsi1

Udo
 
Hi,

tested, and it works:

After umount /mnt/iscsi1, the Directory looks empty from the Proxmox web page.
After mount -a, all the contents are there!

Thank you very much, everybody, for your support! :)

Vicente
 
Hi,

Today I restarted Proxmox for the first time and found the following problem:

During system start-up it runs fsck on the hard disks -> no problem. Then it tries to run fsck on the iSCSI disk, without success.
Start-up halts, waiting for Ctrl-D to continue or the root password for maintenance.

It seems that at this point it cannot yet connect to iSCSI to run fsck.

When I remove the line
UUID=93e9e1d4-e574-48eb-ab96-bf0a24d2fcdf /mnt/iscsi1 ext4 defaults 0 1
from fstab, the system starts correctly, but obviously there is no iSCSI mount.

Is there anything I can do to have iSCSI ready by the time fsck runs?
I cannot find the process that starts iSCSI...

Any other hints?

Regards

Vicente
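A note for anyone hitting the same symptom (this fix is not from the thread itself): on Debian-based systems the usual approach is to mark the mount as network-dependent with the _netdev mount option, so the early boot fsck/mount pass skips it and it is mounted once networking and the iSCSI initiator are up, and to set the sixth fstab field (the fsck pass number) to 0. Reusing the UUID from the posts above, the adjusted line would look like:

```
UUID=93e9e1d4-e574-48eb-ab96-bf0a24d2fcdf /mnt/iscsi1 ext4 _netdev,defaults 0 0
```

With pass number 0 the boot-time fsck never touches the device, which sidesteps the halt described above; the filesystem can still be checked manually when the session is up.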
 
