CalamityDjenn
Guest
Hi there,
I set up Proxmox VE 1.8 on an HP ProLiant DL360 G5 server (master) and a small Core 2 Duo desktop computer (slave). The slave server is only there in case we have a problem with the master, so all the VMs are tied to the master server.
The goal is to use one or more LUNs on the SAN we have on the network, because we only have 79 GB of (RAID) physical storage on the master server. From there, I thought it would be easy, if the master is out of order, to have the VMs running on the slave server...
I want to use these LUNs for virtual disks and backups; only ISOs (only one ATM) or templates (we don't use OpenVZ ATM) would stay on the local hard drive.
I am quite a newbie with iSCSI, so I think what I have done is wrong...
I created my new test volume + LUN on the SAN side, then did everything on the initiator + volume group side to allow access from both servers.
Then, on the master server, I initialized the connection to the SAN with iscsiadm commands.
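If it helps, the iscsiadm part was roughly along these lines (the portal IP and the target IQN below are just placeholders, not the real ones):
Code:
# discover the targets offered by the SAN portal
iscsiadm -m discovery -t sendtargets -p 192.168.0.50
# log in to the target holding the test LUN
iscsiadm -m node -T iqn.2011-06.local.san:test -p 192.168.0.50 --login
# make the session come back automatically at boot
iscsiadm -m node -T iqn.2011-06.local.san:test -p 192.168.0.50 --op update -n node.startup -v automatic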
Here are the steps I did after that:
Code:
# create a PV and a volume group on the iSCSI LUN (it shows up as /dev/sda here)
pvcreate /dev/sda1
vgcreate stockVM /dev/sda1
# one LV for the VM images, one for the vzdump backups (sizes in MB)
lvcreate -L55000 -n lv_vz stockVM
lvcreate -L25000 -n lv_dump stockVM
mkfs.ext3 /dev/stockVM/lv_vz
mkfs.ext3 /dev/stockVM/lv_dump
# mount lv_vz first, then create the dump mountpoint inside it
mkdir /var/lib/vz2
mount /dev/stockVM/lv_vz /var/lib/vz2
mkdir /var/lib/vz2/dump
mount /dev/stockVM/lv_dump /var/lib/vz2/dump
# tell vzdump to write the backups to the SAN-backed LV
echo "dumpdir: /var/lib/vz2/dump/" >> /etc/vzdump.conf
# fstab-style entries for the two mounts
echo "# LVM from SAN
/dev/mapper/stockVM-lv_vz /var/lib/vz2 ext3 defaults 0 0
/dev/mapper/stockVM-lv_dump /var/lib/vz2/dump ext3 defaults 0 0
" >> /etc/bak_fstab
From there, I can manage all the files from the command line in case we have a problem with the web interface, and the backups run fine, BUT...
If we reboot the server, the VMs cannot start at boot because /etc/fstab is processed BEFORE the iSCSI connection to the SAN is up, so our disks are not mounted. We have to run mount -a once the system is ready and then start the VMs manually. Really annoying!
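From what I have read, one possible workaround (I have not tested it yet) would be to flag those mounts with the _netdev option, so they are only mounted once the network and the open-iscsi scripts are up, something like this in /etc/fstab:
Code:
# same entries as above, but marked as network-dependent
/dev/mapper/stockVM-lv_vz /var/lib/vz2 ext3 _netdev,defaults 0 0
/dev/mapper/stockVM-lv_dump /var/lib/vz2/dump ext3 _netdev,defaults 0 0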

Also, I think this is NOT the right solution, because I found out we can add an iSCSI target + LVM group on the server(s) through the web interface. The problem is that I have no idea how to manage the files if we use this method: I tried it with a new LUN on the slave server, but I did not see any folder where the LUN was mounted on the system. Is that normal?
How can I then copy a VM from one server to another?
Also, I scheduled automatic backups using the web interface so all my VMs are saved every day, but I only keep one file per VM. So I wanted to write a little bash script to duplicate those files, so I would have two files per VM: one from today + one from yesterday, for example.
This is in case a problem occurs on the virtual system just before the backup runs (it has happened), for safety reasons.
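Something as simple as this sketch is what I had in mind (the dump directory and the vzdump file name pattern are guesses on my side):
Code:
#!/bin/bash
# keep a dated second copy of every vzdump file in a "previous" directory
DUMPDIR=/var/lib/vz2/dump
KEEPDIR=/var/lib/vz2/dump/previous
mkdir -p "$KEEPDIR"
for f in "$DUMPDIR"/vzdump-*; do
    [ -f "$f" ] || continue
    cp -p "$f" "$KEEPDIR/$(basename "$f").$(date +%F)"
done
# drop copies older than two days, so only today's and yesterday's remain
find "$KEEPDIR" -type f -mtime +1 -delete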
I don't know if everything is clear... The main thing is that our slave server does not run any VMs; it is there ONLY if the master is down. It has to be able to start the VMs from the state they were in before the master server stopped. I don't know how to set up my iSCSI storage for that to work properly...
Thank you for your attention.