What I have:
2 x HP DL380 G7s
1 x EMC SAN presenting 4.7TB via iSCSI
Cisco-based network
Each server has four NICs connected as follows:
enp3s0f0/vmbr0 - Management
enp3s0f1/vmbr1 - 802.1q Trunk for VM connectivity
enp4s0f0/vmbr2 - iSCSI (this is a L2 VLAN, no routing in or out)
enp4s0f1/vmbr3 - Migration (this is a L2 VLAN, no routing in or out)
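For completeness, the bridge configuration on each host looks roughly like this (only two of the four bridges shown; the addresses are placeholders, not my actual IPs):

```
# /etc/network/interfaces (sketch -- IPs are placeholders)
auto vmbr0
iface vmbr0 inet static
        address 192.168.1.11/24
        gateway 192.168.1.1
        bridge-ports enp3s0f0
        bridge-stp off
        bridge-fd 0

auto vmbr2
iface vmbr2 inet static
        address 10.10.10.11/24
        bridge-ports enp4s0f0
        bridge-stp off
        bridge-fd 0
```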
The EMC SAN has two NICs configured:
NIC1 - Management and SMB/CIFS
NIC4 - iSCSI (this is a L2 VLAN, no routing in or out)
From a connectivity standpoint, both hosts can ping each other on all three networks in question (Management/iSCSI/Migration).
I built up host #1, managed to figure out how to get iSCSI working, and now have ~50 VMs running with their disks stored on the SAN. Once this had run for a few days, I created the cluster on this node.
I then built host #2 and, once its networking was configured, joined it to the cluster. And this is where the problems started.
When the join completed, I could see the storage in node #2's web interface, but it was not mounted. So I used lsblk and blkid to get the UUIDs, then added lines to /etc/fstab to mount the iSCSI LUN to a directory (/mnt/emc-nas). After doing that, I could see the VMs' disks on host #2, so I attempted a migration. I was warned it was making a local copy of the disk images, and a transfer started. This made no sense, as the disks were clearly already there.
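For reference, the fstab entry I added looks roughly like this (the UUID is a placeholder for the one blkid reported; _netdev so the mount waits for networking):

```
# /etc/fstab -- mount the iSCSI-backed filesystem by UUID (placeholder UUID)
UUID=0a1b2c3d-1111-2222-3333-444455556666  /mnt/emc-nas  ext4  _netdev,defaults  0  0
```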
I've been messing around with it since, based on things I've been able to find on Google or in these forums. That leads me to some questions/observations:
1). Can what I'm trying to do be done from the GUI, or will it be a mix of shell work and the GUI? I'm very comfortable in a shell, albeit not necessarily with PVE. When there is a GUI, I tend to hesitate before jacking around at the shell level, because I've seen issues doing that on other platforms in the past.
2). Do the iSCSI devices (/dev/sdx) have to be the same between hosts? For instance, iSCSI happens to be /dev/sdb on each node. Since I mount the iSCSI drive by UUID, the answer would seem to be no, but I found a couple of posts in my searching that indicated that might not be the case.
3). Do I need LVM for the iSCSI? Not LVM over iSCSI, but a plain iSCSI storage entry placed in an LVM. Right now I'm running without LVM.
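For context on question 3, here is what I gather a shared iSCSI + LVM setup would look like in /etc/pve/storage.cfg, based on what I've read so far (storage names, portal IP, IQN, and VG name are all placeholders) -- am I on the right track?

```
# /etc/pve/storage.cfg (sketch -- portal/target/vgname are placeholders)
iscsi: emc-san
        portal 10.10.10.50
        target iqn.1992-04.com.emc:placeholder-target
        content none

lvm: emc-lvm
        vgname vg_emc
        shared 1
        content images
```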
I think that's enough info to get things started.