New install: Guests can't write to drives on iscsi lvm group

n3mtr

New Member
Oct 12, 2012
We have a new install of 2.1-1 on a two-server cluster. We have two iSCSI targets set up, with two LVM groups on top of them. I can create disks for KVM guests just fine, but when I try to install the OS, it fails. The installer can see the disk, with the correct size and everything, but it fails when it tries to write to it. CentOS fails after the partitioning screen, saying it failed to mount the drive. Windows Server 2008 comes back with a non-descriptive 0x8... error.
Everything seems fine at the command line with the VGs and LVs as far as I can tell. If I make a virtual disk on the local drive, the VM works fine.
Here is the output of vgdisplay:
Found duplicate PV bO37BxDjkV6f101Yh3fD1PhlDcE1tHZu: using /dev/sdb not /dev/sda
Found duplicate PV huIQrTuJCMKotBHbCmAiTd8edW1QsaK4: using /dev/sdd not /dev/sdc
--- Volume group ---
VG Name vm-lvm
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 4
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 1
Open LV 0
Max PV 0
Cur PV 1
Act PV 1
VG Size 2.71 TiB
PE Size 4.00 MiB
Total PE 709631
Alloc PE / Size 40960 / 160.00 GiB
Free PE / Size 668671 / 2.55 TiB
VG UUID wQ7qg8-IO6k-55uv-24SX-JgEz-E8ne-fQvYfN


--- Volume group ---
VG Name proxmox-vg
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 3
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 0
Open LV 0
Max PV 0
Cur PV 1
Act PV 1
VG Size 13.61 TiB
PE Size 4.00 MiB
Total PE 3568639
Alloc PE / Size 0 / 0
Free PE / Size 3568639 / 13.61 TiB
VG UUID P8XFmb-SZth-IpQf-UTQJ-I8YI-9kCP-9QC5kg


--- Volume group ---
VG Name pve
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 4
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 3
Open LV 3
Max PV 0
Cur PV 1
Act PV 1
VG Size 67.83 GiB
PE Size 4.00 MiB
Total PE 17365
Alloc PE / Size 15190 / 59.34 GiB
Free PE / Size 2175 / 8.50 GiB
VG UUID obiuir-KHMM-nx80-T0pU-QGnk-dGPj-bhB5jZ

and lvdisplay (VM not running):
Found duplicate PV bO37BxDjkV6f101Yh3fD1PhlDcE1tHZu: using /dev/sdb not /dev/sda
Found duplicate PV huIQrTuJCMKotBHbCmAiTd8edW1QsaK4: using /dev/sdd not /dev/sdc
--- Logical volume ---
LV Path /dev/vm-lvm/vm-101-disk-1
LV Name vm-101-disk-1
VG Name vm-lvm
LV UUID gWUTSK-GXAW-ev8U-opJ0-dtUd-U24B-FXJ1If
LV Write Access read/write
LV Creation host, time proxmox1, 2012-10-12 11:15:48 -0400
LV Status NOT available
LV Size 160.00 GiB
Current LE 40960
Segments 1
Allocation inherit
Read ahead sectors auto


--- Logical volume ---
LV Path /dev/pve/swap
LV Name swap
VG Name pve
LV UUID Zc0yet-IVH9-GIe5-Bl1T-jFtX-w27B-rQKEWX
LV Write Access read/write
LV Creation host, time proxmox, 2012-10-08 10:54:06 -0400
LV Status available
# open 1
LV Size 8.50 GiB
Current LE 2176
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:1


--- Logical volume ---
LV Path /dev/pve/root
LV Name root
VG Name pve
LV UUID qVVwCy-Ifi0-CppX-fiuc-nzlX-iKwp-DbSzIn
LV Write Access read/write
LV Creation host, time proxmox, 2012-10-08 10:54:06 -0400
LV Status available
# open 1
LV Size 17.00 GiB
Current LE 4352
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:0


--- Logical volume ---
LV Path /dev/pve/data
LV Name data
VG Name pve
LV UUID mvpfJT-w6K1-9jaG-c4b7-nuSf-PL7l-codnFX
LV Write Access read/write
LV Creation host, time proxmox, 2012-10-08 10:54:06 -0400
LV Status available
# open 1
LV Size 33.84 GiB
Current LE 8662
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:2



Any help would be appreciated. We are new to Proxmox and KVM.
Thanks,
Wayne
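
Incidentally, the "Found duplicate PV ... using /dev/sdb not /dev/sda" warnings mean LVM sees each iSCSI LUN through two device nodes (e.g. two sessions or two paths to the same LUN). The usual cleanups are either proper multipathing or an LVM device filter. A sketch of the filter approach, assuming the device names from the warnings above really are duplicate paths (verify with pvs -o +pv_uuid before rejecting anything):

```
# /etc/lvm/lvm.conf (sketch only; device names taken from the warnings above)
devices {
    # Reject the duplicate paths so LVM scans one device node per LUN:
    filter = [ "r|^/dev/sda$|", "r|^/dev/sdc$|", "a|.*|" ]
}
```

If the duplicates come from genuine redundant paths, dm-multipath is the better fix, since a filter just hides one path rather than failing over between them.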
 
It sounds to me like you've got both nodes talking to the same iSCSI target. That will result in a corrupted file system without a network-aware file system (such as Red Hat's GFS2) to coordinate the writes.

If you already have shared storage and are using two nodes, why not configure your SAN to serve NFS instead of creating two independent iSCSI targets? Using NFS you can do live migration with KVM guests. NFS has a slight performance penalty compared to iSCSI, but the benefits of live migration and the simplicity of setup make it a winner for me.
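
For reference, an NFS store in Proxmox VE is a short entry in /etc/pve/storage.cfg (or added via the GUI). A sketch, where the storage ID, server address, and export path are all placeholders for your own SAN:

```
# /etc/pve/storage.cfg (sketch; replace ID, server, and export with your own)
nfs: san-nfs
    path /mnt/pve/san-nfs
    server 192.168.1.50
    export /vmstore
    content images
    options vers=3
```

Because every node mounts the same export, qcow2/raw disk images are visible cluster-wide, which is what makes live migration straightforward.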
 
It sounds to me like you've got both nodes talking to the same iSCSI target. That will result in a corrupted file system...

He uses LVM volumes on shared storage. The PVE tools make sure those LVs are accessed from only one host at a time, so I don't think that is the problem.
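
If you want to confirm that yourself, you can check on each node which LVs are currently active there. A sketch (VG and LV names taken from the output earlier in the thread):

```shell
# List LVs in the shared VG with their attribute string.
lvs --noheadings -o lv_name,lv_attr vm-lvm
# Example attr string: "-wi-a-----"
#   The 5th character is 'a' when the LV is active on this host,
#   '-' when it is not (matching the "NOT available" status above).
# Manual (de)activation, if ever needed -- PVE normally handles this:
#   lvchange -ay /dev/vm-lvm/vm-101-disk-1   # activate on this node
#   lvchange -an /dev/vm-lvm/vm-101-disk-1   # deactivate on this node
```

A shared-LVM guest disk should show as active on at most one node at any moment.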
 
It looks like the iSCSI block size is set to 4K. Is that a problem? We are using a Thecus N8900 as the SAN device. Not sure about the actual drives.
 
We were using 4K iSCSI sectors. I switched to 512-byte and now it is working great. Are 4K sectors not supported for LVM groups?
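
A quick way to see what sector sizes the initiator is actually being presented, before and after changing the target setting. This is a generic sysfs sketch; device names will differ on your hosts:

```shell
#!/bin/sh
# Print logical vs physical sector size for every block device this host sees.
# A LUN exported with 4K *logical* sectors is what this thread ran into;
# 512-byte logical sectors (even over 4K physical, i.e. "512e") is the
# conservative choice for LVM-backed guests.
for q in /sys/block/*/queue; do
    disk=$(basename "$(dirname "$q")")
    log=$(cat "$q/logical_block_size" 2>/dev/null || echo "?")
    phys=$(cat "$q/physical_block_size" 2>/dev/null || echo "?")
    echo "$disk: logical=${log}B physical=${phys}B"
done
```

For a single device, blockdev --getss /dev/sdb (logical) and blockdev --getpbsz /dev/sdb (physical) report the same values.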
 
I am experiencing the same problem on a freshly installed PVE 2.1 cluster with an LVM group on an iSCSI drive (Thecus NAS, 4 SATA disks in RAID 10, using 4K sectors). Everything works fine if the LVM storage is not used as shared.
 
