[SOLVED] One more time about Shared SAS Storage and cluster

DrDestiny
Hi, community! Please help me with SAS storage on my Proxmox 5 cluster.

I have 4 nodes running Proxmox 5. Each node has an HBA connected to a Dell MD3200 storage array by a single cable. Every node can see the drive from the storage as /dev/sda, with the correct size and so on.
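A quick way to confirm that on each node (just a check command; nothing assumed beyond the device name):
Code:
# show the MD3200 LUN as each node sees it
lsblk -o NAME,SIZE,MODEL /dev/sda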
Next I configured the cluster (sketched below):
- pvecm create ...
- pvecm add ...
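A minimal sketch of those two steps, in case the details matter (the cluster name and the first node's IP are made up):
Code:
# on the first node: create the cluster
pvecm create mycluster
# on each of the other three nodes: join, pointing at the first node
pvecm add 192.168.1.10
# check that all four nodes are listed and the cluster is quorate
pvecm status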
Then I configured LVM and shared it via the GUI, following https://pve.proxmox.com/wiki/Intel_Modular_Server (also sketched below):
- fdisk ...
- pvcreate ...
- vgcreate ...
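Roughly what I ran, assuming the shared LUN shows up as /dev/sda on every node (the VG and storage names are just examples):
Code:
# after partitioning with fdisk, initialize the partition as an LVM physical volume
pvcreate /dev/sda1
# create a volume group on it (run once, from a single node)
vgcreate vg_shared /dev/sda1
# add it as storage marked "shared" - via the GUI checkbox, or from the CLI:
pvesm add lvm shared-lvm --vgname vg_shared --content images --shared 1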
After that I created a VM guest on the first node with a 50 GB disk image on that LVM storage. The other nodes don't see that disk image in the GUI when I click on the LVM volume group -> Content; only the first node displays it.
In that situation, if I create a VM guest with a disk image (of any size) on the second node, the disk image on the first node gets corrupted by the OS installation in the second guest.
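For what it's worth, the same can be checked from the shell on each node (VG name as in the example above):
Code:
# run on every node to compare the LVM view; the 50 GB LV created
# for the first VM should appear in this list on all nodes
vgs vg_shared
lvs vg_shared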

I have read the forum and am now quite confused by this situation. I would appreciate any advice on using shared SAS storage across cluster nodes. Should I enable cluster mode in the LVM configuration (lvmconf --enable-cluster)?
 
what's your pveversion -v ?
 
dcsapak,
Code:
root@proxnode1:~# pveversion -v
proxmox-ve: 5.0-15 (running kernel: 4.10.15-1-pve)
pve-manager: 5.0-23 (running version: 5.0-23/af4267bf)
pve-kernel-4.10.15-1-pve: 4.10.15-15
libpve-http-server-perl: 2.0-5
lvm2: 2.02.168-pve2
corosync: 2.4.2-pve3
libqb0: 1.0.1-1
pve-cluster: 5.0-10
qemu-server: 5.0-12
pve-firmware: 2.0-2
libpve-common-perl: 5.0-16
libpve-guest-common-perl: 2.0-11
libpve-access-control: 5.0-5
libpve-storage-perl: 5.0-12
pve-libspice-server1: 0.12.8-3
vncterm: 1.5-2
pve-docs: 5.0-6
pve-qemu-kvm: 2.9.0-2
pve-container: 2.0-14
pve-firewall: 3.0-1
pve-ha-manager: 2.0-2
ksm-control-daemon: 1.2-2
glusterfs-client: 3.8.8-1
lxc-pve: 2.0.8-3
lxcfs: 2.0.7-pve2
criu: 2.11.1-1~bpo90
novnc-pve: 0.6-4
smartmontools: 6.5+svn4324-1
zfsutils-linux: 0.6.5.9-pve16~bpo90
 
please update to a current version

in an lvm2 update we set use_lvmetad to 0 in /etc/lvm/lvm.conf (you can also set this yourself), which is necessary so that the nodes do not try to cache the LVM disk information
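i.e. something like this (a sketch; the setting lives in the global section of /etc/lvm/lvm.conf):
Code:
# /etc/lvm/lvm.conf - apply on every node in the cluster
global {
    # do not cache LVM metadata via lvmetad; a stale cache is what makes
    # a node miss LVs created from another node on the shared LUN
    use_lvmetad = 0
}
after changing it, stop the lvm2-lvmetad service (or reboot the node) so the setting actually takes effect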
 
