New Installation on 2 HP-Proliant DL380G7 - MSAP2000G3

Hi all, to be continued...
I have reinstalled the two Proxmox nodes, PMOX1 and PMOX2, and created the cluster, but I ran into a problem creating it... I will finish preparing the installation and report back.
I had to build a local Debian repository to install the various Debian packages on pmox1 and pmox2.
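For reference, a minimal sketch of what a local-repository setup can look like; the mirror hostname and path are my own hypothetical examples, and `[trusted=yes]` is only needed if the local mirror is unsigned:

```text
# /etc/apt/sources.list.d/local.list  (hypothetical local mirror)
deb [trusted=yes] http://repo.example.local/debian stretch main contrib
```

After adding the file, `apt-get update` refreshes the package lists so packages such as multipath-tools can be installed from the mirror.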

Is it necessary to install multipath-tools-boot? I have only installed multipath-tools.
 
Oh my god!!! I have a problem adding node 2. I tried to delete it and re-add it (following the administration guide), but it is impossible... I'm tired.

On pmox1:

pvecm status:

Quorum information
------------------
Date: Fri May 4 09:46:08 2018
Quorum provider: corosync_votequorum
Nodes: 1
Node ID: 0x00000001
Ring ID: 1/12
Quorate: No

Votequorum information
----------------------
Expected votes: 2
Highest expected: 2
Total votes: 1
Quorum: 2 Activity blocked
Flags:

Membership information
----------------------
Nodeid Votes Name
0x00000001 1 10.33.68.90 (local)
root@PREF33-S-PMOX1:~#

To add node 2, I ran this on node 2:

root@PREF33-S-PMOX2:~# pvecm add 10.33.68.90
cluster not ready - no quorum?
unable to add node: command failed (ssh 10.33.68.90 -o BatchMode=yes pvecm addnode PREF33-S-PMOX2 --force 1)
root@PREF33-S-PMOX2:~#

What is this new problem?
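The `pvecm status` output above is the classic two-node deadlock: expected votes = 2 but only 1 vote present, so the surviving node is not quorate and refuses cluster changes. A sketch of one common way out (commands run on pmox1; this assumes the stale PREF33-S-PMOX2 entry should be removed before re-adding the node):

```shell
# Temporarily accept a single vote as quorate on the surviving node
pvecm expected 1

# Remove the stale membership entry for the failed join
pvecm delnode PREF33-S-PMOX2

# Verify: "Quorate: Yes" should now appear
pvecm status
```

With quorum restored on pmox1, `pvecm add 10.33.68.90` on node 2 has a chance to succeed.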
 
Is it necessary to install multipath-tools-boot? I have only installed multipath-tools.

No.


What about requirements?
 
Jerry, thanks, but I erased the installation on both pmox nodes and reinstalled again; this is the third installation. Once the cluster is set up correctly and multipath is installed, I will try to create the LVM.

I will let you know when I finish reconfiguring the cluster.
 
The new installation is OK: pmox1 & pmox2 are running successfully, the Debian repository is ready, and I have done the multipath installation.

multipath is OK...

root@PREF33-S-PMOX1:/dev# multipath -ll
3600c0ff0001ae601d4d8b05a01000000 dm-2 HP,P2000 G3 FC
size=7.6T features='2 queue_if_no_path retain_attached_hw_handler' hwhandler='0' wp=rw
|-+- policy='service-time 0' prio=50 status=active
| |- 4:0:1:1 sde 8:64 active ready running
| `- 2:0:1:1 sdc 8:32 active ready running
`-+- policy='service-time 0' prio=10 status=enabled
|- 4:0:0:1 sdd 8:48 active ready running
`- 2:0:0:1 sdb 8:16 active ready running
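As an aside, a common way to keep the MSA LUN's multipath map stable (and to give it a readable name) is to blacklist everything and whitelist only the array's WWID. A sketch of /etc/multipath.conf, using the WWID from the output above; the alias name is my own choice:

```text
blacklist {
    wwid .*
}
blacklist_exceptions {
    wwid "3600c0ff0001ae601d4d8b05a01000000"
}
multipaths {
    multipath {
        wwid  "3600c0ff0001ae601d4d8b05a01000000"
        alias msa-lun0
    }
}
```

After editing, restarting multipathd (or running `multipath -r`) re-reads the configuration, and the device then appears as /dev/mapper/msa-lun0.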


I ran pvcreate -vvvv /dev/mapper/3600c0ff0001ae601d4d8b05a01000000

but the PV is not created.

The device-mapper major number 253 appears in the console output:

#device/dev-cache.c:352 /dev/disk/by-id/raid-pve-swap: Aliased to /dev/disk/by-id/dm-name-pve-swap in device cache (253:0)
#device/dev-cache.c:352 /dev/disk/by-uuid/668ea6df-f861-4874-aeab-e41e72250a3a: Aliased to /dev/disk/by-id/dm-name-pve-swap in device cache (253:0)
#device/dev-cache.c:352 /dev/mapper/pve-swap: Aliased to /dev/disk/by-id/dm-name-pve-swap in device cache (preferred name) (253:0)
#device/dev-cache.c:352 /dev/pve/swap: Aliased to /dev/mapper/pve-swap in device cache (preferred name) (253:0)
#device/dev-cache.c:356 /dev/dm-1: Added to device cache (253:1)
#device/dev-cache.c:352 /dev/disk/by-id/dm-name-pve-root: Aliased to /dev/dm-1 in device cache (preferred name) (253:1)
#device/dev-cache.c:352 /dev/disk/by-id/dm-uuid-LVM-IB31gs1Qeb1fM5ekT3JYFwPltu3IxWeJpHG64XAP6zkkHj9s2kNloktFUUx0McC2: Aliased to /dev/disk/by-id/dm-name-pve-root in device cache (253:1)
#device/dev-cache.c:352 /dev/disk/by-id/raid-pve-root: Aliased to /dev/disk/by-id/dm-name-pve-root in device cache (253:1)
#device/dev-cache.c:352 /dev/disk/by-uuid/b590e631-5861-472a-a43b-6776add5441c: Aliased to /dev/disk/by-id/dm-name-pve-root in device cache (253:1)
#device/dev-cache.c:352 /dev/mapper/pve-root: Aliased to /dev/disk/by-id/dm-name-pve-root in device cache (preferred name) (253:1)
#device/dev-cache.c:352 /dev/pve/root: Aliased to /dev/mapper/pve-root in device cache (preferred name) (253:1)
#device/dev-cache.c:352 /dev/dm-2: Aliased to /dev/disk/by-id/scsi-3600c0ff0001ae601d4d8b05a01000000 in device cache (253:2)
#device/dev-cache.c:352 /dev/disk/by-id/dm-name-3600c0ff0001ae601d4d8b05a01000000: Aliased to /dev/disk/by-id/scsi-3600c0ff0001ae601d4d8b05a01000000 in device cache (preferred name) (253:2)
#device/dev-cache.c:352 /dev/disk/by-id/dm-uuid-mpath-3600c0ff0001ae601d4d8b05a01000000: Aliased to /dev/disk/by-id/dm-name-3600c0ff0001ae601d4d8b05a01000000 in device cache (253:2)
#device/dev-cache.c:340 /dev/disk/by-id/scsi-3600c0ff0001ae601d4d8b05a01000000: Already in device cache
#device/dev-cache.c:340 /dev/disk/by-id/wwn-0x600c0ff0001ae601d4d8b05a01000000: Already in device cache
#device/dev-cache.c:352 /dev/mapper/3600c0ff0001ae601d4d8b05a01000000: Aliased to /dev/disk/by-id/dm-name-3600c0ff0001ae601d4d8b05a01000000 in device cache (preferred name) (253:2)
#device/dev-cache.c:356 /dev/dm-3: Added to device cache (253:3)
#device/dev-cache.c:352 /dev/disk/by-id/raid-pve-data_tmeta: Aliased to /dev/dm-3 in device cache (preferred name) (253:3)
#device/dev-cache.c:352 /dev/mapper/pve-data_tmeta: Aliased to /dev/disk/by-id/raid-pve-data_tmeta in device cache (preferred name) (253:3)
#device/dev-cache.c:356 /dev/dm-4: Added to device cache (253:4)
#device/dev-cache.c:352 /dev/disk/by-id/raid-pve-data_tdata: Aliased to /dev/dm-4 in device cache (preferred name) (253:4)
#device/dev-cache.c:352 /dev/mapper/pve-data_tdata: Aliased to /dev/disk/by-id/raid-pve-data_tdata in device cache (preferred name) (253:4)
 
I put this in the devices section of lvm.conf:

types = [ "bcache", 253 ]

Always the same error:

root@PREF33-S-PMOX1:/dev# pvcreate /dev/mapper/3600c0ff0001ae601d4d8b05a01000000
Device /dev/mapper/3600c0ff0001ae601d4d8b05a01000000 not found (or ignored by filtering).
root@PREF33-S-PMOX1:/dev#
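Note that `types` only registers extra block-device types; the "(or ignored by filtering)" message usually points at the `filter` setting instead, which has to accept the multipath device while rejecting its underlying /dev/sdX paths. A sketch for the devices section of /etc/lvm/lvm.conf (the exact patterns depend on the local disk layout; /dev/sda is kept here because it holds the pve root VG):

```text
devices {
    # Accept device-mapper (multipath) devices and the local disk,
    # reject the raw SAN paths sdb..sde so LVM only sees the mpath map
    filter = [ "a|/dev/mapper/|", "a|/dev/sda|", "r|/dev/sd.*|" ]
}
```

Running `pvscan` afterwards shows whether the multipath device is now visible to LVM.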
 

Hello Zaqen,
It's very strange. Are you sure that the host definitions and host mappings on the MSA are correct? Maybe they are read-only mappings, or the volume's settings on the MSA do not allow writing to it? I would also suspect the SAN switch configuration: zoning.
I don't think this is a problem with Proxmox (the same problems would happen in ESXi) or with multipath (you can try pvcreate on /dev/sdb; it will probably fail too).
I suggest you verify everything in the MSA/SAN switch configuration. If you don't find anything suspicious, try making a smaller volume (below 2 TB) to avoid the GPT label. Maybe that will help or guide you to solving the problem.
 
Hi Jerry, my colleague and I verified all the parameters on the MSA; everything is OK.
We will try deleting the 8 TB volume, creating a single 1.8 TB one, and trying again...
pvcreate on /dev/sdb is no good, because that is an MSA LUN path.
 
Hi,

Same problem:

I deleted the 8 TB LUN and created a new 1.7 TB disk... pmox1 & pmox2 see the devices.

When I run:

ls -lai /dev/sd (and press Tab), the only device found is /dev/sda (the local disk on pmox1 and pmox2).

/dev/sdb, /dev/sdc, /dev/sdd and /dev/sde are not found in the path, which is not normal... the kernel has not found the devices.
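When a LUN is deleted and recreated on the array, the kernel often keeps stale SCSI state, and a rescan (or a reboot) is needed before the new /dev/sdX nodes appear. A sketch, assuming FC HBAs:

```shell
# Ask every SCSI host to rescan all channels/targets/LUNs
for host in /sys/class/scsi_host/host*; do
    echo "- - -" > "$host/scan"
done

# Alternatively, with the sg3-utils package installed:
rescan-scsi-bus.sh -r    # -r also removes devices that disappeared
```

Afterwards, `lsblk` or `ls /dev/sd*` should show the new LUN paths, and `multipath -r` rebuilds the multipath map on top of them.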

It's not the same problem, but I still get an error:

root@xxxxxxx-PMOX1:~# pvcreate /dev/sdb
Device /dev/sdb not found (or ignored by filtering).
root@xxxxxxx-PMOX1:~#

Pffffffff...

Is there an admin from the Proxmox team who can help me?!

GRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRR
 
3600c0ff0001ae601a606f35a01000000 dm-2 HP,P2000 G3 FC
size=1.7T features='2 queue_if_no_path retain_attached_hw_handler' hwhandler='0' wp=rw
|-+- policy='service-time 0' prio=50 status=active
| |- 4:0:1:1 sde 8:64 active ready running
| `- 2:0:1:1 sdc 8:32 active ready running
`-+- policy='service-time 0' prio=10 status=enabled
|- 4:0:0:1 sdd 8:48 active ready running
`- 2:0:0:1 sdb 8:16 active ready running
root@PREF33-S-PMOX1:~#

root@PREF33-S-PMOX1:~# multipath -v3
May 15 11:15:50 | libdevmapper version 1.02.137 (2016-11-30)
May 15 11:15:50 | DM multipath kernel driver v1.12.0
May 15 11:15:50 | loading //lib/multipath/libchecktur.so checker
May 15 11:15:50 | loading //lib/multipath/libprioconst.so prioritizer
May 15 11:15:50 | sda: udev property ID_WWN whitelisted
May 15 11:15:50 | sda: mask = 0x1f
May 15 11:15:50 | sda: dev_t = 8:0
May 15 11:15:50 | sda: size = 286677120
May 15 11:15:50 | sda: vendor = HP
May 15 11:15:50 | sda: product = LOGICAL VOLUME
May 15 11:15:50 | sda: rev = 3.52
May 15 11:15:50 | sda: h:b:t:l = 3:1:0:0
May 15 11:15:50 | sda: tgt_node_name =
May 15 11:15:50 | sda: path state = running
May 15 11:15:50 | sda: 17844 cyl, 255 heads, 63 sectors/track, start at 0
May 15 11:15:50 | sda: serial = 5001438009E83B30
May 15 11:15:50 | sda: get_state
May 15 11:15:50 | sda: path_checker = tur (internal default)
May 15 11:15:50 | sda: checker timeout = 30 ms (internal default)
May 15 11:15:50 | sda: state = up
May 15 11:15:50 | sda: uid_attribute = ID_SERIAL (config file default)
May 15 11:15:50 | sda: uid = 3600508b1001ca15b462cb999f65494ed (udev)
May 15 11:15:50 | sda: detect_prio = yes (config file default)
May 15 11:15:50 | sda: prio = const (internal default)
May 15 11:15:50 | sda: prio args = "" (internal default)
May 15 11:15:50 | sda: const prio = 1
May 15 11:15:50 | sdd: udev property ID_WWN whitelisted
May 15 11:15:50 | sdd: mask = 0x1f
May 15 11:15:50 | sdd: dev_t = 8:48
May 15 11:15:50 | sdd: size = 3710937472
May 15 11:15:50 | sdd: vendor = HP
May 15 11:15:50 | sdd: product = P2000 G3 FC
May 15 11:15:50 | sdd: rev = T252
May 15 11:15:50 | sdd: h:b:t:l = 4:0:0:1
May 15 11:15:50 | SCSI target 4:0:0 -> FC rport 4:0-0
May 15 11:15:50 | sdd: tgt_node_name = 0x208000c0ff10a940
May 15 11:15:50 | sdd: path state = running
May 15 11:15:50 | sdd: 65535 cyl, 255 heads, 63 sectors/track, start at 0
May 15 11:15:50 | sdd: serial = 00c0ff1ae6010000a606f35a01000000
May 15 11:15:50 | sdd: get_state
May 15 11:15:50 | sdd: path_checker = tur (internal default)
May 15 11:15:50 | sdd: checker timeout = 30 ms (internal default)
May 15 11:15:50 | sdd: state = up
May 15 11:15:50 | sdd: uid_attribute = ID_SERIAL (config file default)
May 15 11:15:50 | sdd: uid = 3600c0ff0001ae601a606f35a01000000 (udev)
May 15 11:15:50 | sdd: detect_prio = yes (config file default)
May 15 11:15:50 | 4:0:0:1: attribute access_state not found in sysfs
May 15 11:15:50 | loading //lib/multipath/libprioalua.so prioritizer
May 15 11:15:50 | sdd: prio = alua (detected setting)
May 15 11:15:50 | sdd: prio args = "" (detected setting)
May 15 11:15:50 | sdd: reported target port group is 1
May 15 11:15:50 | sdd: aas = 01 [active/non-optimized]
May 15 11:15:50 | sdd: alua prio = 10
May 15 11:15:50 | sde: udev property ID_WWN whitelisted
May 15 11:15:50 | sde: mask = 0x1f
May 15 11:15:50 | sde: dev_t = 8:64
May 15 11:15:50 | sde: size = 3710937472
May 15 11:15:50 | sde: vendor = HP
May 15 11:15:50 | sde: product = P2000 G3 FC
May 15 11:15:50 | sde: rev = T252
May 15 11:15:50 | sde: h:b:t:l = 4:0:1:1
May 15 11:15:50 | SCSI target 4:0:1 -> FC rport 4:0-1
May 15 11:15:50 | sde: tgt_node_name = 0x208000c0ff10a940
May 15 11:15:50 | sde: path state = running
May 15 11:15:50 | sde: 65535 cyl, 255 heads, 63 sectors/track, start at 0
May 15 11:15:50 | sde: serial = 00c0ff1ae6010000a606f35a01000000
May 15 11:15:50 | sde: get_state
May 15 11:15:50 | sde: path_checker = tur (internal default)
May 15 11:15:50 | sde: checker timeout = 30 ms (internal default)
May 15 11:15:50 | sde: state = up
May 15 11:15:50 | sde: uid_attribute = ID_SERIAL (config file default)
May 15 11:15:50 | sde: uid = 3600c0ff0001ae601a606f35a01000000 (udev)
May 15 11:15:50 | sde: detect_prio = yes (config file default)
May 15 11:15:50 | 4:0:1:1: attribute access_state not found in sysfs
May 15 11:15:50 | sde: prio = alua (detected setting)
May 15 11:15:50 | sde: prio args = "" (detected setting)
May 15 11:15:50 | sde: reported target port group is 0
May 15 11:15:50 | sde: aas = 80 [active/optimized] [preferred]
May 15 11:15:50 | sde: alua prio = 50
May 15 11:15:50 | sdb: udev property ID_WWN whitelisted
May 15 11:15:50 | sdb: mask = 0x1f
May 15 11:15:50 | sdb: dev_t = 8:16
May 15 11:15:50 | sdb: size = 3710937472
May 15 11:15:50 | sdb: vendor = HP
May 15 11:15:50 | sdb: product = P2000 G3 FC
May 15 11:15:50 | sdb: rev = T252
May 15 11:15:50 | sdb: h:b:t:l = 2:0:0:1
May 15 11:15:50 | SCSI target 2:0:0 -> FC rport 2:0-0
May 15 11:15:50 | sdb: tgt_node_name = 0x208000c0ff10a940
May 15 11:15:50 | sdb: path state = running
May 15 11:15:50 | sdb: 65535 cyl, 255 heads, 63 sectors/track, start at 0
May 15 11:15:50 | sdb: serial = 00c0ff1ae6010000a606f35a01000000
May 15 11:15:50 | sdb: get_state
May 15 11:15:50 | sdb: path_checker = tur (internal default)
May 15 11:15:50 | sdb: checker timeout = 30 ms (internal default)
May 15 11:15:50 | sdb: state = up
May 15 11:15:50 | sdb: uid_attribute = ID_SERIAL (config file default)
May 15 11:15:50 | sdb: uid = 3600c0ff0001ae601a606f35a01000000 (udev)
May 15 11:15:50 | sdb: detect_prio = yes (config file default)
May 15 11:15:50 | 2:0:0:1: attribute access_state not found in sysfs
May 15 11:15:50 | sdb: prio = alua (detected setting)
May 15 11:15:50 | sdb: prio args = "" (detected setting)
May 15 11:15:50 | sdb: reported target port group is 1
May 15 11:15:50 | sdb: aas = 01 [active/non-optimized]
May 15 11:15:50 | sdb: alua prio = 10
May 15 11:15:50 | sdc: udev property ID_WWN whitelisted
May 15 11:15:50 | sdc: mask = 0x1f
May 15 11:15:50 | sdc: dev_t = 8:32
May 15 11:15:50 | sdc: size = 3710937472
May 15 11:15:50 | sdc: vendor = HP
May 15 11:15:50 | sdc: product = P2000 G3 FC
May 15 11:15:50 | sdc: rev = T252
May 15 11:15:50 | sdc: h:b:t:l = 2:0:1:1
May 15 11:15:50 | SCSI target 2:0:1 -> FC rport 2:0-1
May 15 11:15:50 | sdc: tgt_node_name = 0x208000c0ff10a940
May 15 11:15:50 | sdc: path state = running
May 15 11:15:50 | sdc: 65535 cyl, 255 heads, 63 sectors/track, start at 0
May 15 11:15:50 | sdc: serial = 00c0ff1ae6010000a606f35a01000000
May 15 11:15:50 | sdc: get_state
May 15 11:15:50 | sdc: path_checker = tur (internal default)
May 15 11:15:50 | sdc: checker timeout = 30 ms (internal default)
May 15 11:15:50 | sdc: state = up
May 15 11:15:50 | sdc: uid_attribute = ID_SERIAL (config file default)
May 15 11:15:50 | sdc: uid = 3600c0ff0001ae601a606f35a01000000 (udev)
May 15 11:15:50 | sdc: detect_prio = yes (config file default)
May 15 11:15:50 | 2:0:1:1: attribute access_state not found in sysfs
May 15 11:15:50 | sdc: prio = alua (detected setting)
May 15 11:15:50 | sdc: prio args = "" (detected setting)
May 15 11:15:50 | sdc: reported target port group is 0
May 15 11:15:50 | sdc: aas = 80 [active/optimized] [preferred]
May 15 11:15:50 | sdc: alua prio = 50
May 15 11:15:50 | sr0: blacklisted, udev property missing
May 15 11:15:50 | loop0: blacklisted, udev property missing
May 15 11:15:50 | loop1: blacklisted, udev property missing
May 15 11:15:50 | loop2: blacklisted, udev property missing
May 15 11:15:50 | loop3: blacklisted, udev property missing
May 15 11:15:50 | loop4: blacklisted, udev property missing
May 15 11:15:50 | loop5: blacklisted, udev property missing
May 15 11:15:50 | loop6: blacklisted, udev property missing
May 15 11:15:50 | loop7: blacklisted, udev property missing
May 15 11:15:50 | dm-0: blacklisted, udev property missing
May 15 11:15:50 | dm-1: blacklisted, udev property missing
May 15 11:15:50 | dm-2: blacklisted, udev property missing
May 15 11:15:50 | dm-3: blacklisted, udev property missing
May 15 11:15:50 | dm-4: blacklisted, udev property missing
May 15 11:15:50 | dm-5: blacklisted, udev property missing
===== paths list =====
uuid hcil dev dev_t pri dm_st chk_st vend/prod
3600508b1001ca15b462cb999f65494ed 3:1:0:0 sda 8:0 1 undef ready HP,LOGICA
3600c0ff0001ae601a606f35a01000000 4:0:0:1 sdd 8:48 10 undef ready HP,P2000
3600c0ff0001ae601a606f35a01000000 4:0:1:1 sde 8:64 50 undef ready HP,P2000
3600c0ff0001ae601a606f35a01000000 2:0:0:1 sdb 8:16 10 undef ready HP,P2000
3600c0ff0001ae601a606f35a01000000 2:0:1:1 sdc 8:32 50 undef ready HP,P2000
May 15 11:15:50 | params = 2 queue_if_no_path retain_attached_hw_handler 0 2 1 service-time 0 2 2 8:64 1 1 8:32 1 1 service-time 0 2 2 8:48 1 1 8:16 1 1
May 15 11:15:50 | status = 2 0 0 0 2 1 A 0 2 2 8:64 A 0 0 1 8:32 A 0 0 1 E 0 2 2 8:48 A 0 0 1 8:16 A 0 0 1
May 15 11:15:50 | 3600c0ff0001ae601a606f35a01000000: disassemble map [2 queue_if_no_path retain_attached_hw_handler 0 2 1 service-time 0 2 2 8:64 1 1 8:32 1 1 service-time 0 2 2 8:48 1 1 8:16 1 1 ]
May 15 11:15:50 | 3600c0ff0001ae601a606f35a01000000: disassemble status [2 0 0 0 2 1 A 0 2 2 8:64 A 0 0 1 8:32 A 0 0 1 E 0 2 2 8:48 A 0 0 1 8:16 A 0 0 1 ]
May 15 11:15:50 | 3600508b1001ca15b462cb999f65494ed: user_friendly_names = no (internal default)
May 15 11:15:50 | 3600508b1001ca15b462cb999f65494ed: alias = 3600508b1001ca15b462cb999f65494ed (default to wwid)
May 15 11:15:50 | sda: ownership set to 3600508b1001ca15b462cb999f65494ed
May 15 11:15:50 | sda: mask = 0xc
May 15 11:15:50 | sda: path state = running
May 15 11:15:50 | sda: get_state
May 15 11:15:50 | sda: state = up
May 15 11:15:50 | sda: const prio = 1
May 15 11:15:50 | 3600508b1001ca15b462cb999f65494ed: failback = "manual" (config file default)
May 15 11:15:50 | 3600508b1001ca15b462cb999f65494ed: path_grouping_policy = multibus (controller setting)
May 15 11:15:50 | 3600508b1001ca15b462cb999f65494ed: path_selector = "service-time 0" (internal default)
May 15 11:15:50 | 3600508b1001ca15b462cb999f65494ed: features = "0" (config file default)
May 15 11:15:50 | 3600508b1001ca15b462cb999f65494ed: hardware_handler = "0" (internal default)
May 15 11:15:50 | 3600508b1001ca15b462cb999f65494ed: rr_weight = "uniform" (internal default)
May 15 11:15:50 | 3600508b1001ca15b462cb999f65494ed: minio = 1 (config file setting)
May 15 11:15:50 | 3600508b1001ca15b462cb999f65494ed: no_path_retry = 12 (controller setting)
May 15 11:15:50 | 3600508b1001ca15b462cb999f65494ed: fast_io_fail_tmo = 5 (config file default)
May 15 11:15:50 | 3600508b1001ca15b462cb999f65494ed: retain_attached_hw_handler = yes (config file default)
May 15 11:15:50 | 3600508b1001ca15b462cb999f65494ed: deferred_remove = no (config file default)
May 15 11:15:50 | 3600508b1001ca15b462cb999f65494ed: delay_watch_checks = "off" (internal default)
May 15 11:15:50 | 3600508b1001ca15b462cb999f65494ed: delay_wait_checks = "off" (internal default)
May 15 11:15:50 | 3600508b1001ca15b462cb999f65494ed: skip_kpartx = no (config file default)
May 15 11:15:50 | 3600508b1001ca15b462cb999f65494ed: update dev_loss_tmo to 60
May 15 11:15:50 | 3600508b1001ca15b462cb999f65494ed: assembled map [2 queue_if_no_path retain_attached_hw_handler 0 1 1 service-time 0 1 1 8:0 1]
May 15 11:15:50 | 3600508b1001ca15b462cb999f65494ed: set ACT_CREATE (map does not exist)
May 15 11:15:50 | 3600508b1001ca15b462cb999f65494ed: failed to load map, error 16
May 15 11:15:50 | 3600508b1001ca15b462cb999f65494ed: domap (0) failure for create/reload map
May 15 11:15:50 | 3600508b1001ca15b462cb999f65494ed: ignoring map
May 15 11:15:50 | sda: orphan path, map flushed
May 15 11:15:50 | const prioritizer refcount 1
May 15 11:15:50 | tur checker refcount 5
May 15 11:15:50 | tur checker refcount 4
May 15 11:15:50 | alua prioritizer refcount 4
May 15 11:15:50 | tur checker refcount 3
May 15 11:15:50 | alua prioritizer refcount 3
May 15 11:15:50 | tur checker refcount 2
May 15 11:15:50 | alua prioritizer refcount 2
May 15 11:15:50 | tur checker refcount 1
May 15 11:15:50 | alua prioritizer refcount 1
May 15 11:15:50 | unloading alua prioritizer
May 15 11:15:50 | unloading const prioritizer
May 15 11:15:50 | unloading tur checker


Bla bla bla bla bla
 
Do you want a virtualization environment that doesn't work? Choose Proxmox, LOL!

Seriously...
 
May 15 11:15:50 PREF33-S-PMOX1 kernel: [502468.998692] device-mapper: table: 253:6: multipath: error getting device
May 15 11:15:50 PREF33-S-PMOX1 kernel: [502468.998801] device-mapper: ioctl: error adding target to table

I found this in /var/log/kernel.log.

I configured this in lvm.conf:
types = [ "bcache", 253 ]

Same problem.
 
Zaqen, the thread is 3 pages long... you keep trying to create the LVM on a /dev/sdX device when you have to create it on /dev/mapper/<your-multipath-map>...
 
root@PREF33-S-PMOX1:~# pvcreate /dev/mapper/3600c0ff0001ae601a606f35a01000000
Physical volume "/dev/mapper/3600c0ff0001ae601a606f35a01000000" successfully created.
root@PREF33-S-PMOX1:~#

Great! Thank you...
 
Thank you all, I created my first LVM; now I will try to create my first virtual machine.
The problem on the HP MSA was that Proxmox would not accept the 8 TB volume for creating the LVM (GPT mode); we deleted the 8 TB volume and created one of less than 1.9 TB.
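With the PV now created on the multipath device, the remaining LVM setup is the usual sequence; the VG name below is my own example:

```shell
# Create a volume group on the multipath PV, then verify it
vgcreate vg_msa /dev/mapper/3600c0ff0001ae601a606f35a01000000
vgs
```

The VG (not individual LVs) is what gets registered as VM storage in Proxmox, which then carves out one LV per virtual disk.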
 
Congratulations Zaqen. It was worth being patient. I hope that using PVE rewards all your suffering ;)
The problem was the size of the volume made on the MSA, as I suggested, am I right?
Proxmox has nothing to do with support for GPT or LVM, because it's based on Debian with an Ubuntu kernel. I checked some limits: LVM2 supports up to 8 EB on 64-bit architectures with a 2.6 kernel. I'm currently using a 4.1 TB LVM (PV & VG) with Proxmox without any problem. In practice I would not create volumes of more than 4.x TB to use with PVE.
If I get a box with enough disks, I'll try to break your 8 TB limit.
Please mark this thread as "Solved".
 

Thank you Jerry, you're right! ;-)

I have created a VG and an LVM...
Is it necessary to mount the LVM on pmox1 and pmox2?
 
First of all, you have to define the second server and its mappings in the MSA configuration.
Since storage devices are defined at the Datacenter level, there is no need to add the storage on each node/server. If you want to share the LVM between servers, you have to set up a Proxmox cluster and add the second server to the cluster.
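For the shared case, the VG is added once at the Datacenter level (GUI: Datacenter > Storage > Add > LVM, with the "Shared" box ticked), or from the CLI; the storage ID and VG name here are my own examples:

```shell
# Register the shared LVM volume group as cluster-wide VM disk storage
pvesm add lvm msa-lvm --vgname vg_msa --shared 1 --content images
```

With `--shared 1`, Proxmox knows the same VG is reachable from every node, which is what makes live migration of VMs on that storage possible; there is nothing to "mount", since VM disks are raw LVs.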
 
