Hello Proxmox users,
I'm trying to configure Ceph storage (the latest release, Quincy) with HP Synergy D3940 SAS DAS storage.
I have 3 servers, each with 6 SAS SSDs (from the D3940) attached for Ceph storage.
Unfortunately, I'm not sure how to manage multipath for these drives.
Currently, each drive appears twice, because there are two paths to the disks.
The drives dedicated to Ceph are configured as JBOD (sda, sdb, sdc, sdd, sde, sdf, sdg, sdh, sdi, sdj, sdk, sdl).
My question is: should I configure multipath for the drives at the OS level (for example, combining /dev/sda + /dev/sdb into a single dm-multipath device such as /dev/mapper/mpatha), or should I leave this to Ceph?
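To clarify what I mean by OS-level multipath: on RHEL I could enable dm-multipath (mpathconf --enable) with a minimal /etc/multipath.conf along these lines, so each pair of paths collapses into one /dev/mapper device. This is only a sketch of what I have in mind; the blacklisted WWID below is a placeholder for my OS disk (sdn), not a real value:

```
# /etc/multipath.conf - minimal sketch
defaults {
    user_friendly_names yes   # mpatha, mpathb, ... instead of WWID names
    find_multipaths     yes   # only multipath devices with >1 path
}
blacklist {
    # keep the local OS disk out of multipath (placeholder WWID)
    wwid "3600508b1001cXXXXXXXXXXXXXXXXXXXX"
}
```

With that in place, each SSD would show up once under /dev/mapper, and I would point the Ceph OSDs at those devices instead of the raw sdX paths. But I don't know if that is the right approach, or whether Ceph handles this itself.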
Below is the output from my server:
[root@compute1 ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 1.5T 0 disk
sdb 8:16 0 1.5T 0 disk
sdc 8:32 0 1.5T 0 disk
sdd 8:48 0 1.5T 0 disk
sde 8:64 0 1.5T 0 disk
sdf 8:80 0 1.5T 0 disk
sdg 8:96 0 1.5T 0 disk
sdh 8:112 0 1.5T 0 disk
sdi 8:128 0 1.5T 0 disk
sdj 8:144 0 1.5T 0 disk
sdk 8:160 0 1.5T 0 disk
sdl 8:176 0 1.5T 0 disk
sdn 8:208 0 279.4G 0 disk
|-sdn1 8:209 0 953M 0 part /boot/efi
|-sdn2 8:210 0 953M 0 part /boot
|-sdn3 8:211 0 264.5G 0 part
| |-rhel_compute1-root 253:0 0 18.6G 0 lvm /
| |-rhel_compute1-var_log 253:2 0 9.3G 0 lvm /var/log
| |-rhel_compute1-var_tmp 253:3 0 4.7G 0 lvm /var/tmp
| |-rhel_compute1-tmp 253:4 0 4.7G 0 lvm /tmp
| |-rhel_compute1-var 253:5 0 37.3G 0 lvm /var
| |-rhel_compute1-opt 253:6 0 37.3G 0 lvm /opt
| |-rhel_compute1-aux1 253:7 0 107.1G 0 lvm /aux1
| |-rhel_compute1-home 253:8 0 20.5G 0 lvm /home
| `-rhel_compute1-aux0 253:9 0 25.2G 0 lvm /aux0
|-sdn4 8:212 0 7.5G 0 part [SWAP]
`-sdn5 8:213 0 4.7G 0 part /var/log/audit
sdo 8:224 0 1.5T 0 disk
`-sdo1 8:225 0 1.5T 0 part
`-rhel_local_VG-localstor 253:1 0 1.5T 0 lvm /localstor