Storage Scan

inder441

New Member
Dec 17, 2024
Hello Team,

I have a 5-node PVE cluster. I am using FC multipath and have created LVM storage on top of it.

I deleted the LVM for one of the 5 disks, but I still see the disk in the "Disks" list. I am trying to understand how to make it disappear from that list.

I tried running rescan-scsi-bus.sh, but it does not do anything. If I run rescan-scsi-bus.sh --forcerescan, my PVE host crashes.

I am trying to find a way to rescan the HBAs without a reboot.

Any assistance will be highly appreciated.
 
Hi @inder441 ,

Can you clarify a few things for us please?

- How many MP devices are there?
- How many paths does each one have?
- What do you mean by "deleted LVM for one disk out of 5"?
- What "disks" list are you referring to?

Adding some command-line output as text wrapped in CODE tags may be helpful as well.
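For example, the output of these commands (run as root; exact device names will differ on your system) would show us the multipath topology and the LVM layout on top of it:

```shell
# Show each multipath device and its underlying SCSI paths
multipath -ll

# Block device tree, including multipath members and any LVM on top
lsblk

# LVM view: physical volumes and volume groups
pvs
vgs
```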

Cheers


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
I have 4 MP devices.
I have 8 paths for each device (disk).
"Deleted the LVM" means: I deleted the LVM created on top of the device, then removed the VG and PV, and then removed the disk from storage.
The disk list I am referring to is the "Disks" list in the GUI.

So after I delete the LVM, remove the VG and PV, and remove the disk from storage, I still see the device in the "Disks" list in the GUI.
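Concretely, the sequence I ran was roughly like this (the storage ID and VG/device names here are examples, not my exact ones):

```shell
# Tear down the LVM stack on the multipath device (example names)
lvremove /dev/my_vg/my_lv        # any remaining logical volumes
vgremove my_vg
pvremove /dev/mapper/mpatha

# Remove the LVM storage definition from PVE (example storage ID)
pvesm remove my-fc-lvm
```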

I am trying to find a way to rescan the HBA without crashing my host, so that the disk is gone from the "Disks" list in the host GUI.

Let me know what command outputs etc you need I will share.

Appreciate the help.
 
What is your FC storage vendor and model?
What FC cards are you using in the host?
What FC switches are in use?
Is everything up to date firmware-wise?
Did you have to install vendor-specific FC-related drivers or recompile the Kernel?
Have you confirmed with your FC vendor that they support the particular Linux Kernel installed on your PVE?
What PVE version and what Kernel version are you running?
Have you tried with the latest Kernel available from Proxmox?
What does "remove disk" mean? Are you unmapping the LUN? Deleting it live? Something else?
What do "lsscsi" and "lsblk" look like before you remove the drive?
What do "lsscsi" and "lsblk" look like after you remove the drive?
What messages do you see in "dmesg" and "journalctl -n 100" (or -f) when you remove the drive?
What exactly happens when PVE crashes? Do you have a message trace?

Do realize that PVE uses a fairly unmodified Ubuntu kernel, and the FC communication is handled almost entirely within the Linux kernel. You may want to reach out to your storage vendor and ask for best practices for hosts based on the Ubuntu kernel (or your particular kernel version).
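For what it's worth, the usual kernel-level sequence for retiring a stale multipath device without a reboot looks roughly like this. The device names below are placeholders; be absolutely certain you target only the paths of the removed LUN before deleting anything:

```shell
# 1. Flush the now-unused multipath map (placeholder map name)
multipath -f mpatha

# 2. Delete each underlying SCSI path device via sysfs
#    (placeholder path devices; list them with `multipath -ll` first)
for dev in sdb sdc sdd; do
    echo 1 > /sys/block/$dev/device/delete
done

# 3. Optionally rescan the FC HBAs for new or changed LUNs
for host in /sys/class/scsi_host/host*; do
    echo "- - -" > "$host/scan"
done
```

This sysfs-based removal is generally gentler than rescan-scsi-bus.sh --forcerescan, since it only touches the devices you name rather than re-probing every target.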


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox