Hard Drives not showing up under Storage / Disks

Hi,
I can't add new ZFS pools from the GUI, because it's not recognising the disks; instead it shows a communication failure (0).
During installation, all drives showed up in the installer and I was able to install PBS on a ZFS RAID1 mirror.
I'm using an LSI SAS Controller in IT Mode with 15 SAS hard drives:
Bash:
root@pluto:~# lspci | grep LSI
01:00.0 Serial Attached SCSI controller: Broadcom / LSI SAS2308 PCI-Express Fusion-MPT SAS-2 (rev 05)
root@pluto:~# lsblk -o NAME,SIZE,FSTYPE,TYPE
NAME    SIZE FSTYPE     TYPE
sda     1.1T            disk
sdb     1.8T            disk
sdc     1.8T            disk
sdd     1.8T            disk
sde     1.8T            disk
sdf     1.1T            disk
├─sdf1 1007K            part
├─sdf2  512M vfat       part
└─sdf3  1.1T zfs_member part
sdg     1.1T            disk
sdh     1.1T            disk
├─sdh1 1007K            part
├─sdh2  512M vfat       part
└─sdh3  1.1T zfs_member part
sdi     1.1T            disk
sdj     1.1T            disk
sdk     1.8T            disk
sdl     1.8T            disk
sdm     1.8T            disk
sdn     1.8T            disk
sdo     1.8T            disk
 
hi,

I can't add new ZFS pools from the GUI, because it's not recognising the disks; instead it shows a communication failure (0).
are you seeing any error messages in journalctl?

does it possibly show up in the GUI after you wipe a disk? sgdisk --zap-all /dev/sdj (just an example, be careful what you wipe ;) )
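
for example, to double-check what's actually on a disk before zapping it, something like this should be safe (just a sketch, the device name is only an example):
Bash:
# dry run: list any existing partition table / filesystem signatures without erasing anything
wipefs --no-act /dev/sdj
# show partitions and filesystems currently on that disk
lsblk -o NAME,SIZE,FSTYPE,TYPE /dev/sdj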
 
hi,


are you seeing any error messages in journalctl?
Hi oguz,
thanks for the incredible fast reply.
No, the journal looks all good. Controller and drives are recognised & attached, no errors.
does it possibly show up in the GUI after you wipe a disk? sgdisk --zap-all /dev/sdj (just an example, be careful what you wipe ;) )
I wiped a couple of the unused disks; unfortunately, still no disks showing up in the GUI.
Bash:
root@pluto:~# sgdisk --zap-all /dev/sda
Warning: Partition table header claims that the size of partition table
entries is 0 bytes, but this program  supports only 128-byte entries.
Adjusting accordingly, but partition table may be garbage.
Warning: Partition table header claims that the size of partition table
entries is 0 bytes, but this program  supports only 128-byte entries.
Adjusting accordingly, but partition table may be garbage.
Creating new GPT entries in memory.
GPT data structures destroyed! You may now partition the disk using fdisk or
other utilities.
root@pluto:~# sgdisk -v /dev/sda
Warning: Partition table header claims that the size of partition table
entries is 0 bytes, but this program  supports only 128-byte entries.
Adjusting accordingly, but partition table may be garbage.
Warning: Partition table header claims that the size of partition table
entries is 0 bytes, but this program  supports only 128-byte entries.
Adjusting accordingly, but partition table may be garbage.
Creating new GPT entries in memory.

No problems found. 2344225901 free sectors (1.1 TiB) available in 1
segments, the largest of which is 2344225901 (1.1 TiB) in size.
 
thanks for checking.

could you post the output from proxmox-backup-manager versions --verbose?

No, the journal looks all good. Controller and drives are recognised & attached, no errors.
in case there's something you missed, could you run journalctl -eu 'proxmox*' > journal.txt and attach the resulting file here?
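
it might also be worth grepping the kernel log for messages from the HBA and the disks, roughly like this (a sketch, adjust the patterns to your devices):
Bash:
# SCSI/SAS related kernel messages from the current boot
journalctl -k -b | grep -i -E 'mpt|sas|sd[a-p]'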
 
thanks for checking.

could you post the output from proxmox-backup-manager versions --verbose?
Code:
proxmox-backup             2.1-1        running kernel: 5.13.19-4-pve
proxmox-backup-server      2.1.5-1      running version: 2.1.5     
pve-kernel-helper          7.1-10                                   
pve-kernel-5.13            7.1-7                                   
pve-kernel-5.13.19-4-pve   5.13.19-9                               
pve-kernel-5.13.19-1-pve   5.13.19-3                               
ifupdown2                  3.1.0-1+pmx3                             
libjs-extjs                7.0.0-1                                 
proxmox-backup-docs        2.1.5-1                                 
proxmox-backup-client      2.1.5-1                                 
proxmox-mini-journalreader 1.2-1                                   
proxmox-widget-toolkit     3.4-5                                   
pve-xtermjs                4.16.0-1                                 
smartmontools              7.2-1                                   
zfsutils-linux             2.1.2-pve1
in case there's something you missed, could you run journalctl -eu 'proxmox*' > journal.txt and attach the resulting file here?
Code:
-- Boot 10a695a5d2c445ac98e4c5e38a2cb6db --
Feb 09 10:12:15 pluto systemd[1]: Starting Proxmox Backup Server Login Banner...
Feb 09 10:12:15 pluto systemd[1]: Finished Proxmox Backup Server Login Banner.
Feb 09 10:12:15 pluto systemd[1]: Started Daily Proxmox Backup Server update and maintenance activities.
Feb 09 10:12:17 pluto systemd[1]: Starting Proxmox Backup API Server...
Feb 09 10:12:18 pluto proxmox-backup-api[1342]: service is ready
Feb 09 10:12:18 pluto systemd[1]: Started Proxmox Backup API Server.
Feb 09 10:12:18 pluto systemd[1]: Starting Proxmox Backup API Proxy Server...
Feb 09 10:12:18 pluto proxmox-backup-proxy[1502]: service is ready
Feb 09 10:12:18 pluto systemd[1]: Started Proxmox Backup API Proxy Server.
Feb 09 10:12:18 pluto proxmox-backup-proxy[1502]: applied rrd journal (2801 entries in 0.130 seconds)
Feb 09 10:12:18 pluto proxmox-backup-proxy[1502]: write rrd data back to disk
Feb 09 10:12:18 pluto proxmox-backup-proxy[1502]: starting rrd data sync
Feb 09 10:12:18 pluto proxmox-backup-proxy[1502]: rrd journal successfully committed (16 files in 0.035 seconds)
Feb 09 10:42:18 pluto proxmox-backup-proxy[1502]: write rrd data back to disk
Feb 09 10:42:18 pluto proxmox-backup-proxy[1502]: starting rrd data sync
Feb 09 10:42:19 pluto proxmox-backup-proxy[1502]: rrd journal successfully committed (16 files in 0.128 seconds)
 
could you post the response from the API call when you load the GUI page with the disks? open up the developer tools in your browser, click the "Network" tab, then find the GET request on the "list" endpoint, see my screenshot:
[screenshot: api-backup-disks.jpg]
from the "Response" part, right-click on the first line of the JSON, do "Copy All" and paste it here (in code tags would be nice)
 
could you post the response from the API call when you load the GUI page with the disks? open up the developer tools in your browser, click the "Network" tab, then find the GET request on the "list" endpoint, see my screenshot:

from the "Response" part, right-click on the first line of the JSON, do "Copy All" and paste it here (in code tags would be nice)
There is no response:
[screenshot: Screen.png]
 
Hi Thomas,
What do you get if you try the equivalent debug command on the PBS host directly?
proxmox-backup-debug api get /nodes/localhost/disks/list
Could be a timeout issue. The command takes more than 30 seconds to return the list. The GUI gives me a communication failure after about 25 seconds.

Bash:
root@pluto:~# proxmox-backup-debug api get /nodes/localhost/disks/list
┌───────────┬─────┬──────┬───────────────┬────────┬──────┬──────────┬─────────────────┬─────┬──────────────────┬─────────┬─────────┬────────────────────┐
│ disk-type │ gpt │ name │          size │ status │ used │ devpath  │ model           │ rpm │ serial           │ vendor  │ wearout │ wwn                │
╞═══════════╪═════╪══════╪═══════════════╪════════╪══════╪══════════╪═════════════════╪═════╪══════════════════╪═════════╪═════════╪════════════════════╡
│ hdd       │   1 │ sda  │ 1200243695616 │ passed │ zfs  │ /dev/sda │ HUC101212CSS600 │     │ 5000cca01dc89378 │ HGST    │         │ 0x5000cca01dc89378 │
├───────────┼─────┼──────┼───────────────┼────────┼──────┼──────────┼─────────────────┼─────┼──────────────────┼─────────┼─────────┼────────────────────┤
│ hdd       │   1 │ sdb  │ 2000398934016 │ passed │ zfs  │ /dev/sdb │ ST2000NX0273    │     │ 5000c500ddbb9547 │ SEAGATE │         │ 0x5000c500ddbb9547 │
├───────────┼─────┼──────┼───────────────┼────────┼──────┼──────────┼─────────────────┼─────┼──────────────────┼─────────┼─────────┼────────────────────┤
│ hdd       │   1 │ sdc  │ 2000398934016 │ passed │ zfs  │ /dev/sdc │ ST2000NX0273    │     │ 5000c500ddbc375f │ SEAGATE │         │ 0x5000c500ddbc375f │
├───────────┼─────┼──────┼───────────────┼────────┼──────┼──────────┼─────────────────┼─────┼──────────────────┼─────────┼─────────┼────────────────────┤
│ hdd       │   1 │ sdd  │ 2000398934016 │ passed │ zfs  │ /dev/sdd │ ST2000NX0273    │     │ 5000c500ddc39f23 │ SEAGATE │         │ 0x5000c500ddc39f23 │
├───────────┼─────┼──────┼───────────────┼────────┼──────┼──────────┼─────────────────┼─────┼──────────────────┼─────────┼─────────┼────────────────────┤
│ hdd       │   1 │ sde  │ 2000398934016 │ passed │ zfs  │ /dev/sde │ ST2000NX0273    │     │ 5000c500ddcdd71b │ SEAGATE │         │ 0x5000c500ddcdd71b │
├───────────┼─────┼──────┼───────────────┼────────┼──────┼──────────┼─────────────────┼─────┼──────────────────┼─────────┼─────────┼────────────────────┤
│ hdd       │   1 │ sdf  │ 1200243695616 │ passed │ zfs  │ /dev/sdf │ ST1200MM0009    │     │ 5000c5009f418e33 │ SEAGATE │         │ 0x5000c5009f418e33 │
├───────────┼─────┼──────┼───────────────┼────────┼──────┼──────────┼─────────────────┼─────┼──────────────────┼─────────┼─────────┼────────────────────┤
│ hdd       │   1 │ sdg  │ 1200243695616 │ passed │ zfs  │ /dev/sdg │ HUC101212CSS600 │     │ 5000cca01dca1a58 │ HGST    │         │ 0x5000cca01dca1a58 │
├───────────┼─────┼──────┼───────────────┼────────┼──────┼──────────┼─────────────────┼─────┼──────────────────┼─────────┼─────────┼────────────────────┤
│ hdd       │   1 │ sdh  │ 1200243695616 │ passed │ zfs  │ /dev/sdh │ ST1200MM0009    │     │ 5000c5009f4195cb │ SEAGATE │         │ 0x5000c5009f4195cb │
├───────────┼─────┼──────┼───────────────┼────────┼──────┼──────────┼─────────────────┼─────┼──────────────────┼─────────┼─────────┼────────────────────┤
│ hdd       │   1 │ sdi  │ 1200243695616 │ passed │ zfs  │ /dev/sdi │ HUC101212CSS600 │     │ 5000cca01dc9cb6c │ HGST    │         │ 0x5000cca01dc9cb6c │
├───────────┼─────┼──────┼───────────────┼────────┼──────┼──────────┼─────────────────┼─────┼──────────────────┼─────────┼─────────┼────────────────────┤
│ hdd       │   1 │ sdj  │ 1200243695616 │ passed │ zfs  │ /dev/sdj │ ST1200MM0009    │     │ 5000c5009fc4fceb │ SEAGATE │         │ 0x5000c5009fc4fceb │
├───────────┼─────┼──────┼───────────────┼────────┼──────┼──────────┼─────────────────┼─────┼──────────────────┼─────────┼─────────┼────────────────────┤
│ hdd       │   1 │ sdk  │ 1200243695616 │ passed │ zfs  │ /dev/sdk │ HUC101212CSS600 │     │ 5000cca01dc89204 │ HGST    │         │ 0x5000cca01dc89204 │
├───────────┼─────┼──────┼───────────────┼────────┼──────┼──────────┼─────────────────┼─────┼──────────────────┼─────────┼─────────┼────────────────────┤
│ hdd       │   1 │ sdl  │ 2000398934016 │ passed │ zfs  │ /dev/sdl │ ST2000NX0273    │     │ 5000c500ddce1ed3 │ SEAGATE │         │ 0x5000c500ddce1ed3 │
├───────────┼─────┼──────┼───────────────┼────────┼──────┼──────────┼─────────────────┼─────┼──────────────────┼─────────┼─────────┼────────────────────┤
│ hdd       │   1 │ sdm  │ 2000398934016 │ passed │ zfs  │ /dev/sdm │ ST2000NX0273    │     │ 5000c500ddcede77 │ SEAGATE │         │ 0x5000c500ddcede77 │
├───────────┼─────┼──────┼───────────────┼────────┼──────┼──────────┼─────────────────┼─────┼──────────────────┼─────────┼─────────┼────────────────────┤
│ hdd       │   1 │ sdn  │ 2000398934016 │ passed │ zfs  │ /dev/sdn │ ST2000NX0273    │     │ 5000c500ddcde24b │ SEAGATE │         │ 0x5000c500ddcde24b │
├───────────┼─────┼──────┼───────────────┼────────┼──────┼──────────┼─────────────────┼─────┼──────────────────┼─────────┼─────────┼────────────────────┤
│ hdd       │   1 │ sdo  │ 2000398934016 │ passed │ zfs  │ /dev/sdo │ ST2000NX0273    │     │ 5000c500ddcddbe3 │ SEAGATE │         │ 0x5000c500ddcddbe3 │
├───────────┼─────┼──────┼───────────────┼────────┼──────┼──────────┼─────────────────┼─────┼──────────────────┼─────────┼─────────┼────────────────────┤
│ hdd       │   1 │ sdp  │ 2000398934016 │ passed │ zfs  │ /dev/sdp │ ST2000NX0273    │     │ 5000c500ddced97b │ SEAGATE │         │ 0x5000c500ddced97b │
└───────────┴─────┴──────┴───────────────┴────────┴──────┴──────────┴─────────────────┴─────┴──────────────────┴─────────┴─────────┴────────────────────┘
 
Could be a timeout issue. The command takes more than 30 seconds to return the list. The GUI gives me a communication failure after about 25 seconds.
Yes, that sounds about right, as synchronous API requests have a 30 second timeout.
It's still weird that this API call needs that much time in your setup though. I'd figure that the lsblk command doesn't require that much time, or does it?
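
That's easy to measure on its own, e.g. (just a sketch):
Bash:
# measure how long the plain block device listing takes by itself
time lsblk -o NAME,SIZE,FSTYPE,TYPE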
 
my guess is that the smartctl calls for all the disks add up to more than 30s...
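
a quick way to check would be timing a SMART health query per disk, roughly like this (just a sketch, adjust the device range to your setup):
Bash:
# time a SMART health check for each disk to spot a slow responder
for d in /dev/sd{a..p}; do
    echo "== $d =="
    time smartctl -H "$d" > /dev/null
done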
 
Easy to test, can you please execute the following:

time proxmox-backup-debug api get /nodes/localhost/disks/list --skipsmart 1

edit: fixed command
 
Easy to test, can you please execute the following:

time proxmox-backup-debug api get /nodes/localhost/disks/list --skipsmart
There seems to be something missing in the command.

Bash:
time proxmox-backup-debug api get /nodes/localhost/disks/list --skipsmart
Error: parameter verification errors

parameter 'skipsmart': missing parameter value.

Usage: proxmox-backup-debug api get <api-path> [OPTIONS]
 <api-path> <string>
             API path.

Without the --skipsmart option it takes about 32s to complete, as we already know.
 
There seems to be something missing in the command.
should be time proxmox-backup-debug api get /nodes/localhost/disks/list --skipsmart 1
 
Actually, it's only one of the 4 HGST drives that takes so long to respond.
I will exchange that one with a spare and check if the issue is solved.
Thanks for your kind support, that's much appreciated.
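
For reference, the SMART details and error log of the suspect drive can be pulled like this before swapping it (sketch, /dev/sdX is just a placeholder for the slow HGST disk):
Bash:
# full SMART details and the device error log for the suspect drive
smartctl -a /dev/sdX
smartctl -l error /dev/sdX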
 
