Unknown status

Fuggin

Hello... I've recently started getting this "unknown status" issue on part of my storage and I don't know where to begin with troubleshooting. I hadn't tinkered with the environment for months because it was running fine, then all of a sudden it became unresponsive. I rebooted, ran memtest (10 passes), etc... no joy. Please advise.

[Screenshot: Proxmox web UI tree showing the storages ProxOS, Unraid, apps, local and local-lvm, with ProxOS in an unknown (question mark) state]
 
It's probably a decent idea to click on the Datacenter level entry there, then see what shows up under "Storage" in the main panel.

That's where the definitions for each of those storage items (ProxOS, Unraid, apps, local, local-lvm) are located, so hopefully something obvious will stand out for you there with a bit of poking around. :)
 
Hi,
please check your system journal for any errors, e.g. journalctl -b. Please also share the output of pveversion -v.
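For example (the -p err filter is optional; it just limits the journal to error-level messages):
Code:
journalctl -b -p err
pveversion -v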
 
Hi,
please check your system journal for any errors, e.g. journalctl -b. Please also share the output of pveversion -v.
proxmox-ve: 8.2.0 (running kernel: 6.8.8-4-pve)
pve-manager: 8.2.4 (running version: 8.2.4/faa83925c9641325)
proxmox-kernel-helper: 8.1.0
proxmox-kernel-6.8: 6.8.8-4
proxmox-kernel-6.8.8-4-pve-signed: 6.8.8-4
proxmox-kernel-6.8.8-3-pve-signed: 6.8.8-3
proxmox-kernel-6.8.8-1-pve-signed: 6.8.8-1
proxmox-kernel-6.5.13-6-pve-signed: 6.5.13-6
proxmox-kernel-6.5: 6.5.13-6
ceph-fuse: 17.2.7-pve1
corosync: 3.1.7-pve3
criu: 3.17.1-2
glusterfs-client: 10.3-5
ifupdown2: 3.2.0-1+pmx9
intel-microcode: 3.20231114.1
ksm-control-daemon: 1.5-1
libjs-extjs: 7.0.0-4
libknet1: 1.28-pve1
libproxmox-acme-perl: 1.5.1
libproxmox-backup-qemu0: 1.4.1
libproxmox-rs-perl: 0.3.3
libpve-access-control: 8.1.4
libpve-apiclient-perl: 3.3.2
libpve-cluster-api-perl: 8.0.7
libpve-cluster-perl: 8.0.7
libpve-common-perl: 8.2.1
libpve-guest-common-perl: 5.1.3
libpve-http-server-perl: 5.1.0
libpve-network-perl: 0.9.8
libpve-rs-perl: 0.8.9
libpve-storage-perl: 8.2.3
libspice-server1: 0.15.1-1
lvm2: 2.03.16-2
lxc-pve: 6.0.0-1
lxcfs: 6.0.0-pve2
novnc-pve: 1.4.0-3
proxmox-backup-client: 3.2.7-1
proxmox-backup-file-restore: 3.2.7-1
proxmox-firewall: 0.4.2
proxmox-kernel-helper: 8.1.0
proxmox-mail-forward: 0.2.3
proxmox-mini-journalreader: 1.4.0
proxmox-offline-mirror-helper: 0.6.6
proxmox-widget-toolkit: 4.2.3
pve-cluster: 8.0.7
pve-container: 5.1.12
pve-docs: 8.2.2
pve-edk2-firmware: 4.2023.08-4
pve-esxi-import-tools: 0.7.1
pve-firewall: 5.0.7
pve-firmware: 3.13-1
pve-ha-manager: 4.0.5
pve-i18n: 3.2.2
pve-qemu-kvm: 9.0.0-6
pve-xtermjs: 5.3.0-3
qemu-server: 8.2.2
smartmontools: 7.3-pve1
spiceterm: 3.3.0
swtpm: 0.8.0+pve1
vncterm: 1.8.0
zfsutils-linux: 2.2.4-pve1

Jul 29 22:29:24 MediaNUC pvestatd[1722]: command '/sbin/vgscan --ignorelockingfailure --mknodes' failed: exit code 5
Jul 29 22:29:34 MediaNUC pvestatd[1722]: command '/sbin/vgscan --ignorelockingfailure --mknodes' failed: exit code 5
Jul 29 22:29:44 MediaNUC pvestatd[1722]: command '/sbin/vgscan --ignorelockingfailure --mknodes' failed: exit code 5
Jul 29 22:29:54 MediaNUC pvestatd[1722]: command '/sbin/vgscan --ignorelockingfailure --mknodes' failed: exit code 5
Jul 29 22:30:04 MediaNUC pvestatd[1722]: command '/sbin/vgscan --ignorelockingfailure --mknodes' failed: exit code 5
Jul 29 22:30:14 MediaNUC pvestatd[1722]: command '/sbin/vgscan --ignorelockingfailure --mknodes' failed: exit code 5
Jul 29 22:30:25 MediaNUC pvestatd[1722]: command '/sbin/vgscan --ignorelockingfailure --mknodes' failed: exit code 5
Jul 29 22:30:34 MediaNUC pvestatd[1722]: command '/sbin/vgscan --ignorelockingfailure --mknodes' failed: exit code 5
Jul 29 22:30:44 MediaNUC pvestatd[1722]: command '/sbin/vgscan --ignorelockingfailure --mknodes' failed: exit code 5
Jul 29 22:30:49 MediaNUC pveproxy[70946]: worker exit
Jul 29 22:30:49 MediaNUC pveproxy[1765]: worker 70946 finished
Jul 29 22:30:49 MediaNUC pveproxy[1765]: starting 1 worker(s)
Jul 29 22:30:49 MediaNUC pveproxy[1765]: worker 79597 started
Jul 29 22:30:54 MediaNUC pvestatd[1722]: command '/sbin/vgscan --ignorelockingfailure --mknodes' failed: exit code 5
Jul 29 22:31:04 MediaNUC pvestatd[1722]: command '/sbin/vgscan --ignorelockingfailure --mknodes' failed: exit code 5
Jul 29 22:31:14 MediaNUC pvestatd[1722]: command '/sbin/vgscan --ignorelockingfailure --mknodes' failed: exit code 5
Jul 29 22:31:25 MediaNUC pvestatd[1722]: command '/sbin/vgscan --ignorelockingfailure --mknodes' failed: exit code 5
Jul 29 22:31:34 MediaNUC pvestatd[1722]: command '/sbin/vgscan --ignorelockingfailure --mknodes' failed: exit code 5
Jul 29 22:31:44 MediaNUC pvestatd[1722]: command '/sbin/vgscan --ignorelockingfailure --mknodes' failed: exit code 5
Jul 29 22:31:54 MediaNUC pvestatd[1722]: command '/sbin/vgscan --ignorelockingfailure --mknodes' failed: exit code 5
Jul 29 22:32:04 MediaNUC pvestatd[1722]: command '/sbin/vgscan --ignorelockingfailure --mknodes' failed: exit code 5
Jul 29 22:32:14 MediaNUC pvestatd[1722]: command '/sbin/vgscan --ignorelockingfailure --mknodes' failed: exit code 5
Jul 29 22:32:25 MediaNUC pvestatd[1722]: command '/sbin/vgscan --ignorelockingfailure --mknodes' failed: exit code 5
Jul 29 22:32:34 MediaNUC pvestatd[1722]: command '/sbin/vgscan --ignorelockingfailure --mknodes' failed: exit code 5
Jul 29 22:32:44 MediaNUC pvestatd[1722]: command '/sbin/vgscan --ignorelockingfailure --mknodes' failed: exit code 5
Jul 29 22:32:54 MediaNUC pvestatd[1722]: command '/sbin/vgscan --ignorelockingfailure --mknodes' failed: exit code 5
Jul 29 22:33:04 MediaNUC pvestatd[1722]: command '/sbin/vgscan --ignorelockingfailure --mknodes' failed: exit code 5
Jul 29 22:33:14 MediaNUC pvestatd[1722]: command '/sbin/vgscan --ignorelockingfailure --mknodes' failed: exit code 5
Jul 29 22:33:25 MediaNUC pvestatd[1722]: command '/sbin/vgscan --ignorelockingfailure --mknodes' failed: exit code 5
Jul 29 22:33:34 MediaNUC pvestatd[1722]: command '/sbin/vgscan --ignorelockingfailure --mknodes' failed: exit code 5
Jul 29 22:33:44 MediaNUC pvestatd[1722]: command '/sbin/vgscan --ignorelockingfailure --mknodes' failed: exit code 5
 
It's probably a decent idea to click on the Datacenter level entry there, then see what shows up under "Storage" in the main panel.

That's where the definitions for each of those storage items (ProxOS, Unraid, apps, local, local-lvm) are located, so hopefully something obvious will stand out for you there with a bit of poking around. :)
Already looked there... everything seems fine... I just can't access the ProxOS storage... so I'm assuming it's probably just borked.
 
Ok, so the ProxOS storage needs a bit more investigating.

Would you be ok to paste the contents of /etc/pve/storage.cfg here? That contains the actual definitions for each of your storage locations.

From that we should be able to figure out what to look for next.
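If it's easier, grabbing it straight from a shell on the node works fine, e.g.:
Code:
cat /etc/pve/storage.cfg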
 
Ok, so the ProxOS storage needs a bit more investigating.

Would you be ok to paste the contents of /etc/pve/storage.cfg here? That contains the actual definitions for each of your storage locations.

From that we should be able to figure out what to look for next.
dir: local
        path /var/lib/vz
        content vztmpl,iso,backup

lvmthin: local-lvm
        thinpool data
        vgname pve
        content rootdir,images
        nodes MediaNUC

lvmthin: apps
        thinpool apps
        vgname apps
        content images,rootdir
        nodes MediaNUC

lvm: ProxOS
        vgname ProxOS
        content images
        nodes MediaNUC
        saferemove 0
        shared 0
 
No worries. Does the LVM volume group ProxOS exist on that node?

I haven't personally been using LVM much recently, but I'm pretty sure running the command vgs on the node should output the list of LVM volume groups. :)
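i.e. something along these lines (pvs as well, in case the physical volume is there but the volume group metadata isn't):
Code:
vgs
pvs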
 
No worries. Does the LVM volume group ProxOS exist on that node?

I haven't personally been using LVM much recently, but I'm pretty sure running the command vgs on the node should output the list of LVM volume groups. :)
root@MediaNUC:~# fdisk -l /dev/sda
Disk /dev/sda: 232.89 GiB, 250059350016 bytes, 488397168 sectors
Disk model: CT250BX100SSD1
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 90659F4D-7D26-4D99-A7A2-18A8A3796134

Device       Start       End   Sectors   Size Type
/dev/sda1       34      2047      2014  1007K BIOS boot
/dev/sda2     2048   2099199   2097152     1G EFI System
/dev/sda3  2099200 488397134 486297935 231.9G Linux LVM
root@MediaNUC:~# pvs
  PV           VG   Fmt  Attr PSize    PFree
  /dev/nvme0n1 apps lvm2 a--  <476.94g 124.00m
  /dev/sda3    pve  lvm2 a--   231.88g  16.00g
root@MediaNUC:~# vgs
  VG   #PV #LV #SN Attr   VSize    VFree
  apps   1   3   0 wz--n- <476.94g 124.00m
  pve    1   3   0 wz--n-  231.88g  16.00g
root@MediaNUC:~# lvs
  LV                                          VG   Attr       LSize    Pool Origin        Data%  Meta%  Move Log Cpy%Sync Convert
  apps                                        apps twi-aotz-- <467.28g                    12.93  0.82
  snap_vm-200-disk-0_Update_20240728_203855   apps Vri---tz-k  60.00g  apps vm-200-disk-0
  vm-200-disk-0                               apps Vwi-a-tz--  60.00g  apps               99.92
  data                                        pve  twi-aotz--  137.11g                    0.00   1.16
  root                                        pve  -wi-ao---- <67.97g
  swap                                        pve  -wi-ao----   8.00g
 
Code:
root@MediaNUC:~# pvs
PV           VG   Fmt  Attr PSize    PFree
/dev/nvme0n1 apps lvm2 a--  <476.94g 124.00m
/dev/sda3    pve  lvm2 a--   231.88g  16.00g
root@MediaNUC:~# vgs
VG   #PV #LV #SN Attr   VSize    VFree
apps   1   3   0 wz--n- <476.94g 124.00m
pve    1   3   0 wz--n-  231.88g  16.00g
There is no volume group with the name ProxOS, nor an associated PV. This could mean it's not present, or that it failed to activate properly. Please share the full boot log and the output of lsblk -f. Are all the drives you expect present in that output?
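For the boot log, dumping the whole current boot to a file makes it easier to attach, for example:
Code:
journalctl -b > /tmp/boot.log
lsblk -f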
 
There is no volume group with the name ProxOS, nor an associated PV. This could mean it's not present, or that it failed to activate properly. Please share the full boot log and the output of lsblk -f. Are all the drives you expect present in that output?

Looks like the sda1 partition is not working?
root@MediaNUC:~# lsblk -f
NAME                        FSTYPE      FSVER    LABEL UUID                                   FSAVAIL FSUSE% MOUNTPOINTS
sda
├─sda1
├─sda2                      vfat        FAT32          318F-2ED0                              1010.3M     1% /boot/efi
└─sda3                      LVM2_member LVM2 001       cTi586-rhxM-CpyL-xfkw-mbI3-Qegx-ClKDs9
  ├─pve-swap                swap        1              b1f40fa8-bde9-46f0-a568-3b5a8ce415ba                  [SWAP]
  ├─pve-root                ext4        1.0            80a6279c-9f04-4d75-9d4e-7d4ff2cf4ead     53.3G    15% /
  ├─pve-data_tmeta
  │ └─pve-data-tpool
  │   └─pve-data
  └─pve-data_tdata
    └─pve-data-tpool
      └─pve-data
nvme0n1                     LVM2_member LVM2 001       Kd2anZ-X7ta-0ZPm-7lV0-m7yT-wYSD-D3CH24
├─apps-apps_tmeta
│ └─apps-apps-tpool
│   ├─apps-apps
│   └─apps-vm--200--disk--0 ext4        1.0            10ed41d0-4136-4e96-89fe-1e592c869f96
└─apps-apps_tdata
  └─apps-apps-tpool
    ├─apps-apps
    └─apps-vm--200--disk--0 ext4        1.0            10ed41d0-4136-4e96-89fe-1e592c869f96

https://pastebin.com/6dLzVgAw
 
Looks like the sda1 partition is not working?
Nah. That sda1 partition is shown in the output of your fdisk -l command above. In your case it's just a tiny thing (~1MB) used by the system for holding boot info.

The reason the ProxOS storage location is doing the question mark thing is that it's apparently using an LVM volume group called "ProxOS" which doesn't exist on the server. Well, that's my assumption from reading the vgs output, as it doesn't show that volume group as existing.

Please run lsblk -o tran,name,type,size,vendor,model,label,rota,log-sec,phy-sec and paste the output here. That'll give the complete list of physical storage devices attached to your system, with some useful reference info about them too.

Btw, please check the output and let us know if there's some storage attached to the system which isn't showing up in that list, as that would be a problem that needs investigating. :)
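As a CLI cross-check, pvesm status should show how PVE itself currently sees each storage; I'd expect it to complain about (or show as inactive) the ProxOS entry if the volume group really is missing:
Code:
pvesm status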
 
Nah. That sda1 partition is shown in the output of your fdisk -l command above. In your case it's just a tiny thing (~1MB) used by the system for holding boot info.

The reason the ProxOS storage location is doing the question mark thing is that it's apparently using an LVM volume group called "ProxOS" which doesn't exist on the server. Well, that's my assumption from reading the vgs output, as it doesn't show that volume group as existing.

Please run lsblk -o tran,name,type,size,vendor,model,label,rota,log-sec,phy-sec and paste the output here. That'll give the complete list of physical storage devices attached to your system, with some useful reference info about them too.

Btw, please check the output and let us know if there's some storage attached to the system which isn't showing up in that list, as that would be a problem that needs investigating. :)
root@MediaNUC:~# lsblk -o tran,name,type,size,vendor,model,label,rota,log-sec,phy-sec
TRAN   NAME                          TYPE   SIZE  VENDOR   MODEL                 LABEL ROTA LOG-SEC PHY-SEC
sata   sda                           disk 232.9G  ATA      CT250BX100SSD1                 0     512     512
       ├─sda1                        part  1007K                                          0     512     512
       ├─sda2                        part     1G                                          0     512     512
       └─sda3                        part 231.9G                                          0     512     512
         ├─pve-swap                  lvm      8G                                          0     512     512
         ├─pve-root                  lvm     68G                                          0     512     512
         ├─pve-data_tmeta            lvm    1.4G                                          0     512     512
         │ └─pve-data-tpool          lvm  137.1G                                          0     512     512
         │   └─pve-data              lvm  137.1G                                          0     512     512
         └─pve-data_tdata            lvm  137.1G                                          0     512     512
           └─pve-data-tpool          lvm  137.1G                                          0     512     512
             └─pve-data              lvm  137.1G                                          0     512     512
nvme   nvme0n1                       disk 476.9G  Lexar SSD NM790 512GB                   0     512     512
       ├─apps-apps_tmeta             lvm    4.8G                                          0     512     512
       │ └─apps-apps-tpool           lvm  467.3G                                          0     512     512
       │   ├─apps-apps               lvm  467.3G                                          0     512     512
       │   └─apps-vm--200--disk--0   lvm     60G                                          0     512     512
       └─apps-apps_tdata             lvm  467.3G                                          0     512     512
         └─apps-apps-tpool           lvm  467.3G                                          0     512     512
           ├─apps-apps               lvm  467.3G                                          0     512     512
           └─apps-vm--200--disk--0   lvm     60G                                          0     512     512
 
Ouch. Um, would you be ok to reformat those so the columns stay lined up? It's kind of impossible to read in its current layout. ;)
 
Cool, that works. :)

Nothing seems broken in that, and you haven't mentioned any storage missing, so I think things are physically fine.

Do you remember what the "ProxOS" storage was supposed to contain?

Asking because, looking at the system so far, it mostly looks like an extra (accidental?) entry that doesn't actually do anything. Is there any chance it's just a leftover from when you were first installing Proxmox and it just needs removing?
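If you'd like to double-check first, LVM archives volume group metadata every time a VG is changed, so the commands below should show whether a VG called "ProxOS" ever actually existed on this install. And if it does turn out to be just a leftover definition, it can be removed from the GUI (Datacenter → Storage → select ProxOS → Remove) or from the CLI. As mentioned I haven't been using LVM much lately, so treat this as a sketch rather than gospel:
Code:
# check whether LVM has any archived metadata for a VG named ProxOS
vgcfgrestore --list ProxOS
ls /etc/lvm/archive /etc/lvm/backup
# only once you're sure it's a leftover entry: this removes the storage *definition* only, not any data
pvesm remove ProxOS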
 
ProxOS was the OS drive of Proxmox... the NVMe was for apps (VMs/LXCs). I set it up that way a while ago and never had an issue, until out of the blue it decided to go unknown.

I know I could simply redo it...I am just trying to understand why it did what it did when I never touched it.
 
