Lost in Storage

Elderberg

Member
Nov 21, 2019
Gents, we're onboarding a new client who has Proxmox in their environment with a couple of VMs. PVE v5.0-23 is the currently installed version.

I'm having a hard time figuring out which storage is associated with which disks. From reviewing several YouTube videos, I gather the GUI is limited in this version. Would someone be so kind as to guide me through figuring out what is going on here? For starters, I'd like to know which of the following disks are in use.

[Screenshot: Disks view]

[Screenshot: Storage view]
 
It seems the "Filestore" storage in the second picture is backed by LVM (see "Type").
From the first picture I would guess that /dev/sda and /dev/sdb are likely involved, because they show up as LVM.

That would be an odd setup though, as the disks are not of the same size.

Bottom line / my advice: get SSH access to that server and gather information about the volume groups etc.
Start with "pvdisplay" to get information about the physical devices involved in LVM (or partitions, since the first picture mentions partitioned devices).
Then "vgdisplay" to display the volume groups, and finally "lvdisplay" to list the logical volumes, which are likely what is mounted at some point.

I would expect the two 1TB disks to be related to each other, and likewise the two 2TB disks. But the screenshot indicates otherwise ...

HTH and good luck.
 
So I ran the two commands "pvdisplay" and "vgdisplay"; I've included the output below.


[Screenshot: pvdisplay output]

[Screenshot: vgdisplay output]


1. At the moment, the "Backup" storage of type Directory isn't working. If I try to view its contents in the GUI, it is empty, and there are scheduled backup tasks that are failing. So I'm trying to figure out whether I can create a new storage and point the backups at it while I learn more.

2. Ultimately, I would like to identify whether any of the disks are free and clear, so I can create a new storage and migrate VMs to it. That way we would have a better understanding of the system.
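
From what I've read in the documentation, Proxmox keeps its storage definitions in /etc/pve/storage.cfg, so my plan (not yet executed, just my assumption of where to look) is to start with:

Code:
# Show how each storage (e.g. "Backup") is defined and which path/VG it points at
cat /etc/pve/storage.cfg

# Show the status and usage of all configured storages
pvesm status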
 
[Screenshot: pvs / vgs / lvs output]

I've run a few commands from reading through the forums: pvs, vgs, lvs.

1. As far as I can tell, only /dev/sda and /dev/sdb are being used. Are there any other commands I can run to determine whether /dev/sdc and /dev/sdd are in use?
 
From my understanding:
  • sda is used for VG "pve"
  • sdb is used for VG "fileserve"

It seems that the whole "life" of the customer depends on these two non-redundant disks ...

VM-101 consumes storage from "fileserve",
all others from "pve".
Proxmox itself also seems to be booted from volumes stored on "pve".
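
If you want to double-check that mapping yourself, each VM's configuration lists which storage its disks live on (101 is just the VM ID I am inferring from your screenshots; adjust as needed):

Code:
# Print the configuration of VM 101; the disk lines name the backing storage
qm config 101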

So the next step is to find out what sdc and sdd are doing.
Start by getting the partitions via "fdisk -l /dev/sdc", and the same for sdd.
Next, inspect the output of the "mount" command and try to find out where they are used.
Possibly they are simply "free".
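
Put into concrete commands, roughly ("lsblk" is an extra suggestion from me; it shows all disks, partitions and mountpoints as one tree):

Code:
# Partition tables of the two unclear disks
fdisk -l /dev/sdc
fdisk -l /dev/sdd

# Check whether any sdc/sdd partition is currently mounted
mount | grep -E 'sd[cd]'

# Tree view of all block devices and where they are mounted
lsblk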

Needless to say, use extreme caution. If there are no backups, you have an inherent risk of messing things up really badly.
 
See below for the fdisk outputs. In the GUI there are three storages of type "Directory"; is there a command to see more details about these?

[Screenshot: fdisk -l output for /dev/sdc]

[Screenshot: fdisk -l output for /dev/sdd]
 
So there is one partition on each disk.
What does the "mount" command show?
 
See the output below. Are you able to interpret anything from this?

Code:
root@asl:~# mount
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
proc on /proc type proc (rw,relatime)
udev on /dev type devtmpfs (rw,nosuid,relatime,size=12271660k,nr_inodes=3067915,mode=755)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)
tmpfs on /run type tmpfs (rw,nosuid,noexec,relatime,size=2457600k,mode=755)
/dev/mapper/pve-root on / type ext4 (rw,relatime,errors=remount-ro,data=ordered)
securityfs on /sys/kernel/security type securityfs (rw,nosuid,nodev,noexec,relatime)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
tmpfs on /run/lock type tmpfs (rw,nosuid,nodev,noexec,relatime,size=5120k)
tmpfs on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,mode=755)
cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,xattr,release_agent=/lib/systemd/systemd-cgroups-agent,name=systemd)
pstore on /sys/fs/pstore type pstore (rw,nosuid,nodev,noexec,relatime)
cgroup on /sys/fs/cgroup/net_cls,net_prio type cgroup (rw,nosuid,nodev,noexec,relatime,net_cls,net_prio)
cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,devices)
cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,cpuset)
cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,cpu,cpuacct)
cgroup on /sys/fs/cgroup/perf_event type cgroup (rw,nosuid,nodev,noexec,relatime,perf_event)
cgroup on /sys/fs/cgroup/pids type cgroup (rw,nosuid,nodev,noexec,relatime,pids)
cgroup on /sys/fs/cgroup/hugetlb type cgroup (rw,nosuid,nodev,noexec,relatime,hugetlb)
cgroup on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,memory)
cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,freezer)
cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,blkio)
systemd-1 on /proc/sys/fs/binfmt_misc type autofs (rw,relatime,fd=27,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=856)
debugfs on /sys/kernel/debug type debugfs (rw,relatime)
mqueue on /dev/mqueue type mqueue (rw,relatime)
hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime)
sunrpc on /run/rpc_pipefs type rpc_pipefs (rw,relatime)
configfs on /sys/kernel/config type configfs (rw,relatime)
fusectl on /sys/fs/fuse/connections type fusectl (rw,relatime)
lxcfs on /var/lib/lxcfs type fuse.lxcfs (rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other)
/dev/fuse on /etc/pve type fuse (rw,nosuid,nodev,relatime,user_id=0,group_id=0,default_permissions,allow_other)
tmpfs on /run/user/0 type tmpfs (rw,nosuid,nodev,relatime,size=2457596k,mode=700)
root@asl:~#
 
In my opinion, this is the only relevant line to get you further:
Code:
/dev/mapper/pve-root on / type ext4 (rw,relatime,errors=remount-ro,data=ordered)
Everything else is related to the standard Linux system (procfs, cgroups, tmpfs, etc.).

The line I mentioned indicates that the root filesystem "/" is stored on /dev/mapper/pve-root.
So if you run "lvdisplay", you will probably find "pve-root" referenced as a volume name (device-mapper joins the VG name "pve" and the LV name "root" with a dash).
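
For example (the names "pve" and "root" are what I would expect from a default installation, not something visible in your screenshots):

Code:
# /dev/mapper/pve-root is the device-mapper name for LV "root" in VG "pve"
lvdisplay /dev/pve/root

# Or the compact view, limited to the "pve" volume group
lvs pve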

To me it seems that neither sdc nor sdd is used from a Proxmox perspective. Odd, I know. But the partitions are not mounted, so the only other possibility would be that these partitions are passed through to a VM and used inside it.
Why anyone would set it up this way, I have no idea.

You can go through the configuration files in /etc/pve/qemu-server and try to find a reference to one of the physical devices; a disk definition would mention "/dev/...". I do this myself to pass physical disks through to a VM, so it is technically possible.
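
A quick way to search all VM configs at once (assuming the standard config location; this is just a sketch):

Code:
# Search every VM configuration for references to physical devices
grep -H '/dev/' /etc/pve/qemu-server/*.conf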
 
