LVM local2 unusable, stating: mkdir /dev/vg0/ ... File exists ... /usr/share/perl5/PVE/Storage/DirPlugin

ssldn · Member · Jul 13, 2020
Hi,
Very similar to this ancient thread, my local storage local2 does not recognize what it is. It has a mount point and an fstab entry as described in tutorials, but it displays this error (while idling):
Code:
mkdir /dev/vg0/heavyload1: File exists at /usr/share/perl5/PVE/Storage/DirPlugin.pm line 109. (500)

Here the /etc/pve/storage.cfg:
Bash:
root@ghost0 ~ # cat /etc/pve/storage.cfg
dir: local
        path /var/lib/vz
        content iso,vztmpl,images,backup
        maxfiles 15

lvmthin: local-lvm
        thinpool vmdata
        vgname vg0
        content rootdir,images

dir: storageprox
        path /mnt/data/backup
        content backup
        maxfiles 3
        nodes ghost0
        shared 1

dir: local2
        path /dev/vg0/heavyload1
        content rootdir,images,snippets,iso,backup,vztmpl
        maxfiles 1
        shared 1

Here the fstab:
Code:
root@ghost0 ~ # cat /etc/fstab
proc /proc proc defaults 0 0
# /dev/nvme0n1p1
UUID=639f9dfb-4543-4731-9d48-247c9792bb83 /boot ext3 defaults 0 0
# /dev/nvme0n12 belongs to LVM volume group 'vg0'
/dev/vg0/root  /  ext3  defaults 0 0
/dev/vg0/swap  swap  swap  defaults 0 0
/dev/vg0/home  /home  xfs  defaults 0 0
# /dev/nvme1n1p1 is parted in second SSD
/dev/vg0/heavyload1 /dev/vg0/heavyload1-s ext4 defaults 0 2
I already had a complete server failure once because of a wrong fstab entry, so this is not exactly my area of expertise.
Please help me get to the point where this space can actually be used.

Could it be that the other path needs to be put into storage.cfg instead, like:
Code:
/dev/vg0/heavyload1-s
Could it also be that I am hitting the inactive-directory issue from an old bug, as in the ancient thread?
Because my LVM volume, formatted with ext4, never becomes active, and I cannot find any way to make it active...

And a second question around this LVM:
Is there no way to upload zip or exe files for use in the Windows VMs other than wget or similar? And into which folder or hard drive should they then go? ... probably the one used by the VMs, right? But since that is a kind of virtual space, I tried to get in there and have no idea of the path; all paths I've tried are "not a directory".

Am grateful for any help.
Andre
 
This seems odd:
/dev/vg0/heavyload1 /dev/vg0/heavyload1-s ext4 defaults 0 2
usually a mountpoint is not inside /dev (for pve it's customary to put them in '/mnt/pve/<storagename>')...
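For instance (just a sketch, assuming the LV actually holds a plain filesystem and you keep the name 'heavyload1'), the usual pattern would be:
Code:
mkdir -p /mnt/pve/heavyload1
mount /dev/vg0/heavyload1 /mnt/pve/heavyload1
with the 'path' of the directory storage then pointing at /mnt/pve/heavyload1 instead of the /dev node.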

the question is what /dev/vg0/heavyload1 is - did you format it as ext4? is it an lvm-thinpool or a regular lvm?

please post the output of:
Code:
pvs -a
vgs -a
lvs -a
mount

in code tags
 
Hello,
thanks for joining here and for the help.

I was already wondering whether there are common principles written down anywhere about what is usual for mountpoints in general, and especially in Proxmox - also why that is, and what would not work or should not be done - but you are the first one to spell it out clearly for me. Thank you very much.
usually a mountpoint is not inside /dev (for pve it's customary to put them in '/mnt/pve/<storagename>')...
What can happen if it is put inside /dev (other than not being considered)?
I ask because I already had a whole system down because of wrong lines in fstab...
the question is what /dev/vg0/heavyload is - did you format it as ext4? is it a lvm-thinpool - a regular lvm?
I certainly got confused along the way when I started using LVM ... it was my first time using it.
So, I think it is an LV set up as a directory inside the only volume group.
I am also new to concepts like COW and similar principles; a few weeks ago I thought about setting up btrfs for my home system and backed off because I could not learn to work with it right now.

Here the output starting with pvs -a:
Bash:
root@ghost0 ~ # pvs -a
  PV                  VG  Fmt  Attr PSize    PFree
  /dev/nvme0n1                 ---        0      0
  /dev/nvme0n1p1               ---        0      0
  /dev/nvme0n1p2      vg0 lvm2 a--   476.43g     0
  /dev/nvme1n1        vg0 lvm2 a--  <476.94g 15.74g
  /dev/vg0/heavyload1          ---        0      0
  /dev/vg0/home                ---        0      0
  /dev/vg0/root                ---        0      0
  /dev/vg0/swap                ---        0      0

then vgs -a:
Bash:
root@ghost0 ~ # vgs -a
  VG  #PV #LV #SN Attr   VSize   VFree
  vg0   2   7   0 wz--n- 953.37g 15.74g
and also lvs -a:
Bash:
root@ghost0 ~ # lvs -a
  LV                            VG  Attr       LSize   Pool   Origin        Data%  Meta%  Move Log Cpy%Sync Convert
  heavyload1                    vg0 -wi-ao---- 350.00g
  home                          vg0 -wi-ao---- 455.43g
  [lvol0_pmspare]               vg0 ewi------- 100.00m
  root                          vg0 -wi-ao----  26.00g
  snap_vm-100-disk-0_First_Mini vg0 Vri---tz-k  45.00g vmdata vm-100-disk-0
  swap                          vg0 -wi-ao----   6.00g
  vm-100-disk-0                 vg0 Vwi-aotz--  45.00g vmdata               20.39
  vmdata                        vg0 twi-aotz-- 100.00g                      9.44   16.53
  [vmdata_tdata]                vg0 Twi-ao---- 100.00g
  [vmdata_tmeta]                vg0 ewi-ao---- 100.00m
and finally mount:
Bash:
root@ghost0 ~ # mount
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
proc on /proc type proc (rw,relatime)
udev on /dev type devtmpfs (rw,nosuid,relatime,size=32794292k,nr_inodes=8198573,mode=755)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)
tmpfs on /run type tmpfs (rw,nosuid,noexec,relatime,size=6563828k,mode=755)
/dev/mapper/vg0-root on / type ext3 (rw,relatime)
securityfs on /sys/kernel/security type securityfs (rw,nosuid,nodev,noexec,relatime)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
tmpfs on /run/lock type tmpfs (rw,nosuid,nodev,noexec,relatime,size=5120k)
tmpfs on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,mode=755)
cgroup2 on /sys/fs/cgroup/unified type cgroup2 (rw,nosuid,nodev,noexec,relatime)
cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,xattr,name=systemd)
pstore on /sys/fs/pstore type pstore (rw,nosuid,nodev,noexec,relatime)
none on /sys/fs/bpf type bpf (rw,nosuid,nodev,noexec,relatime,mode=700)
cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,cpu,cpuacct)
cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,cpuset)
cgroup on /sys/fs/cgroup/net_cls,net_prio type cgroup (rw,nosuid,nodev,noexec,relatime,net_cls,net_prio)
cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,freezer)
cgroup on /sys/fs/cgroup/perf_event type cgroup (rw,nosuid,nodev,noexec,relatime,perf_event)
cgroup on /sys/fs/cgroup/hugetlb type cgroup (rw,nosuid,nodev,noexec,relatime,hugetlb)
cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,blkio)
cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,devices)
cgroup on /sys/fs/cgroup/rdma type cgroup (rw,nosuid,nodev,noexec,relatime,rdma)
cgroup on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,memory)
cgroup on /sys/fs/cgroup/pids type cgroup (rw,nosuid,nodev,noexec,relatime,pids)
systemd-1 on /proc/sys/fs/binfmt_misc type autofs (rw,relatime,fd=25,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=19883)
hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime,pagesize=2M)
mqueue on /dev/mqueue type mqueue (rw,relatime)
debugfs on /sys/kernel/debug type debugfs (rw,relatime)
sunrpc on /run/rpc_pipefs type rpc_pipefs (rw,relatime)
configfs on /sys/kernel/config type configfs (rw,relatime)
fusectl on /sys/fs/fuse/connections type fusectl (rw,relatime)
/dev/nvme0n1p1 on /boot type ext3 (rw,relatime)
/dev/mapper/vg0-home on /home type xfs (rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota)
/dev/mapper/vg0-heavyload1 on /dev/vg0/heavyload1-s type ext4 (rw,relatime)
lxcfs on /var/lib/lxcfs type fuse.lxcfs (rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other)
/dev/fuse on /etc/pve type fuse (rw,nosuid,nodev,relatime,user_id=0,group_id=0,default_permissions,allow_other)
tracefs on /sys/kernel/debug/tracing type tracefs (rw,relatime)
binfmt_misc on /proc/sys/fs/binfmt_misc type binfmt_misc (rw,relatime)
tmpfs on /run/user/1001 type tmpfs (rw,nosuid,nodev,relatime,size=6563824k,mode=700,uid=1001,gid=1001)

The PVE status data shows that it (local2) is being processed:
Bash:
root@ghost0 ~ # cat /etc/pve/.rrd
pve2-node/ghost0:657092::1598355690:0.00:8:0.00676642990209811:0.000246051996439931:67213582336:6948384768:6442446848:0:27412131840:16690438144:1120531521:885778382
pve2.3-vm/100:381955:test1:running:0:1598355690:8:0.00565921490786653:4294967296:1567514624:48318382080:0:274:4059914:4763369984:3212069376
pve2-storage/ghost0/local-lvm:1598355690:107374182400:10136122818
pve2-storage/ghost0/local:1598355690:27412131840:16690442240
pve2-storage/ghost0/local2:1598355690:368837799936:71360512
pve2-storage/ghost0/storageprox:1598355690:27412131840:16690442240
 
What can happen if it is put inside /dev (other than not being considered)?
It can work - it's just rather odd and would be confusing to me. Additionally, '/dev' is a virtual filesystem on modern Linux distributions which gets freshly created on every boot - meaning, at the very least, that you would need to make sure the mountpoint exists as a directory.
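You can see that on the host itself, e.g.:
Code:
findmnt /dev
# TARGET  SOURCE  FSTYPE    OPTIONS
# /dev    udev    devtmpfs  rw,nosuid,relatime,...
(your mount output above also lists 'udev on /dev type devtmpfs ...')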


root@ghost0 ~ # lvs -a
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
heavyload1 vg0 -wi-ao---- 350.00g

seems heavyload1 is an LV in the volume group vg0

try mounting it on a temporary mount-point:
Code:
mkdir /mnt/pve/tmp
mount /dev/vg0/heavyload1 /mnt/pve/tmp
dmesg
umount /mnt/pve/tmp

with dmesg you should get a hint at which filesystem is on it (if it is a plain filesystem)
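If you want to check without mounting, blkid (or lsblk -f) usually reports the filesystem signature directly:
Code:
blkid /dev/vg0/heavyload1
lsblk -f /dev/vg0/heavyload1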

with that knowledge you can edit '/etc/fstab' or create a systemd mount unit for the filesystem

I hope this helps!
 
It can work - it's just rather odd and would be confusing to me. Additionally, '/dev' is a virtual filesystem on modern Linux distributions which gets freshly created on every boot - meaning, at the very least, that you would need to make sure the mountpoint exists as a directory.
Thanks a lot for this info.
It...

with that knowledge you can edit '/etc/fstab' or create a systemd mount unit for the filesystem
Code:
mkdir: cannot create directory ‘/mnt/pve/tmp’: No such file or directory
I ran the command as root (member of the sudoers group).

Am trying on.

I already started playing around with systemd on my VirtualBox machine, because these concepts of deeper networking and hard drive management recently came into my view. I never had to use much of it before.
Sorry

Oh yes, this probably meant that /mnt/pve didn't exist. I remember now, so I create
Code:
mkdir /mnt/pve
first
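(Or, I guess, mkdir -p would have created the missing parent directories in one go:)
Code:
mkdir -p /mnt/pve/tmp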
 
So, this wasn't very informative, which is why I created a report with the Diagnostic Report Tool; and because xdg-utils is needed for this, I installed it via
Code:
apt-get update && apt-get install xdg-utils
Shortly after, it gave me the following error:
Code:
root@ghost0 ~/progs/diagnostic_report # make install
install -d /usr/local/bin
install -d /usr/local/share/adi_diagnostic_report/
install ./adi_diagnostic_report /usr/local/bin/
install ./adi_diagnostic_report.glade /usr/local/share/adi_diagnostic_report/
xdg-desktop-menu install adi-diagnostic-report.desktop
xdg-desktop-menu: No writable system menu directory found.
make: *** [Makefile:18: install] Error 3
I looked the error up via Google and this was the result:
Solution for Bug in xdg-utils - and this worked.
I am not attaching the report itself, which was created by the tool; the tool then displayed this on the command prompt:
Code:
root@ghost0 ~/progs/diagnostic_report # adi_diagnostic_report --enable network --disable dmesg,fru
Successfully created report at "diagnostic_report.tar.bz2".
But @Stoiko Ivanov, I will send you a link in a private message; I hope that works.


Although, I have to say, the mount at least gives this:
Code:
/dev/mapper/vg0-heavyload1 on /mnt/pve/tmp type ext4 (rw,relatime)

Are there any other outputs you would need? If so, I could post them here.
 
This seems odd:

usually a mountpoint is not inside /dev (for pve it's customary to put them in '/mnt/pve/<storagename>')...

the question is what /dev/vg0/heavyload1 is - did you format it as ext4? is it an lvm-thinpool or a regular lvm?
OK, so I would then first need to create a folder named heavyload1 under /mnt/pve
Code:
/mnt/pve/heavyload1
for a mount entry in fstab like the following?

Code:
/dev/mapper/vg0-heavyload1  /mnt/pve/heavyload1 ext4 defaults 0 2
because it seems to be a "plain filesystem", right?
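So, if I understand correctly, the whole sequence would be roughly this (not applied yet, just my sketch):
Code:
mkdir -p /mnt/pve/heavyload1
# add to /etc/fstab:
#   /dev/mapper/vg0-heavyload1  /mnt/pve/heavyload1  ext4  defaults  0  2
mount -a
and then in /etc/pve/storage.cfg I would point local2 at the new mountpoint instead of the /dev path:
Code:
dir: local2
        path /mnt/pve/heavyload1
        content rootdir,images,snippets,iso,backup,vztmpl
        maxfiles 1
Is that right?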
 
/dev/mapper/vg0-heavyload1 on /mnt/pve/tmp type ext4 (rw,relatime)
seems like an ext4 filesystem :)
/dev/mapper/vg0-heavyload1 /mnt/pve/heavyload1 ext4 defaults 0 2
should also work - alternatively you can create a mount-unit (see `man systemd.mount`) in /etc/systemd/system
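A minimal mount unit for this case could look roughly like this (untested sketch; note that the unit file name has to match the mountpoint path):
Code:
# /etc/systemd/system/mnt-pve-heavyload1.mount
[Unit]
Description=heavyload1 LV for the PVE directory storage local2

[Mount]
What=/dev/vg0/heavyload1
Where=/mnt/pve/heavyload1
Type=ext4
Options=defaults

[Install]
WantedBy=multi-user.target
and enable it with `systemctl enable --now mnt-pve-heavyload1.mount`.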

I hope this helps!
 
seems like an ext4 filesystem :)

should also work - alternatively you can create a mount-unit (see `man systemd.mount`) in /etc/systemd/system

I hope this helps!
Hi,
I have done so, and... it worked like a charm!
EDIT: This is the output of
Bash:
lsblk
NAME                       MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
nvme0n1                    259:0    0   477G  0 disk
├─nvme0n1p1                259:2    0   512M  0 part /boot
└─nvme0n1p2                259:3    0 476.4G  0 part
  ├─vg0-root               253:0    0    26G  0 lvm  /
  ├─vg0-swap               253:1    0     6G  0 lvm  [SWAP]
  └─vg0-home               253:2    0 455.4G  0 lvm  /home
nvme1n1                    259:1    0   477G  0 disk
├─vg0-root                 253:0    0    26G  0 lvm  /
├─vg0-heavyload1           253:3    0   350G  0 lvm  /mnt/pve/heavyload1
├─vg0-vmdata_tmeta         253:4    0   100M  0 lvm
│ └─vg0-vmdata-tpool       253:6    0   100G  0 lvm
│   ├─vg0-vmdata           253:7    0   100G  0 lvm
│   └─vg0-vm--100--disk--0 253:8    0    45G  0 lvm
└─vg0-vmdata_tdata         253:5    0   100G  0 lvm
  └─vg0-vmdata-tpool       253:6    0   100G  0 lvm
    ├─vg0-vmdata           253:7    0   100G  0 lvm
    └─vg0-vm--100--disk--0 253:8    0    45G  0 lvm
And does this structure make sense overall?
There are only two SSDs with 512 GB each, but I am binding in the storage servers now...
Please also take a look at both of these outputs:

Bash:
root@ghost0 /etc # vgs -a
  VG  #PV #LV #SN Attr   VSize   VFree
  vg0   2   7   0 wz--n- 953.37g 15.74g
root@ghost0 /etc # lvs -a
  LV                            VG  Attr       LSize   Pool   Origin        Data%  Meta%  Move Log Cpy%Sync Convert
  heavyload1                    vg0 -wi-ao---- 350.00g
  home                          vg0 -wi-ao---- 455.43g
  [lvol0_pmspare]               vg0 ewi------- 100.00m
  root                          vg0 -wi-ao----  26.00g
  snap_vm-100-disk-0_First_Mini vg0 Vri---tz-k  45.00g vmdata vm-100-disk-0
  swap                          vg0 -wi-ao----   6.00g
  vm-100-disk-0                 vg0 Vwi-aotz--  45.00g vmdata               20.14
  vmdata                        vg0 twi-aotz-- 100.00g                      10.66  17.34
  [vmdata_tdata]                vg0 Twi-ao---- 100.00g
  [vmdata_tmeta]                vg0 ewi-ao---- 100.00m
 
Another short question about this LVM thing. As it seems to be purely virtual (to me, in the sense of directories that can be visited through a terminal), I would not know where to upload something that can be used from all the VMs in there.
So, which directory can I use to wget .exe, .pdf and .zip files and the like into the environment, so that the VMs can access them later as well?
Later, inside the VMs, I could use any cloud hoster, but I would like certain files to be available right from the start of a newly set up VM.
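Something like the following is what I have in mind (just a guess on my side, assuming the storage stays mounted at /mnt/pve/heavyload1, an 'uploads' folder I would create there myself, and that PVE has created the template/iso subfolder because local2 has 'iso' in its content list):
Bash:
# copy the files from my workstation onto the host, into the directory storage
scp tools.zip setup.exe root@ghost0:/mnt/pve/heavyload1/uploads/

# pack them into an ISO, so any VM can attach it as a CD-ROM drive
genisoimage -J -r -o /mnt/pve/heavyload1/template/iso/tools.iso /mnt/pve/heavyload1/uploads/
Would that be a sensible way to do it?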
 
