Storage question, backup problems: is '/mnt' part of 'local' or 'local-lvm' storage?

mizifih

Am I doing it wrong because /mnt is part of local and not local-lvm?

Code:
INFO: Starting Backup of VM 102 (qemu)
INFO: Backup started at 2024-03-14 11:07:18
INFO: status = running
INFO: VM Name: Ubuntu
INFO: include disk 'scsi0' 'local-lvm:vm-102-disk-0' 128G
INFO: backup mode: snapshot
INFO: ionice priority: 7
INFO: creating vzdump archive '/mnt/onedrive/dump/vzdump-qemu-102-2024_03_14-11_07_18.vma.zst'
INFO: skipping guest-agent 'fs-freeze', agent configured but not running?
INFO: started backup task '4d72a582-2f3b-4376-a091-13d5226f2484'
INFO: resuming VM again
INFO:   1% (2.4 GiB of 128.0 GiB) in 3s, read: 826.5 MiB/s, write: 263.2 MiB/s
INFO:   3% (3.9 GiB of 128.0 GiB) in 6s, read: 506.7 MiB/s, write: 200.4 MiB/s
INFO:   4% (5.2 GiB of 128.0 GiB) in 9s, read: 435.8 MiB/s, write: 225.4 MiB/s
INFO:   5% (6.5 GiB of 128.0 GiB) in 14s, read: 263.9 MiB/s, write: 239.7 MiB/s
INFO:   6% (7.8 GiB of 128.0 GiB) in 18s, read: 348.9 MiB/s, write: 336.0 MiB/s
INFO:   7% (9.2 GiB of 128.0 GiB) in 24s, read: 236.8 MiB/s, write: 232.3 MiB/s
INFO:   8% (10.4 GiB of 128.0 GiB) in 28s, read: 298.2 MiB/s, write: 280.2 MiB/s
INFO:   9% (11.7 GiB of 128.0 GiB) in 32s, read: 333.5 MiB/s, write: 333.5 MiB/s
INFO:  10% (12.9 GiB of 128.0 GiB) in 35s, read: 419.3 MiB/s, write: 407.3 MiB/s
INFO:  11% (14.3 GiB of 128.0 GiB) in 38s, read: 462.8 MiB/s, write: 450.3 MiB/s
INFO:  12% (15.6 GiB of 128.0 GiB) in 41s, read: 455.5 MiB/s, write: 455.5 MiB/s
INFO:  13% (16.9 GiB of 128.0 GiB) in 46s, read: 262.4 MiB/s, write: 208.2 MiB/s
INFO:  14% (18.1 GiB of 128.0 GiB) in 49s, read: 410.8 MiB/s, write: 196.4 MiB/s
INFO:  15% (19.3 GiB of 128.0 GiB) in 53s, read: 309.0 MiB/s, write: 305.5 MiB/s
INFO:  16% (20.7 GiB of 128.0 GiB) in 57s, read: 367.0 MiB/s, write: 357.5 MiB/s
INFO:  17% (21.9 GiB of 128.0 GiB) in 1m, read: 381.8 MiB/s, write: 378.8 MiB/s
INFO:  18% (23.2 GiB of 128.0 GiB) in 1m 4s, read: 346.2 MiB/s, write: 331.7 MiB/s
INFO:  19% (24.4 GiB of 128.0 GiB) in 1m 8s, read: 299.1 MiB/s, write: 291.9 MiB/s
INFO:  20% (25.7 GiB of 128.0 GiB) in 1m 13s, read: 269.0 MiB/s, write: 259.6 MiB/s
INFO:  21% (27.1 GiB of 128.0 GiB) in 1m 18s, read: 296.5 MiB/s, write: 288.4 MiB/s
INFO:  22% (28.5 GiB of 128.0 GiB) in 1m 21s, read: 476.1 MiB/s, write: 465.9 MiB/s
INFO:  23% (29.5 GiB of 128.0 GiB) in 1m 24s, read: 340.8 MiB/s, write: 340.7 MiB/s
INFO:  24% (31.0 GiB of 128.0 GiB) in 1m 28s, read: 367.0 MiB/s, write: 359.8 MiB/s
INFO:  25% (32.3 GiB of 128.0 GiB) in 1m 32s, read: 341.7 MiB/s, write: 334.1 MiB/s
INFO:  26% (33.5 GiB of 128.0 GiB) in 1m 36s, read: 309.9 MiB/s, write: 308.1 MiB/s
INFO:  27% (34.8 GiB of 128.0 GiB) in 1m 41s, read: 270.4 MiB/s, write: 263.9 MiB/s
INFO:  28% (36.1 GiB of 128.0 GiB) in 1m 47s, read: 210.9 MiB/s, write: 210.6 MiB/s
INFO:  29% (37.3 GiB of 128.0 GiB) in 1m 53s, read: 209.6 MiB/s, write: 203.6 MiB/s
INFO:  30% (38.5 GiB of 128.0 GiB) in 1m 59s, read: 208.4 MiB/s, write: 202.7 MiB/s
INFO:  31% (39.7 GiB of 128.0 GiB) in 2m 5s, read: 202.4 MiB/s, write: 201.1 MiB/s
INFO:  32% (41.0 GiB of 128.0 GiB) in 2m 10s, read: 262.4 MiB/s, write: 255.9 MiB/s
INFO:  33% (42.3 GiB of 128.0 GiB) in 2m 17s, read: 199.4 MiB/s, write: 193.7 MiB/s
INFO:  34% (43.6 GiB of 128.0 GiB) in 2m 23s, read: 209.6 MiB/s, write: 206.3 MiB/s
INFO:  35% (44.8 GiB of 128.0 GiB) in 2m 30s, read: 185.9 MiB/s, write: 178.6 MiB/s
INFO:  36% (46.2 GiB of 128.0 GiB) in 2m 37s, read: 194.7 MiB/s, write: 189.0 MiB/s
INFO:  37% (47.4 GiB of 128.0 GiB) in 2m 43s, read: 204.8 MiB/s, write: 203.0 MiB/s
INFO:  38% (48.8 GiB of 128.0 GiB) in 2m 49s, read: 243.5 MiB/s, write: 185.2 MiB/s
INFO:  39% (50.0 GiB of 128.0 GiB) in 2m 53s, read: 302.7 MiB/s, write: 210.5 MiB/s
INFO:  40% (51.7 GiB of 128.0 GiB) in 2m 59s, read: 288.9 MiB/s, write: 255.9 MiB/s
INFO:  41% (52.7 GiB of 128.0 GiB) in 3m 4s, read: 210.9 MiB/s, write: 186.4 MiB/s
INFO:  42% (53.9 GiB of 128.0 GiB) in 3m 10s, read: 210.2 MiB/s, write: 200.8 MiB/s
INFO:  43% (55.2 GiB of 128.0 GiB) in 3m 15s, read: 250.6 MiB/s, write: 217.0 MiB/s
INFO:  44% (56.5 GiB of 128.0 GiB) in 3m 20s, read: 264.9 MiB/s, write: 230.1 MiB/s
INFO:  45% (57.9 GiB of 128.0 GiB) in 3m 26s, read: 252.5 MiB/s, write: 245.2 MiB/s
INFO:  46% (59.1 GiB of 128.0 GiB) in 3m 29s, read: 383.0 MiB/s, write: 331.8 MiB/s
INFO:  47% (60.4 GiB of 128.0 GiB) in 3m 34s, read: 283.4 MiB/s, write: 239.6 MiB/s
zstd: error 70 : Write error : cannot write block : No space left on device
ERROR: vma_queue_write: write error - Broken pipe
INFO: aborting backup job
INFO: resuming VM again
ERROR: Backup of VM 102 failed - vma_queue_write: write error - Broken pipe
INFO: Failed at 2024-03-14 11:11:02
 
Am I doing it wrong because /mnt is part of local and not local-lvm?
Technically, /mnt is not part of local or local-lvm. Rather, local lives on the same filesystem as /mnt (in most cases).

However, your backup is pointed at /mnt/onedrive. Only you know, at this point, what that is.
For example, it could be an external drive mounted there, separate from the root filesystem. Or it could be part of the root filesystem, because you never mounted a disk there.
If it's the latter, then you just filled your root filesystem to the brim, which is not a good thing.
If it's the former, then you ran out of space on your external device, which is also not great.

You can make things clearer by looking at (and providing) the output of:
Code:
lsblk
mount
cat /etc/pve/storage.cfg
pvesm status

good luck


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
Thank you. Ok, so... I'm not a Linux guy. I know you can mount all sorts of crazy stuff, like remote SMB and NFS shares. All I did was:
mkdir /mnt/onedrive

So wherever /mnt was, I believe onedrive is there with it, right? So, if I had to guess, I'd say it's part of /root. Is it?

Here's some output from your suggestions:
lsblk
Code:
NAME                         MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
sda                            8:0    0 111.8G  0 disk
└─sda1                         8:1    0 111.8G  0 part
sdb                            8:16   0 111.8G  0 disk
└─sdb1                         8:17   0 111.8G  0 part
sdc                            8:32   0 238.5G  0 disk
└─sdc1                         8:33   0 238.5G  0 part
nvme0n1                      259:0    0 953.9G  0 disk
├─nvme0n1p1                  259:1    0  1007K  0 part
├─nvme0n1p2                  259:2    0     1G  0 part /boot/efi
└─nvme0n1p3                  259:3    0 952.9G  0 part
  ├─pve-swap                 252:0    0     8G  0 lvm  [SWAP]
  ├─pve-root                 252:1    0    96G  0 lvm  /
  ├─pve-data_tmeta           252:2    0   8.3G  0 lvm
  │ └─pve-data-tpool         252:4    0 816.2G  0 lvm
  │   ├─pve-data             252:5    0 816.2G  1 lvm
  │   ├─pve-vm--101--disk--0 252:6    0     4M  0 lvm
  │   ├─pve-vm--101--disk--1 252:7    0    48G  0 lvm
  │   ├─pve-vm--102--disk--0 252:8    0   128G  0 lvm
  │   ├─pve-vm--103--disk--0 252:9    0   128G  0 lvm
  │   └─pve-vm--103--disk--1 252:10   0     4M  0 lvm
  └─pve-data_tdata           252:3    0 816.2G  0 lvm
    └─pve-data-tpool         252:4    0 816.2G  0 lvm
      ├─pve-data             252:5    0 816.2G  1 lvm
      ├─pve-vm--101--disk--0 252:6    0     4M  0 lvm
      ├─pve-vm--101--disk--1 252:7    0    48G  0 lvm
      ├─pve-vm--102--disk--0 252:8    0   128G  0 lvm
      ├─pve-vm--103--disk--0 252:9    0   128G  0 lvm
      └─pve-vm--103--disk--1 252:10   0     4M  0 lvm

Since /mnt/onedrive wasn't actually mounted, mount didn't return anything about it, I guess. Or maybe I just couldn't spot it in the output.

cat /etc/pve/storage.cfg returned this:
Code:
dir: local
        path /var/lib/vz
        content backup,iso,vztmpl

lvmthin: local-lvm
        thinpool data
        vgname pve
        content rootdir,images

cifs: backup-truenas
        path /mnt/pve/backup-truenas
        server 192.168.1.40
        share Storage
        content images,vztmpl,snippets,iso,rootdir,backup
        prune-backups keep-all=1
        subdir /BACKUP/proxmox
        username mizifih

cifs: vdisks-remote
        path /mnt/pve/vdisks-remote
        server 192.168.1.40
        share Appdata
        content images,iso
        prune-backups keep-all=1
        subdir /proxmox/vdisks
        username mizifih

dir: onedrive-rclone
        path /mnt/onedrive
        content backup
        prune-backups keep-all=1
        shared 0

pvesm status returned all my storages (TIL)
Code:
Name                   Type     Status           Total            Used       Available        %
backup-truenas         cifs     active      4653563264      2171875712      2481687552   46.67%
local                   dir     active        98497780        61483860        31964372   62.42%
local-lvm           lvmthin     active       855855104       166035890       689819213   19.40%
onedrive-rclone         dir     active        98497780        61483860        31964372   62.42%
vdisks-remote          cifs     active      2495601664        13914112      2481687552    0.56%
 
And in case you tried to mount something there but the mount failed, your directory-type storage should have the "is_mountpoint" option set, so that you can't crash your PVE node by accidentally filling up the root filesystem completely.
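For example, with the storage name from the config above (onedrive-rclone), something along these lines should do it; this is a sketch, so check man pvesm on your PVE version:
Code:
# Mark /mnt/onedrive as a required mount point; if nothing is mounted
# there, PVE flags the storage as inactive instead of silently writing
# into the root filesystem.
pvesm set onedrive-rclone --is_mountpoint yes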
 
So, if I had to guess, I'd say it's part of /root. Is it?
The root filesystem is "/", not "/root", and it is stored on the "root" LV of the "pve" VG on your system disk.
Since /mnt/onedrive wasn't actually mounted, mount didn't return anything about it, I guess. Or maybe I just couldn't spot it in the output.
Hard to tell if you don't show us the output.

The output of df -h and lvs is also missing.
 
All I did was:
mkdir /mnt/onedrive
Like you said, you didn't mount anything; all you did was create a folder. Similar to Windows: if you create a c:\disk folder and do not link it elsewhere, your c: drive will fill up when you dump something into c:\disk. Making a directory does not magically make it point somewhere other than where that directory is located.

/mnt does not have any special characteristics on its own. It is just a directory that, by convention, is often used to mount/submount other data.

So wherever /mnt was, I believe onedrive is there with it, right? So, if I had to guess, I'd say it's part of /root. Is it?
No, technically both /mnt and /root are part of /.
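A quick way to verify whether a directory is a real mount point or just part of the parent filesystem (both commands ship with util-linux on a standard PVE node):
Code:
# exits 0 and prints "... is a mountpoint" only if something is mounted there
mountpoint /mnt/onedrive

# shows which filesystem actually backs the path
findmnt --target /mnt/onedrive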


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
Thanks bbgeek17 and Dunuin. I mistakenly wrote /root when I meant /, but I actually knew that, my bad.

Dunuin, I'll post the output of mount at the end of this post.

About /mnt just being a directory, yeah, I get that too. And mounting stuff in sub-directories there doesn't mean the files are actually there; in my noob way of explaining it, I'd say it's a reference to storage somewhere else.

Just so you know (I should have made this clear from the start, and Dunuin will hate me for it), /mnt/onedrive is the path I'm using in rclone to upload my backups to OneDrive. So mount should say something if it's mounting something there, and I can see files in there using ls, so most likely it is mounted. Even though it's a mount point for rclone, I understand that backing up files there will consume local storage, since the files are saved locally first and then rclone uploads them to OneDrive.
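For reference, an rclone mount at that path would typically be created with something like the following (the remote name onedrive: is an assumption, not taken from this thread); if such a mount were active, it would show up in the mount output below as a fuse.rclone entry:
Code:
# FUSE-mount the OneDrive remote at /mnt/onedrive, detached in the background
rclone mount onedrive: /mnt/onedrive --daemon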

Guys, thank you for your patience. I'm sorry I didn't give the whole picture before.

Here's mount output:
Code:
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
proc on /proc type proc (rw,relatime)
udev on /dev type devtmpfs (rw,nosuid,relatime,size=16358864k,nr_inodes=4089716,mode=755,inode64)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)
tmpfs on /run type tmpfs (rw,nosuid,nodev,noexec,relatime,size=3278516k,mode=755,inode64)
/dev/mapper/pve-root on / type ext4 (rw,relatime,errors=remount-ro)
securityfs on /sys/kernel/security type securityfs (rw,nosuid,nodev,noexec,relatime)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev,inode64)
tmpfs on /run/lock type tmpfs (rw,nosuid,nodev,noexec,relatime,size=5120k,inode64)
cgroup2 on /sys/fs/cgroup type cgroup2 (rw,nosuid,nodev,noexec,relatime)
pstore on /sys/fs/pstore type pstore (rw,nosuid,nodev,noexec,relatime)
efivarfs on /sys/firmware/efi/efivars type efivarfs (rw,nosuid,nodev,noexec,relatime)
bpf on /sys/fs/bpf type bpf (rw,nosuid,nodev,noexec,relatime,mode=700)
systemd-1 on /proc/sys/fs/binfmt_misc type autofs (rw,relatime,fd=30,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=27503)
hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime,pagesize=2M)
mqueue on /dev/mqueue type mqueue (rw,nosuid,nodev,noexec,relatime)
debugfs on /sys/kernel/debug type debugfs (rw,nosuid,nodev,noexec,relatime)
tracefs on /sys/kernel/tracing type tracefs (rw,nosuid,nodev,noexec,relatime)
fusectl on /sys/fs/fuse/connections type fusectl (rw,nosuid,nodev,noexec,relatime)
configfs on /sys/kernel/config type configfs (rw,nosuid,nodev,noexec,relatime)
ramfs on /run/credentials/systemd-sysusers.service type ramfs (ro,nosuid,nodev,noexec,relatime,mode=700)
ramfs on /run/credentials/systemd-tmpfiles-setup-dev.service type ramfs (ro,nosuid,nodev,noexec,relatime,mode=700)
/dev/nvme0n1p2 on /boot/efi type vfat (rw,relatime,fmask=0022,dmask=0022,codepage=437,iocharset=iso8859-1,shortname=mixed,errors=remount-ro)
ramfs on /run/credentials/systemd-sysctl.service type ramfs (ro,nosuid,nodev,noexec,relatime,mode=700)
ramfs on /run/credentials/systemd-tmpfiles-setup.service type ramfs (ro,nosuid,nodev,noexec,relatime,mode=700)
binfmt_misc on /proc/sys/fs/binfmt_misc type binfmt_misc (rw,nosuid,nodev,noexec,relatime)
lxcfs on /var/lib/lxcfs type fuse.lxcfs (rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other)
sunrpc on /run/rpc_pipefs type rpc_pipefs (rw,relatime)
/dev/fuse on /etc/pve type fuse (rw,nosuid,nodev,relatime,user_id=0,group_id=0,default_permissions,allow_other)
//192.168.1.40/Storage/BACKUP/proxmox on /mnt/pve/backup-truenas type cifs (rw,relatime,vers=3.1.1,cache=strict,username=mizifih,uid=0,noforceuid,gid=0,noforcegid,addr=192.168.1.40,file_mode=0755,dir_mode=0755,soft,nounix,serverino,mapposix,rsize=4194304,wsize=4194304,bsize=1048576,echo_interval=60,actimeo=1,closetimeo=1)
//192.168.1.40/Appdata/proxmox/vdisks on /mnt/pve/vdisks-remote type cifs (rw,relatime,vers=3.1.1,cache=strict,username=mizifih,uid=0,noforceuid,gid=0,noforcegid,addr=192.168.1.40,file_mode=0755,dir_mode=0755,soft,nounix,serverino,mapposix,rsize=4194304,wsize=4194304,bsize=1048576,echo_interval=60,actimeo=1,closetimeo=1)
tmpfs on /run/user/0 type tmpfs (rw,nosuid,nodev,relatime,size=3278512k,nr_inodes=819628,mode=700,inode64)
 
local dir active 98497780 61483860 31964372 62.42%

onedrive-rclone dir active 98497780 61483860 31964372 62.42%
As you can see, the numbers are identical: a further pointer that both are located on the same physical storage.
If you are doing an async copy of the data to the cloud, storing it locally in the interim, then you need sufficient local space to do so.
You have 32G available, which is clearly not enough to hold a backup of your 128G virtual disk.
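You can confirm this directly on the node:
Code:
# "local" lives under /var/lib/vz, "onedrive-rclone" under /mnt/onedrive;
# both lines reporting the same device and free space means they share
# one filesystem
df -h /var/lib/vz /mnt/onedrive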



Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
Ok, let's see if I'm starting to get it.

pvesm status says local is a directory, and storage.cfg says it has a path: /var/lib/vz.
onedrive-rclone is also a directory according to pvesm status, and storage.cfg says its path is /mnt/onedrive.

Now, returning to my original question: is it correct to say that the path /mnt/onedrive is part of local?

And what about local-lvm?
Code:
lvmthin: local-lvm
        thinpool data
        vgname pve
        content rootdir,images

local-lvm is type lvmthin, some sort of LVM, I guess. I'm not familiar with LVM, but as far as I know it's some kind of elastic volume, and it gave me a headache once when I needed to expand its size on a previous Ubuntu install. But I digress. I notice the line that says vgname pve and the one that says content rootdir,images. Does that have anything to do with the path /mnt/pve? Is that the mount point of this LVM, if it actually is an LVM? If I ls inside /mnt/pve, I only get these: backup-truenas isos-remote remote-truenas vdisks-remote.
 
Now, returning to my original question: is it correct to say that the path /mnt/onedrive is part of local?
As I said before, it's not correct in the strict technical sense. "local" is a label; specifically, it points to /var/lib/vz.
Clearly /mnt/onedrive is not part of /var/lib/vz; they are both ordinary directories on the root filesystem. In a default installation, which is what you have, they are both part of "/".

Yes, LVM is a volume manager. It presents block devices to upper-layer applications, whether that's PVE or an OS-driven filesystem.
An LVM volume can be mounted in /mnt, or anywhere else, IF a filesystem has been placed on it.
In PVE, LVM is used in raw block form. The LVs are passed through to VMs, where the VM then places a filesystem on them.
No, LVM block devices are not mounted in /mnt in order to be passed through.
/mnt/pve is a directory that PVE created and uses for FILE-based storage types, which do need to be mounted before being presented to the upper layers of PVE.
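You can inspect that layer directly with standard LVM tooling (present on any PVE node):
Code:
# the "pve" volume group and its logical volumes; "data" is the thin
# pool, and vm-102-disk-0 etc. are thin volumes inside it
vgs pve
lvs pve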


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
Sorry for the delay in my response, but thank you very much for all your effort explaining this to me and for sticking with me this far. I really appreciate it.

So local-lvm is an LVM volume taking up around 85% of my actual, real, solid, touchable 1TB storage (assuming the total size shown by pvesm status is in kilobytes)?

You said LVM volumes can be mounted virtually anywhere (except, probably, a few system paths where that could cause problems), "IF a filesystem has been placed on it". But that "IF" doesn't apply here, since "PVE LVM is used in raw block form" and, as you said, "the LVs are passed through to VMs, where the VM then places a filesystem on them". So, if a five-year-old said that Proxmox is reserving these blocks specifically to put virtual disks on top of them, would the kid be right?

Are PVE LVMs files, or actually raw sectors and blocks reserved from the physical disk? Because if local-lvm is a file, I probably need to trim it a little; ~850GB is way too much for my VM needs. The 19.40% shown by pvesm status is not going to grow much bigger; I'll probably never go above 40%. My storage is managed by a TrueNAS VM, with an array of spinning rust connected to a SAS PCIe card that I'm passing through.
 
Are PVE LVMs files, or actually raw sectors and blocks reserved from the physical disk? Because if local-lvm is a file, I probably need to trim it a little; ~850GB is way too much for my VM needs.
Those are LVs, i.e. block devices, not files. Think of them as something like sub-sub-sub-partitions (physical disk as a block device -> partition as a block device on top -> LVM with its VG and thin pool as block devices on top -> thin volumes as block devices on top of that).
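And since the pool is thin-provisioned, the ~816G reserved for it only consumes physical space as the VMs actually write data. A quick way to check the real usage:
Code:
# Data% shows how much of the thin pool (and of each thin volume)
# is actually in use
lvs -o lv_name,lv_size,data_percent pve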
 
