[SOLVED] Datastore on LVM thin volume (works OK)

wbk

Hi all,

TL;DR:
I created a thin volume and a mount point, but forgot to mount it before creating a datastore. The result was not what I expected. Not finding LVM thin explicitly in the PBS docs, I asked whether a datastore on thinly provisioned LVM would work. Conclusion: yes it does, just mount it before creating the datastore.
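For anyone landing here via search, the working order of operations is roughly the one below (a sketch using the device, VG and mountpoint names from my setup; the mkdir/mount steps are the ones I originally skipped):

# lvcreate -V 2.5T -T mt_alledingen/mt_alledingen_pool -n mt_pbs
# mkfs.ext4 -L mt_pbs_thin /dev/mapper/mt_alledingen-mt_pbs
# mkdir -p /home/data/backup/mt_pbs
# echo '/dev/mapper/mt_alledingen-mt_pbs /home/data/backup/mt_pbs ext4 defaults,noatime 0 0' >> /etc/fstab
# mount /home/data/backup/mt_pbs                # mount via the new fstab entry BEFORE creating the datastore
# proxmox-backup-manager datastore create mt_pbs_thin /home/data/backup/mt_pbs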

---------------------------------------------------------------------
Original message

Is datastore on a thin volume supported? Creating the datastore runs well enough, but the result is not what I expected.

These are the steps I took to create the thin volume and add a datastore on it:
# lvcreate -V 2.5T -T mt_alledingen/mt_alledingen_pool -n mt_pbs
  Logical volume "mt_pbs" created.
# mkfs.ext4 -L mt_pbs_thin /dev/mapper/mt_alledingen-mt_pbs
# echo '/dev/mapper/mt_alledingen-mt_pbs /home/data/backup/mt_pbs ext4 defaults,noatime 0 0' >> /etc/fstab
# proxmox-backup-manager datastore create mt_pbs_thin /home/data/backup/mt_pbs

It took a few minutes to initialize the chunk store, and then the datastore showed up in the GUI:
[screenshot: datastore visible in the PBS GUI]


Querying via CLI, I got:


# proxmox-backup-manager datastore list
┌─────────────┬──────────────────────────┬─────────┐
│ name        │ path                     │ comment │
╞═════════════╪══════════════════════════╪═════════╡
│ mt_pbs_thin │ /home/data/backup/mt_pbs │         │
└─────────────┴──────────────────────────┴─────────┘
# proxmox-backup-manager disk list
┌──────┬────────────┬─────┬───────────┬───────────────┬───────────────────────────┬─────────┬─────────┐
│ name │ used       │ gpt │ disk-type │ size          │ model                     │ wearout │ status  │
╞══════╪════════════╪═════╪═══════════╪═══════════════╪═══════════════════════════╪═════════╪═════════╡
│ sda  │ lvm        │ 1   │ hdd       │ 3000592982016 │ Hitachi_HDS5C3030BLE630   │ -       │ passed  │
├──────┼────────────┼─────┼───────────┼───────────────┼───────────────────────────┼─────────┼─────────┤
│ sdb  │ lvm        │ 0   │ ssd       │ 500107862016  │ Samsung_SSD_860_EVO_500GB │ 1.00 %  │ passed  │
├──────┼────────────┼─────┼───────────┼───────────────┼───────────────────────────┼─────────┼─────────┤
│ sdc  │ lvm        │ 1   │ hdd       │ 2000398934016 │ WDC_WD20EFRX-68AX9N0      │ -       │ passed  │
├──────┼────────────┼─────┼───────────┼───────────────┼───────────────────────────┼─────────┼─────────┤
│ sdd  │ lvm        │ 1   │ hdd       │ 2000398934016 │ WDC_WD20EFRX-68AX9N0      │ -       │ passed  │
├──────┼────────────┼─────┼───────────┼───────────────┼───────────────────────────┼─────────┼─────────┤
│ sde  │ lvm        │ 1   │ hdd       │ 31914983424   │ Storage_Device            │ -       │ unknown │
├──────┼────────────┼─────┼───────────┼───────────────┼───────────────────────────┼─────────┼─────────┤
│ sdf  │ lvm        │ 1   │ hdd       │ 32105299968   │ Internal_SD-CARD          │ -       │ unknown │
├──────┼────────────┼─────┼───────────┼───────────────┼───────────────────────────┼─────────┼─────────┤
│ sdg  │ partitions │ 0   │ hdd       │ 268435456     │ LUN_01_Media_0            │ -       │ unknown │
├──────┼────────────┼─────┼───────────┼───────────────┼───────────────────────────┼─────────┼─────────┤
│ sdh  │ lvm        │ 1   │ ssd       │ 500107862016  │ Samsung_SSD_860_EVO_500GB │ 0.00 %  │ passed  │
└──────┴────────────┴─────┴───────────┴───────────────┴───────────────────────────┴─────────┴─────────┘
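As a side note, the datastore definition itself is just a small config entry; if I am not mistaken it ends up in /etc/proxmox-backup/datastore.cfg and for this setup should look roughly like:

# cat /etc/proxmox-backup/datastore.cfg
datastore: mt_pbs_thin
        path /home/data/backup/mt_pbs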

The backing thin pool looks like this:

# lvs mt_alledingen
  LV                 VG            Attr       LSize Pool               Origin Data%  Meta%  Move Log Cpy%Sync Convert
  mt_alledingen_pool mt_alledingen twi-aot--- 5.07t                           0.07   8.36
  mt_pbs             mt_alledingen Vwi-a-t--- 2.50t mt_alledingen_pool        0.14
  vm-102-disk-0      mt_alledingen Vwi-aot--- 8.00g mt_alledingen_pool        2.98

The line for `mt_pbs` shows 2.50t, which is as expected.

Is it correct to assume that the chunk store takes ~6GB for the whole 2.5TB, with the reported stats in PBS growing with usage?
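Related to that question, my understanding is that `datastore create` pre-creates the 65536 (0000 through ffff) subdirectories under `.chunks`, so the initial thin pool usage is basically ext4 metadata plus those empty directories, and the datastore only grows as chunks are written. That can be checked with standard tools (paths from my setup):

# find /home/data/backup/mt_pbs/.chunks -mindepth 1 -maxdepth 1 -type d | wc -l   # should report 65536 pre-created dirs
# du -sh /home/data/backup/mt_pbs/.chunks                                         # space used before any backups land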

To add insult to injury, the `lvs` output above gives it away: this server runs both PBS and PVE, and they share storage in this thin pool. That is probably content for another thread, because it also does not really behave the way I'd expected. In my defence: it is all on the one server, so I figured 'shared storage' would not be an issue, and the reason for sharing the thin pool is that I can use a single dm-cache on SSD for the (spinning) thin pool LV (that is the idea, anyway).

Thank you for reading!
 
Hey, that's peculiar..

I went ahead, added the datastore as storage in PVE, and created a backup.

The backup is still running, but now the size of the datastore is displayed correctly on the dashboard:

[screenshot: dashboard showing the datastore size]


After the backup completed, the 'used' column still reads <30 KB.

Task output on the PVE side indicates things did not run smoothly...
INFO: 100% (13.0 GiB of 13.0 GiB) in 5m 12s, read: 2.9 MiB/s, write: 0 B/s
INFO: backup is sparse: 8.48 GiB (65%) total zero data
INFO: backup was done incrementally, reused 8.48 GiB (65%)
INFO: transferred 13.00 GiB in 1366 seconds (9.7 MiB/s)
INFO: adding notes to backup
WARN: unable to add notes - proxmox-backup-client failed: Error: unable to update manifest blob - unable to load blob '"/home/data/backup/mt_pbs/vm/100/2023-10-25T19:56:32Z/index.json.blob"' - No such file or directory (os error 2)
INFO: Finished Backup of VM 100 (00:22:46)
INFO: Backup finished at 2023-10-25 22:19:18
WARN: uploading backup task log failed: Error: unable to read "/home/data/backup/mt_pbs/vm/100/owner" - No such file or directory (os error 2)
INFO: Backup job finished successfully
TASK WARNINGS: 2

... while the same task on the PBS side does not give any warning:
2023-10-25T21:56:32+02:00: starting new backup on datastore 'mt_pbs_thin' from ::ffff:172.26.2.50: "vm/100/2023-10-25T19:56:32Z"
2023-10-25T21:56:32+02:00: GET /previous: 400 Bad Request: no valid previous backup
2023-10-25T21:56:32+02:00: created new fixed index 1 ("vm/100/2023-10-25T19:56:32Z/drive-scsi0.img.fidx")
2023-10-25T21:56:32+02:00: add blob "/home/data/backup/mt_pbs/vm/100/2023-10-25T19:56:32Z/qemu-server.conf.blob" (354 bytes, comp: 354)
2023-10-25T21:56:32+02:00: add blob "/home/data/backup/mt_pbs/vm/100/2023-10-25T19:56:32Z/fw.conf.blob" (34 bytes, comp: 34)
2023-10-25T22:02:17+02:00: Upload statistics for 'drive-scsi0.img.fidx'
2023-10-25T22:02:17+02:00: UUID: 7bca5776d3024e6fafb5648ce79e115f
2023-10-25T22:02:17+02:00: Checksum: 39ac5bced7c7b708d4abdfc6d8786a4cad6570b494de45f09e262f3380173b2f
2023-10-25T22:02:17+02:00: Size: 13958643712
2023-10-25T22:02:17+02:00: Chunk count: 3328
2023-10-25T22:02:17+02:00: Upload size: 4857004032 (34%)
2023-10-25T22:02:17+02:00: Duplicates: 2170+0 (65%)
2023-10-25T22:02:17+02:00: Compression: 46%
2023-10-25T22:02:17+02:00: successfully closed fixed index 1
2023-10-25T22:02:17+02:00: add blob "/home/data/backup/mt_pbs/vm/100/2023-10-25T19:56:32Z/index.json.blob" (376 bytes, comp: 376)
2023-10-25T22:19:18+02:00: successfully finished backup
2023-10-25T22:19:18+02:00: backup finished successfully
2023-10-25T22:19:18+02:00: TASK OK

There is no content, though, when I look in the datastore:
[screenshot: empty content view of the datastore]



# proxmox-backup-manager datastore list
┌─────────────┬──────────────────────────┬─────────┐
│ name        │ path                     │ comment │
╞═════════════╪══════════════════════════╪═════════╡
│ mt_pbs_thin │ /home/data/backup/mt_pbs │         │
└─────────────┴──────────────────────────┴─────────┘
# df -h /home/data/backup/mt_pbs/
Filesystem                        Size  Used Avail Use% Mounted on
/dev/mapper/mt_alledingen-mt_pbs  2.5T   28K  2.4T   1% /home/data/backup/mt_pbs
# lvs mt_alledingen
  LV                 VG            Attr       LSize Pool               Origin Data%  Meta%  Move Log Cpy%Sync Convert
  mt_alledingen_pool mt_alledingen twi-aot--- 5.07t                           0.07   8.36
  mt_pbs             mt_alledingen Vwi-aot--- 2.50t mt_alledingen_pool        0.14
  vm-102-disk-0      mt_alledingen Vwi-aot--- 8.00g mt_alledingen_pool        3.00

When I check the actual directory at the mount point, it seems the chunk store has disappeared (even though the mount point is still correct, per the `df` output):
# pwd
/home/data/backup/mt_pbs
# ls -al
total 24
drwxr-xr-x 3 root root  4096 Oct 25 02:24 .
drwxr-xr-x 3 root root  4096 Oct 25 02:28 ..
drwx------ 2 root root 16384 Oct 25 02:24 lost+found
# df -h .
Filesystem                        Size  Used Avail Use% Mounted on
/dev/mapper/mt_alledingen-mt_pbs  2.5T   28K  2.4T   1% /home/data/backup/mt_pbs

I do notice that the directory is owned by root, while I connected to PBS with a PBS user. The PBS user has access to the datastore as DatastoreAdmin:
[screenshot: datastore permissions showing the user as DatastoreAdmin]
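A note on that ownership observation: the DatastoreAdmin role is a PBS ACL and is independent of the on-disk permissions; the datastore directory itself should belong to the `backup` system user that the PBS daemons run as. A quick way to check (and, only if the directory really is the datastore root, fix) it:

# stat -c '%U:%G %n' /home/data/backup/mt_pbs   # a healthy datastore root shows backup:backup
# chown backup:backup /home/data/backup/mt_pbs  # only if ownership is genuinely wrong (as it turns out below, the real issue here was the mount, not ownership)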

I will run the backup once more to see whether it fails again.

* edit *
That was short; it ended in an error within a minute:
INFO: creating Proxmox Backup Server archive 'vm/100/2023-10-25T20:40:04Z'
ERROR: VM 100 qmp command 'backup' failed - backup connect failed: command error: Permission denied (os error 13)
INFO: aborting backup job
INFO: resuming VM again
ERROR: Backup of VM 100 failed - VM 100 qmp command 'backup' failed - backup connect failed: command error: Permission denied (os error 13)
INFO: Failed at 2023-10-25 22:40:04
INFO: Backup job finished with errors
TASK ERROR: job errors


What can I do to troubleshoot this?
 

OK, I found the cause of the strange behaviour: the thin volume was not mounted when I created the datastore, nor when I started the backup :p

While keeping an eye on `dstat` and dm-cache usage, I mounted the volume at a moment when I didn't see any IO, not realizing that PBS was already using that same mountpoint as the datastore path.

When I unmount the intended datastore volume, half the backup is back (on the datastore directory that was created on the root partition):

# pwd
/home/data/backup/mt_pbs
# ls -al
total 24
drwxr-xr-x 3 root root  4096 Oct 25 02:24 .
drwxr-xr-x 3 root root  4096 Oct 25 02:28 ..
drwx------ 2 root root 16384 Oct 25 02:24 lost+found
# umount /home/data/backup/mt_pbs
# ls -al
total 1076
drwxr-xr-x 4 backup backup    4096 Oct 25 21:56 .
drwxr-xr-x 3 root   root      4096 Oct 25 02:28 ..
drwxr-x--- 1 backup backup 1085440 Oct 25 02:35 .chunks
-rw-r--r-- 1 backup backup       0 Oct 25 02:28 .lock
drwxr-xr-x 3 backup backup    4096 Oct 25 21:56 vm
# df -h .
Filesystem                            Size  Used Avail Use% Mounted on
/dev/mapper/mt_usbsdraid-mt_prox_sys   18G  7.5G  9.3G  45% /
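For anyone hitting the same thing: mounting over a non-empty directory simply hides whatever was already written there, so it pays to verify the mount before creating a datastore or starting a backup. Two standard checks (paths as used above):

# mountpoint /home/data/backup/mt_pbs        # reports whether something is actually mounted on the path
# findmnt --target /home/data/backup/mt_pbs  # shows which device backs the path (should be the thin LV, not the root fs)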

I removed the datastore with:
# proxmox-backup-manager datastore remove mt_pbs_thin
Removing datastore from config...
TASK OK

Now, trying to keep this thread on the rails: should a datastore on a thin LV work? Will/should it claim the full declared size of the thin LV on initialization?
 
Now, trying to keep this thread on the rails: should a datastore on a thin LV work? Will/should it claim the full declared size of the thin LV on initialization?
a datastore just needs a working filesystem, regardless of where that filesystem lives. so yes, it should work on LVM thin, but you have to make sure that it is mounted before PBS starts up / creates the datastore (as you found out)
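For completeness, one way to guarantee the mount is in place before the PBS services come up is to tie it to them in /etc/fstab via the standard x-systemd.* mount options (a sketch; I am assuming the usual PBS service names proxmox-backup.service and proxmox-backup-proxy.service):

# /etc/fstab: make the datastore mount a hard requirement that is ordered before the PBS backend service
/dev/mapper/mt_alledingen-mt_pbs /home/data/backup/mt_pbs ext4 defaults,noatime,x-systemd.before=proxmox-backup.service,x-systemd.required-by=proxmox-backup.service 0 0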
 
