WARNING: Sum of all thin volume sizes (1002.00 GiB) exceeds the size of thin pool crucial_ssd/crucial_ssd and the size of whole volume group (931.51 GiB)

verderame

New Member
Jun 1, 2025
Hello everyone,
I am new to backing up servers and I have a problem with Proxmox backing up my Samba LXC.
I have a Raspberry Pi 3 Model B+ running OpenMediaVault; it has a Crucial 500 GB SSD and a 2 TB HDD.
Every time I run a backup in Proxmox it fails at the Samba container. I am assuming it also tries to back up the copies of the shared folders that the other LXCs have attached as mount points.


Here's the error I get when Proxmox fails at Samba:

INFO: create storage snapshot 'vzdump'
WARNING: You have not turned on protection against thin pools running out of space.
WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full.
Logical volume "snap_vm-205-disk-0_vzdump" created.
WARNING: Sum of all thin volume sizes (1002.00 GiB) exceeds the size of thin pool crucial_ssd/crucial_ssd and the size of whole volume group (931.51 GiB).
INFO: creating vzdump archive '/mnt/pve/backup-omv/dump/vzdump-lxc-205-2025_06_01-01_46_05.tar.zst'
 
Hi verderame,

The warning in your task log is not about Samba. Are you familiar with LVM, and especially with thin volumes?

As you have "New Member" as your status and "new to backing up" in your post, I'll elaborate. Thin volumes let you create a storage device of a certain size without allocating the physical storage at the same time.

For example, if you have five containers you could give each of them 200 GB of space, backed by a 500 GB SSD. Every container seemingly has 200 GB available, and if storage requirements grow equally, everything runs fine until they use 100 GB each. At that moment (5 × 100 GB = 500 GB) the SSD is full, even though each of the containers thinks only half of its storage is used.

The next time a container wants to write another GB of data to its 200 GB storage, an error will occur: there is no medium backing the data above 500 GB. Inconsistencies follow.

The setup described in this example is valid. The system warns about the possibility of the virtual space 'overflowing' the physical space available.
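
As a side note, the autoextend warning in the task log refers to a setting in /etc/lvm/lvm.conf. A minimal sketch of what enabling autoextension could look like (the threshold and percent values below are only example numbers, and autoextension only helps if the volume group still has unallocated space):

Code:
# /etc/lvm/lvm.conf -- excerpt, example values
activation {
    # start extending a thin pool once it is 80% full ...
    thin_pool_autoextend_threshold = 80
    # ... and grow it by 20% of its current size each time
    thin_pool_autoextend_percent = 20
}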

With the above as background, and seeing there are only warnings related to LVM but no errors, I think those warnings are a red herring.

I don't use Samba myself, so I may not be able to help with your actual problem.

From your description, it is not clear to me what the situation is. For example, what does PBS (on x64) have to do with your RPi (ARM), and which way is the backup going?
 
Is your Proxmox VE an unsupported ARM64 version? Then there you have it; ask someone else.
 
Hello, thanks for your reply and sorry for not answering sooner. I am familiar with LVM; by the way, these are all LXCs, not VMs. I am not saying this is a Samba problem. What I said is that as soon as I get that warning, always at the start of the Samba CT, the backup process blocks, loops, and does nothing, and I have to kill the processes (it even keeps crashing my Raspberry, I suppose from low voltage). Here is the situation: I have Samba sharing the same three folders with some other LXCs, which have them attached as mount points and so see the same storage. I think the backup problem is the Samba container, since every container before it in the job backs up successfully. I know these log lines are warnings, not errors, but as soon as I get that warning the backup is "compromised" and doesn't go any further.
Here is the catch (I don't think this is the cause, but it may be):
As I said before, those backups are written to my OMV on the Raspberry. As soon as the job reaches the Samba LXC, the Raspberry starts doing weird things such as crashing and literally rebooting the system. I wonder if it could be low voltage, but it has the original power supply, an SSD to boot from via USB, and an HDD which holds the backups.
It would be strange if the problem were just the larger load of the Samba backup, since it is only bigger and has no modified priority or speed settings.

Sorry if I am confused; I have had this problem for a month and I can't get rid of it.
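
If the shared folders are attached to the containers as bind mount points, it may also be worth checking whether any of them carry the backup=1 flag, since vzdump then includes their contents in the archive. A hypothetical sketch of what such a line looks like in a container config (the paths below are placeholders, not taken from this setup):

Code:
# /etc/pve/lxc/205.conf -- example only, paths are placeholders
# backup=0 (or omitting the flag on a bind mount) keeps the mount point out of vzdump archives
mp0: /mnt/hdd/share,mp=/srv/share,backup=0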
 
What do you mean?
The question is which hardware you run Proxmox VE on, since you mentioned a Raspberry as the OMV host in your original post.
At the moment Proxmox VE isn't supported on ARM hardware (the CPU in a Raspberry), although there are (since Proxmox VE is open source) unofficial ports like this one: https://github.com/jiangcuo/pxvirt
So if you happen to use one of these projects, it might have issues that the official version on supported hardware doesn't have.

However, I think there is just a minor misunderstanding between @news and you, since you didn't mention your Proxmox VE hardware, just that you use a Raspberry as a NAS with OpenMediaVault. Although personally I would use a different operating system, this is fine and supported by the OMV developers.

Which hardware are you using as the Proxmox VE server host?
 
It's a NUC by Minisforum, a Minisforum EliteMini UM250 with 16 GB RAM and a 512 GB SSD. It is not the problem, btw.
 

Agreed :) Did you already try to check whether there are snapshots or similar things you can remove? Can you please post the output of lvs and lvdisplay from your system console?
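
In case a leftover vzdump snapshot shows up, a rough sketch of how it could be listed and removed (the snapshot name below is taken from the task log earlier in the thread; only remove it while no backup job is running):

Code:
# list all logical volumes, including snapshots and hidden volumes
lvs -a
# remove a stale vzdump snapshot left over from an aborted backup
lvremove crucial_ssd/snap_vm-205-disk-0_vzdump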
 
Yesterday I did something with snapshots: I had one for the 500 GB volume of that specific container, and once I deleted it I made a disaster lol... I am now trying to fix this. Here is the output:

Code:
root@pve:~# lvs
  LV            VG          Attr       LSize    Pool        Origin Data%  Meta%  Move Log Cpy%Sync Convert
  crucial_ssd   crucial_ssd twi-aotz--  912.76g                    20.12  0.82                           
  vm-205-disk-0 crucial_ssd Vwi-aotz--  500.00g crucial_ssd        36.56                                 
  vm-211-disk-0 crucial_ssd Vwi-aotz--    2.00g crucial_ssd        42.54                                 
  data          pve         twi-aotz-- <348.82g                    8.43   0.75                           
  root          pve         -wi-ao----   96.00g                                                           
  swap          pve         -wi-ao----    8.00g                                                           
  vm-200-disk-0 pve         Vwi-aotz--    2.00g data               38.20                                 
  vm-201-disk-0 pve         Vwi-aotz--    2.00g data               99.15                                 
  vm-202-disk-0 pve         Vwi-a-tz--    2.00g data               7.04                                   
  vm-203-disk-0 pve         Vwi-aotz--    2.00g data               25.38                                 
  vm-204-disk-0 pve         Vwi-aotz--    2.00g data               61.51                                 
  vm-206-disk-0 pve         Vwi-aotz--    8.00g data               97.22                                 
  vm-207-disk-0 pve         Vwi-aotz--    4.00g data               56.22                                 
  vm-208-disk-0 pve         Vwi-a-tz--    4.00g data               3.96                                   
  vm-209-disk-0 pve         Vwi-a-tz--    8.00g data               72.98                                 
  vm-210-disk-0 pve         Vwi-a-tz--   35.00g data               24.50                                 
  vm-212-disk-0 pve         Vwi-a-tz--    8.00g data               2.09


Code:
root@pve:~# lvdisplay
  --- Logical volume ---
  LV Name                data
  VG Name                pve
  LV UUID                484WRr-7P68-EOdD-KRwg-AqOV-sFAr-E6iHQf
  LV Write Access        read/write (activated read only)
  LV Creation host, time proxmox, 2025-02-13 19:44:55 +0100
  LV Pool metadata       data_tmeta
  LV Pool data           data_tdata
  LV Status              available
  # open                 0
  LV Size                <348.82 GiB
  Allocated pool data    8.43%
  Allocated metadata     0.75%
  Current LE             89297
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           252:7
  
  --- Logical volume ---
  LV Path                /dev/pve/swap
  LV Name                swap
  VG Name                pve
  LV UUID                QDA3md-mqcG-wW9J-QCRz-VKgS-TC84-C9qawM
  LV Write Access        read/write
  LV Creation host, time proxmox, 2025-02-13 19:44:46 +0100
  LV Status              available
  # open                 2
  LV Size                8.00 GiB
  Current LE             2048
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           252:2
  
  --- Logical volume ---
  LV Path                /dev/pve/root
  LV Name                root
  VG Name                pve
  LV UUID                EhTTKd-QMav-2JO4-co2t-Dbkd-K1bC-Cewk74
  LV Write Access        read/write
  LV Creation host, time proxmox, 2025-02-13 19:44:46 +0100
  LV Status              available
  # open                 1
  LV Size                96.00 GiB
  Current LE             24576
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           252:3
  
  --- Logical volume ---
  LV Path                /dev/pve/vm-210-disk-0
  LV Name                vm-210-disk-0
  VG Name                pve
  LV UUID                Gcbyht-Bhpx-8dZa-1zu2-X2mZ-Z6dv-e3wafH
  LV Write Access        read/write
  LV Creation host, time pve, 2025-02-18 22:33:01 +0100
  LV Pool name           data
  LV Status              available
  # open                 0
  LV Size                35.00 GiB
  Mapped size            24.50%
  Current LE             8960
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           252:8
  
  --- Logical volume ---
  LV Path                /dev/pve/vm-201-disk-0
  LV Name                vm-201-disk-0
  VG Name                pve
  LV UUID                FVF0h8-xB7z-29s5-2nlI-JkSM-2jsd-I9zx63
  LV Write Access        read/write
  LV Creation host, time pve, 2025-02-23 14:39:45 +0100
  LV Pool name           data
  LV Status              available
  # open                 1
  LV Size                2.00 GiB
  Mapped size            99.15%
  Current LE             512
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           252:9
  
  --- Logical volume ---
  LV Path                /dev/pve/vm-202-disk-0
  LV Name                vm-202-disk-0
  VG Name                pve
  LV UUID                eVJOGH-G7K3-d24v-P5PN-Vfa2-WI1K-3Ckd6H
  LV Write Access        read/write
  LV Creation host, time pve, 2025-03-01 13:34:39 +0100
  LV Pool name           data
  LV Status              available
  # open                 0
  LV Size                2.00 GiB
  Mapped size            7.04%
  Current LE             512
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           252:10
  
  --- Logical volume ---
  LV Path                /dev/pve/vm-203-disk-0
  LV Name                vm-203-disk-0
  VG Name                pve
  LV UUID                jKIcfX-ATD1-7pH8-Z5N7-5uaE-z8FL-s09AAV
  LV Write Access        read/write
  LV Creation host, time pve, 2025-03-08 09:28:19 +0100
  LV Pool name           data
  LV Status              available
  # open                 1
  LV Size                2.00 GiB
  Mapped size            25.38%
  Current LE             512
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           252:11
  
  --- Logical volume ---
  LV Path                /dev/pve/vm-204-disk-0
  LV Name                vm-204-disk-0
  VG Name                pve
  LV UUID                JxfeCO-Wslx-CYRw-MJlU-kWRk-wy6G-f5G223
  LV Write Access        read/write
  LV Creation host, time pve, 2025-03-09 13:33:59 +0100
  LV Pool name           data
  LV Status              available
  # open                 1
  LV Size                2.00 GiB
  Mapped size            61.51%
  Current LE             512
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           252:12
  
  --- Logical volume ---
  LV Path                /dev/pve/vm-200-disk-0
  LV Name                vm-200-disk-0
  VG Name                pve
  LV UUID                GZGX0z-Lnpn-4sue-apok-SoDP-6iey-Xfo3rL
  LV Write Access        read/write
  LV Creation host, time pve, 2025-03-29 11:50:18 +0100
  LV Pool name           data
  LV Status              available
  # open                 1
  LV Size                2.00 GiB
  Mapped size            38.20%
  Current LE             512
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           252:13
  
  --- Logical volume ---
  LV Path                /dev/pve/vm-206-disk-0
  LV Name                vm-206-disk-0
  VG Name                pve
  LV UUID                CSPlCi-98Md-uvTp-e1mQ-Mw69-wJTF-wq0ZLG
  LV Write Access        read/write
  LV Creation host, time pve, 2025-04-21 18:25:15 +0200
  LV Pool name           data
  LV Status              available
  # open                 1
  LV Size                8.00 GiB
  Mapped size            97.22%
  Current LE             2048
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           252:14
  
  --- Logical volume ---
  LV Path                /dev/pve/vm-207-disk-0
  LV Name                vm-207-disk-0
  VG Name                pve
  LV UUID                Q7HRqk-YOgM-6u5N-exDf-3mFS-qQsf-fFnsGr
  LV Write Access        read/write
  LV Creation host, time pve, 2025-04-26 21:12:03 +0200
  LV Pool name           data
  LV Status              available
  # open                 1
  LV Size                4.00 GiB
  Mapped size            56.22%
  Current LE             1024
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           252:15
  
  --- Logical volume ---
  LV Path                /dev/pve/vm-209-disk-0
  LV Name                vm-209-disk-0
  VG Name                pve
  LV UUID                Ka3fh5-k22Z-Qf6V-hAP4-OtfK-onjs-wI8h62
  LV Write Access        read/write
  LV Creation host, time pve, 2025-05-02 23:42:43 +0200
  LV Pool name           data
  LV Status              available
  # open                 0
  LV Size                8.00 GiB
  Mapped size            72.98%
  Current LE             2048
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           252:16
  
  --- Logical volume ---
  LV Path                /dev/pve/vm-212-disk-0
  LV Name                vm-212-disk-0
  VG Name                pve
  LV UUID                8OhS3V-ZyjA-Z7XP-Skoy-cYlX-72mv-qMaw6N
  LV Write Access        read/write
  LV Creation host, time pve, 2025-05-22 22:32:35 +0200
  LV Pool name           data
  LV Status              available
  # open                 0
  LV Size                8.00 GiB
  Mapped size            2.09%
  Current LE             2048
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           252:17
  
  --- Logical volume ---
  LV Path                /dev/pve/vm-208-disk-0
  LV Name                vm-208-disk-0
  VG Name                pve
  LV UUID                lynNdf-vboT-UfQf-QsHU-2WST-s6be-62QoKC
  LV Write Access        read/write
  LV Creation host, time pve, 2025-05-23 10:53:45 +0200
  LV Pool name           data
  LV Status              available
  # open                 0
  LV Size                4.00 GiB
  Mapped size            3.96%
  Current LE             1024
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           252:18
  
  --- Logical volume ---
  LV Name                crucial_ssd
  VG Name                crucial_ssd
  LV UUID                l5ERiX-lHBU-vifn-Mb0j-2yD4-XUnD-SBRKBA
  LV Write Access        read/write (activated read only)
  LV Creation host, time pve, 2025-03-27 14:19:53 +0100
  LV Pool metadata       crucial_ssd_tmeta
  LV Pool data           crucial_ssd_tdata
  LV Status              available
  # open                 0
  LV Size                912.76 GiB
  Allocated pool data    20.12%
  Allocated metadata     0.82%
  Current LE             233667
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           252:20
  
  --- Logical volume ---
  LV Path                /dev/crucial_ssd/vm-205-disk-0
  LV Name                vm-205-disk-0
  VG Name                crucial_ssd
  LV UUID                Fm4ytn-5WAC-HpFh-rG5C-9kys-c94C-x8SAN8
  LV Write Access        read/write
  LV Creation host, time pve, 2025-03-30 15:06:21 +0200
  LV Pool name           crucial_ssd
  LV Status              available
  # open                 1
  LV Size                500.00 GiB
  Mapped size            36.56%
  Current LE             128000
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           252:21
  
  --- Logical volume ---
  LV Path                /dev/crucial_ssd/vm-211-disk-0
  LV Name                vm-211-disk-0
  VG Name                crucial_ssd
  LV UUID                xnlLMP-pts9-GscI-Q0cS-Hbu3-j1nj-zN6kZb
  LV Write Access        read/write
  LV Creation host, time pve, 2025-05-07 09:55:32 +0200
  LV Pool name           crucial_ssd
  LV Status              available
  # open                 1
  LV Size                2.00 GiB
  Mapped size            42.54%
  Current LE             512
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           252:22
 
Little update: I was able to fix my problems and have now removed the bugged snapshot... However, my backup has been running for 12 hours now; that seems pretty impossible for a single LXC (there are about 350 GB of stuff).
Any suggestions?
 

Attachment: Screenshot 2025-06-13 141921.png (119.7 KB)
Hi verderame,

Good that you were able to fix the problem. If you can recall the steps, it would be nice for future readers of this thread to see how you solved it.

As for the runtime of the backup process, do I understand the infra correctly?
  • Two machines are involved
    • One RPi running OMV with 512 GB SSD and 2 TB HDD providing storage over SMB
    • One Minisforum running PBS (and maybe PVE?) with 512 GB SSD
  • They are connected at 1000 Mbps via cable and a switch?

Would you care to elaborate? It seems Johannes can form a picture of your setup in his mind, but after reading your posts it is unclear to me which part of the text refers to which bit of hardware and which piece of software runs where.

Looking ahead, based on the size of the storage devices, is it correct to assume that
  • The containers that PVE runs are stored on OMV
  • The backups from PBS are written to storage on OMV
If that is the case, then while making backups, PBS needs to make two round trips from the Minisforum to the RPi over the network for each chunk of data. As PBS is sensitive to I/O and latency, I imagine the doubled network hops cause delays. This 350 GB container, what kind of storage is it on?
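
One quick way to get a feel for whether the backup target itself is the bottleneck is to measure raw write throughput to the mount the dump is written to. A rough sketch, assuming the path from the task log above (it writes and then deletes a 1 GiB test file, so make sure there is enough free space):

Code:
# rough sequential write test to the backup share (1 GiB test file)
dd if=/dev/zero of=/mnt/pve/backup-omv/ddtest bs=1M count=1024 conv=fsync
rm /mnt/pve/backup-omv/ddtest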