VM (I think) ran out of space, but appears to have plenty.

austempest
PVE 8.3.0, VM running Ubuntu 24.04.1, fully updated, with Docker and portainer-agent installed.

I have a docker container of binhex/arch-qbittorrentvpn. It's been running fine for years, but recently nothing downloads. I checked a few days ago and the current speed was zero. I added a bunch of Linux ISOs and some of the most-seeded torrents, just to have something that would definitely download at max speed. Restarting makes a bunch of torrents start and immediately shoot up in speed for a single '3 second refresh cycle' before everything drops to zero. I tested a bunch of those torrents on my desktop and they all download at full speed (500Mbps), so the specific torrents aren't the problem.

The last time this happened, it was because I ran out of space in the VM (no room left to download to, which explains why everything drops to zero DL speed). When that happened last year, I extended the LV from 100G to 200G and moved the incomplete downloads to the NAS, so it should never run out of space again...
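For reference, the resize last year went something like this (from memory, and assuming the root filesystem is ext4, which matches the default Ubuntu install):

Code:
# grow the logical volume by 100G...
sudo lvextend -L +100G /dev/ubuntu-vg/ubuntu-lv
# ...then grow the ext4 filesystem to fill it
sudo resize2fs /dev/mapper/ubuntu--vg-ubuntu--lv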

Everything I can see shows plenty of space left. It's configured to save both incomplete and complete downloads to my NAS (which has about 13TB free).


All of these were run inside the VM:

Code:
austempest@ubuntu-server:~$ df -h
Filesystem                         Size  Used Avail Use% Mounted on
tmpfs                              1.2G  1.6M  1.2G   1% /run
/dev/mapper/ubuntu--vg-ubuntu--lv  196G   24G  164G  13% /
tmpfs                              5.4G     0  5.4G   0% /dev/shm
tmpfs                              5.0M     0  5.0M   0% /run/lock
/dev/sda2                          974M  182M  725M  21% /boot
192.168.1.120:/volume1/nas1         42T   29T   13T  70% /media/nas/nas1
192.168.1.120:/volume2/nas2         11T  1.6T  9.0T  15% /media/nas/nas2
tmpfs                              1.1G   12K  1.1G   1% /run/user/1000

I've given it a 200G LV, and it's using 24G (13%). ncdu -x / doesn't show anything untoward, just where the 24G went.

Code:
austempest@ubuntu-server:~$ sudo lvdisplay
  --- Logical volume ---
  LV Path                /dev/ubuntu-vg/ubuntu-lv
  LV Name                ubuntu-lv
  VG Name                ubuntu-vg
  LV UUID                uNeOgK-xYju-8sFh-U774-mcM7-aRIG-5MJHNI
  LV Write Access        read/write
  LV Creation host, time ubuntu-server, 2021-01-12 22:41:14 +1100
  LV Status              available
  # open                 1
  LV Size                <199.00 GiB
  Current LE             50943
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           252:0

Code:
austempest@ubuntu-server:~$ sudo vgdisplay
  --- Volume group ---
  VG Name               ubuntu-vg
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  9
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                1
  Open LV               1
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               <199.00 GiB
  PE Size               4.00 MiB
  Total PE              50943
  Alloc PE / Size       50943 / <199.00 GiB
  Free  PE / Size       0 / 0
  VG UUID               zbiwgw-DStT-f44L-VGiI-1ma6-jfDp-uGKFrd

As for the host, I have over-allocated storage to VMs/CTs, but I've always done that and never run into issues as long as I'm not at the limit of the drive (please correct me if I'm wrong). Is there a command to run on the PVE1 host to show this better?
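For what it's worth, these are the commands I know of for checking this on the host; I'm assuming the VM disks live on the default LVM-thin pool (local-lvm), in which case Data% on the thin pool LV shows the real consumption:

Code:
# storage summary as PVE sees it
pvesm status
# LV listing; for a thin pool, Data% is the actual usage
lvs
# volume group totals
vgs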


Code:
root@pve1:~# df -h
Filesystem               Size  Used Avail Use% Mounted on
udev                     7.8G     0  7.8G   0% /dev
tmpfs                    1.6G  2.6M  1.6G   1% /run
/dev/mapper/pve-root      16G   11G  4.0G  73% /
tmpfs                    7.8G   66M  7.7G   1% /dev/shm
tmpfs                    5.0M     0  5.0M   0% /run/lock
efivarfs                 128K  116K  7.6K  94% /sys/firmware/efi/efivars
/dev/sda2                511M  328K  511M   1% /boot/efi
/dev/fuse                128M   36K  128M   1% /etc/pve
//192.168.1.120/proxmox   42T   29T   13T  70% /mnt/pve/nas
tmpfs                    1.6G     0  1.6G   0% /run/user/0

I'm at a loss as to why the torrent downloads aren't working. All symptoms point to running out of space, but I can't for the life of me see where or why.
 
Hi,
please use df -ih to check the inode usage of the filesystems. What does the usage on the NAS side look like? Do you get an actual error when you manually create (larger) files on the storage?
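For example, something like this inside the VM (the file name is arbitrary; point it at wherever the incomplete downloads actually go):

Code:
# write a 2 GiB test file to the NAS mount and force it to disk
dd if=/dev/zero of=/media/nas/nas2/write-test bs=1M count=2048 conv=fsync
rm /media/nas/nas2/write-test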
 
On the node:
Code:
root@pve1:~# df -ih
Filesystem              Inodes IUsed IFree IUse% Mounted on
udev                      2.0M   585  2.0M    1% /dev
tmpfs                     2.0M  1.1K  2.0M    1% /run
/dev/mapper/pve-root     1008K  109K  900K   11% /
tmpfs                     2.0M   136  2.0M    1% /dev/shm
tmpfs                     2.0M    29  2.0M    1% /run/lock
efivarfs                     0     0     0     - /sys/firmware/efi/efivars
/dev/sda2                    0     0     0     - /boot/efi
/dev/fuse                 256K    71  256K    1% /etc/pve
//192.168.1.120/proxmox      0     0     0     - /mnt/pve/nas
tmpfs                     398K    20  398K    1% /run/user/0

On the VM:
Code:
austempest@ubuntu-server:~$ df -ih
Filesystem                        Inodes IUsed IFree IUse% Mounted on
tmpfs                               1.5M   954  1.5M    1% /run
/dev/mapper/ubuntu--vg-ubuntu--lv    19M  401K   19M    3% /
tmpfs                               1.4M     1  1.4M    1% /dev/shm
tmpfs                               1.4M     3  1.4M    1% /run/lock
/dev/sda2                            64K   321   64K    1% /boot
192.168.1.120:/volume1/nas1            0     0     0     - /media/nas/nas1
192.168.1.120:/volume2/nas2            0     0     0     - /media/nas/nas2
tmpfs                               278K    32  278K    1% /run/user/1000

The NAS usage is shown in the df above (it's mounted from 192.168.1.120): 42T total, 13T available.

I don't get any error at all when creating files of any size on the NAS.
 
What about the inode usage on the NAS itself?
Is there anything interesting in the system logs of the involved hosts and guests? Is any firewall configured? I'd also test the network with e.g. iperf.
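For example, something like this, assuming iperf3 can be installed on both ends:

Code:
# on the NAS (or another host): start a server
iperf3 -s
# on the VM: run a client against it
iperf3 -c 192.168.1.120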
 
I tried df on the NAS (Synology DS1815), but it wasn't installed. I'll try to get it working in a bit.

I'm running on minimal sleep due to three kids at the moment. Please forgive my tardiness.

And thank you for all the help so far.

I'll find time as soon as I can to report back.
 
Hi @fiona,

On the NAS:
Code:
austempest@DiskStation:~$ df -ih
Filesystem             Inodes IUsed IFree IUse% Mounted on
/dev/md0                 152K   40K  113K   26% /
devtmpfs                 492K   984  491K    1% /dev
tmpfs                    493K     3  493K    1% /dev/shm
tmpfs                    493K  2.3K  491K    1% /run
tmpfs                    493K    10  493K    1% /sys/fs/cgroup
tmpfs                    493K   169  493K    1% /tmp
/dev/mapper/cachedev_1      0     0     0     - /volume2
/dev/mapper/cachedev_0      0     0     0     - /volume1

The firewall is disabled on the host.
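For completeness, this is roughly how I checked (the ufw line assumes the VM is using ufw; adjust to whatever is actually in use):

Code:
# on the PVE host
pve-firewall status
# inside the VM
sudo ufw status
sudo iptables -L -n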

I've installed the SABnzbd downloader on the same VM, and it downloads fine.

I installed a qBittorrent LXC from the Proxmox helper scripts, added the Ubuntu ISO torrent, and it downloaded at the max speed for my connection.


As far as I can tell, everything is working perfectly except for this VM now. I would happily delete the entire VM and replace it with literally anything else that lets me run torrent downloads behind a VPN.

To check whether it was the docker container or the VM, I spun up another container on the same VM with the same docker compose (different config folder and host port), fed it the same Ubuntu ISO torrent, and it stalled just like all the others.
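Roughly, the duplicate service looked like this (a trimmed sketch from memory; the VPN credentials and other env vars were copied verbatim from the working compose and are omitted here, and the /opt/qbt-test path is just illustrative):

Code:
services:
  qbittorrentvpn-test:
    image: binhex/arch-qbittorrentvpn
    container_name: qbittorrentvpn-test
    cap_add:
      - NET_ADMIN              # the in-container VPN needs this
    ports:
      - "8081:8080"            # different host port than the original
    volumes:
      - /opt/qbt-test/config:/config   # different config folder
      - /media/nas/nas2:/data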


...

I've tried spinning up another VM to host the container, but I'm receiving errors completely unrelated to this, preventing me from just deleting this VM and moving over. I might need to start another thread to get help sorting that out ...
 