[SOLVED] Running out of space despite HDD being larger than files to be copied

Por12
Member
Mar 6, 2023
Hi there,

I have recently set up a new Proxmox machine, and one of my first projects is to set up a Debian LXC container to use as a backup destination.

My server has two drives: one 256GB SSD (which contains the Proxmox installation, the LXC containers and the VMs) and one 4TB HDD. The backed-up files will be stored on the HDD, which contains an LVM-thin volume.

This HDD is mounted into the LXC as an additional mount point (mp=/mnt/rust-storage, mountoptions=lazytime;noatime, size=2000G), and the container can read/write correctly onto the volume.

The issue I'm having is that, despite the HDD being empty when I start the copy and the files to be backed up totalling only ~920GB, the transfer cuts out because my drives become full and the server gets an io-error. If I delete the target folder (at /mnt/rust-storage/backup) and run an fstrim, the space comes back and the io-error disappears.
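One way to see where the space is actually going, a diagnostic sketch assuming the default Proxmox paths ("local" is a directory storage under /var/lib/vz on the root filesystem): if the root filesystem fills up while the copy runs, the data is landing on the SSD rather than the HDD.

```shell
# Default-install assumption: "local" storage lives under /var/lib/vz
# on the root filesystem, so root-filesystem usage tracks it.
df -h /                                          # free space on the root filesystem
du -sh /var/lib/vz/images/* 2>/dev/null || true  # size of each guest's disk files
```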

I'm completely lost as I don't understand how I am running out of space.

Any help?
 
Code:
root@proxmox:~# pct config 122
arch: amd64
cores: 2
hostname: borg-turnkey
memory: 1024
mp0: local:122/vm-122-disk-1.raw,mp=/mnt/rust-storage,mountoptions=lazytime;noatime,size=2000G
net0: name=eth0,bridge=vmbr0,hwaddr=B2:C7:6B:0C:05:3F,ip=dhcp,ip6=dhcp,type=veth
ostype: debian
rootfs: local:122/vm-122-disk-0.raw,size=8G
swap: 512

I'm copying from another server (Unraid) on the network using borgbackup. Since I've deleted the files to remove the io-error, I'll run the copy again and come back with the additional info.

Thanks
 
mp0: local:122/vm-122-disk-1.raw,mp=/mnt/rust-storage,mountoptions=lazytime;noatime,size=2000G
In a standard installation, "local" is a directory-type storage that points to your root disk. The line above says that there is a file called vm-122-disk-1.raw located on your root disk, which gets mounted inside the LXC at /mnt/rust-storage.

Nothing in the config indicates that you are using the HDD in any way. Perhaps you heavily modified your storage config? What does /etc/pve/storage.cfg look like?


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
I'm not aware of having modified it. Here it is:

Code:
dir: local
        path /var/lib/vz
        content iso,images,rootdir,vztmpl,backup
        shared 0

lvmthin: rust-storage
        thinpool rust-storage
        vgname rust-storage
        content rootdir,images
        nodes proxmox

nfs: media
        export /mnt/user/data
        path /mnt/pve/media
        server 192.168.88.253
        content images
        prune-backups keep-all=1

My 101 VM is a barebones Windows 10 machine that I intend to use for Blue Iris in the future. No idea what vm-122-disk-1.raw is, as it is not part of the VM config.

Code:
root@proxmox:~# qm config 101
agent: 1
balloon: 1024
boot: order=scsi0;net0
cores: 2
machine: pc-i440fx-7.1
memory: 8192
meta: creation-qemu=7.1.0,ctime=1672340829
name: windows10-blueiris
net0: virtio=BA:74:CB:16:75:66,bridge=vmbr0,firewall=1
numa: 0
ostype: win10
scsi0: local:101/vm-101-disk-0.qcow2,discard=on,iothread=1,size=32G
scsi1: rust-storage:vm-101-disk-0,backup=0,discard=on,iothread=1,size=4000G
scsihw: virtio-scsi-single
smbios1: uuid=4b6d75a6-46ce-4f81-bd39-5a1e26b8c117
sockets: 1
vmgenid: 733f9e8b-7d34-482b-83dc-02bff473121c
 
You started the thread explaining that you have an issue with an LXC container.
You said that this container is using LVM thin storage located on one of your HDDs.
You presented a config file of the LXC container in question and it shows that the container does NOT use LVM Thin storage called "rust-storage".
Both the root disk and the data disk of the LXC container in question use file based storage that points to your 256GB Root disk.
This is the reason why you are running out of space.
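The config lines themselves show this: in a Proxmox volume ID, the prefix before the colon names the storage the volume lives on. A minimal shell illustration, using strings copied from the container config above (not live queries):

```shell
# Volume IDs copied from the container config above.
rootfs='local:122/vm-122-disk-0.raw'
mp0='local:122/vm-122-disk-1.raw'

# The text before ':' is the storage name.
echo "rootfs is on: ${rootfs%%:*}"   # -> local
echo "mp0 is on:    ${mp0%%:*}"      # -> local
# Both say "local", i.e. the directory storage on the 256GB root disk.
```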

The VM config, even though it has nothing to do with your original question, presents an example of correctly pointing to LVM Thin storage "rust-storage" for the additional disk.


 
That makes sense, sorry, I'm still quite new. Was my error in how I created the mount point for the LXC? I've now tried re-creating it. My goal is that the container writes the files onto the 4TB HDD when it writes to /mnt/rust-storage/backup.

Code:
arch: amd64
cores: 2
hostname: borg-turnkey
memory: 1024
mp0: rust-storage:vm-122-disk-0,mp=/mnt/rust-storage,mountoptions=lazytime;noatime,size=2000G
net0: name=eth0,bridge=vmbr0,hwaddr=B2:C7:6B:0C:05:3F,ip=dhcp,ip6=dhcp,type=veth
ostype: debian
rootfs: local:122/vm-122-disk-0.raw,size=8G
swap: 512
 
Was my error in how I created the mount point for the LXC?
You probably picked the wrong storage in the GUI when adding the disk, or, since "local" is the default, you just clicked "Next" without selecting the correct one.
mp0: rust-storage:vm-122-disk-0,mp=/mnt/rust-storage,mountoptions=lazytime;noatime,size=2000G
This line looks good. The new disk is now on the storage that you wanted.
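To double-check on the host that the volume really landed on the thin pool, a hedged sketch using standard Proxmox/LVM commands (guarded so the snippet is a no-op on machines without these tools):

```shell
# Run on the Proxmox host itself; the guard makes this a no-op elsewhere.
if command -v pvesm >/dev/null 2>&1; then
  pvesm status        # per-storage usage; rust-storage should grow during the copy
  lvs rust-storage    # lists vm-122-disk-0 and the thin pool's Data% column
fi
```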


 
I'm repeating the copy and the data is indeed going to the HDD.

Thanks for the help.
 
