resize live /tmp tmpfs on unprivileged container [SOLVED]

mathx

Renowned Member
Jan 15, 2014
I have a job running on a node that I've modelled elsewhere too. To speed it up, I put it in /tmp and sync back to disk when done. However, I realized through modelling that when the job finishes it will flush a bunch of data into /tmp, run out of space, and lose all the work.

In a privileged container I can just run mount -o remount,size=xxG /tmp, no problem. But in an unprivileged container I get

mount: /tmp: fsconfig() failed: tmpfs: Invalid uid '100000'.

How can I resize the tmpfs on /tmp without this error? I tried a few things but got errors:

Code:
#pct exec 131 -- mount -o remount,size=34G /tmp
mount: /tmp: fsconfig() failed: tmpfs: Invalid uid '100000'.

and

Code:
#PID=$(lxc-info -n 131 -p -H)
#nsenter --mount=/proc/$PID/ns/mnt mount -o remount,size=34G /tmp
mount: /tmp: must be superuser to use mount.
       dmesg(1) may have more information after failed mount system call.
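For context, you can inspect what the kernel currently reports for the container's /tmp from the host: /proc/&lt;pid&gt;/mounts shows the live size= option, plus the uid= mapping that the fsconfig() path chokes on. This is just a small sketch; the /dev/shm call at the end is a stand-in target so the function can be exercised on any Linux box:

```python
import os

def tmpfs_options(pid, target="/tmp"):
    """Return the mount options the kernel reports for `target`
    inside the given PID's mount namespace, or None if not found."""
    with open(f"/proc/{pid}/mounts") as f:
        for line in f:
            dev, mnt, fstype, opts, *_ = line.split()
            if mnt == target and fstype == "tmpfs":
                return opts
    return None

# On the host, using the container's init PID from lxc-info:
# pid = os.popen("lxc-info -n 131 -p -H").read().strip()
# print(tmpfs_options(pid))
print(tmpfs_options("self", "/dev/shm"))  # stand-in demo target
```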

The following actually worked and gets around all the UID-remapping errors for root (uid=0), so I'm keeping it here for posterity and for others:

Create this Python script:

Code:
import ctypes
import ctypes.util
import os
import sys

# Config
CT_ID = "131"  # Replace with your Container ID
NEW_SIZE = "size=34G"
TARGET = "/tmp"
 
# 1. Get the container's init PID on the host
try:
    with os.popen(f"lxc-info -n {CT_ID} -p -H") as f:
        pid = f.read().strip()
except Exception as e:
    print(f"Error finding PID: {e}")
    sys.exit(1)

if not pid.isdigit():
    print(f"Could not determine PID for container {CT_ID}")
    sys.exit(1)
 
# 2. Open the container's mount namespace file
fd = os.open(f"/proc/{pid}/ns/mnt", os.O_RDONLY)
 
# 3. Join the namespace (CLONE_NEWNS = 0x00020000)
libc = ctypes.CDLL(ctypes.util.find_library("c"), use_errno=True)
if libc.setns(fd, 0x00020000) != 0:
    e = ctypes.get_errno()
    print(f"setns failed: {os.strerror(e)}")
    sys.exit(1)
os.close(fd)
 
# 4. Perform the remount syscall directly
# MS_REMOUNT = 32
ret = libc.mount(None, TARGET.encode(), None, 32, NEW_SIZE.encode())
 
if ret != 0:
    e = ctypes.get_errno()
    print(f"Mount syscall failed: {os.strerror(e)}")
else:
    print(f"Successfully resized {TARGET} to {NEW_SIZE}")


Put that in a file called resize.py and run it with python3 resize.py
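To confirm the resize actually took effect, a quick check is to ask the kernel for the filesystem size from inside the container (or via pct exec). A minimal sketch, assuming Python 3 is available there; df -h /tmp shows the same number:

```python
import os

# statvfs reports the filesystem geometry the kernel now enforces for /tmp
st = os.statvfs("/tmp")
total_gib = st.f_frsize * st.f_blocks / 2**30
print(f"/tmp total size: {total_gib:.1f} GiB")
```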

Is there a more official way without this code?
 
Hi,

The cleanest way to handle this is to define the size in the container's configuration file on the host. This ensures the size is correct every time the container starts, and you don't have to hack it while it's running.
  1. On the Proxmox host, edit the config file:

Code:
nano /etc/pve/lxc/131.conf

  2. Add a mount point entry for /tmp specifically. Even if it's already a tmpfs, explicitly defining it allows you to set the size:

Code:
lxc.mount.entry: tmpfs tmp tmpfs nodev,nosuid,size=34G 0 0

  3. Restart the container.
 
The key element of this post is the "live" aspect, indicating that a restart is not possible. Setting it in the config is trivial, of course, but as my description implied, this was a sensitive long-running job. It could not be restarted.
 