Search results for query: lxc mountpoint

  1. Proxmox 9.1.5 breaks LXC mount points?

    Hi! I just updated to the latest version of Proxmox (9.1.5), and I've noticed that the LXCs with mount points (in my case, NFS from my NAS) are not starting. If I remove the mount point, it starts correctly. See attached images. Media is an NFS share from my NAS on another...
  2. LXC containers not starting after Proxmox update

    root@NODE5:~# pct start 514 --debug run_buffer: 571 Script exited with status 1 lxc_init: 845 Failed to run lxc.hook.pre-start for container "514" __lxc_start: 2046 Failed to initialize container "514" d 0 hostid 100000 range 65536 INFO lsm - ../src/lxc/lsm/lsm.c:lsm_init_static:38 -...
  3. LXC - Remount mountpoint without rebooting Container

    ...Container, wait a bit, then start the LXC Container again. I tried some things in this little script here to analyse which LXC mountpoints might depend on NFS/CIFS/etc. shares, but there is no actual implementation yet...
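The analysis step that snippet describes could be sketched as a small shell helper: given a pct config file, list the host-side paths of its mountpoint entries. This is not the thread's actual script; the config location and line format are assumptions based on the standard /etc/pve/lxc/<vmid>.conf syntax:

```shell
#!/bin/sh
# Sketch: extract the host-side paths of all mpN mountpoint entries
# from a pct config file. A mountpoint line typically looks like:
#   mp0: /mnt/nfsshare,mp=/srv
# Real configs live under /etc/pve/lxc/<vmid>.conf.
list_mountpoints() {
    grep -E '^mp[0-9]+:' "$1" | cut -d' ' -f2 | cut -d',' -f1
}
```

The resulting paths could then be checked against /proc/mounts to see which of them sit on NFS/CIFS filesystems before stopping or starting the container.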
  4. [SOLVED] virtiofs directory mapping does not mount exec in guest vm

    Hi everyone, I'm trying out a virtiofs directory mapping from a mounted HDD ("Vapordisk") to a guest VM (cachyos) and despite being mounted and read-writeable in the guest VM, I cannot get the drive mounted exec. My ultimate goal is to mount this virtiofs mapping in Steam on the cachyos VM as...
  5. Backup of privileged LXC fails

    I had the same issue. Proxmox backup tried mounting the zfs in a subtree of /mnt/vzsnap0 ERROR: Backup of VM 117 failed - command 'mount -o ro -t zfs secure/subvol-117-disk-0@vzdump /mnt/vzsnap0//mnt' failed: exit code 2 Turns out there was residual data on the host under the path...
  6. bindmount ZFS dataset and children to unprivileged LXC

    On my host I have a ZFS dataset looking like this: NAME MOUNTPOINT Data /Data Data/Nas /Data/Nas Data/Nas/Documents /Data/Nas/Documents Data/Nas/Photos /Data/Nas/Photos I have a LXC...
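A common answer in threads like this one is that a plain bind mount of the parent path does not carry the child ZFS datasets into the container (nested mounts are not propagated), so each dataset gets its own mpN entry. A minimal sketch, where the vmid 101 and the /mnt/nas target paths are made-up examples:

```
# /etc/pve/lxc/101.conf -- 101 and the container-side paths are illustrative
mp0: /Data/Nas,mp=/mnt/nas
mp1: /Data/Nas/Documents,mp=/mnt/nas/Documents
mp2: /Data/Nas/Photos,mp=/mnt/nas/Photos
```

For an unprivileged LXC, the host-side files would additionally need ownership/permissions that the container's shifted UIDs can access.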
  7. [SOLVED] Issue of memory for a container

    You can use ZRAM swap, and disk swap also, that way when your Zram swap gets full it will swap to disk. ZRAM will also help against SSD wear. Not really a problem on a spinning disk, but those are also super slow and would bog down your Container if you are using too much swap. I have a small...
  8. Using Turnkey Fileserver for storage share - major issues

    I have multiple Proxmox nodes (since version 7) running with Turnkey file servers and never had a problem like yours. But I don't generate a storage for the LXC; I manually mount the folder. Example: your Turnkey is LXC 100 (file-server), so you open the config with nano /etc/pve/lxc/100.conf arch...
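The snippet is cut off before the actual config lines, but the manual bind mount being described would plausibly look like this (the host path here is a placeholder, not the thread's real path):

```
# /etc/pve/lxc/100.conf -- 100 is the thread's example vmid
mp0: /path/on/host/share,mp=/mnt/share
```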
  9. Added 6th node to cluster, attempted migration from node 2 to node 6, failed with disk full message and I have lost all VM and LXC data in the GUI

    I had a 5-node cluster with many VMs and LXCs. I successfully added a 6th node that had a 256 GB boot drive and a 3-drive ZFS pool (approx. 10 TiB). I attempted to migrate a 600 GB VM running Docker workloads from node 2 to node 6. This proceeded until the task got a "not enough space on...
  10. USB to SATA causing proxmox to crash after write

    I'm running Proxmox (Linux 6.5.13-6-pve) on a T8 Firebat Plus N100 Mini PC. I am currently having an issue when using qBittorrent to download files to my SSD (set up in a VG and added as a mountpoint in an LXC), which is connected via USB to SATA w/o power; after a certain amount of download, usually...
  11. Changed behaviour of mountpoints in LXC after a fresh install of PVE and restore of the LXC

    I run Proxmox VE on ZFS with an additional ZFS RAID0 mirror disk (ZFS pool "tank"). I have several LXCs that get their data area from tank via a mountpoint. I have now reinstalled my Proxmox VE with version 9.1.4. My ZFS pool "tank" with the data subvolumes of my...
  12. Help Understand Replication for an SMB

    Hello, I am relatively new to Proxmox so bear with me as I am still learning the right terms, but so far Proxmox is amazing. I just managed to get 3 PCs in a cluster. I have an extra drive in each of them and have them as a ZFS drive so I can move containers between them without an issue. I...
  13. Backup

    I hope this is OK and that I haven't shared anything that shouldn't be shared
  14. Paperless NGX - mount on Proxmox does not work

    Hi everyone, I finally have my Proxmox up and running. The first thing I wanted to do was move my Paperless NGX files from Synology to Proxmox. To do that I created a Paperless LXC. I followed this guide...
  15. Backup problem with a container

    One more thing I just noticed: why does proxmox want to mount the snapshot under /mnt/vzdump0/srv? Is that because the mountpoint is mounted as /srv in the container? That is also the case for two of my three other containers, but for those the backup runs through without problems...
  16. [SOLVED] Problem backing up container

    I have one more question: Why does proxmox want to mount the snapshot in /mnt/vzsnap0/srv? Is it because my mountpoint mp0 is mounted as /srv in the container? The strange thing is that two of my other three containers have a mountpoint mounted under /srv but they get backed up without any...
  17. [SOLVED] Problem backing up container

    Yesterday I updated Proxmox from 8 to 9. Last night I encountered errors backing up one of my containers to a Proxmox Backup Server. Error message: INFO: starting new backup job: vzdump 101 --all 0 --prune-backups 'keep-daily=7,keep-monthly=12,keep-weekly=4,keep-yearly=999' --mode snapshot...
  18. Backup problem with a container

    I updated Proxmox from 8 to 9 yesterday; last night the backup for one container then stopped working. Error message: INFO: starting new backup job: vzdump 101 --all 0 --prune-backups 'keep-daily=7,keep-monthly=12,keep-weekly=4,keep-yearly=999' --mode snapshot...
  19. Container with mp0 configuration won't start via GUI, OK via `lxc-start`

    So this is something pct will check/stumble on but lxc-start won't? According to ls, the mount point has extended permissions / ACLs: root@pve:~# ls -ld /mnt/nfsshare drwxrwx---+ 5 root root 5 Dec 26 03:09 /mnt/nfsshare So I've installed the acl package to run getfacl but don't see any special...
  20. Container with mp0 configuration won't start via GUI, OK via `lxc-start`

    It gets a "Permission denied" for the (PVE side) mountpoint with a leading "/":
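The "Permission denied" in these last two results is typical of unprivileged containers: container UIDs are shifted onto host UIDs by an offset (the debug output in result 2 shows the default mapping, hostid 100000 range 65536), so host-side permissions must allow the shifted ID. A minimal sketch of that arithmetic, assuming the default offset:

```shell
#!/bin/sh
# Sketch: compute the host UID that a container UID maps to in an
# unprivileged LXC, assuming the default id mapping "0 100000 65536".
mapped_uid() {
    echo $((100000 + $1))
}
# root (uid 0) inside the container acts as uid 100000 on the host, so a
# host directory like "drwxrwx---+ root root" would deny it access unless
# an ACL or group grants the shifted UID entry.
```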