Search results

  1. Shared folders between host and container

    Hi oguz, thanks a lot! Hmm, I will test that and let you know if it works. Yesterday it seemed a bit odd to me that each file got its x bit set.
  2. Shared folders between host and container

    One additional thing: I just realised that every file created within this mount point has the 'x' bit set. Why is this and how can I avoid it? (inside the container) # ls -al total 16 drwxr-xr-x 2 root root 0 Feb 18 16:36 . drwxr-xr-x 25 root root 25 Feb 18 16:41 .. -rwxr-xr-x 1 root root...
  3. Shared folders between host and container

    Hi oguz, yes I saw that. Indeed the uid/gid needs to be set correctly. However, I am unsure how I should do that when I mount the Samba/CIFS share. So far I tested it as follows: (on the PVE host) mount -t cifs -o username=testuser,password=test123,gid=100000,uid=100000 //server/share /srv/share...
  4. Shared folders between host and container

    Hi, I have a container in which I would like to run a webserver (for Nextcloud). The data storage of this web server shall be on a NAS, which is mounted via Samba. I first thought that I can mount the Samba share on the host and then "redirect" it to the container via a bind mount, but the...
  5. VM start timeout after snapshot deleted

    Hi guys, sorry for my late reply. I tried qm unlock and it didn't help. However, I hadn't rebooted the server for approx. 180 days, and because I have a cron job which does regular updates, the system was updated but still running an "older" kernel. For some reason this resulted in different...
  6. VM start timeout after snapshot deleted

    Hi guys, I have a Windows VM where I had one old snapshot. Today, I wanted to migrate the VM to another PVE node, and this gave me the following error: Cannot migrate VM with local CD/DVD. The reason was that the VM had a DVD drive while I made the snapshot, but now I removed the DVD drive...
  7. VM and Node status bogus

    thanks. I restarted pvestatd service and now the pve2 node looks good. I don't know what caused pvestatd to hang or crash, but it was for sure not the storage because the storage works just fine. However, restarting the pvestatd service solved the problem and now everything looks good again.
  8. VM and Node status bogus

    For the past few days, my 2nd PVE node has been unable to display its status, and I cannot access the VMs' shell from the web GUI. Everything else seems to work fine, though. What could be the reason for this kind of problem? Note that on both nodes I run the latest Proxmox, for which I used the update procedure as...
  9. ZFS: cannot snapshot, out of space

    Hi guys, I wanted to make snapshots of one of my VMs. However, even though my zpool has still enough space left (so I thought), I cannot do any snapshots. Proxmox aborts with the error "cannot snapshot: out of space". I know that other users had similar issues, however I don't understand it in...
  10. VM doesn't start Proxmox 6 - timeout waiting on systemd

    Hmm, it appears that it worked just fine for a few days and the problem now appears again?! Same error message, "timeout waiting on systemd".
  11. Restore LXC failed

    Hi Stefan, I do have my two nodes in a cluster, but pve2 uses different hardware than pve1, and I wanted to verify whether it is possible to restore the LXC container on different hardware without issues. (The answer is: yes, it works. After the above "hack" with --rootfs, I was able to...
  12. Restore LXC failed

    Hi, I have two nodes, pve1 and pve2, in a cluster. I created an LXC container on pve1, which uses a ZFS subvolume of size 2G. I successfully made a backup of this container on a network share and I also checked whether that backup can be restored successfully, which works fine on pve1. However, on...
  13. VM doesn't start Proxmox 6 - timeout waiting on systemd

    I also had the same issue. Besides that, it was not possible to open the "Console" view in the browser. It appears that using options vhost_net experimental_zcopytx=0 in /etc/modprobe.d/vhost-net.conf and update-initramfs -u fixed the problem.
  14. Change ZFS device names

    HA! temporarily disabling the storage helped a lot. With that it worked like a charm and stays permanent, even survives a reboot - as intended. Thanks!
  15. Change ZFS device names

    Can I temporarily do that without affecting my existing VMs? I assume you are talking about the storage in the "Datacenter". So the steps would be a) disable storage b) zpool export tank c) zpool import tank -d /dev/disk/by-vdev, right? I just tested the following: zpool export tank && zpool...
  16. Change ZFS device names

    I created a zpool where I used the disk ID names from /dev/disk/by-id. This works fine, however I recently read about the vdev_id.conf file. I created my own vdev_id.conf where I could create disk aliases for the slots where the disks are in, so I have my disks now accessible through...
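The Samba/CIFS threads (items 2-4) describe mounting the share on the host with a matching uid/gid and then bind-mounting it into the container. A minimal sketch of that setup, assuming the uid/gid offset 100000 of an unprivileged container and reusing the placeholder server, share, and credentials from item 3; the VMID 101 is also a placeholder. Explicit file_mode/dir_mode options are a common way to avoid the execute bit that item 2 observed on every file, since the CIFS client otherwise applies a default mode that includes 'x':

```
# /etc/fstab on the PVE host (sketch; server, share, credentials and
# uid/gid offset taken from the thread, not verified here)
//server/share  /srv/share  cifs  username=testuser,password=test123,uid=100000,gid=100000,file_mode=0660,dir_mode=0770  0  0

# /etc/pve/lxc/101.conf -- bind-mount the host path into the container
# (101 is a placeholder VMID)
mp0: /srv/share,mp=/mnt/share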
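The workaround for the "timeout waiting on systemd" error in item 13 consists of two steps, as stated in that post:

```
# /etc/modprobe.d/vhost-net.conf
options vhost_net experimental_zcopytx=0
```

followed by running `update-initramfs -u` (and rebooting) so the module option takes effect.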
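Items 14-16 outline switching a zpool from /dev/disk/by-id names to vdev aliases. A sketch of the configuration, with placeholder alias names and disk IDs:

```
# /etc/zfs/vdev_id.conf -- map physical slots to disk IDs
# (alias names and disk IDs below are placeholders)
alias slot0  /dev/disk/by-id/ata-EXAMPLE-DISK-0
alias slot1  /dev/disk/by-id/ata-EXAMPLE-DISK-1
```

Per item 15, the reimport sequence is then: temporarily disable the storage under Datacenter, `zpool export tank`, and `zpool import tank -d /dev/disk/by-vdev`; item 14 confirms the new names persist across a reboot.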