Search results

  1. VM and Node status bogus

    Thanks. I restarted the pvestatd service and now the pve2 node looks good. I don't know what caused pvestatd to hang or crash, but it was certainly not the storage, which works just fine. Restarting pvestatd solved the problem and everything looks good again (see the restart sketch after this list).
  2. VM and Node status bogus

    For a few days now, my 2nd PVE node has not been able to display its status, and I cannot access the VMs (shell) from the web GUI. Everything else seems to work fine, though. What could be the reason for this kind of problem? Note that on both nodes I run the latest Proxmox, for which I used the update procedure as...
  3. ZFS: cannot snapshot, out of space

    Hi guys, I wanted to make snapshots of one of my VMs. However, even though my zpool still has enough space left (or so I thought), I cannot take any snapshots. Proxmox aborts with the error "cannot snapshot: out of space" (see the diagnosis sketch after this list). I know that other users had similar issues; however, I don't understand it in...
  4. VM doesn't start Proxmox 6 - timeout waiting on systemd

    Hmm, it appears that it worked just fine for a few days, and now the problem is back?! Same error message: "timeout waiting on systemd".
  5. Restore LXC failed

    Hi Stefan, I do have my two nodes in a cluster, but pve2 uses different hardware than pve1, and I wanted to verify whether it is possible to restore the LXC container on different hardware without issues (see the restore sketch after this list). (The answer is: yes, it works. After the above "hack" with --rootfs, I was able to...
  6. Restore LXC failed

    Hi, I have two nodes, pve1 and pve2, in a cluster. I created an LXC container on pve1, which uses a ZFS subvolume of size 2G. I successfully made a backup of this container on a network share, and I also checked that the backup can be restored, which works fine on pve1. However, on...
  7. VM doesn't start Proxmox 6 - timeout waiting on systemd

    I also had the same issue. Besides that, it was not possible to open the "Console" view in the browser. It appears that setting "options vhost_net experimental_zcopytx=0" in /etc/modprobe.d/vhost-net.conf and running update-initramfs -u fixed the problem (see the sketch after this list).
  8. Change ZFS device names

    HA! Temporarily disabling the storage helped a lot. With that, it worked like a charm, and the change is permanent - it even survives a reboot, as intended. Thanks!
  9. Change ZFS device names

    Can I temporarily do that without affecting my existing VMs? I assume you are talking about the storage in the "Datacenter". So the steps would be: a) disable the storage, b) zpool export tank, c) zpool import tank -d /dev/disk/by-vdev, right? (See the rename sketch after this list.) I just tested the following: zpool export tank && zpool...
  10. Change ZFS device names

    I created a zpool using the disk ID names from /dev/disk/by-id. This works fine; however, I recently read about the vdev_id.conf file. I created my own vdev_id.conf in which I define disk aliases for the slots the disks sit in, so my disks are now accessible through...
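
For item 1, a minimal sketch of the pvestatd restart, assuming shell access on the affected node (pve2 in the thread); these are standard systemd commands, not quoted from the post:

    # Check whether the Proxmox status daemon is still alive
    systemctl status pvestatd

    # Restart it; node and VM status in the web GUI should recover shortly after
    systemctl restart pvestatd

    # Optionally look for hints about why it hung or crashed
    journalctl -u pvestatd --since "1 hour ago"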
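
For item 3, a hedged diagnosis sketch: when VM disks live on zvols, ZFS needs extra free space roughly equal to a zvol's refreservation to take a snapshot, so a pool can report free space yet still refuse with "out of space". The pool name tank and the dataset vm-100-disk-0 are assumptions, not from the thread:

    # Compare space that is actually available with space that is merely reserved
    zfs list -r -o name,used,avail,refreservation,usedbyrefreservation tank

    # If thin provisioning is acceptable for this disk, dropping the
    # reservation frees the headroom the snapshot needs (hypothetical dataset name)
    zfs set refreservation=none tank/vm-100-disk-0
    zfs snapshot tank/vm-100-disk-0@mysnap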
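
For items 5 and 6, a sketch of the --rootfs restore "hack" mentioned in the thread, assuming a vzdump archive on the network share; the container ID, archive path, and the storage name local-zfs are illustrative, not from the posts:

    # Recreate the container on pve2, overriding the root filesystem so it
    # lands on a storage that exists on this node (2G, as on pve1)
    pct restore 101 /mnt/backup/vzdump-lxc-101.tar.zst --rootfs local-zfs:2

    pct start 101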
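
For item 7, the workaround quoted in the thread as a runnable sketch; the file path and module option come from the post, the reboot step is an assumption:

    # Disable experimental zero-copy TX for vhost_net (option from the thread)
    echo "options vhost_net experimental_zcopytx=0" > /etc/modprobe.d/vhost-net.conf

    # Rebuild the initramfs so the option takes effect early, then reboot
    update-initramfs -u
    reboot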
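
For items 8-10, the rename procedure from the thread as one sketch, assuming the pool is named tank and the Proxmox storage entry carries the same name; the alias lines and disk IDs in vdev_id.conf are illustrative:

    # /etc/zfs/vdev_id.conf - map physical slots to friendly names, e.g.:
    #   alias slot1 /dev/disk/by-id/ata-EXAMPLE-SERIAL-1
    #   alias slot2 /dev/disk/by-id/ata-EXAMPLE-SERIAL-2

    # Regenerate the /dev/disk/by-vdev symlinks from the config
    udevadm trigger

    # a) temporarily disable the storage entry in the Datacenter
    pvesm set tank --disable 1

    # b) export the pool, c) re-import it via the by-vdev aliases
    zpool export tank
    zpool import tank -d /dev/disk/by-vdev

    # Re-enable the storage; the new device names persist across reboots
    pvesm set tank --disable 0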