Search results

  1.

    Removing Snapshots CLI

    Conf-file edited. The faulty snapshots disappeared as soon as I saved it. Thanks, @bbgeek17!
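
    For reference, a stale snapshot lives as a bracketed section inside the guest's conf file (e.g. /etc/pve/qemu-server/113.conf). A minimal sketch; the snapshot name dckr01 is taken from the related post below, and the snaptime value is illustrative:

        [dckr01]
        # full copy of the VM config as of the snapshot, plus e.g.:
        snaptime: 1688379600

    Deleting the whole bracketed block (and any parent: line pointing at it) is what makes the snapshot vanish from the GUI.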
  2.

    Removing Snapshots CLI

    That simple? Thanks, will try! Also, when looking for the actual snapshot file, I couldn't find it on the node. Would this indicate the snapshot data is actually gone and only lingers in 113.conf for some reason?
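
    If the disks sit on LVM-thin, a leftover snapshot would show up as its own logical volume, so a quick check (volume group pve and VMID 113 assumed from the posts in this thread) can settle whether anything is still on disk:

        lvs pve | grep vm-113
        # a surviving snapshot would appear as e.g. snap_vm-113-disk-0_dckr01

    No matching snap_ volume while the section is still in 113.conf would mean the entry is just a stale config leftover.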
  3.

    Removing Snapshots CLI

    Hello! I have just recently run into a similar problem. See output below. Any advice on how to delete the snapshots dckr01 and j4? The auto-snapshots are readily deletable from both GUI and CLI, but the two mentioned above, which I took manually before some tweaks, are not deletable from either GUI or CLI...
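
    For QEMU guests the CLI route would be qm delsnapshot, where --force drops the snapshot from the config even if removing the disk snapshot itself fails; a sketch, assuming VMID 113 from the related posts above:

        qm delsnapshot 113 dckr01 --force
        qm delsnapshot 113 j4 --force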
  4.

    Resize LXC DISK on Proxmox

    I think you may be right. Just checked a few VMs with their virtual hard drives located outside local-lvm (they're on an external NFS share) and I can't list these VMs using the commands above. But even when I'm in the correct mounted folder, pointing to/located on the NFS share, I can't list...
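
    A storage-agnostic way to list guest volumes is to ask the storage layer instead of browsing the mount point; the storage name nfs-share below is a placeholder:

        pvesm list nfs-share
        # narrow it down to one guest:
        pvesm list nfs-share --vmid 104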
  5.

    Resize LXC DISK on Proxmox

    Thanks! I'm not sure. But I'm thinking, since it's mounted on a particular node and accessible from there, should it matter where it actually is located? I.e., just point to wherever the raw file is. I might be totally wrong about this, though...
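
    As a sketch of the thread's actual goal: pct resize goes through the storage layer, so growing a container disk is the same command whether the volume lives on local-lvm or an NFS share (VMID and size below are illustrative):

        pct resize 104 rootfs +10G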
  6.

    [update] Wake (and other) on LAN for VMs (v0.3)

    And thank you for creating this thing originally!
  7.

    [update] Wake (and other) on LAN for VMs (v0.3)

    Thank you! This still works with PVE 8.2.4, and does exactly what I need.
  8.

    [SOLVED] Lxc unprivileged - mount from /etc/fstab file

    Never mind, found it! Attaching a screenshot for other people who, like me, get lost in the GUI. :) The fstab mounting worked like a charm upon reboot, BTW. Thanks for setting me on the right track, @Stoiko Ivanov!
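
    For anyone landing here from search: once mounting is allowed, the container-side /etc/fstab entry is a plain NFS line; the server address and paths below are placeholders:

        192.168.1.10:/export/media  /mnt/media  nfs  defaults  0  0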
  9.

    [SOLVED] Lxc unprivileged - mount from /etc/fstab file

    Got the privileged-container thing sorted, but I can't see anything anywhere about enabling the NFS feature. Where do I find that? Thanks.
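
    In the GUI it sits under the container's Options > Features; from the CLI it would be the mount feature flag, sketched here with VMID 113 as a placeholder:

        pct set 113 --features mount=nfs
        # or allow both NFS and CIFS:
        pct set 113 --features "mount=nfs;cifs"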
  10.

    [SOLVED] Check of pool pve/data failed (status:1) manual repair required!

    I didn't. This particular node had non-critical machines. The critical ones I had moved to a secondary node a few days earlier, while the previous node still sort of worked.
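
    For context, evacuating guests ahead of a failing node is one command per VM; the VMID and target node name below are placeholders:

        qm migrate 100 pve2 --online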
  11.

    [SOLVED] Check of pool pve/data failed (status:1) manual repair required!

    I ended up reinstalling the node. Got nothing else for you, unfortunately.
  12.

    Cores vs sockets

    Good info, thanks!
  13.

    Cores vs sockets

    Cool, thanks for the command!

        root@cyndane5:~# numactl -s
        policy: default
        preferred node: current
        physcpubind: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23
        cpubind: 0 1
        nodebind: 0 1
        membind: 0 1

        root@dragonborn2:~# numactl -s
        policy: default
        preferred node: current...
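
    Since both hosts report two NUMA nodes (nodebind: 0 1), it may be worth exposing that topology to the guest as well; a minimal sketch, with VMID 100 as a placeholder:

        qm set 100 --numa 1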
  14.

    Cores vs sockets

    The servers are Dell PEs, a 710 and a 720. How can I tell if they support NUMA? I can't see from the tech specs on the Dell support site whether they have it, unless I'm missing it completely. Edit: Found this...
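
    Quicker than the spec sheets: a dual-socket box with both sockets populated will normally report more than one NUMA node straight from the host, e.g.:

        lscpu | grep -i numa
        numactl --hardware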
  15.

    Cores vs sockets

    Hi all, Experimenting a bit with the Nextcloud Server snap in a VM. Noticed a significant decrease in wait states when going from four cores on one socket to ten cores on the one socket, as somewhat expected, of course. Now, the node I run this VM on is a dual-CPU server. I've noticed previously...
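
    For reference, the socket/core split being tested boils down to two VM options; the values below mirror the experiment described, with VMID 100 as a placeholder:

        qm set 100 --sockets 1 --cores 10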
  16.

    Upgrade to 7.1 gone wrong: activating LV 'pve/data' failed

    Thank you, that last piece helped me. Phew...
  17.

    "systemd-timesyncd" status is dead, fresh install

    Thanks. Just noticed that and was looking for solutions. I'm using ntp for time sync.
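
    Worth noting: on Debian-based hosts, systemd-timesyncd is typically kept out of the way once another NTP client such as ntp or chrony is installed, which would explain the dead status; a quick way to confirm which daemon owns time sync:

        timedatectl status
        systemctl status systemd-timesyncd ntp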
  18.

    [SOLVED] Check of pool pve/data failed (status:1) manual repair required!

    Too many errors to fix, so I just removed the node from the cluster and reinstalled Proxmox. Problem solved.
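
    For anyone taking the same route: the usual step is removing the dead node on a surviving cluster member before the reinstalled node rejoins; the node name below is a placeholder:

        pvecm delnode pve3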
  19.

    [SOLVED] Remove vm-disks visible in the web-gui

    Too many errors to fix, so I just removed the node from the cluster and reinstalled Proxmox. Problem solved.
  20.

    [SOLVED] Remove vm-disks visible in the web-gui

    Running this now, since the test seemed to go through.

        root@dragonborn:~# vgcfgrestore pve --test --force
        TEST MODE: Metadata will NOT be updated and volumes will not be (de)activated.
        Volume group pve has active volume: swap.
        Volume group pve has active volume: root.
        WARNING: Found 2 active...
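
    If the dry run stays clean, the real restore is the same command without --test; by default it reads the latest metadata backup for the VG, and the archive file path below is purely illustrative:

        vgcfgrestore pve --force
        # or restore a specific archived metadata version:
        vgcfgrestore pve --force -f /etc/lvm/archive/pve_00042-123456789.vg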