Search results

  1. [SOLVED] Lxc unprivileged - mount from /etc/fstab file

    Never mind, found it! Attaching a screen dump for other people who get lost like me in the GUI. :) The fstab-mounting worked like a charm BTW upon reboot. Thanks for setting me on the right track @Stoiko Ivanov !
  2. [SOLVED] Lxc unprivileged - mount from /etc/fstab file

    Got the privileged container thing sorted, but can't see anything anywhere about enabling the NFS feature. Where do I find that? Thanks. (See the config sketch after these results.)
  3. [SOLVED] Check of pool pve/data failed (status:1) manual repair required!

    I didn't. This particular node had non-critical machines. The critical ones I had moved to a secondary node a few days earlier, while the previous node still sort of worked.
  4. [SOLVED] Check of pool pve/data failed (status:1) manual repair required!

    I ended up reinstalling the node. Got nothing else for you unfortunately.
  5. Cores vs sockets

    Good info, thanks!
  6. Cores vs sockets

    Cool, thanks for the command!

        root@cyndane5:~# numactl -s
        policy: default
        preferred node: current
        physcpubind: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23
        cpubind: 0 1
        nodebind: 0 1
        membind: 0 1
        root@dragonborn2:~# numactl -s
        policy: default
        preferred node: current...
  7. Cores vs sockets

    The servers are Dell PEs, a 710 and a 720. How can I tell if they support NUMA? I can't see from the tech specs on the Dell support site whether they have it, unless I'm missing it completely. Edit: Found this... (See also the NUMA-check sketch after these results.)
  8. Cores vs sockets

    Hi all, Experimenting a bit with the Nextcloud Server snap in a VM. Noticed a significant decrease in wait states when going from four cores on one socket to ten cores on the one socket, as somewhat expected of course. Now, the node I run this VM on is a dual-CPU server. I've noticed previously...
  9. Upgrade to 7.1 gone wrong: activating LV 'pve/data' failed

    Thank you, that last piece helped me. Phew...
  10. "systemd-timesyncd" status is dead, fresh install

    Thanks. Just noticed that and was looking for solutions. Am using ntp for time sync.
  11. [SOLVED] Check of pool pve/data failed (status:1) manual repair required!

    Too many errors to fix, so I just removed the node from the cluster and reinstalled Proxmox. Problem solved.
  12. [SOLVED] Remove vm-disks visible in the web-gui

    Too many errors to fix, so I just removed the node from the cluster and reinstalled Proxmox. Problem solved.
  13. [SOLVED] Remove vm-disks visible in the web-gui

    Running this now, since the test seemed to go through.

        root@dragonborn:~# vgcfgrestore pve --test --force
          TEST MODE: Metadata will NOT be updated and volumes will not be (de)activated.
          Volume group pve has active volume: swap.
          Volume group pve has active volume: root.
          WARNING: Found 2 active...
  14. [SOLVED] Remove vm-disks visible in the web-gui

    I'll look into these threads: https://forum.proxmox.com/threads/local-lvm-disk-lost.75551/ https://forum.proxmox.com/threads/lvm-issue.29134/#post-146250
  15. [SOLVED] Remove vm-disks visible in the web-gui

    Hi all, One of my cluster nodes crashed the other day and trashed its local-lvm. After some work, the local-lvm is up again, but the web-gui shows the raw vm-disks, when in fact there is no trace of them whatsoever on the node itself. Where are these coming from, and how do I get rid of them? The...
  16. [SOLVED] Check of pool pve/data failed (status:1) manual repair required!

    After 1h 55m it seems to be online again. Htop says some kind of check is going on. Pve-data_tmeta seems to be the LVM metadata that got borked for some reason when the node rebooted? Anything I can do to prevent stuff like this from happening in the future?
  17. [SOLVED] Check of pool pve/data failed (status:1) manual repair required!

    Hello all, So I ran into this a few hours ago. Can't access any kind of terminal to run a fsck manually. Please see the attached pic. Do I just wait or is there some magic I can use to make stuff happen (faster)? The server is a Dell R710 with 6x 2 TB drives in a raid5 fashion, using a...
  18. Moving from lvm-thin to ceph/zfs, thing to consider

    Aha, thanks! Discovered that feature recently on our Proxmox lab Ceph cluster at work, but didn't quite know how and when to use it.
  19. sum of all thin volume sizes exceeds the size of thin pool...

    I see what you mean. However, I have five containers, each set up with 4 GB of storage, totalling 20 GB. With that in mind, I still don't see where those 88 gigs are coming from. (See the lvs sketch after these results.)
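
A note on result 2's NFS question: in Proxmox VE the mount feature is set per container, either in the GUI (typically the container's Options > Features panel) or from the shell on the host. A minimal sketch, assuming a hypothetical container ID of 101 and that only NFS mounts are needed inside it:

    # Allow NFS mounts inside container 101 (run on the Proxmox host; 101 is a placeholder ID).
    pct set 101 --features mount=nfs

    # Equivalent line in the container config /etc/pve/lxc/101.conf:
    # features: mount=nfs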
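
For the NUMA question in results 6-8: whether a dual-socket box actually exposes NUMA can be checked on the host itself rather than in the spec sheets. A quick sketch, assuming the numactl package is installed:

    # Show the NUMA topology the kernel sees; more than one node means NUMA is active.
    numactl --hardware

    # Works without numactl:
    lscpu | grep -i numa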
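
On result 19's overcommit warning: the LVM message compares the pool size against the sum of the virtual sizes of every thin volume carved from it (VM disks as well as container volumes), not against the space actually written. A sketch for listing them, assuming the pool in question is pve/data:

    # List every LV in VG pve with its virtual size, its pool, and how full it actually is.
    lvs -o lv_name,pool_lv,lv_size,data_percent pve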
