Search results

  1. Easily remove old/unused PVE kernels on your Proxmox VE system (PVE Kernel Cleaner)

    Did not work on one of the boxes: it keeps looping around and not removing anything. It worked on my other nodes. Boot disk space used is critically full at 84% capacity (729M free) [-] Searching for old PVE kernels on your system... "pve-kernel-5.3.18-3-pve" has been added to the kernel remove...
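
    If the cleaner keeps looping like this, a rough manual fallback is to purge old kernel packages directly. A minimal sketch, assuming plain apt removal works on the box (the version string is taken from the post above; never remove the kernel reported by uname -r):

        uname -r                                  # the running kernel - keep this one
        dpkg --list | grep pve-kernel             # list installed PVE kernels
        apt purge pve-kernel-5.3.18-3-pve         # purge an old, unused one
        update-grub                               # refresh the boot menu
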
  2. Ceph on Raid0

    It seems this is the guide: https://fohdeesha.com/docs/perc/ There also seems to be video info: https://www.youtube.com/watch?v=J82s_WYv3nU I have not done this on Dell cards, but have done it on Supermicro cards, which use the same family of LSI chips. All that said, Supermicro cards are just easy to...
  3. Ceph on Raid0

    For the more adventurous: H710 Minis can be flashed to IT mode, and it seems it is not too hard to achieve.
  4. PXVE, Nginx & Websocket proxy

    I would assume the problem is that there is one hash for port 8006 and another for port 3128, but they work independently, so being sticky to one server on port 8006 could leave your SPICE session sticky to another. Have you tried having just one IP in the upstream for ports 3128 and 8006 to see if it works?
  5. dist-upgrade trying to uninstall proxmox-ve

    Just install proxmox-ve back if it still runs. Otherwise, back up the /var/lib/vz folder, do a clean install, and restore it to the fresh install. AFAIK this is the only folder needed, as it retains the information about the Proxmox environment. Also copying over the hosts file would be great, so...
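
    A minimal sketch of that backup/restore flow (the archive path is just an example):

        tar czf /root/vz-backup.tar.gz /var/lib/vz /etc/hosts   # on the old install
        tar xzf /root/vz-backup.tar.gz -C /                     # on the fresh install
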
  6. PXVE, Nginx & Websocket proxy

    Confirming that vrang->TJN's fix combined with OP hades666's solution now works on Proxmox 6.1-11 proxied over nginx 1.17.7 on OpenWrt 19.07.2. If you are using a multiple-backend setup as hades666 proposes, then either weight=xxx should be added to the nodes or, even better, the ip_hash; parameter, so...
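
    For reference, a minimal sketch of such a multi-backend upstream (hostnames are placeholders; ip_hash pins each client to one node so the websocket/SPICE session stays on the same backend):

        upstream proxmox {
            ip_hash;                        # keep each client on the same node
            server pve1.example.lan:8006;
            server pve2.example.lan:8006;
        }
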
  7. [SOLVED] "qm guest exec" problem

    Haha, cool. This was exactly the problem - I wanted to send a mass trim to all my guests and "trim -av" failed. I just used the "df" example since it's more reproducible. Thanks.
  8. Proxmox for Raspberry Pi 4 with Ceph

    I'm new to Debian package building, but are there "srpm"-style packages available for Debian, so that packages can simply be rebuilt without re-making all the definitions needed for the Debian package system?
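
    The closest Debian equivalent is the source package (.dsc plus tarballs). A sketch of rebuilding one, assuming the repository in question actually publishes deb-src entries, which Proxmox's own repos may not (pve-manager is only an example name):

        apt-get update
        apt-get source pve-manager        # fetch and unpack the source package
        cd pve-manager-*/
        dpkg-buildpackage -us -uc         # rebuild the .deb without signing
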
  9. [SOLVED] "qm guest exec" problem

    I have a kinda similar problem. I did get this to work now using the referenced "separate all arguments" approach, but I still have a problem with dashes. This works: root@phoenix:~# qm guest exec 1015 "df" { "exitcode" : 0, "exited" : 1, "out-data" : "Filesystem 1K-blocks...
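
    A sketch of the pattern that usually gets leading-dash arguments through, assuming the "--" end-of-options separator is honored by this qm version (fstrim is just an example command):

        qm guest exec 1015 df             # plain command: works
        qm guest exec 1015 -- fstrim -av  # "--" stops qm from parsing "-av" as its own option
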
  10. hibernate a VM via qm command

    The command is blocking, meaning it will not return before it's complete. If you have a lot of RAM (100+ GB etc.) you can actually watch it decrease in the Proxmox web UI under Status. Depending on the machine, the flushing can run at 5-30 GB/s.
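
    A sketch of watching that flush from a second shell while the blocking call runs (VMID 100 is a placeholder):

        qm suspend 100 --todisk 1         # terminal 1: blocks until the RAM state is written
        watch -n 2 qm status 100          # terminal 2: poll the state meanwhile
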
  11. [SOLVED] Properly flush cluster settings to recreate a new one

    Don't be so aggressive. Did you also follow user micro's advice? My edit was only complementary to his solution. And there is storage removal in the Proxmox UI.
  12. Node with question mark

    Happened to me once again. pvestatd should really be a multithreaded service so that it would not bog itself down if one of the metrics is not responding. In my case I saw that the vgs command hung for some reason.
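
    One quick way to check for that failure mode (the 10-second timeout is arbitrary):

        timeout 10 vgs                    # hangs past the timeout if LVM scanning is stuck
        strace -f -p "$(pidof pvestatd)"  # see which call pvestatd is blocked on
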
  13. hibernate a VM via qm command

    Yes, I tested that out also and got an extra file in my ZFS pool. It also seems it will dump only the needed memory, and the size of the "hibernate" file depends on the current RAM usage. So for faster standby/resume, echo 3 > /proc/sys/vm/drop_caches is one way to go. I use it also to make migrations faster...
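
    A sketch combining the two ideas, assuming the QEMU guest agent is available (VMID 100 is a placeholder):

        # drop the guest page cache first so the vmstate dump stays small
        qm guest exec 100 -- sh -c 'sync; echo 3 > /proc/sys/vm/drop_caches'
        qm suspend 100 --todisk 1
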
  14. hibernate a VM via qm command

    For me the confusing part is where the hibernation data is kept. We have some guest machines with 250 GB+ of RAM, so where will all of this be pushed?
  15. ZFS shared storage over FC

    Well... if you create a (local) ZFS storage with the same name on both cluster nodes, it will do a ZFS transfer from one node to the other and you still get live migration. This works with local storages (or shared storage that presents separate LUNs to each cluster node). But this does not unfortunately...
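
    A sketch of such a same-named ZFS storage restricted to both nodes (storage and pool names are placeholders; in a cluster the storage definition is shared, so one command suffices):

        pvesm add zfspool vmdata --pool tank/vmdata --content images,rootdir --nodes node1,node2
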
  16. GTX 980 Ti passthrough, error 43. Can't find any updated resources for solving this.

    Nvidia is blocking it, yes. But for me it seemed that every time somebody came up with a new trick, Nvidia detected it. So you could either use an older card with an older driver version (from before the patch) or not get it to work at all. I did not get this to work with the latest drivers at the time (~1.5 years ago). What driver...
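
    For reference, the commonly cited trick at the time was hiding the hypervisor from the driver in the VM config, e.g. (per the Proxmox passthrough docs; no guarantee against later driver checks):

        # /etc/pve/qemu-server/<vmid>.conf
        cpu: host,hidden=1
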
  17. dist-upgrade trying to uninstall proxmox-ve

    Thanks Helmo. I just stumbled on this too and could not figure it out. Removing the kernels with your command was a good fix. I always remove stock kernels after the original install, but this original install was not done by me, so I hadn't bumped into this before.
  18. [SOLVED] Properly flush cluster settings to recreate a new one

    *This worked.* But I did not mess around with editing the sqlite file; I just removed the whole sqlite DB: rm -rf /var/lib/pve-cluster/* It gets recreated after a reboot. Proxmox v6.0-9.
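
    A sketch of that reset with the cluster services stopped first (this wipes the node's cluster config, so only run it on a node you intend to reset):

        systemctl stop pve-cluster corosync
        rm -rf /var/lib/pve-cluster/*     # the config db; recreated on reboot
        reboot
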
  19. PXVE, Nginx & Websocket proxy

    Is this still broken in the v6 version of Proxmox?
  20. How do I license MS Server 2016 standard and assign cores?

    https://www.altaro.com/hyper-v-backup/webinars/demystifying-windows-server-2016-licensing.php It seems that an HT thread is not a core in Microsoft's terms, so you don't have to license HT cores.