Search results

  1. [SOLVED] ZFS partitions get rebuilt/activated by mdadm

    I have a problem where mdadm auto-activates ZFS volumes from VMs. I would not like mdadm to touch anything on ZFS volumes. I tried to search for it, since it's more of a Debian problem, but could not find a solution. Probably a filter when loading the mdadm module? root@elite:~# cat /proc/mdstat...
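
    One way to get that filter is in mdadm's config rather than at module load. A minimal sketch, assuming the stock Debian config path and that no real md arrays are wanted on the host:

      # /etc/mdadm/mdadm.conf
      AUTO -all        # disable auto-assembly for all metadata types

    followed by update-initramfs -u so the rule also applies at early boot.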
  2. ISO Upload Hangs pveproxy.service (>2GB)

    It's probably somewhat off-topic, but why are we uploading files to /tmp and copying them over to some storage later, instead of just copying directly to the storage? I have hung Proxmox many times because of a big image file and not enough room in the /tmp folder. Also after uploading it will take...
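
    One workaround is to bypass the upload form and copy the image straight into the storage's ISO directory. A sketch, assuming the default "local" storage layout (file and host names are placeholders):

      scp debian-12.iso root@pve:/var/lib/vz/template/iso/

    The ISO then shows up in the GUI without ever touching /tmp.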
  3. [SOLVED] Can I move VMs from Intel to AMD?

    Live migrating between a Xeon(R) CPU D-1541 and an AMD EPYC 7401 will randomly crash a CentOS 8 guest OS. Funnily enough, it will not crash straight away; it just freezes the guest a few minutes after the migration has ended. A reset of the guest (or stop-start) will reboot and fix it. It won't crash OpenWrt 21.02.
  4. Ceph on Raid0

    AFAIK the LEDs only blink (activity) and will not show "dead" disks, because it's now an HBA and does not monitor your disk statuses. You might still be able to call identify, but that's about it. Do you get LED action at all, even for disk read/write? I was just about to hit reply, then found this...
  5. Starting VM with RDP

    It depends how you have this set up. If it's in a private network, then I would make a simple TCP listener that uses the Proxmox API to check whether the machine is already alive; if not, it would trigger the guest startup using the PVE API. If the machine is already in the running state, then it would just pass the TCP...
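
    A rough sketch of the check-then-start step, using the qm CLI on the node rather than the raw API (VMID 100 is a placeholder):

      # start the guest only if it is not already running
      if ! qm status 100 | grep -q running; then
          qm start 100
      fi

    The TCP listener would wrap this and only forward the RDP connection once the guest responds.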
  6. Easily remove old/unused PVE kernels on your Proxmox VE system (PVE Kernel Cleaner)

    Did not work on one of the boxes; it keeps looping around and not removing anything. Worked on my other nodes. Boot disk space used is critically full at 84% capacity (729M free) [-] Searching for old PVE kernels on your system... "pve-kernel-5.3.18-3-pve" has been added to the kernel remove...
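
    When the tool loops like that, the same cleanup can be done by hand with apt, e.g. for the kernel named in the output above:

      apt purge pve-kernel-5.3.18-3-pve   # remove the old kernel package
      update-grub                         # refresh the boot menu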
  7. Ceph on Raid0

    It seems this is the guide: https://fohdeesha.com/docs/perc/ There also seems to be video info: https://www.youtube.com/watch?v=J82s_WYv3nU I have not done this on Dell cards, but have done it on Supermicro cards, which are the same family of LSI chips. All that said - Supermicro cards are just easy to...
  8. Ceph on Raid0

    For the more adventurous - H710 Minis can be flashed to IT mode, and it seems it is not too hard to achieve.
  9. PXVE, Nginx & Websocket proxy

    I would assume the problem is that there is a hash for port 8006 and a hash for port 3128, but they work independently, so having a sticky session on port 8006 to one server could give you a SPICE sticky session to another. Have you tried having just one IP in the upstream for ports 3128 and 8006 to see if it works?
  10. dist-upgrade trying to uninstall proxmox-ve

    Just install proxmox-ve back in there if it still runs. Otherwise, just back up the /var/lib/vz folder, make a clean install, and restore it to the fresh install. AFAIK this is the only folder needed to retain information about the Proxmox environment. Also copying over the hosts file would be great, so...
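
    A sketch of that backup-and-restore path (the archive name is a placeholder):

      tar czf vz-backup.tar.gz /var/lib/vz /etc/hosts   # before the reinstall
      # ...clean install...
      tar xzf vz-backup.tar.gz -C /                     # restore on the fresh node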
  11. PXVE, Nginx & Websocket proxy

    Confirming that vrang->TJN combined with OP hades666's solution now works on Proxmox 6.1-11, proxied over nginx 1.17.7 on OpenWrt 19.07.2. If you are using the multiple-backend solution as hades666 proposes, then either weight=xxx should be added to the nodes or, even better, the ip_hash; parameter, so...
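
    A sketch of the multi-backend upstream with that stickiness (host names are placeholders):

      upstream proxmox {
          ip_hash;                      # pin each client to one backend
          server pve1.example.com:8006;
          server pve2.example.com:8006;
      }

    With ip_hash, the websocket and SPICE follow-up connections from one client keep landing on the same node.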
  12. [SOLVED] "qm guest exec" problem

    Haha, cool. This was exactly the problem - I wanted to send a mass trim to all my guests and "fstrim -av" failed. I just used the "df" example since it's more reproducible. Thanks.
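
    For reference, a sketch of that mass-trim loop (assumes the QEMU guest agent is running in every guest):

      # run fstrim inside every running VM via the guest agent
      for vmid in $(qm list | awk '/running/ {print $1}'); do
          qm guest exec "$vmid" -- fstrim -av
      done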
  13. Proxmox for Raspberry Pi 4 with Ceph

    I'm new to Debian package building, but are there "srpm"-style packages available for Debian, so that packages can simply be rebuilt without re-making all the definitions needed for the Debian package system?
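
    Debian's analogue is the source package, which can be rebuilt without redoing the packaging. A sketch (the package name is a placeholder, and deb-src entries must be enabled in sources.list):

      apt-get source somepackage        # fetch .dsc, upstream tarball and debian/ dir
      cd somepackage-*/
      dpkg-buildpackage -us -uc -b      # rebuild the binary packages unsigned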
  14. [SOLVED] "qm guest exec" problem

    I have a kind of similar problem. I did get this to work now using the referenced "separate all arguments" approach, but I still have a problem with dashes. This works: root@phoenix:~# qm guest exec 1015 "df" { "exitcode" : 0, "exited" : 1, "out-data" : "Filesystem 1K-blocks...
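
    For the dash problem, terminating qm's own option parsing with "--" before the guest command is one approach that should help (same VMID as above):

      # "--" stops qm from treating -h as its own option
      qm guest exec 1015 -- df -h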
  15. hibernate a VM via qm command

    The command is blocking, meaning it will not return before it's complete. If you have a lot of RAM (100+ GB etc.), then you can actually watch it decrease in the Proxmox web UI under Status. Depending on the machine, the flushing can occur at rates of 5-30 GB/s.
  16. [SOLVED] Properly flush cluster settings to recreate a new one

    Don't be so aggressive. Did you also follow user micro's advice? My edit was only complementary to his solution. And there is storage removal in the Proxmox UI.
  17. Node with question mark

    Happened to me once again. pvestatd should really be a multithreaded service, so that it would not bog itself down if one of the metrics is not responding. In my case I saw that the vgs command hung for some reason.
  18. hibernate a VM via qm command

    Yes, I tested that out also. Got an extra file in my ZFS pool. Also it seems it will dump only the needed memory, and the size of the "hibernate" file depends on the current RAM usage. So for faster standby/resume, echo 3 >/proc/sys/vm/drop_caches is one way to go. I use it also to make migrations faster...
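
    As a sequence, a sketch of that faster hibernate (VMID 100 is a placeholder; requires the guest agent):

      # shrink the guest's page cache first, then suspend to disk
      qm guest exec 100 -- sh -c 'sync; echo 3 > /proc/sys/vm/drop_caches'
      qm suspend 100 --todisk 1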
  19. hibernate a VM via qm command

    For me the confusing part is: where is the hibernation data kept? We've got some guest machines with 250GB+ RAM, so where will all of this be pushed?
  20. ZFS shared storage over FC

    Well... if you create a (local) ZFS storage with the same name on both cluster nodes, it will do a ZFS transfer from one node to the other and you still have live migration. This works with local storages (or shared storage which presents separate LUNs for each cluster node). But this unfortunately does not...
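
    A sketch of the matching definition in /etc/pve/storage.cfg (pool and node names are placeholders):

      zfspool: vmdata
              pool tank/vmdata
              content images,rootdir
              nodes node1,node2
              sparse 1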