Search results

  1. PM 6.2 KVM Live migration failed (bug or ?)

    Live migration of a VM with two disks failed, and the VM also died on the source side. Offline migration (as it was dead anyway) worked, and the VM recovered afterwards. I'm attaching the live migration log. Should I report a bug or..? Proxmox Virtual Environment 6.2-12 Virtual Machine 142 (XYZ) on node 'p37' Logs ()...
  2. HA cluster on 2 servers with ZFS

    Ask yourself: do you have quorum in a 2-node cluster when one node dies?
  3. KVM live migration on VM with lots of dirty RAM pages (takes forever)

    Hi guys, thanks for the replies and suggestions. Later I noticed that dirty blocks are also on disk, not just in RAM, and gave up, as the 1 Gbps link would always be too slow. FYI, some of the VMs actually migrated after a day or two of migrating. :-) But I was notified that live / online migration...
  4. [SOLVED] Upgrade from 6.0 to 6.3, CPU usage increased by 100% :-)

    Hi guys, recently I upgraded a two-node cluster from 6.0 to 6.3 and expanded it to four nodes. I migrated VMs (some online, some offline) to the new nodes. The new nodes have the same series (E5 v2) CPUs, just around 1/5 faster per core and with more cores. The thing I notice is that CPU usage (with the same...
  5. ZFS and NVMe

    I use NVMe drives just like any other SAS or SATA drives in my ZFS pools.
  6. KVM live migration on VM with lots of dirty RAM pages (takes forever)

    Hi guys, I have a situation where the migration link is slower than the VM's RAM, which is constantly being changed. So when doing an online migration from 6.0 to 6.2, it never completes, it just syncs RAM forever and ever. Does anyone have any ideas for online migration in this case? The VM disk is on a ZVOL.
  7. Stop Replication

    Well, I had created a replication job which I wanted to stop, and it would have taken forever if I had not killed it manually. It would be nice to have a GUI option to kill a running replication process. Pve-zsync does not have a GUI, so there is no need to implement a GUI option to stop it. Would you...
  8. How to regenerate /dev/zvol ZFS missing links? [SOLVED as there are no missing links]

    Oh, you are correct. I missed that. There are no missing links then. Thanks for pointing it out. :-)
  9. Automatically set the Proxmox server on and off

    There are many options, from the simplest one of buying a power switch from your local hardware store and using a cronjob to shut down cleanly, to using wake-on-LAN together with a cronjob to shut down, etc.
  10. How to regenerate /dev/zvol ZFS missing links? [SOLVED as there are no missing links]

    As the title states, how do I regenerate missing /dev/zvol ZFS links (without a reboot)? Back story: I added two new nodes (6.2-15/48bd51b6) to a 6.0-7/28984024 cluster. I wanted to live-migrate VMs to them and then upgrade the old nodes. There were quite a few issues which I worked around to make live...
  11. Configure Proxmox to allow for 2 minutes of shared storage downtime?

    Well, can't you test by failing over a test NFS share? Or use a test device? Upgrading HA storage controllers should not cause downtime for clients. Unless it is not HA. NFS state and everything should be failed over.
  12. ZFS Migration w/o Cluster

    There might be multiple versions of the config files on the destination, one for each snapshot. You probably used the wrong one. pve-zsync works perfectly every time; you should not need to do anything special. Just read the instructions and use it accordingly. If you have problems, come and ask. Also you can run...
  13. Stop Replication

    I guess this was never implemented? Should I open a feature request?
  14. ZFS Migration w/o Cluster

    What you describe is the pve-zsync command-line utility, which I often use to move VMs across clusters. https://pve.proxmox.com/wiki/PVE-zsync
  15. "login failed, please try again" using known good credentials in the webui

    OK, let's just cover the basics first. Which Realm is selected?
  16. Migrating Proxmox (host config itself)

    Hi, if you do not touch the RAID array or the LVM data that resides on it, you will be able to access it with the new install. It might even work if you just bring over storage.cfg, or whichever file has the storages defined. In any case, you can activate the volume group (vgchange -a y) and add it manually. I never used...
  17. Cannot migrate "directory based storage" LXC to another cluster node

    I think you should have a GUI option if you right-click an LXC, and there is also a command-line one: root@leona:~# pct migrate 400 not enough arguments pct migrate <vmid> <target> [OPTIONS]
  18. Migrating to PVE from overkill linux-ha cluster - HW spec / storage question

    I didn't read your whole post, but I felt sorry for you, because I do not think people will want to do your work for you (you are employed to know these things :-), so here are my quick answers. I would always set up a cluster with ZFS and replication on enterprise SSD drives, with NVMe as log and / or special...
  19. Cannot migrate "directory based storage" LXC to another cluster node

    LXC does not support live migration, so PM does not either (CRIU does not work yet). Use KVM VMs if you need live migration.
  20. Migration issue - storage 'zfs1-vps1' is not available on node

    Using pve-zsync is not a workaround; it is a standard CLI tool. But I get what you mean. I suggest you open a feature request and sponsor it.
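
Several of the results above recommend pve-zsync for replicating or moving VMs across nodes and clusters. As a rough sketch of a typical workflow (the VMID, destination host, and ZFS dataset below are made-up placeholders, not taken from any of the posts), the snippet only echoes the commands as a dry run, since pve-zsync itself exists only on a Proxmox host:

```shell
#!/bin/sh
# Dry-run sketch of a hypothetical pve-zsync workflow. The VMID, host
# and dataset are placeholder values. Commands are echoed, not executed.
VMID=100                          # VM to replicate
DEST="192.168.15.1:tank/backup"   # target host and ZFS dataset

# Recurring sync job (on a real node this installs a cron entry):
echo "pve-zsync create --source $VMID --dest $DEST --verbose --maxsnap 7"

# One-off manual sync:
echo "pve-zsync sync --source $VMID --dest $DEST --verbose"
```

On an actual node you would drop the echo wrappers; `--maxsnap` caps how many replication snapshots are kept on the destination.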