Search results

  1. Cross-cluster Migration: Leaves VM on Old Host powered Off but still "Migrate Locked"

Sounds like it's a good safety net! It makes sense to have it that way. I guess I interpreted that icon as a VM in-flight; it was a first-time experience, so now I know the behaviour is normal. :)
  2. Cross-cluster Migration: Leaves VM on Old Host powered Off but still "Migrate Locked"

    Thanks for that, so it's intentional behaviour for it to leave the VM on the old-node in a locked state?
  3. [SOLVED] Migration between two clusters - no matching format found

    Proxmox is how open-source should be - focused on continuous improvement. Thank you muchly. :)
  4. Fingerprint error

Thank you for this, I think we're good then. The update-fingerprints run went through OK on both hosts after clearing known hosts and making an initial connection from each.
  5. Fingerprint error

    Either cloud-init or provisioning scripts not being fully disabled/deleted. Will poke around to figure out why it regenerated. Thanks for that. Will sort the extra root cause then focus back on getting this working well. :) Should it need cron-jobbing, or should the SSL/API fingerprints...
  6. Cross-cluster Migration: Leaves VM on Old Host powered Off but still "Migrate Locked"

Hi there, Just checking if this is expected behaviour. I know there is --delete, which removes the source VM on successful completion; the docs imply that without it, the VM should simply be stopped. So with the job finishing successfully as below, it seems odd that they are still locked with the paper plane...
  7. [SOLVED] Migration between two clusters - no matching format found

    Hmm, another qm remote-migrate question. We are getting the below response when trying to live-migrate, however the thin-pool and VG are both named as such on the gaining node. remote: storage 'raid10ssd' does not exist! Any idea why? We added a new storage pool, then created a Thin/Logical...
  8. Migrate VM to different host

    This is now possible via qm remote-migrate as of Proxmox VE 7.3:
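The items above refer to the qm remote-migrate CLI. A hedged sketch of an invocation follows; the VMIDs, endpoint host, token name, and fingerprint are placeholders for illustration, not values from any of these threads:

```
# Sketch only -- VMIDs, host, API token name/secret and fingerprint are placeholders.
qm remote-migrate 100 100 \
  'apitoken=PVEAPIToken=root@pam!migrate=<token-secret>,host=203.0.113.10,fingerprint=<target-cert-fingerprint>' \
  --target-bridge vmbr0 \
  --target-storage local-lvm \
  --online
```

Per items 1, 2, and 6 above, without --delete the source VM is left powered off and migrate-locked on the old node after a successful run.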
  9. Fingerprint error

Host key verification failed. So more than fingerprints need updating in order to retain connectivity? What needs updating? And will running update-fingerprints on a cron then be sufficient to avoid comms breakage? Otherwise it tends to lose connection to other members every month or two...
  10. IPv6 - First-time configuration

    How do you accomplish this? Just reading this and below, trying to figure out v6.
  11. Fingerprint error

    We also see the error when trying to avoid the cluster losing sync with members. root@pmg-node1:~# pmgcm update-fingerprints 500 update fingerprints failed: unable to get remote node fingerprint from 'pmg-node2': command 'ssh -l root -o 'BatchMode=yes' -o 'HostKeyAlias=pmg-node2'...
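If the failure in item 11 stems from a stale SSH host key (as the "Host key verification failed" message in item 9 suggests), one possible recovery is sketched below; the hostnames are taken from the snippet, and the exact steps are an assumption about the environment:

```
# On pmg-node1: drop the stale key for pmg-node2, re-accept the current one,
# then retry the fingerprint sync.
ssh-keygen -R pmg-node2
ssh -o HostKeyAlias=pmg-node2 root@pmg-node2 true   # verify and accept the new host key
pmgcm update-fingerprints
```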
  12. (LIR IPv6 /32 into 2x Routers) Desire: /48-per-Type, Routing to /64-per-VM & /128-per-Domain?

    Hi there, This is a bit of an edge case, though being so short on time it's taking me a while to understand the inner workings of IPv6. There's a reason they say "throw out your understanding of v4 as it has no bearing with v6", holy moly are they right! It's great but needs to be understood...
  13. Proxmox VE 7.4 released!

    All of us lovers of darkness thank you endlessly for this - THANK YOU!! :D
  14. 1x NIC to 2x NICs (Keep VMs/WAN on 1st & Move PVE/Corosync/etc to 2nd)

    We now have an active-backup bond, and a LACP bond. The former is fibre-first with LACP on fibre-fail; the latter is the LAG for 2x 1G RJ45. So the copper LAG bond is utilised as the backup component of the fibre bond and, as below, it doubles as the full-time PVE interface for cluster syncing...
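The layered-bond layout described in item 14 could look roughly like the following /etc/network/interfaces fragment; interface names and option values here are assumptions for illustration, not the poster's actual configuration:

```
# Sketch: copper LAG (802.3ad) nested as the backup leg of an active-backup bond.
auto bond0
iface bond0 inet manual
    bond-slaves eno3 eno4              # 2x 1G RJ45 copper LAG
    bond-mode 802.3ad
    bond-miimon 100

auto bond1
iface bond1 inet manual
    bond-slaves ens1f0 bond0           # fibre first, copper LAG as fallback
    bond-mode active-backup
    bond-primary ens1f0
    bond-miimon 100
```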
  15. Best way to migrate between clusters?

Good news! This is getting closer now - noticed this in the PVE 7.3 release notes: You can run it (experimentally) via CLI already. Hallelujah! See the man page for qm remote-migrate:
  16. [SOLVED] pve LACP with HP Switch

    Having done some research into this today, the L3+4 configuration isn't fully RFC-compliant as data can arrive out-of-order. By default, the HP switch configurations (where they can handle a hash policy at L4, but are not configured to) will use L3, with fallback to L2. So the...
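The trade-off described in item 16 maps onto the bond's transmit hash policy on the Linux side. A sketch (interface names assumed):

```
auto bond0
iface bond0 inet manual
    bond-slaves eno1 eno2
    bond-mode 802.3ad
    # layer3+4 spreads flows across links better, but can reorder packets
    # (e.g. fragmented IP), so it is not strictly 802.3ad-conformant;
    # layer2+3 preserves per-conversation ordering, matching a switch
    # that hashes at L3 with fallback to L2.
    bond-xmit-hash-policy layer2+3
    bond-miimon 100
```

Both ends of the LAG should use compatible hashing; mismatched policies still work but can skew the load distribution.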
  17. Official DarkMode

    No way, this is Proxmox we're talking about - @Dominic and the crew will have plenty of backups!
  18. Train Spam/Ham PMG

    This makes plenty of sense. SpamExperts (back when they had decent owners) had an Outlook plug-in, and a mailbox you could forward to (which understood forwards, thus only interpreted the data that mattered). PMG is effective though it does feel like it's tricky to get up & off the ground. I...
  19. Tracking center no successful delivered logs

    Per the reply from Stoiko within the bug, interest needs to be shown towards the bug for it to be prioritised at all. If you comment on Bugzilla then it will help the case in terms of having it be catered for. Honestly though, I'd say 1-2 years given the low levels of interest.
  20. Proxmox does not use nor see all memory

    Just want to mention that we had ECC RAM showing up in dmidecode all well and good, but not showing in Proxmox (nor free etc). Turned out to indeed be the memory. 2 of the 4 new sticks in the problem machine were duds even though they appeared OK. Replaced & all good.
