Recent content by Lymond

  1. Shared Storage Backups slowing guests

    Well, not quite I guess. When the actual VM began backing up (rather than just being a VM on the node where other VMs were being backed up), IO wait spiked again. I'm reading that max-workers in /etc/vzdump.conf is configurable now. The default is 16, up from 1 in Proxmox 6. People in this...
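For reference, the vzdump.conf tuning the post describes might look like the sketch below, assuming the PVE 7 `performance` option syntax (the worker value of 4 is an illustrative choice, not the poster's; check the vzdump man page for your version):

```
# /etc/vzdump.conf -- node-wide backup defaults
# Lower the IO worker count if a guest's own backup saturates shared
# storage; 16 is the PVE 7 default mentioned above, up from 1 in PVE 6.
performance: max-workers=4
```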
  2. Shared Storage Backups slowing guests

    Xrobau, thanks for the tips. After upping the default nfsd processes from 8 to 1024, things are looking very good. VM backups kicked off and our little email server hasn’t seen any extra IO wait. I’ll wait for the backups to finish and review the IO and rw graphs for the night but I’m...
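The nfsd thread-count change the post refers to might look like this on a Debian-based NFS server (a sketch only; file locations and knob names vary by distro and release, and the post doesn't show the poster's exact config):

```
# /etc/default/nfs-kernel-server (older Debian) -- number of nfsd threads
RPCNFSDCOUNT=1024

# On releases that use /etc/nfs.conf instead:
# [nfsd]
# threads=1024
```

On a running server the thread count can usually be resized without a restart via `rpc.nfsd 1024`.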
  3. Shared Storage Backups slowing guests

    Dunuin -- thanks for the reply. We're going to start by upping the number of nfsd processes. Our storage *should* be fast enough to handle the traffic in terms of what disks we're using -- whenever we load up the storage IO with something not NFS, the VMs don't blink. The info about...
  4. Backup job is stuck and I cannot stop it or even kill it

    Just had this happen. Ideally you run vzdump -stop, but that didn't stop it for me. Neither did kill, at least not immediately. kill -9 <process id> worked, but it took about 2 minutes for the backup to die.
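The escalation path from the post can be sketched as below, demonstrated on a stand-in long-running process (a plain `sleep`) rather than a real vzdump run. On a real node you would find the PID first, e.g. with `pgrep -f vzdump`:

```shell
# Stand-in for a stuck backup process:
sleep 300 &
pid=$!

kill "$pid" 2>/dev/null || true    # polite SIGTERM first; a stuck vzdump may ignore it
sleep 1
kill -9 "$pid" 2>/dev/null || true # SIGKILL as the last resort (per the post,
                                   # the backup may still take minutes to die)
wait "$pid" 2>/dev/null || true    # reap the child so the PID is fully gone
```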
  5. Shared Storage Backups slowing guests

    I realize this topic has possibly been discussed to death. We had our setup working well until the upgrade from 6.4 to 7.3. Things are less good now. We have 5 front-end nodes connected via 40 Gb/s InfiniBand to a ZFS file server. VM storage is mounted via NFS on each node and is referred...
  6. [SOLVED] Proxmox hangs at machine-id check during upgrade from 6 to 7

    Solution: once we booted all the cluster nodes on the new fiber switch and all were talking over InfiniBand, we retried dpkg --configure -a and it completed in 2 seconds. After that, all the upgrades to the other VM nodes went perfectly. So I think if we'd not tested the new InfiniBand card...
  7. [SOLVED] Proxmox hangs at machine-id check during upgrade from 6 to 7

    We are upgrading a single node of our cluster from 6.4 to 7.3. We run a Debian install with Proxmox on top. Our two test systems went fine, but this latest system, with a Supermicro X9DRW-iF motherboard, hung at 99%. The upgrade ran for about 45 minutes, then stopped at 99% for an hour...
  8. Live Migration and Versions

    This turned out to be our fault. We'd made a change in Puppet to restrict users from signing in via SSH -- turning off TCP forwarding -- which breaks live migration. We've reverted this and now things work. The clue was packet 92, which points specifically to SSH not working, which...
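The fix described amounts to re-enabling TCP forwarding in sshd_config on the cluster nodes (a sketch; the actual Puppet change isn't shown in the post):

```
# /etc/ssh/sshd_config -- Proxmox live migration tunnels traffic between
# nodes over SSH, so forwarding must remain enabled on cluster members
AllowTcpForwarding yes
```

Remember to reload the SSH daemon after the change, e.g. `systemctl reload ssh` on Debian.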
  9. Live Migration Failing

    I'm apologizing to the Proxmox admins first -- they were very quick to respond to my original post on this topic but I wasn't ready to do the testing they'd asked for. Now that I have the logs they've asked for, I'm hoping I can nudge a response. My forum post is below and I'd appreciate any...
  10. Live Migration and Versions

    Updating entries: live migrations are still failing. We have 6 front-end nodes (vm5-vm10) with shared backend storage over NFS (zstor3). Attached are pveversion -v output from the transfer nodes (vm8 sending vmID 120 to vm9) as well as verbose logs from each node.
  11. Live Migration and Versions

    Finally getting back to this. Included are the pveversions from both host nodes. We have shared storage over NFS to both. Systems all seem to be fine chatting over SSH on the backend network. We also have a longer trace file for the migration attempt I can send -- it's 8 MB so it won't...
  12. Live Migration and Versions

    We added a 6th node to our cluster and its version is one (or so) ahead of the rest. We hadn't live migrated anything in a while, but live migrations are now failing (they've always been fine). We're wondering whether, for live migrations to work between any two nodes in the cluster, ALL the nodes...
  13. Best way to clone a raw image

    You might consider using a LiveCD (mount it virtually and boot from it) of Clonezilla or some such. Clone the server as you would a physical server, then use the LiveCD to push the clone to a new VM. I believe Clonezilla (or Ghost or whatever) will only take the used space rather than the...
  14. error on using iscsi with cluster

    I ran into this as well. I'm not 100% sure what caused it, but it may have something to do with removing your iSCSI target before removing the VMs related to it (if created as LVMs on the target). I wrote this up for my own logs: You may receive this error when creating VM guests on an LVM...
  15. Migration of VMware image to KVM on LVM (raw)

    I realize there may be solutions above that work for some people, but I never got past the "Boot from Hard Disk" freeze after trying everything. Here's what finally got my system up. It's strange (essentially xcopying your Windows drive to a freshly formatted drive, then making the latter...
