Search results

  1. [SOLVED] LVM (over iscsi) volume stuck and cannot be created

    I managed to move the VM to another node and move the disk to the SAN. Then I could move the VM to a third node. But I still get this error when I try to move it back to the original node where I encountered the error. So that server is the only one with these issues.
  2. [SOLVED] LVM (over iscsi) volume stuck and cannot be created

    Hi, still getting the same error. I moved the drive to another storage for now. The issue seems to be related to iSCSI having something stuck somewhere. pveversion (we use the enterprise repos): # pveversion -v proxmox-ve: 6.1-2 (running kernel: 5.3.13-3-pve) pve-manager: 6.1-7 (running version...
  3. [SOLVED] LVM (over iscsi) volume stuck and cannot be created

    We have a 3-node cluster and a SAN that is connected via iSCSI (using LVM over iSCSI). I had to do some maintenance and move machines around, and I noticed that one specific machine could not be migrated between nodes because of errors related to its storage. Initially it gave me some errors...
  4. [SOLVED] Certain VMs from a cluster cannot be backed up and managed

    Well, in this case it most likely will make no difference. Just shut down the VM and start it up again when possible. Edit: I see the issue is lock-related; you could try "qm unlock VMID".
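    The `qm unlock` suggestion above can be sketched as follows; the VMID (100) is a placeholder, and `qm` only exists on a Proxmox VE node:

    ```shell
    # Hypothetical VMID -- replace with the ID of the stuck VM.
    VMID=100

    # Show whether a lock is currently set (the 'lock:' line only
    # appears in the config output while a lock is held):
    qm config "$VMID" | grep '^lock:' || echo "no lock set"

    # Clear a stale lock so backup/shutdown/migration work again:
    qm unlock "$VMID"
    ```

    Clearing a lock is safe only once you are sure no backup or migration job is actually still running against that VM.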
  5. [SOLVED] Certain VMs from a cluster cannot be backed up and managed

    Well, I just checked and we are still on QEMU 4.0.0-5... So I cannot tell you anything for sure. In these cases we just shut down the machines and started them up again. But the issue had pretty much gone away after we moved the cluster traffic to a separate dedicated network, without any package update.
  6. Split temporarily cluster in 2 clusters due to network instability

    1. You would have to reinstall them, ideally. Strictly speaking, it is possible to remove and re-add them, but you would have to make really sure stuff is:
    - cleaned thoroughly from the distributed cluster config
    - not stuck in the cluster config
    - not cached on the node.
    I don't know for sure but...
  7. Split temporarily cluster in 2 clusters due to network instability

    It seems to me that the new corosync using unicast is more finicky than the old one that used multicast. We have a 3-node cluster and we had corosync-related issues after upgrading to PVE 6. What we did is split the management network (4x 1 Gbit links) into two 2x 1 Gbit bundles, one for management and...
  8. Access proxmox gui with domain name instead of local ip and port

    You can use an LE certificate internally too. The browser doesn't care how the DNS name was resolved or what IP it points to (so you can use the hosts file for this too; you don't even need an internal DNS server to handle it). But for generating the certificate you need an internet connection and DNS...
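    A minimal sketch of the hosts-file approach described above; the hostname `pve.example.com` and the IP address are made-up placeholders:

    ```shell
    # Map the GUI hostname to the node's internal IP locally,
    # so no internal DNS server is required (placeholder values):
    echo '192.168.1.10 pve.example.com' | sudo tee -a /etc/hosts

    # The browser now resolves pve.example.com to the internal IP,
    # and a Let's Encrypt certificate issued for that name is
    # accepted as usual when opening https://pve.example.com:8006
    ```

    The certificate itself must still be issued for the public DNS name; only the resolution of that name is overridden locally.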
  9. [SOLVED] Certain VMs from a cluster cannot be backed up and managed

    OK, good. I marked this as solved. Interestingly, after we separated the cluster network from the management network we did not have any issues like this anymore. This happened the day I submitted this thread, or a day after; since then I have seen no problematic VMs and we did not update...
  10. [SOLVED] Certain VMs from a cluster cannot be backed up and managed

    I know what the issue was; I started this thread... I tried to ask the Proxmox staff for details about what caused this issue, since they said they fixed it.
  11. [SOLVED] Certain VMs from a cluster cannot be backed up and managed

    When we had these issues, the QEMU guest agent was enabled and in use on most of the problematic VMs.
  12. [SOLVED] Certain VMs from a cluster cannot be backed up and managed

    Maybe it is worth noting that after we moved our cluster traffic to a dedicated network we had no more such issues.
  13. [SOLVED] Certain VMs from a cluster cannot be backed up and managed

    On the VMs that created issues for us we did not have ballooning enabled at all (we don't use ballooning anywhere). So this is not a factor.
  14. [SOLVED] Certain VMs from a cluster cannot be backed up and managed

    VMs I had issues with did react to normal shutdown commands from the GUI or CLI. Edit: Alternatively, you can log into the VM and issue the shutdown command.
  15. Scaling beyond single server. Suggestion wanted.

    1. If a node fails, it means you lost connection to it for whatever reason. This makes LIVE migration of any kind impossible, because live migration needs both source and target servers running and communicating. HA is handled differently: on Proxmox you will have VMs respawning on live nodes after...
  16. [SOLVED] PVE 5.4-11 + Corosync 3.x: major issues

    We had this issue with a disintegrating cluster right after upgrading to PVE 6. It also seemed to be related to things happening in other parts of the network (we had a few VLANs transported over the management links that span the whole network). But PVE 5 did not have this issue even with...
  17. Does proxmox's kernel have support for Epyc Rome? (7002)

    Off-topic: we had some weird cluster crashes until we moved corosync to its own isolated VLAN. Rock stable since.
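    Several of the threads above converge on the same fix: giving corosync its own network. As a sketch only, a nodelist in /etc/pve/corosync.conf with each node's ring0_addr on a dedicated VLAN could look like this; the 10.10.10.0/24 subnet and the node names are assumptions, not values from the threads:

    ```
    nodelist {
      node {
        name: node1
        nodeid: 1
        quorum_votes: 1
        ring0_addr: 10.10.10.1
      }
      node {
        name: node2
        nodeid: 2
        quorum_votes: 1
        ring0_addr: 10.10.10.2
      }
      node {
        name: node3
        nodeid: 3
        quorum_votes: 1
        ring0_addr: 10.10.10.3
      }
    }
    ```

    The addresses on the dedicated VLAN are used only for cluster traffic, keeping corosync latency independent of load on the management links.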