We have the same issue with approximately 800 ACLs.
pveum acl list --output-format yaml | grep path | wc -l
When we check the time for ticket and user we get those values:
#time pvesh create /access/ticket
#time pveum user permissions...
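For comparison, this is roughly how we measure it; the JSON output format and the user ID are just examples, any format and any existing user work the same way:

```shell
# Number of ACL entries (same count as the yaml | grep | wc pipeline above):
pveum acl list --output-format json-pretty | grep -c '"path"'
# Wall-clock time of a permission lookup for a single user:
time pveum user permissions user@pve
```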
Yesterday I was migrating a VM from one hypervisor to another, but then this error was shown:
2022-11-07 00:27:40 use dedicated network address for sending migration traffic ()
2022-11-07 00:27:41 starting migration of VM 3934 to node '' ()
2022-11-07 00:27:41 found local, replicated disk...
Sure, here is the full task log. I forgot to mention before: to make the VM work again, a reset is not enough; a VM stop followed by a VM start is needed.
2022-11-04 16:12:59 use dedicated network address for sending migration traffic ()
2022-11-04 16:12:59 starting migration of VM 5105 to node 'hv01'...
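For reference, the stop/start workaround from the CLI, using the VMID from the task log above as an example:

```shell
# A reset is not enough for the hung VM; a full stop/start cycle is needed:
qm stop 5105
qm start 5105
```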
Since upgrading to Proxmox VE 7, we see VMs hang after live migration on some hypervisors. The VM stops responding and we see CPU spikes. If we migrate a VM from hypervisor 1 to 2, nothing happens. But if we migrate it back from hypervisor 2 to 1, it crashes.
The hypervisor 1...
We are also seeing that a VM live-migrated from a server with kernel 5.15.39-1-pve stops working once it arrives on the other hypervisor. When migrating from servers with newer kernels, we do not see the same behavior.
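To confirm which kernel each side is actually running before and after migration, something like the following on each hypervisor helps narrow this down:

```shell
# Currently booted kernel on this node:
uname -r
# Installed PVE kernel packages (useful to compare both nodes):
pveversion -v | grep -i kernel
```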
When running pvesm status, it shows this:
data nfs active xxx xxx xxx xx%
It's not, but from what I can see in the logs, this issue happens on the server where we want to remove the snapshot. This goes wrong in most cases. Which command should we use instead of the qmp or...
It's a 400GB disk. The snapshot is made without RAM.
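For context, a RAM-less snapshot like that can be created and removed from the CLI as follows; the VMID and snapshot name here are examples, not taken from the thread:

```shell
# Create a snapshot without saving the RAM/vmstate ("without RAM"):
qm snapshot 100 pre-change --vmstate 0
# ...and remove it again:
qm delsnapshot 100 pre-change
```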
This is already done. The storage is added as a ZFS storage to the Proxmox cluster.
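For reference, registering a ZFS pool as cluster storage usually looks like this; the storage ID and dataset name are examples:

```shell
# Register an existing ZFS dataset as a 'zfspool' storage:
pvesm add zfspool tank-vm --pool tank/vmdata --content images,rootdir
# Verify it shows up:
pvesm status
```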
I'm aware of that, but what would your advice be when a snapshot needs to be created while using ZFS? You mentioned that we should not use the qmp...
We are currently running Proxmox VE 6.4-14 with ZFS. We frequently see VMs getting locked during snapshot deletion. When I check the VM task history, I see this:
TASK ERROR: VM <id> qmp command 'blockdev-snapshot-delete-internal-sync' failed - got timeout
Currently the only way to...
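When a snapshot-delete task times out like this and leaves the VM locked, the stale lock can usually be cleared manually; the VMID below is an example, and you should first check that no task is still actually running on the VM:

```shell
# Remove the stale lock left by the timed-out qmp command:
qm unlock 100
# Inspect the remaining snapshot tree:
qm listsnapshot 100
```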
Last week we had a kernel crash during a kernel update. We are running Proxmox VE 6.4-13 on UEFI with a ZFS root partition, but we are unable to capture the kernel crash dump. The steps we followed can be found below.
1. Install kdump-tools:
echo "kexec-tools kexec-tools/load_kexec boolean...
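For comparison, a typical kdump-tools setup on a Debian-based host looks like the following; the preseed value and the crashkernel size are common defaults, not a reconstruction of the truncated step above:

```shell
# Preseed kexec-tools so kdump-tools manages the crash kernel itself:
echo "kexec-tools kexec-tools/load_kexec boolean false" | debconf-set-selections
apt-get install -y kdump-tools
# With UEFI + ZFS root, the kernel cmdline lives in /etc/kernel/cmdline;
# append e.g. crashkernel=256M there, refresh the ESP, and reboot:
#   pve-efiboot-tool refresh
# After the reboot, verify the crash kernel is loaded:
kdump-config show
```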
We will start using NFS storage in our Proxmox environment. To use the snapshot feature, the qcow2 format is required. But when we create a snapshot, it is created in raw format; is this expected behavior? The snapshot is created through the Proxmox GUI.
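One way to verify is to inspect the disk image on the NFS share directly: snapshots of a qcow2 disk are stored inside the qcow2 file itself, so the format reported for the base disk is what matters. The path below is an example, not taken from the thread:

```shell
# Confirm the disk really is qcow2:
qemu-img info /mnt/pve/nfs-storage/images/100/vm-100-disk-0.qcow2
# List the internal snapshots stored in that qcow2 file:
qemu-img snapshot -l /mnt/pve/nfs-storage/images/100/vm-100-disk-0.qcow2
```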