We have the same issue with around 800 ACLs.
pveum acl list --output-format yaml | grep path | wc -l
798
When we time ticket creation and the user permission lookup, we get these values:
#time pvesh create /access/ticket
real 0m1.683s
user 0m1.505s
sys 0m0.169s
#time pveum user permissions...
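The ACL-counting step above can be sketched as a self-contained script. The two-entry YAML heredoc below is illustrative sample data standing in for real `pveum acl list --output-format yaml` output, which is only available on a Proxmox node; on a real node, pipe the pveum command itself instead.

```shell
#!/bin/sh
# Count ACL entries the same way as:
#   pveum acl list --output-format yaml | grep path | wc -l
# The heredoc is made-up sample data (two entries); swap it for the
# live command output on an actual cluster.
acl_yaml=$(cat <<'EOF'
- path: /vms/100
  roleid: PVEVMUser
  type: user
  ugid: alice@pve
- path: /vms/101
  roleid: PVEVMUser
  type: user
  ugid: bob@pve
EOF
)
count=$(printf '%s\n' "$acl_yaml" | grep -c '^- path:')
echo "ACL entries: $count"
```

Matching on `^- path:` rather than a bare `grep path` avoids accidentally counting other fields that happen to contain the word "path".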
Hello,
Yesterday I was migrating a VM from one hypervisor to another, but then this error was shown:
2022-11-07 00:27:40 use dedicated network address for sending migration traffic ()
2022-11-07 00:27:41 starting migration of VM 3934 to node '' ()
2022-11-07 00:27:41 found local, replicated disk...
In another ticket there was a reference to these two reports:
https://bugzilla.proxmox.com/show_bug.cgi?id=4073
https://bugzilla.proxmox.com/show_bug.cgi?id=4218
Can it be related to this issue?
Sure, here is the full task log. I forgot to mention before: to make the VM work again, a reset is not enough; a VM stop followed by a VM start is needed.
2022-11-04 16:12:59 use dedicated network address for sending migration traffic ()
2022-11-04 16:12:59 starting migration of VM 5105 to node 'hv01'...
Hello,
Since upgrading to Proxmox VE 7, we see VMs hang after live migration on some hypervisors. The VM stops responding and we see CPU spikes. If we move a VM from hypervisor 1 to 2, nothing happens, but if we migrate it back from hypervisor 2 to 1, it crashes.
The hypervisor 1...
We are also seeing that a VM live-migrated from a server with kernel 5.15.39-1-pve stops working once it arrives on the other hypervisor. When migrating from servers with newer kernels, we do not see the same behavior.
Is it expected behavior that creating a snapshot is very fast, but removing it takes up to 9 minutes? During these 9 minutes, the VM becomes unavailable.
When running pvesm status, it shows this:
data nfs active xxx xxx xxx xx%
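The masked `pvesm status` line above can be checked programmatically. A minimal sketch, assuming the usual column layout (Name, Type, Status, Total, Used, Available, %) and made-up numbers in the heredoc; on a real node, pipe `pvesm status` directly into the awk filter instead.

```shell
#!/bin/sh
# Print storages at or above 80% usage from pvesm-status-shaped output.
# The heredoc mimics the `pvesm status` column layout with illustrative
# numbers; replace it with live command output on an actual node.
status=$(cat <<'EOF'
Name             Type     Status           Total            Used       Available        %
data             nfs      active      1048576000       944767600       103808400   90.10%
local            dir      active       102400000        10240000        92160000   10.00%
EOF
)
# NR > 1 skips the header; $7 + 0 coerces "90.10%" to the number 90.10.
busy=$(printf '%s\n' "$status" | awk 'NR > 1 && $7 + 0 >= 80 {print $1, $7}')
echo "$busy"
```

A heavily used NFS storage is one plausible contributor to slow snapshot deletion, so surfacing the usage column is a reasonable first check.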
It's not, but from what I can see in the logs, this issue happens on the server where we want to remove the snapshot. This goes wrong in most cases. Which command should we use instead of the qmp or...
It's a 400GB disk. The snapshot is made without RAM.
This is already done. The storage is added as a ZFS storage to the Proxmox cluster.
I'm aware of that, but what would be your advice when a snapshot needs to be created while using ZFS? You mentioned that we should not use the qmp...
We have 24 disks of 4 TB each.
Can you elaborate on this? How can we achieve it?
It's a Windows Server VM with qcow2. What would your suggestion be if you only have a ZFS pool and want to make snapshots?
We have not been able to resolve this issue yet; I hope someone can assist us. The problems occur mostly in the snapshot delete process.
Hello,
We are currently running Proxmox version 6.4-14 with ZFS. We frequently see VMs get locked during snapshot deletion. When I check the VM task history, I see this:
TASK ERROR: VM <id> qmp command 'blockdev-snapshot-delete-internal-sync' failed - got timeout
Currently the only way to...
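Affected VMs can be pulled out of the task log by matching the exact error string from the post. A minimal sketch with sample log lines modeled on that error (the VM IDs and the `TASK OK` line in the heredoc are made up); on a real node, feed the actual task log in instead.

```shell
#!/bin/sh
# Extract VM IDs from snapshot-delete timeout errors in task-log text.
# The heredoc holds illustrative sample lines shaped like the error
# above; substitute real task-log output on an actual cluster.
log=$(cat <<'EOF'
TASK ERROR: VM 101 qmp command 'blockdev-snapshot-delete-internal-sync' failed - got timeout
TASK OK
TASK ERROR: VM 205 qmp command 'blockdev-snapshot-delete-internal-sync' failed - got timeout
EOF
)
# -n with the p flag prints only matching lines, rewritten to just the
# captured numeric VM ID.
vmids=$(printf '%s\n' "$log" \
  | sed -n "s/^TASK ERROR: VM \([0-9][0-9]*\) qmp command 'blockdev-snapshot-delete-internal-sync'.*/\1/p")
echo "$vmids"
```

Pinning the match to the full error string keeps unrelated task failures out of the list, which helps when correlating lockups with specific snapshot-delete attempts.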
Hello,
Last week we had a kernel crash during a kernel update. We are using UEFI with Proxmox VE 6.4-13 on a ZFS root partition, but we are unable to capture the kernel crash dump. The steps we followed can be found below.
1. Install kdump-tools:
echo "kexec-tools kexec-tools/load_kexec boolean...
Hello everyone,
We will start using NFS storage in our Proxmox environment. To use the snapshot feature, the qcow2 format is required. But when we create a snapshot, it is created in raw format. Is this expected behavior? The snapshot is created through the Proxmox GUI.
VM
agent...