Hello,
Since upgrading to Proxmox VE 7, we see VMs hang after live migration on some hypervisors. The VM stops responding and we see CPU spikes. If we move a VM from hypervisor 1 to 2, nothing happens. But if we migrate it back from hypervisor 2 to 1, it crashes.
The hypervisor 1...
We are also seeing that hot migration from a server with kernel 5.15.39-1-pve stops working once the VM has been migrated to the other hypervisor. When migrating from servers with newer kernels we do not see the same behavior.
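To rule out a version mismatch between the two nodes as the cause, a first check could be to compare the booted kernel and QEMU packages on each hypervisor (output will of course differ per installation):

```shell
# Run on each node and compare the results
uname -r                                     # currently booted kernel
pveversion -v | grep -E 'pve-kernel|qemu'    # installed kernel and QEMU packages
```

A VM live-migrated from a node with a newer QEMU to one with an older QEMU is a known source of problems, so mismatched `pve-qemu-kvm` versions would be worth noting in the report.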
Is it expected behavior that creating a snapshot is very fast, but removing it takes up to 9 minutes? During these 9 minutes, the VM becomes unavailable.
When running pvesm status it shows this:
data nfs active xxx xxx xxx xx%
It's not, but from what I can see in the logs this issue happens on the server where we want to remove the snapshot. This goes wrong in most cases. Which command should we use instead of the qmp or...
It's a 400GB disk. The snapshot is made without RAM.
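For reference, this is what we do; a snapshot without RAM corresponds to leaving the vmstate option off (the VMID 100 and snapshot name here are just placeholders):

```shell
# Create a snapshot of VM 100 without saving the RAM state
qm snapshot 100 before-change --vmstate 0

# Remove it again -- this is the step that takes up to 9 minutes for us
qm delsnapshot 100 before-change
```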
This is already done. The storage is added as a ZFS storage to the Proxmox cluster.
I'm aware of that, but what would be your advice for creating a snapshot when using ZFS? You mentioned that we should not use the qmp...
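As far as I understand it, if the VM disks live on a zfspool-type storage, Proxmox takes native ZFS snapshots instead of qcow2-internal ones, which avoids the QMP snapshot commands entirely. A way to verify this (the dataset name below assumes a default rpool install):

```shell
# Check the type of the storage backing the VM disks
pvesm status

# After taking a snapshot via the GUI or 'qm snapshot', it should
# appear as a ZFS snapshot on the zvol, e.g. under rpool/data
zfs list -t snapshot -r rpool/data
```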
We have 24 disks of 4 TB each.
Can you elaborate on this? How can we achieve this?
It's a Windows Server VM with qcow2. What would your suggestion be if you only have a ZFS pool and you want to make snapshots?
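One option, if I understand the advice correctly, would be to move the disk onto the ZFS pool storage so that future snapshots become native ZFS snapshots. A sketch (VMID, disk slot, and the storage name local-zfs are assumptions for our setup):

```shell
# Move the qcow2 disk onto the ZFS storage; the image is converted
# to a raw zvol in the process. --delete 1 removes the old qcow2 file.
qm move-disk 100 scsi0 local-zfs --delete 1
```

On PVE 6.x the command is spelled `qm move_disk` instead of `qm move-disk`.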
We have not been able to resolve this issue yet. I hope someone can assist us in resolving it. The problems occur mostly in the snapshot delete process.
Hello,
We are currently running Proxmox version 6.4-14 with ZFS. We frequently see VMs get locked during snapshot deletion. When I check the VM task history I see this:
TASK ERROR: VM <id> qmp command 'blockdev-snapshot-delete-internal-sync' failed - got timeout
Currently the only way to...
Hello,
Last week we had a kernel crash during a kernel update. We are using UEFI with Proxmox VE 6.4-13 with a ZFS root partition, but we are unable to capture the kernel crash dump. The steps we followed can be found below.
1. Install kdump-tools:
echo "kexec-tools kexec-tools/load_kexec boolean...
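For comparison, a minimal kdump-tools setup on a host like ours might look roughly as follows; the crashkernel size is an arbitrary example and the boot-tool command depends on the PVE release, so treat this as a sketch rather than the exact steps we ran:

```shell
# Install kdump-tools (Debian packaging)
apt install -y kdump-tools

# Reserve memory for the crash kernel. With a ZFS root booted via
# UEFI, the kernel command line is kept in /etc/kernel/cmdline:
echo "$(cat /etc/kernel/cmdline) crashkernel=256M" > /etc/kernel/cmdline
pve-efiboot-tool refresh    # PVE 6.x; later releases: proxmox-boot-tool refresh

# After rebooting, check that the crash kernel is loaded
kdump-config show
```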
Hello everyone,
We will start using NFS storage in our Proxmox environment. To use the snapshot feature it is required to use the qcow2 format. But when we create a snapshot, the snapshot is created in raw format. Is this expected behavior? The snapshot is created through the Proxmox GUI.
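It may help to inspect the image file itself: snapshots of qcow2 disks taken through the GUI are stored inside the qcow2 file, so no separate snapshot file appears on the NFS share. A quick check (VMID and the path for an NFS storage named 'nfs' are just examples):

```shell
# The configured disk format is visible in the VM config
qm config 100 | grep scsi0

# qemu-img shows the real on-disk format plus any internal snapshots
qemu-img info /mnt/pve/nfs/images/100/vm-100-disk-0.qcow2
```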
VM
agent...
OK, so in order to find the actual size, I have to look into the task log. For example, one other Windows machine has the following usage:
INFO: starting new backup job: vzdump 2708 --remove 0 --storage pbs --mode snapshot --node hv01
INFO: Starting Backup of VM 2708 (qemu)
INFO: Backup...
Hereby the PVE task for 1389:
INFO: Starting Backup of VM 1389 (qemu)
INFO: Backup started at 2021-08-20 00:45:02
INFO: status = running
INFO: VM Name: ws1.example.com
INFO: include disk 'scsi0' 'ceph:vm-1389-disk-0' 100G
INFO: backup mode: snapshot
INFO: ionice priority: 7
INFO: snapshots...
Here is the latest VM backup log which I pulled from the PBS:
2021-08-20T00:45:02+02:00: starting new backup on datastore 'vm-backup': "vm/1389/2021-08-19T22:45:02Z"
2021-08-20T00:45:02+02:00: download 'index.json.blob' from previous backup.
2021-08-20T00:45:02+02:00: register chunks in...
Sorry for the late reply, Dominik. The current storage usage of the backups is a bit strange to me:
As a test we are only backing up a few VMs, but the used backup storage seems to be equal to the VMs' allocated disk storage. It seems...
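For what it's worth, the per-backup size shown in the task log is the logical size of the disks; the datastore itself stores deduplicated chunks, so actual on-disk usage is normally much lower than the sum of the backups. Two ways to check (the repository and datastore names below are placeholders for our 'vm-backup' store):

```shell
# Overall datastore usage as seen from a client
proxmox-backup-client status --repository root@pam@pbs.example.com:vm-backup

# The deduplication factor is reported at the end of a garbage
# collection run on the PBS host itself
proxmox-backup-manager garbage-collection start vm-backup
```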