Is there a way to remove the disk snapshot? To reclaim the unused disk space, we could remove the disk and restore it from a backup, but that is not our preferred way of getting rid of the snapshot.
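For reference, snapshots can normally be removed from the CLI with `qm delsnapshot`. A minimal sketch, assuming VMID 100 and a snapshot named `presnap` (both placeholders, not taken from this thread):

```shell
# List existing snapshots for the VM (100 is a placeholder VMID)
qm listsnapshot 100

# Delete a snapshot by name ('presnap' is a hypothetical snapshot name)
qm delsnapshot 100 presnap

# If the storage-side removal fails, --force drops the snapshot from the
# VM config anyway; leftover volumes may then need manual cleanup
qm delsnapshot 100 presnap --force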
Thanks for the info. Will this remove all leftovers? After every failed snapshot the disk usage gets bigger and bigger. We have similar setups where we don't see this; it only happens on a few of the servers.
So in our case, if I understand it correctly, disabling freeze will not work...
Hello,
I came across a strange issue. On a few servers we are unable to create or delete snapshots. In most cases snapshots work, but the deletion makes the VM unavailable until the server gets a hard reset. Below is some info.
VM config
agent: 1,fstrim_cloned_disks=1
boot: order=scsi0
cores: 4
cpu...
But VM 300 should be turned off first, then VM 200, and VM 100 should be turned off last, right? But if I change "Start/Shutdown order: 1", won't VM 100 be turned off first due to the start/shutdown order?
In Proxmox you can use VM -> Options -> Start/Shutdown order, where you can configure the following values:
Start/Shutdown order:
Startup delay:
Shutdown timeout:
We have three VMs:
VM 100
VM 200
VM 300
The shutdown should be done in the order VM 300, VM 200, VM 100, and on startup it should...
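As far as I know, Proxmox shuts VMs down in the reverse of their startup order, so the highest order number is stopped first. A sketch of how the three VMs above could be configured from the CLI (the `up`/`down` delay values are just example numbers):

```shell
# Startup order 1 starts first; on host shutdown the order is reversed,
# so VM 300 (order=3) would be stopped first and VM 100 (order=1) last.
qm set 100 --startup order=1,up=30,down=120
qm set 200 --startup order=2,up=30,down=120
qm set 300 --startup order=3,up=30,down=120
```

With this scheme you should not need to invert the order numbers to get the desired shutdown sequence.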
Thanks for bringing this to my attention. It's changed to 6.2
Is this issue resolved in 6.2? I'm unable to update the firmware:
> Solidigm : This drive is no longer supported.
Hello,
I'm currently configuring a new Proxmox hypervisor with the following setup:
2x 1TB NVMe drives in RAID1 ZFS
8x 8TB NVMe drives in RAID10 ZFS
The 8TB NVMe drives we are using:
Intel DC4510 Series
In dmesg I see the following lines:
[Wed Apr 19 16:42:44 2023] nvme nvme0: I/O 321 QID...
We have the same issue with roughly 800 ACLs.
pveum acl list --output-format yaml | grep path | wc -l
798
When we check the time for ticket and user we get those values:
# time pvesh create /access/ticket
real 0m1.683s
user 0m1.505s
sys 0m0.169s
# time pveum user permissions...
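To see whether the slowdown correlates with how the ACLs are distributed, one idea is to count ACL entries per path. A sketch, assuming `pveum acl list` supports JSON output with a `path` field per entry (worth verifying on your version):

```shell
# Count ACL entries per path to see where most of the ~800 ACLs live;
# prints the 10 paths with the most entries
pveum acl list --output-format json \
  | python3 -c 'import json, sys, collections
c = collections.Counter(a["path"] for a in json.load(sys.stdin))
for path, n in c.most_common(10):
    print(n, path)'
```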
Hello,
Yesterday I was migrating a VM from one hypervisor to another, but then this error was shown:
2022-11-07 00:27:40 use dedicated network address for sending migration traffic ()
2022-11-07 00:27:41 starting migration of VM 3934 to node '' ()
2022-11-07 00:27:41 found local, replicated disk...
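For context, the migration above would typically have been started with something like the following (the target node name is elided in the log; `hv01` is borrowed from the other log later in this thread and may not be the node involved here):

```shell
# Online migration of VM 3934 to the target node; --with-local-disks is
# needed when the VM has local (non-shared) disks
qm migrate 3934 hv01 --online --with-local-disks
```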
In another ticket there was a reference to those two reports:
https://bugzilla.proxmox.com/show_bug.cgi?id=4073
https://bugzilla.proxmox.com/show_bug.cgi?id=4218
Can it be related to this issue?
Sure, here is the full task log. I forgot to mention it before: to make the VM work again, a reset is not enough; a VM stop followed by a VM start is needed.
2022-11-04 16:12:59 use dedicated network address for sending migration traffic ()
2022-11-04 16:12:59 starting migration of VM 5105 to node 'hv01'...