[SOLVED] Proxmox API: Unable to create VM - Already exists on node

dymattic

New Member
Dec 13, 2019
4
0
1
26
Hamburg, Germany
I am not sure whether this is the right place to address this, but we can figure that out together :)
When trying to create a VM via the API, the following error is returned: 500 unable to create VM 109 - CT 109 already exists on node ''

The PVE version I am using is: pve-manager/6.1-3/37248ce6 (running kernel: 5.3.13-1-pve)
This error only occurs after a VM has been deleted via the API, which really confuses me, because querying the API
with pvesh get /nodes/<node>/qemu does not show any VM with that VMID.

I tried creating a VM with the same VMID using the Proxmox web interface, but I get the same result.
That makes sense, because it uses the same API.
Now the confusing part: when I try to create a VM using pvesh, it works just fine:

pvesh create /nodes/<node>/qemu --vmid 109
UPID:<node>:00007A3D:0C64BFA0:5E131E71:qmcreate:109:root@pam:

Now I assume that something goes wrong during the deletion, even though no errors are shown.
Did anyone experience similar issues, or have any idea where the problem might lie?
 

dymattic

New Member
I just noticed those log entries appearing when trying to create the VM:

Code:
Jan 06 14:44:55  pvedaemon[8573]: Use of uninitialized value in string eq at /usr/share/perl5/PVE/Cluster.pm line 697.
Jan 06 14:44:55  pvedaemon[8573]: Use of uninitialized value in concatenation (.) or string at /usr/share/perl5/PVE/Cluster.pm line 698.

When opening the file and going to those lines:

Perl:
sub check_vmid_unused {
    my ($vmid, $noerr) = @_;

    my $vmlist = get_vmlist();

    my $d = $vmlist->{ids}->{$vmid};
    return 1 if !defined($d);

    return undef if $noerr;

    my $vmtypestr =  $d->{type} eq 'qemu' ? 'VM' : 'CT';              # First line where error occurs
    die "$vmtypestr $vmid already exists on node '$d->{node}'\n";     # Second line where error occurs
}
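As a side note (my reading of the code, not anything official): return 1 if !defined($d) only catches a *missing* key. An entry that exists but is an empty hashref is still defined, so the function falls through to the eq on an undef $d->{type}, which is exactly what those two warnings complain about. A minimal sketch with a made-up vmlist:

```perl
use strict;
use warnings;

# Hypothetical vmlist with a stale, empty entry for VMID 109
my $vmlist = { ids => { 109 => {} } };

my $d = $vmlist->{ids}->{109};

# The empty hashref is still defined, so check_vmid_unused() would NOT
# return early here -- the VMID looks "in use" despite carrying no data
my $in_use = defined($d);

# Both fields of the stale entry are undef; under 'use warnings' the
# 'eq' and the string interpolation on these produce exactly the
# "Use of uninitialized value ..." messages from the journal
my $type = $d->{type};   # undef
my $node = $d->{node};   # undef
```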

I suspect the values for $d->{node} and $d->{type} are empty or something along those lines.
That is because in the initial API response, 500 unable to create VM 109 - CT 109 already exists on node '', the node name was empty.

I guess that because the stale entry isn't caught properly, the function just dies, assuming the VM already exists.

Though I have no idea why that error occurs in the first place...
 

dymattic

New Member
I edited the Perl code so that it dumps the values of $vmlist when the function flags that a VM with the given VMID has been found.
This is what I get. Yes, the VMID is indeed still there.
Note that this is the state after the QEMU DELETE command has been triggered via API call:

Code:
pvedaemon[30619]: $VAR1 = {
          'version' => 278,
          'ids' => {
            ...
            '114' => {
                       'type' => 'qemu',
                       'node' => 'pm1',
                       'version' => 11
                     },
            '109' => {},
            '107' => {
                       'version' => 12,
                       'type' => 'qemu',
                       'node' => 'pm1'
                     },
            '127' => {
                       'node' => 'pm1',
                       'type' => 'qemu',
                       'version' => 15
                     },
            '123' => {
                       'type' => 'lxc',
                       'node' => 'pm1',
                       'version' => 34
                     }
          }
        };

The culprit is VM 109.
What bugs me is that this behaviour does not happen when destroying a VM via the Proxmox web interface.
Am I missing something?
 

fabian

Proxmox Staff Member
Staff member
Jan 7, 2016
8,357
1,658
174
This looks like a side-effect of a perl quirk, and I think I found the concrete instance:
https://pve.proxmox.com/pipermail/pve-devel/2020-January/041096.html

could you try applying that patch (or waiting until a patched package hits the repositories) and report back whether it fixes the issue for you? if you apply manually, you need to restart pvedaemon and pveproxy for the change to take effect.
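For readers landing here later: the quirk in question is presumably autovivification (my inference from the symptoms, the linked patch has the authoritative details). In Perl, merely *reading* a nested hash key creates the intermediate levels, which would explain how a stale '109' => {} entry can appear after the VM is gone:

```perl
use strict;
use warnings;

my %vmlist = ( ids => { 107 => { type => 'qemu', node => 'pm1' } } );

# VM 109 was deleted, so 'ids' should have no entry for it.
# Merely *reading* a nested key, however, autovivifies the
# intermediate level on the way to the 'node' lookup:
my $node = $vmlist{ids}{109}{node};   # $node is undef ...

# ... but the read itself has created an empty '109' => {} entry,
# exactly like the one in the Dumper output above
my $exists_now = exists $vmlist{ids}{109};
```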
 

dymattic

New Member

Thanks a lot for your reply!

I made a small adjustment to Cluster.pm in the check_vmid_unused() function, changing my $d = $vmlist->{ids}->{$vmid};
to my $d = $vmlist->{ids}->{$vmid}->{node};, which fixed my problem, but that was nothing but a dirty fix and not at all at the root of the problem.
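A sketch of why that workaround behaved as it did (assuming the stale entry was an empty hashref, as in the dump above): defined on ->{node} is false for the empty entry but true for real VMs, so the stale VMID is treated as free again while genuine collisions are still caught:

```perl
use strict;
use warnings;

my $vmlist = { ids => {
    107 => { type => 'qemu', node => 'pm1' },   # a real VM
    109 => {},                                  # stale, empty entry
} };

# Original check: the empty hashref itself is defined -> false positive "in use"
my $orig_says_used  = defined $vmlist->{ids}->{109};

# Workaround: check ->{node} instead -> undef for the stale entry, so 109 is "free"
my $fixed_says_used = defined $vmlist->{ids}->{109}->{node};

# Real VMs still register as "in use" under the workaround
my $real_says_used  = defined $vmlist->{ids}->{107}->{node};
```

Of course this only papers over the symptom; the stale entry itself is the bug the patch addresses.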

I have reverted my changes and applied the patch you provided and it seems to be working for now.
I will keep testing and report back if the problem arises again.

Thanks!!
 
