remote restart via SOAP

the_bobara
Jun 8, 2008
Hello,
we are integrating some services in our new SMS system, and now we want to automate restarts.
In our control panel we store the client, GSM number, VPS mother server, and OpenVZ container ID.
We can do restarts very easily with an ssh command, but we would prefer to use SOAP and trigger the restart from the Proxmox panel.
How can we achieve that? Any PHP/Perl example would be helpful.
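
For reference, this is roughly what our ssh-based restart looks like today (a minimal sketch only; the node hostname and container ID below are placeholders, and it assumes key-based root ssh access to the mother server):

Code:
#!/usr/bin/perl
use strict;
use warnings;

# Placeholder values - in our case these come from the control panel database.
my $node = 'vz-node1.example.com';   # VPS mother server
my $ctid = 101;                      # OpenVZ container ID

# Restart the container on the node over ssh with the standard vzctl tool.
my $rc = system('ssh', 'root@' . $node, 'vzctl', 'restart', $ctid);
die "restart of CT $ctid on $node failed\n" if $rc != 0;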

thanks
 
We can do restarts very easily with an ssh command, but we would prefer to use SOAP and trigger the restart from the Proxmox panel.
How can we achieve that?

Please use ssh for now.

The SOAP interface is still not stable. I am currently working on making the SOAP interface more robust, and I have added authentication to the SOAP server. I guess this will become usable in one of the next releases.

Or are you interested in helping with development (defining and documenting a stable API)?

- Dietmar
 
yes, we are interested!

We are using ssh for now, but we want to integrate this with Proxmox. If there is anything we can help with, please let us know.
 
We are using ssh for now, but we want to integrate this with Proxmox. If there is anything we can help with, please let us know.

Please specify the functions you need.

The next release will allow you to connect to the SOAP server using username/password (although you will still need an ssh tunnel, because the SOAP server only listens on localhost for security reasons).
 
we need functions such as:
restart, stop, shutdown (we only use OpenVZ in production), and we need statistics like those in the Proxmox administration interface (CPU, memory, disk).
Additionally, a reinstall option would be great, but it is not necessary right now.

Running SOAP on localhost without authentication is normal, but once you have user/password authentication you can bind the server to a specific port on a public IP and restrict access to it with iptables.
 
Hello, I wrote an ssh wrapper in PHP, but qm reset does not work; qm only works for KVM VPSes.

For example:

I created a Linux VPS; it is an OpenVZ container.

I tried this command:
server:~# qm stop 101
unable to find configuration file for VM 101 - no such machine

Thanks for your help.
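
Since VM 101 here is an OpenVZ container, qm (which manages only KVM guests) cannot find its configuration; vzctl is the tool for containers. A rough sketch of picking the right command per guest type (the config paths below are my assumption of the usual locations on a PVE 1.x node):

Code:
#!/usr/bin/perl
use strict;
use warnings;

# Sketch: choose the management tool based on where the guest's config lives.
#   /etc/vz/conf/<veid>.conf      -> OpenVZ container, managed with vzctl
#   /etc/qemu-server/<vmid>.conf  -> KVM guest, managed with qm
sub stop_guest {
    my ($vmid) = @_;

    if (-f "/etc/vz/conf/$vmid.conf") {
        return system('vzctl', 'stop', $vmid);
    } elsif (-f "/etc/qemu-server/$vmid.conf") {
        return system('qm', 'stop', $vmid);
    }
    die "no OpenVZ or KVM configuration found for ID $vmid\n";
}

stop_guest(101) == 0 or die "stop failed\n";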
 
Hi,

I would also be interested in the SOAP API.
I'd love to have the following (see the sketch after this list for what vzlist already exposes today):

  • Start
  • Stop
  • Restart
  • info
    • Status (Started / Stopped / Mounted)
    • IP
    • Hostname
    • CPU (Current / Maximum)
    • Memory (Current / Maximum)
    • Disk (Current / Maximum)
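
Until a SOAP API exposes this, most of the info items can already be read from vzlist on the node, e.g. invoked over ssh (a minimal sketch only, not the SOAP API; the field names assume a reasonably recent vzctl, and CPU/memory/disk figures would need further fields or parsing /proc/user_beancounters):

Code:
#!/usr/bin/perl
use strict;
use warnings;

# List all containers (including stopped ones) without the header line,
# then split each row into its four requested columns.
my @lines = `vzlist -a -H -o veid,status,hostname,ip`;
for my $line (@lines) {
    chomp $line;
    next unless $line =~ /\S/;
    my ($veid, $status, $hostname, $ip) = split ' ', $line, 4;
    print "CT $veid: status=$status hostname=$hostname ip=$ip\n";
}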
 
Using Perl code from this thread, I've got the next free VEID number:

Code:
use strict;
use PVE::Utils;
use PVE::ConfigServer;
use PVE::Cluster;

# Authenticate against the local config server.
my $secret = PVE::Utils::load_auth_secret();
my $ticket = PVE::Utils::create_auth_ticket($secret, 'root', 'root');
my $conn   = PVE::ConfigClient::connect($ticket);

# Fetch the cluster-wide VE list from the master, read the vmops file,
# and let get_nextid() pick the next unused VEID from both.
my $vzlist   = $conn->cluster_vzlist()->result;
my $vmops    = PVE::Config::read_file("vmops");
my $nextveid = PVE::Cluster::get_nextid($vzlist, $vmops);

print $nextveid;

But there is a problem with cluster sync: sometimes, straight after I create a machine on a slave node, it returns the same VEID, which means the master isn't updated yet.

My question: how do I force a master update from a Perl script?
 
So you end up with duplicate IDs? Or does the next create fail? There is no guarantee that the ID from get_nextid() is really 'free'. Instead, I would reserve a pool of IDs and maintain a free/used ID list yourself inside your management application.
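
A minimal sketch of that reserved-pool idea (the file name and ID range below are hypothetical, and a real implementation would also need file locking):

Code:
#!/usr/bin/perl
use strict;
use warnings;

# The management application owns IDs 1000-1999 and records used ones in a
# flat file, so it never has to ask the cluster for the next ID.
my $pool_file = '/var/lib/myprov/used_veids';
my ($pool_start, $pool_end) = (1000, 1999);

sub next_free_veid {
    my %used;
    if (open my $fh, '<', $pool_file) {
        while (<$fh>) { chomp; $used{$_} = 1; }
        close $fh;
    }
    for my $id ($pool_start .. $pool_end) {
        next if $used{$id};
        # Mark the ID as used before handing it out.
        open my $fh, '>>', $pool_file or die "cannot write $pool_file: $!\n";
        print $fh "$id\n";
        close $fh;
        return $id;
    }
    die "no free VEID left in pool $pool_start-$pool_end\n";
}

print next_free_veid(), "\n";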
 
So you end up with duplicate IDs?

Yes, it returns the same ID for 40-60 seconds after the creation of a new VE.

Instead, I would reserve a pool of IDs and maintain a free/used ID list yourself inside your management application.

Yes, that may be a solution, but I would rather rely only on the Proxmox infrastructure to decide on new VE IDs.
Is there a trick to force a cluster update before calling get_nextveid()?

Bye
 
Is there a trick to force a cluster update before calling get_nextveid()?

You need to call vzlist() on the node, then pass that list to cluster_vzlist(), which updates the vzlist file on the master. This is implemented in PVE::Cluster::vzlist_update()
 
You need to call vzlist() on the node, then pass that list to cluster_vzlist(), which updates the vzlist file on the master. This is implemented in PVE::Cluster::vzlist_update()

Thanks dietmar!

I wrote this code, but there is something wrong:

Code:
use strict;
use PVE::Utils;
use PVE::ConfigServer;
use PVE::Cluster;

# Authenticate and connect to the local config server.
my $secret = PVE::Utils::load_auth_secret();
my $ticket = PVE::Utils::create_auth_ticket($secret, 'root', 'root');
my $conn   = PVE::ConfigClient::connect($ticket);

# Read this node's VE list and push it to the master's vzlist file.
my $vzlist = $conn->vzlist()->result;
print PVE::Cluster::vzlist_update($vzlist, $ticket);

I run this code on every node after VE creation, but on the master next_veid() still returns the same ID for some seconds.
 
Old thread, but it just solved a problem I was having.

I've cheated in my provisioning script: my solution was to wrap fdb's Perl function in a test that checks whether the returned VEID is the same one it used last time. If it is, the script sleeps for five seconds and calls the function again, looping until the returned VEID is different.
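
Roughly, the wrapper looks like this (a sketch only: the VEID lookup from fdb's post above is passed in as a code reference, and $last_veid is whatever ID the previous run consumed):

Code:
use strict;
use warnings;

# Keep calling the VEID lookup until it stops returning the ID we already used.
sub wait_for_fresh_veid {
    my ($get_next_veid, $last_veid) = @_;
    my $veid = $get_next_veid->();
    while (defined $last_veid && $veid == $last_veid) {
        sleep 5;    # give the master time to pick up the new VE
        $veid = $get_next_veid->();
    }
    return $veid;
}

# usage: my $veid = wait_for_fresh_veid(\&get_next_veid, $previous_veid);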