Saltstack LXC Deployment not working

mruepp

New Member
Dec 29, 2016
Hi,

we want to use the Salt Cloud Proxmox provider to deploy LXC & QEMU guests. I am able to connect to the Proxmox host and to list images and nodes, but when I try to deploy a container, I get the following error:
[DEBUG ] MasterEvent PULL socket URI: /var/run/salt/master/master_event_pull.ipc
[DEBUG ] Initializing new IPCClient for path: /var/run/salt/master/master_event_pull.ipc
[DEBUG ] Sending event: tag = salt/cloud/testubuntu/requesting; data = {'_stamp': '2017-02-28T19:33:29.698622', 'event': 'requesting instance', 'kwargs': {'password': 'topsecret', 'hostname': 'testubuntu', 'vmid': 147, 'ostemplate': 'local:vztmpl/ubuntu-16.04-standard_16.04-1_amd64.tar.gz', 'net0': 'bridge=vmbr0,ip=192.168.100.155/24,name=eth0,type=veth'}}
[DEBUG ] Preparing to generate a node using these parameters: {'password': 'topsecret', 'hostname': 'testubuntu', 'vmid': 147, 'ostemplate': 'local:vztmpl/ubuntu-16.04-standard_16.04-1_amd64.tar.gz', 'net0': 'bridge=vmbr0,ip=192.168.100.155/24,name=eth0,type=veth'}
[DEBUG ] post: https://pve01.p.fir.io:8006/api2/json/nodes/pve01/lxc ({'password': 'topsecret', 'hostname': 'testubuntu', 'vmid': 147, 'ostemplate': 'local:vztmpl/ubuntu-16.04-standard_16.04-1_amd64.tar.gz', 'net0': 'bridge=vmbr0,ip=192.168.100.155/24,name=eth0,type=veth'})
[ERROR ] Error creating testubuntu on PROXMOX

The following exception was thrown when trying to run the initial deployment:
400 Client Error: Parameter verification failed.
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/salt/cloud/clouds/proxmox.py", line 547, in create
data = create_node(vm_, newid)
File "/usr/lib/python2.7/site-packages/salt/cloud/clouds/proxmox.py", line 728, in create_node
node = query('post', 'nodes/{0}/{1}'.format(vmhost, vm_['technology']), newnode)
File "/usr/lib/python2.7/site-packages/salt/cloud/clouds/proxmox.py", line 178, in query
response.raise_for_status()
File "/usr/lib/python2.7/site-packages/requests/models.py", line 834, in raise_for_status
raise HTTPError(http_error_msg, response=self)
HTTPError: 400 Client Error: Parameter verification failed.
Error: There was a profile error: Failed to deploy VM


I use the default cloud profile as described on this page: docs.saltstack.com/en/latest/topics/cloud/proxmox.html

The file lives in /etc/salt/cloud.profiles.d and looks like this:

admin@salt ~ $ cat /etc/salt/cloud.profiles.d/b-pve-test-profile.conf
proxmox-ubuntu:
  provider: b-pve-provider
  image: local:vztmpl/ubuntu-16.04-standard_16.04-1_amd64.tar.gz
  technology: lxc

  # host needs to be set to the configured name of the proxmox host
  # and not the ip address or FQDN of the server
  host: pve01
  ip_address: 192.168.100.155
  password: topsecret
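For completeness, the provider file it points to is set up roughly like this (a sketch - user and password here are placeholders, not my literal values):

admin@salt ~ $ cat /etc/salt/cloud.providers.d/b-pve-provider.conf
b-pve-provider:
  driver: proxmox
  # the PVE node the API calls are sent to
  url: pve01.p.fir.io
  user: root@pam
  password: topsecret
  # accept the self-signed PVE certificate
  verify_ssl: False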

I thought it would be possible to use all of the api2 parameters in the conf file, like trunks=vlanid, etc.: pve.proxmox.com/pve-docs/api-viewer/index.html

Thanks,

Michael
 
is it possible to get the whole response from the server?
the error:
'parameter verification failed'
indicates that some argument was invalid or missing; we also include the missing/wrong parameter in the response, so it would be good to see which one it is...

possible things from what I see:
the vmid already exists, the template does not exist, or the bridge does not exist
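each of those can be ruled out quickly on the target node, e.g. (rough sketch, run as root on the PVE host):
Code:
# does a guest with ID 147 already exist?
pct list | grep -w 147
qm list | grep -w 147
# is the template really available on storage 'local'?
pveam list local
# does the bridge exist?
ip link show vmbr0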
 
It seems I cannot get the whole response from the debug or even trace output of salt-cloud. Is there any API log on the Proxmox side I can check?

Thx
 
I filed an Issue on Github: github.com/saltstack/salt/issues/39755

When I use the FQDN of the deployment host instead of the hostname without domain, the error changes to:
HTTPError: 596 Server Error: ssl3_get_server_certificate: certificate verify failed
That error occurs when I use "verify_ssl: False" in the provider file (which I expect should make it skip the certificate check). This error:
SSLError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:579)
occurs when I don't set verify_ssl at all, which defaults to "True".

Is there a parameter on the API side to ignore the SSL certificate check, or is this handled in Salt Cloud?
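For reference, one way to look at the certificate a node actually serves on port 8006 (rough sketch, assuming openssl is available on the salt master):

# print subject, issuer and validity of the cert pveproxy presents
openssl s_client -connect pve01.p.fir.io:8006 -showcerts </dev/null 2>/dev/null \
  | openssl x509 -noout -subject -issuer -dates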

Thanks,

Michael
 
HTTPError: 596 Server Error: ssl3_get_server_certificate: certificate verify failed
this would indicate that the PVE internal proxying mechanism fails to verify the certificate of a node, which is strange, because the request you included in your first post would not need any proxying..

is the target node a standalone node or part of a cluster? did you change anything certificate-related?
 
Hi, the certificates are the generated ones - see screenshot. The system is a cluster of 10 nodes.
The difference in the error is:
host: pve01
vs
host: pve01.p.fir.io
 

Attachments: Screen Shot 2017-03-02 at 13.26.40.png

and inter-cluster API proxying works? e.g., if you connect to the web interface of node2, you can control stuff on node1?
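you can also test that on the API level - talking to pve01 but asking about pve02 forces pveproxy on pve01 to proxy the request internally (rough sketch, assumes you already have a valid ticket stored in a file called "cookie"):
Code:
# connect to pve01, query pve02 - this request has to be proxied between the nodes
curl --insecure --cookie "$(<cookie)" \
    https://pve01.p.fir.io:8006/api2/json/nodes/pve02/status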
 
When I try to use another deployment host (the API host I connect to is still pve01) by setting
host: pve02.p.fir.io, I get the following error:

[ERROR ] Error creating testubuntu on PROXMOX

The following exception was thrown when trying to run the initial deployment:
500 Server Error: proxy loop detected
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/salt/cloud/clouds/proxmox.py", line 547, in create
data = create_node(vm_, newid)
File "/usr/lib/python2.7/site-packages/salt/cloud/clouds/proxmox.py", line 728, in create_node
node = query('post', 'nodes/{0}/{1}'.format(vmhost, vm_['technology']), newnode)
File "/usr/lib/python2.7/site-packages/salt/cloud/clouds/proxmox.py", line 178, in query
response.raise_for_status()
File "/usr/lib/python2.7/site-packages/requests/models.py", line 834, in raise_for_status
raise HTTPError(http_error_msg, response=self)
HTTPError: 500 Server Error: proxy loop detected
Error: There was a profile error: Failed to deploy VM
 

are you sure the /etc/hosts files, DNS and routes are correct on all the nodes? that should never happen unless you have a broken network setup somewhere..
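a quick way to compare that is to run the same checks on every node and diff the output (rough sketch):
Code:
# run on each cluster node - the results should be consistent everywhere
hostname -f
cat /etc/hosts
pvecm nodes     # how the cluster itself sees its members
ip route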
 
Could there be a problem with IPv6? We have not configured IPv6, but Proxmox configures link-local addresses.

I tested name resolution, connectivity, everything. Cluster communication works perfectly.



Interfaces file looks like:

auto lo
iface lo inet loopback

iface eth0 inet manual

iface eth1 inet manual

iface eth2 inet manual

iface eth3 inet manual

allow-vmbr0 bond0
iface bond0 inet manual
    ovs_bonds eth2 eth3
    ovs_type OVSBond
    ovs_bridge vmbr0
    ovs_options lacp=active bond_mode=balance-tcp

auto vmbr0
iface vmbr0 inet manual
    ovs_type OVSBridge
    ovs_ports bond0 mgmt

allow-vmbr0 mgmt
iface mgmt inet static
    address 10.66.4.122
    netmask 255.255.255.0
    gateway 10.66.4.254
    ovs_type OVSIntPort
    ovs_bridge vmbr0
    ovs_options tag=4

Routes:
admin@pve02:/etc/network$ ip route show table local
broadcast 10.66.4.0 dev mgmt proto kernel scope link src 10.66.4.122
local 10.66.4.122 dev mgmt proto kernel scope host src 10.66.4.122
broadcast 10.66.4.255 dev mgmt proto kernel scope link src 10.66.4.122
broadcast 127.0.0.0 dev lo proto kernel scope link src 127.0.0.1
local 127.0.0.0/8 dev lo proto kernel scope host src 127.0.0.1
local 127.0.0.1 dev lo proto kernel scope host src 127.0.0.1
broadcast 127.255.255.255 dev lo proto kernel scope link src 127.0.0.1

admin@pve02:/etc/network$ ip -6 route show table local
local ::1 dev lo proto kernel metric 256 pref medium
local ::1 dev lo proto none metric 0 pref medium
local fe80::ec4:7aff:fe87:3358 dev lo proto none metric 0 pref medium
local fe80::ec4:7aff:fe87:3359 dev lo proto none metric 0 pref medium
local fe80::b096:9cff:fe30:dcbb dev lo proto none metric 0 pref medium
local fe80::c82c:b9ff:fee7:36a2 dev lo proto none metric 0 pref medium
ff00::/8 dev bond0 metric 256 pref medium
ff00::/8 dev mgmt metric 256 pref medium
ff00::/8 dev eth2 metric 256 pref medium
ff00::/8 dev eth3 metric 256 pref medium

admin@pve02:/etc/network$ ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether 0c:c4:7a:ca:0a:b8 brd ff:ff:ff:ff:ff:ff
3: eth1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether 0c:c4:7a:ca:0a:b9 brd ff:ff:ff:ff:ff:ff
4: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master ovs-system state UP group default qlen 1000
link/ether 0c:c4:7a:87:33:58 brd ff:ff:ff:ff:ff:ff
inet6 fe80::ec4:7aff:fe87:3358/64 scope link
valid_lft forever preferred_lft forever
5: eth3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master ovs-system state UP group default qlen 1000
link/ether 0c:c4:7a:87:33:59 brd ff:ff:ff:ff:ff:ff
inet6 fe80::ec4:7aff:fe87:3359/64 scope link
valid_lft forever preferred_lft forever
6: ovs-system: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1
link/ether a6:3c:bf:61:2b:d4 brd ff:ff:ff:ff:ff:ff
7: vmbr0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1
link/ether 0c:c4:7a:87:33:58 brd ff:ff:ff:ff:ff:ff
8: bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1
link/ether b2:96:9c:30:dc:bb brd ff:ff:ff:ff:ff:ff
inet6 fe80::b096:9cff:fe30:dcbb/64 scope link
valid_lft forever preferred_lft forever
9: mgmt: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1
link/ether ca:2c:b9:e7:36:a2 brd ff:ff:ff:ff:ff:ff
inet 10.66.4.122/24 brd 10.66.4.255 scope global mgmt
valid_lft forever preferred_lft forever
inet6 fe80::c82c:b9ff:fee7:36a2/64 scope link
valid_lft forever preferred_lft forever
11: veth100i0@if10: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master ovs-system state UP group default qlen 1000
link/ether fe:6b:93:1f:4a:ab brd ff:ff:ff:ff:ff:ff link-netnsid 0
12: tap104i0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master ovs-system state UNKNOWN group default qlen 1000
link/ether 02:a1:95:8c:19:96 brd ff:ff:ff:ff:ff:ff
13: tap102i0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master ovs-system state UNKNOWN group default qlen 1000
link/ether 4a:d2:a8:d3:bc:9d brd ff:ff:ff:ff:ff:ff
14: tap102i1: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master ovs-system state UNKNOWN group default qlen 1000
link/ether 9a:36:94:cf:0c:f5 brd ff:ff:ff:ff:ff:ff
15: tap108i0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master ovs-system state UNKNOWN group default qlen 1000
link/ether 82:ad:7e:b5:99:e2 brd ff:ff:ff:ff:ff:ff
16: tap115i0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master ovs-system state UNKNOWN group default qlen 1000
link/ether 1a:00:77:df:5c:4b brd ff:ff:ff:ff:ff:ff
20: tap135i0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master ovs-system state UNKNOWN group default qlen 1000
link/ether aa:cf:db:e0:09:cb brd ff:ff:ff:ff:ff:ff
26: veth139i0@if25: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master ovs-system state UP group default qlen 1000
link/ether fe:8f:3b:3b:46:88 brd ff:ff:ff:ff:ff:ff link-netnsid 1


admin@pve02:/etc/network$ ip -6 addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 state UNKNOWN qlen 1
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
4: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 state UP qlen 1000
inet6 fe80::ec4:7aff:fe87:3358/64 scope link
valid_lft forever preferred_lft forever
5: eth3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 state UP qlen 1000
inet6 fe80::ec4:7aff:fe87:3359/64 scope link
valid_lft forever preferred_lft forever
8: bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 state UNKNOWN qlen 1
inet6 fe80::b096:9cff:fe30:dcbb/64 scope link
valid_lft forever preferred_lft forever
9: mgmt: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 state UNKNOWN qlen 1
inet6 fe80::c82c:b9ff:fee7:36a2/64 scope link
valid_lft forever preferred_lft forever

Also reverse lookup works to and from all nodes:

admin@pve02:/etc/network$ dig -x 10.66.4.122

; <<>> DiG 9.9.5-9+deb8u9-Debian <<>> -x 10.66.4.122
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id: 50509
;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;122.4.66.10.in-addr.arpa. IN PTR

;; AUTHORITY SECTION:
4.66.10.in-addr.arpa. 758 IN SOA ns-1754.awsdns-27.co.uk. awsdns-hostmaster.amazon.com. 1 7200 900 1209600 86400

;; Query time: 0 msec
;; SERVER: 10.66.4.254#53(10.66.4.254)
;; WHEN: Fri Mar 03 14:49:39 CET 2017
;; MSG SIZE rcvd: 140
 

huh? that shows that reverse DNS isn't working.. (which is not a problem as long as you set up /etc/hosts correctly)

does "getent hosts HOSTNAME" "getent hosts FQDN" and "getent hosts IP" (replace the stuff in CAPS accordingly) agree for each host on every host? it should print the same line and the output should be correct:
Code:
root@pve:~# getent hosts pve
192.168.31.13   pve.proxmox.invalid pve pvelocalhost
root@pve:~# getent hosts pve.proxmox.invalid
192.168.31.13   pve.proxmox.invalid pve pvelocalhost
root@pve:~# getent hosts 192.168.31.13
192.168.31.13   pve.proxmox.invalid pve pvelocalhost
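for reference, that getent output comes from an /etc/hosts roughly along these lines (a sketch using the example names above; the second node entry is hypothetical):
Code:
# /etc/hosts on the node "pve" (sketch)
127.0.0.1       localhost.localdomain localhost
# the node's own entry: FQDN, short name, pvelocalhost alias
192.168.31.13   pve.proxmox.invalid pve pvelocalhost
# plus one entry per other cluster member, e.g.
# 192.168.31.14   pve2.proxmox.invalid pve2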
 
Sorry, wrong copy/paste.

getent also works:
admin@pve01:~$ getent hosts pve01
10.66.4.121 pve01.p.fir.io pve01 pvelocalhost
admin@pve01:~$ getent hosts pve01.p.fir.io
10.66.4.121 pve01.p.fir.io pve01 pvelocalhost
admin@pve01:~$ getent hosts 10.66.4.121
10.66.4.121 pve01.p.fir.io pve01 pvelocalhost
admin@pve01:~$ getent hosts pve02
10.66.4.122 pve02.p.fir.io
admin@pve01:~$ getent hosts pve02.p.fir.io
10.66.4.122 pve02.p.fir.io
admin@pve01:~$ getent hosts 10.66.4.122
10.66.4.122 pve01.p.fir.io

; <<>> DiG 9.9.5-9+deb8u9-Debian <<>> -x 10.66.4.121
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 1308
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 4, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;121.4.66.10.in-addr.arpa. IN PTR

;; ANSWER SECTION:
121.4.66.10.in-addr.arpa. 227 IN PTR pve01.p.fir.io.

;; AUTHORITY SECTION:
4.66.10.in-addr.arpa. 81734 IN NS ns-1494.awsdns-58.org.
4.66.10.in-addr.arpa. 81734 IN NS ns-1754.awsdns-27.co.uk.
4.66.10.in-addr.arpa. 81734 IN NS ns-299.awsdns-37.com.
4.66.10.in-addr.arpa. 81734 IN NS ns-545.awsdns-04.net.

;; Query time: 0 msec
;; SERVER: 10.66.4.254#53(10.66.4.254)
;; WHEN: Fri Mar 03 17:42:06 CET 2017
;; MSG SIZE rcvd: 221
 
Bump. Any ideas?

Found this: https://github.com/saltstack/salt/issues/28048

It suggests that the host: value has to be the PVE hostname, NOT the FQDN.
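To double-check which node names the API expects for host:, listing the nodes helps (rough sketch, run on one of the PVE hosts):

# the "node" values returned here are what host: has to match
pvesh get /nodes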

So I come back to
The following exception was thrown when trying to run the initial deployment:
400 Client Error: Parameter verification failed.

I will now try to deploy one manually via the API.
 
I now tried a deployment via the API. This status query:
curl --insecure --cookie "$(<cookie)" https://$APINODE:8006/api2/json/nodes/$TARGETNODE/status | jq '.'
returns meaningful things like:
{
  "data": {
    "cpuinfo": {
      "mhz": "2662.859",
      "user_hz": 100,
      "hvm": 1,
      "cpus": 48,
      "model": "Intel(R) Xeon(R) CPU E5-2650 v4 @ 2.20GHz",
      "sockets": 2
    },
    "ksm": {
      "shared": 13169143808
    },
    ...
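For completeness, the cookie and csrftoken files used here just hold the ticket and CSRF token from the usual login call - roughly like this (sketch, credentials are placeholders):

# request a ticket and CSRF token
curl --silent --insecure --data "username=root@pam&password=yourpassword" \
  https://$APINODE:8006/api2/json/access/ticket
# then put the returned values into the two files, i.e.
# cookie:    PVEAuthCookie=<ticket from the response>
# csrftoken: CSRFPreventionToken: <CSRFPreventionToken from the response>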

When I try to deploy without the "rootfs" parameter, I get this error:
curl --silent --insecure --cookie "$(<cookie)" --header "$(<csrftoken)" -X POST\
--data-urlencode password="topsecret" \
--data-urlencode hostname="testubuntu" \
--data-urlencode net0="bridge=vmbr0,ip=192.168.100.155/24,name=eth0,type=veth" \
--data-urlencode ostemplate="nfs:vztmpl/centos-7-default_20161207_amd64.tar.xz" \
--data vmid=147 \
https://$APINODE:8006/api2/json/nodes/$TARGETNODE/lxc
{"errors":{"storage":"storage 'local' does not support container directories"}

When I try to deploy like this, I get back {"data":null}:
curl --silent --insecure --cookie "$(<cookie)" --header "$(<csrftoken)" -X POST\
--data-urlencode password="topsecret" \
--data-urlencode hostname="testubuntu" \
--data-urlencode net0="bridge=vmbr0,ip=192.168.100.155/24,name=eth0,type=veth" \
--data-urlencode ostemplate="nfs:vztmpl/centos-7-default_20161207_amd64.tar.xz" \
--data-urlencode rootfs="volume=vmdata,size=16" \
--data vmid=147 \
https://$APINODE:8006/api2/json/nodes/$TARGETNODE/lxc

Nothing happens though

Really strange
 
Code:
--data-urlencode rootfs="volume=vmdata,size=16" \

that looks wrong, the syntax is

Code:
rootfs=VOLUME_ID,OTHEROPTIONS

if the volume with the ID VOLUME_ID already exists and should be overwritten (rarely what you want), or

Code:
rootfs=STORAGE_ID:SIZE,OTHEROPTIONS

to allocate a new volume of SIZE GB on the storage with ID STORAGE_ID. OTHEROPTIONS can be stuff like acl, backup, ro, ... - but most of those make more sense for non-rootfs mount points. The "size" part of a volume parameter string actually works the other way round - it tells you how big a volume is ;)

if you are unsure, check the request our web interface does when doing a certain operation - it uses the very same API under the hood. If you click the "Create" button in the "Create Container" wizard with your browser's developer tools open, you should see the POST request with all the parameters. In Chromium, you can even directly copy the request as a cURL command, including cookie and CSRF token.
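applied to your curl call above, the rootfs parameter would then look roughly like this (assuming "vmdata" is the ID of a storage that allows container volumes):
Code:
curl --silent --insecure --cookie "$(<cookie)" --header "$(<csrftoken)" -X POST \
    --data-urlencode password="topsecret" \
    --data-urlencode hostname="testubuntu" \
    --data-urlencode net0="bridge=vmbr0,ip=192.168.100.155/24,name=eth0,type=veth" \
    --data-urlencode ostemplate="nfs:vztmpl/centos-7-default_20161207_amd64.tar.xz" \
    --data-urlencode rootfs="vmdata:16" \
    --data vmid=147 \
    https://$APINODE:8006/api2/json/nodes/$TARGETNODE/lxc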
 
This is interesting, because the API documentation states this differently: http://pve.proxmox.com/pve-docs/api-viewer/index.html

yes and no ;)

I agree that the "size" parameter is confusing there (the fact that it is readonly is only visible in the "long form" of the documentation, like in "man pct.conf").

Code:
<volume>
has three possible values in general (as described in our Admin Guide):
  • "STORAGE_ID:VOLUME_ID" (for volumes managed by PVE)
  • "/dev/someblockdevice" (for mounting a block device directly as volume)
  • "/some/path/on/the/host" (for bind-mounting arbitrary paths from the host into the container)
there is a special fourth "shortcut" syntax which is intended for operations like "pct create", "pct restore" and "pct set" (and their API counterparts), where instead of
Code:
pvesm alloc STORAGEID CTID VOLUMENAME SIZE
pct set CTID -mp0 STORAGEID:VOLUMENAME

you can do
Code:
pct set CTID -mp0 STORAGEID:SIZE

and PVE will allocate and reference the volume automatically.

TL;DR: "<volume>" never refers to a storage alone (which I assume "vmdata" is in your case?), but has multiple possible types of values.

to make matters even more confusing, you can also just change the default storage when creating/restoring a container. PVE will then allocate the rootfs on that storage instead of the default ("local"). this is handled with the "storage" parameter, but only makes sense when you don't want to manually configure any of the mount points.

e.g., for "pct create"
Code:
pct create CTID OSTEMPLATE -storage STORAGEID
is identical to
Code:
pct create CTID OSTEMPLATE -rootfs STORAGEID:4
 
