Problem while adding VM on a cluster's node (unable to parse value)

paszczi

New Member
Aug 11, 2010
Hi!

I've recently configured a Proxmox environment with 4 cluster nodes. On all of them but one I can create VMs without any problems. However, when I try to create a VM on one of the nodes (a slave node), I keep getting errors:
/usr/bin/ssh -t -t -n -o BatchMode=yes 192.168.200.4 /usr/sbin/qm create 118 --cdrom cdrom --name test --vlan0 'virtio=2E:14:BD:5B:E1:8E' --virtio0 'local:5,format=raw' --bootdisk virtio0 --ostype other --memory 512 --onboot no --sockets 1

unable to parse value for 'vlan0'
unable to parse value for 'virtio0'
Connection to 192.168.200.4 closed.
unable to apply VM settings -

No other node exhibits the same behavior. If I run the very same command either from the master node (via ssh) or directly on the node, the VM is created. I've tried looking into the logs, but I don't see anything meaningful.
 
Please do not post problems to both the mailing list and the forum!



What version do you use on that node?

Please log in to the node, then type:

# pveversion -v

and post the output here.
 
Sorry for double posting.

Slave node (not working one):
Code:
pve-manager: 1.5-10 (pve-manager/1.5/4822)
running kernel: 2.6.26-2-amd64
qemu-server: 1.1-16
pve-firmware: not correctly installed
libpve-storage-perl: 1.0-13
vncterm: 0.9-2
vzctl: 3.0.24-1pve2
vzdump: not correctly installed
vzprocps: not correctly installed
vzquota: 3.0.11-1

Slave node (working one):
Code:
pve-manager: 1.5-10 (pve-manager/1.5/4822)
running kernel: 2.6.32-5-amd64
qemu-server: 1.1-16
pve-firmware: not correctly installed
libpve-storage-perl: 1.0-13
vncterm: 0.9-2
vzctl: 3.0.24-1pve2
vzdump: not correctly installed
vzprocps: not correctly installed
vzquota: 3.0.12-3

Master node:
Code:
pve-manager: 1.5-10 (pve-manager/1.5/4822)
running kernel: 2.6.32-trunk-amd64
qemu-server: 1.1-16
pve-firmware: not correctly installed
libpve-storage-perl: 1.0-13
vncterm: 0.9-2
vzctl: 3.0.23-10
vzdump: not correctly installed
vzprocps: not correctly installed
vzquota: 3.0.11-2
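Comparing the three dumps, the failing slave stands out with a stock Debian kernel (2.6.26-2-amd64) and an older vzquota than the working slave. Differences like these are easiest to spot by diffing the saved `pveversion -v` output of a working and a failing node. A minimal sketch (the helper name and file names are ours, not Proxmox tools):

```shell
# Diff two saved `pveversion -v` dumps to surface mismatched packages.
# Collect the dumps first, e.g. (192.168.200.4 is the failing node from
# this thread; the working node's address is a placeholder):
#   ssh -o BatchMode=yes 192.168.200.4 pveversion -v > failing.txt
#   ssh -o BatchMode=yes <working-node> pveversion -v > working.txt
compare_nodes() {
    # diff exits non-zero when the files differ; that is the expected
    # case here, so suppress it for reporting purposes.
    diff "$1" "$2" || true
}
```

Lines appearing on only one side of the diff are the package (or kernel) versions that differ between the two nodes.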
 

Can you post your /etc/network/interfaces and also the output of ifconfig on each node
 
I can, but this is going to be loooooong :)

ip a on the slave nodes (both the non-working and the working one have the same configuration, apart from the IP address):
Code:
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master bond0 state UP qlen 1000
    link/ether 00:40:d0:ac:17:70 brd ff:ff:ff:ff:ff:ff
3: eth1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master bond0 state UP qlen 1000
    link/ether 00:40:d0:ac:17:70 brd ff:ff:ff:ff:ff:ff
4: bond0: <BROADCAST,MULTICAST,PROMISC,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
    link/ether 00:40:d0:ac:17:70 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::240:d0ff:feac:1770/64 scope link
       valid_lft forever preferred_lft forever
14: bond0.431@bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
    link/ether 00:40:d0:ac:17:70 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::240:d0ff:feac:1770/64 scope link
       valid_lft forever preferred_lft forever
15: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether 00:40:d0:ac:17:70 brd ff:ff:ff:ff:ff:ff
    inet 192.168.200.4/24 brd 192.168.200.255 scope global vmbr0
    inet6 fe80::240:d0ff:feac:1770/64 scope link
       valid_lft forever preferred_lft forever
16: bond0.426@bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
    link/ether 00:40:d0:ac:17:70 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::240:d0ff:feac:1770/64 scope link
       valid_lft forever preferred_lft forever
17: vmbr1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether 00:40:d0:ac:17:70 brd ff:ff:ff:ff:ff:ff
    inet <ip address> brd <bcast> scope global vmbr1
    inet6 fe80::240:d0ff:feac:1770/64 scope link
       valid_lft forever preferred_lft forever
18: bond0.424@bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
    link/ether 00:40:d0:ac:17:70 brd ff:ff:ff:ff:ff:ff
    inet <ip address> brd <bcast> scope global bond0.424
    inet6 fe80::240:d0ff:feac:1770/64 scope link
       valid_lft forever preferred_lft forever
19: tun0: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 100
    link/[65534]
    inet <ip address> brd <bcast> scope global tun0
20: vmbr2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether 00:40:d0:ac:17:70 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::240:d0ff:feac:1770/64 scope link
       valid_lft forever preferred_lft forever
21: bond0.425@bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
    link/ether 00:40:d0:ac:17:70 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::240:d0ff:feac:1770/64 scope link
       valid_lft forever preferred_lft forever

/etc/network/interfaces on the slave nodes:
Code:
auto bond0
iface bond0 inet manual
        slaves eth0 eth1
        bond_mode 4
        bond_miimon 100
        bond_downdelay 200
        bond_updelay 200


auto bond0.431
iface bond0.431 inet manual

auto vmbr0
iface vmbr0 inet static
        bridge_ports bond0.431
        bridge_stp off
        address 192.168.200.4
        netmask 255.255.255.0
        network 192.168.200.0


auto bond0.426
iface bond0.426 inet manual

auto vmbr1
iface vmbr1 inet static
        address ##
        netmask ##
        network ##
        gateway ##
        # dns-* options are implemented by the resolvconf package, if installed
        dns-nameservers ##
        dns-search ##
        bridge_ports bond0.426


auto bond0.425
iface bond0.425 inet manual

auto vmbr2
iface vmbr2 inet manual
        bridge_ports bond0.425
        bridge_stp off



auto bond0.424
iface bond0.424 inet static
        address ##
        netmask 255.255.255.0
        network ##
 
ip a on the master node:
Code:
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP qlen 1000
    link/ether 00:30:48:cf:0b:fa brd ff:ff:ff:ff:ff:ff
3: eth1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP qlen 1000
    link/ether 00:30:48:cf:0b:fa brd ff:ff:ff:ff:ff:ff
4: bond0: <BROADCAST,MULTICAST,PROMISC,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
    link/ether 00:30:48:cf:0b:fa brd ff:ff:ff:ff:ff:ff
    inet6 fe80::230:48ff:fecf:bfa/64 scope link
       valid_lft forever preferred_lft forever
5: bond0.431@bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
    link/ether 00:30:48:cf:0b:fa brd ff:ff:ff:ff:ff:ff
    inet6 fe80::230:48ff:fecf:bfa/64 scope link
       valid_lft forever preferred_lft forever
6: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether 00:30:48:cf:0b:fa brd ff:ff:ff:ff:ff:ff
    inet 192.168.200.2/24 brd 192.168.200.255 scope global vmbr0
    inet6 fe80::230:48ff:fecf:bfa/64 scope link
       valid_lft forever preferred_lft forever
7: bond0.426@bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
    link/ether 00:30:48:cf:0b:fa brd ff:ff:ff:ff:ff:ff
    inet6 fe80::230:48ff:fecf:bfa/64 scope link
       valid_lft forever preferred_lft forever
8: vmbr1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether 00:30:48:cf:0b:fa brd ff:ff:ff:ff:ff:ff
    inet <ip> brd <bcast> scope global vmbr1
    inet6 fe80::230:48ff:fecf:bfa/64 scope link
       valid_lft forever preferred_lft forever
9: bond0.425@bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
    link/ether 00:30:48:cf:0b:fa brd ff:ff:ff:ff:ff:ff
    inet6 fe80::230:48ff:fecf:bfa/64 scope link
       valid_lft forever preferred_lft forever
10: vmbr2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether 00:30:48:cf:0b:fa brd ff:ff:ff:ff:ff:ff
    inet <ip> brd <bcast> scope global vmbr2
    inet6 fe80::230:48ff:fecf:bfa/64 scope link
       valid_lft forever preferred_lft forever
11: bond0.430@bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
    link/ether 00:30:48:cf:0b:fa brd ff:ff:ff:ff:ff:ff
    inet <ip> brd <bcast> scope global bond0.430
    inet6 fe80::230:48ff:fecf:bfa/64 scope link
       valid_lft forever preferred_lft forever
12: bond0.10@bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
    link/ether 00:30:48:cf:0b:fa brd ff:ff:ff:ff:ff:ff
    inet6 fe80::230:48ff:fecf:bfa/64 scope link
       valid_lft forever preferred_lft forever
13: vmbr3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether 00:30:48:cf:0b:fa brd ff:ff:ff:ff:ff:ff
    inet <ip> brd <bcast> scope global vmbr3
    inet6 fe80::230:48ff:fecf:bfa/64 scope link
       valid_lft forever preferred_lft forever
14: bond0.428@bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
    link/ether 00:30:48:cf:0b:fa brd ff:ff:ff:ff:ff:ff
    inet  <ip> brd <bcast> scope global bond0.428
    inet6 fe80::230:48ff:fecf:bfa/64 scope link
       valid_lft forever preferred_lft forever
15: bond0.XX@bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
    link/ether 00:30:48:cf:0b:fa brd ff:ff:ff:ff:ff:ff
    inet <ip> brd <bcast> scope global bond0.1500
    inet6 fe80::230:48ff:fecf:bfa/64 scope link
       valid_lft forever preferred_lft forever
16: bond0.XX@bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
    link/ether 00:30:48:cf:0b:fa brd ff:ff:ff:ff:ff:ff
    inet <ip> brd <bcast> scope global bond0.1501
    inet6 fe80::230:48ff:fecf:bfa/64 scope link
       valid_lft forever preferred_lft forever
19: vmtab103i3: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 500
    link/ether 9a:00:d5:05:48:b8 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::9800:d5ff:fe05:48b8/64 scope link
       valid_lft forever preferred_lft forever
20: vmtab106i1: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 500
    link/ether a2:e8:4e:55:5c:d5 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::a0e8:4eff:fe55:5cd5/64 scope link
       valid_lft forever preferred_lft forever
22: vmtab102i2: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 500
    link/ether 66:ca:5a:4a:49:ce brd ff:ff:ff:ff:ff:ff
    inet6 fe80::64ca:5aff:fe4a:49ce/64 scope link
       valid_lft forever preferred_lft forever
23: vmtab105i3: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 500
    link/ether 16:61:c5:1e:a8:2b brd ff:ff:ff:ff:ff:ff
    inet6 fe80::1461:c5ff:fe1e:a82b/64 scope link
       valid_lft forever preferred_lft forever
33: tun0: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 100
    link/[65534]
    inet <ip> brd <bcast> scope global tun0
34: tun1: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 100
    link/[65534]
    inet <ip> peer <ip> scope global tun1
161: vmtab101i2: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 500
    link/ether d6:bb:fc:df:3f:d2 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::d4bb:fcff:fedf:3fd2/64 scope link
       valid_lft forever preferred_lft forever

/etc/network/interfaces on the master node:
Code:
# network interface settings
auto lo
iface lo inet loopback

auto bond0
iface bond0 inet manual
        slaves eth0 eth1
        bond_mode 4
        bond_miimon 100
        bond_downdelay 200
        bond_updelay 200


auto bond0.431
iface bond0.431 inet manual

auto vmbr0
iface vmbr0 inet static
        bridge_ports bond0.431
        bridge_stp off
        address 192.168.200.2
        netmask 255.255.255.0
        network 192.168.200.0



auto bond0.426
iface bond0.426 inet manual

auto vmbr1
iface vmbr1 inet static
        bridge_ports bond0.426
        bridge_stp off
        address ##
        netmask ##
        network ##
        broadcast ##
        post-up ip r a ## dev vmbr1 src ## table 100
        post-up ip r a default via ## dev vmbr1 table 100
        post-up ip rule add from ## table 100


auto bond0.425
iface bond0.425 inet manual

auto vmbr2
iface vmbr2 inet static
        bridge_ports bond0.425
        bridge_stp off
    address  ##
    netmask  ##
    gateway  ##
        # dns-* options are implemented by the resolvconf package, if installed
        dns-nameservers 127.0.0.1 ##
        dns-search ##


auto bond0.430
iface bond0.430 inet static
        address  ##
        netmask  ##


auto bond0.10
iface bond0.10 inet manual

auto vmbr3
iface vmbr3 inet static
        bridge_ports bond0.10
        address ##
        netmask ##
        bridge_stp off


auto bond0.428
iface bond0.428 inet static
        address  ##
        netmask  ##
        post-up ip r add ## via ##


auto bond0.XX
iface bond0.XX inet static
        address  ##
        netmask  ##


auto bond0.XX
iface bond0.XX inet static
        address  ##
        netmask  ##
        post-up ip r add ## via ##
        post-up ip r add ## via ##
 
Sorry, but kernel '2.6.26-2-amd64' is not a Proxmox VE kernel.

Please install a kernel from our repository and test again.
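A quick way to tell a stock Debian kernel from a Proxmox VE one: the PVE kernel packages carry "pve" in their release string, while Debian's 2.6.26-2-amd64 does not. A minimal sketch of that check (the function name is ours, and the "pve" substring heuristic is an assumption about the kernel naming, not an official Proxmox check):

```shell
# Heuristic: Proxmox VE kernel release strings contain "pve";
# stock Debian kernels (e.g. 2.6.26-2-amd64) do not.
is_pve_kernel() {
    case "$1" in
        *pve*) echo yes ;;
        *)     echo no  ;;
    esac
}

# On a node, check the currently running kernel:
is_pve_kernel "$(uname -r)"
```

Running this on the failing node should print "no" until a kernel from the Proxmox repository has been installed and booted.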
 
Also, that node was not installed with the Proxmox installer - it seems several required packages are missing.

Well, only some packages are missing (vzdump) - but that does not really explain the error you get.
 