Installing Arista CloudVision Portal on Proxmox - safe to run KVM auto-deploy scripts?

victorhooi

Apr 3, 2018
Hi,

I'm trying to deploy Arista's CloudVision Portal software onto Proxmox.

Arista helpfully supplies it as a KVM-specific tarball which includes two qcow2 disk images and automated deployment scripts. Their installation process is documented here:

https://www.arista.com/en/cg-cv/cv-deploying-cvp-on-kvm

My question is - what's the best way to get this onto Proxmox?

I assume I can create a new KVM-based VM, then add the disk images to that.

However, what about the `createNwBridges.py` and `generateXmlForKvm.py`/`cvpTemplate.xml` files?

Is it safe to run these on a Proxmox installation, or can I just skip them?

I took a skim through them, and it seems like I should be able to replicate all of this manually via the Proxmox GUI instead. (I assume running these scripts as-is would step on Proxmox's own network and VM management and break something, right?) I've sketched below what I think the Proxmox-native equivalents would be. Any other advice around this?

Code:
python createNwBridges.py --help
usage: createNwBridges.py [-h] --device-bridge DEVICE_BRIDGE
                          [--device-nic DEVICE_NIC]
                          [--cluster-bridge CLUSTER_BRIDGE]
                          [--cluster-nic CLUSTER_NIC] [--swap-cluster-nic-ip]
                          [--swap-device-nic-ip] [-g GATEWAY] [-f] [--dry-run]

Setup Linux Bridges prior to running your VM

optional arguments:
  -h, --help            show this help message and exit
  --device-bridge DEVICE_BRIDGE
                        Name of device bridged network which connects to VM
                        port 1
  --device-nic DEVICE_NIC
                        Name of physical NIC (eth0/eth1) that we want to
                        connect to the device bridge. Port IPv4 over to device
                        bridge. WARNING - may break network connectivity
  --cluster-bridge CLUSTER_BRIDGE
                        Name of cluster bridged network which connects to VM
                        port 2
  --cluster-nic CLUSTER_NIC
                        Name of physical NIC (eth0/eth1) that we want to
                        connect to the cluster n/w bridge. Port IPv4 over to
                        clusterbr bridge. WARNING - may break network
                        connectivity
  --swap-cluster-nic-ip
                        EXPERIMENTAL - Move cluster NIC ipv4 over to clusterbr
                        WARNING - may break network connectivity
  --swap-device-nic-ip  EXPERIMENTAL - Move mgmtnic NIC ipv4 over to MgmtBr
                        WARNING - may break network connectivity
  -g GATEWAY, --gateway GATEWAY
                        Default GW needed when you pull IPs for your mgmt
                        network
  -f, --force           Strictly enforce changes.Must be specified for porting
                        IPs
  --dry-run             Does not change system configuration.Only prints the
                        generated commands.
root@proxmox:~# python createNwBridges.py --device-bridge br0 --dry-run
dry-run mode. No changes will be made
brctl addbr br0

Code:
python generateXmlForKvm.py --help
usage: generateXmlForKvm.py [-h] [-d] [-r] -n VMNAME --device-bridge
                            DEVICE_BRIDGE [--cluster-bridge CLUSTER_BRIDGE]
                            [-e EMULATOR] [-k IDENTIFIER] [-i INPUT]
                            [-o OUTPUT] [-c CDROM] -x DISK1 [-y DISK2]
                            [-b MEMORY] [-p CPU] [-t]

Get your VM going! Version 2.0

optional arguments:
  -h, --help            show this help message and exit
  -d, --debug           print debug messages to stdout
  -r, --dry-run         Does not write any changes to a file
  -n VMNAME, --vmname VMNAME
                        Name to be given to the VM
  --device-bridge DEVICE_BRIDGE
                        Name of device bridge network which connects to VM
                        port 1
  --cluster-bridge CLUSTER_BRIDGE
                        Name of the cluster control bridged network which
                        connects to VM port 2
  -e EMULATOR, --emulator EMULATOR
                        Fully qualified file system path to qemu-kvm binary
  -k IDENTIFIER, --identifier IDENTIFIER
                        Unique ID for virsh to use to identify the VM. Uses a
                        random value if left unspecified
  -i INPUT, --input INPUT
                        Path to XML template file
  -o OUTPUT, --output OUTPUT
                        Path to XML output file
  -c CDROM, --cdrom CDROM
                        Path to configuration ISO file for CVP and Aboot-veos-
                        serial.iso for CVX
  -x DISK1, --disk1 DISK1
                        Path to primary disk for CVP
  -y DISK2, --disk2 DISK2
                        Path to the data disk for CVP
  -b MEMORY, --memory MEMORY
                        Memory in Mega Bytes (MB)
  -p CPU, --cpu CPU     Number of CPUs to use
  -t, --bootcdrom       Boot from ISO/CDROM. Needed for CVX

Thoughts?

Regards,
Victor