Tools to Manage and Update Images for the Virtual Machine
Several tools are available to manage and update images and to insert an ISO into the Virtual Machine (VM).
Upgrade the Host Image
Arista provides an executable that will update all packages in the CVA.
Single Node CloudVision Appliance
To upgrade a single-node CVA, perform all the steps listed in Steps to Upgrade the CVA. After the system reboots in the last step of the upgrade and the CVA host comes back up, allow 20 minutes for the CVP application to become accessible again.
Multi-Node CloudVision Appliance
Perform a rolling upgrade to update the CVA systems in a multi-node configuration. Perform all the steps listed in Steps to Upgrade the CVA, from start to finish, on only one CVA at a time. After the upgrade, wait for all the VMs (CVP and CVX) to be fully up and running (CVP takes 20 minutes to come up after a reboot), and verify that CVP is accessible. After the verification, upgrade the second CVA host in the same fashion, followed by the third.
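The "wait until CVP is accessible" step of a rolling upgrade can be scripted. The following is a minimal sketch, not part of the appliance tooling; the hostname, URL path, and timing defaults are assumptions to adjust for your deployment.

```shell
# wait_for_cvp: poll the CVP web UI until it answers over HTTPS, or give up
# after `timeout` seconds. The /web/ path and 30-minute default are assumptions.
wait_for_cvp() {
  local host=$1 timeout=${2:-1800} interval=${3:-15} waited=0
  while ! curl -ksf -m 5 "https://${host}/web/" -o /dev/null; do
    sleep "$interval"
    waited=$(( waited + interval ))
    if [ "$waited" -ge "$timeout" ]; then
      echo "CVP on ${host} not reachable after ${timeout}s" >&2
      return 1
    fi
  done
}
```

Usage: `wait_for_cvp cvp-node1.example.com && echo "safe to upgrade the next CVA"`.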
Steps to Upgrade the CVA
- Download the upgrade executable (upgradeCva-<version>) from http://www.arista.com.
- Run the upgrade executable.
./upgradeCva-<version> --force
Note: The version can be verified after the upgrade using the "version" command.
# version
CVA Version: 188.8.131.52
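For the post-upgrade verification, the version string can also be pulled out of the "version" command's output in a script. This helper is purely illustrative (not part of the CVA tooling) and assumes the "CVA Version: <x>" output format shown above.

```shell
# cva_version: extract the version string from `version` command output on stdin.
cva_version() {
  awk -F': ' '/^CVA Version/ { print $2 }'
}
```

Usage: `version | cva_version`.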
Redeploy CVP VM Tool
Use this tool if:
- Something goes wrong during deployment.
- You want to perform a destructive upgrade; the tool deletes the virtual CVP disks.
Complete the following steps:
- Locate the disks and tool package (cvp-<version>-kvm.tgz) in the CloudVision Portal folder for your version. (You can download the package from https://www.arista.com.)
- SSH into the CV appliance Host OS.
- Back up CVP data using the CVP tool, as documented in the CloudVision Configuration Guide under Upgrading CVP, in the subsection titled Backup and Restore (recommended). (https://www.arista.com/en/support/product-documentation.)
- Copy the cvp-<version>-kvm.tgz package (for example, using wget) into the CVA host OS under a new directory.
- Extract disk1.qcow2 from the package: tar -zxvf cvp-*-kvm.tgz disk1.qcow2
- Run it as follows: /cva/scripts/redeployCvpKvmVm.py --disk1 disk1.qcow2 [--mem <mem gb>] [--cpu <vcpu count>] [--data-size </data disk size>] [--cdrom <cvp config iso>]
usage: redeployCvpKvmVm.py [-h] [-c CDROM] --disk1 DISK1 [--mem MEM] [--cpu CPU] [--data-size DATA_SIZE]
This script helps redeploy a CVP VM. After the VM is deployed, follow Setup Steps for Single Node CVP, or Setup Steps for Multi-node CVP Cluster, by logging into the CVP VM console shell as cvpadmin.
Note: Use caution with redeployCvpKvmVm.py, as it stops and restarts your VM and deletes all your VM disks, i.e. your data. BACK UP your VM data before running it, as suggested in step 3.
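The extract-and-redeploy steps above can be sketched as a small shell function. This wrapper is hypothetical, not shipped with the appliance; the optional third argument (defaulting to the script path from this guide) exists only so the sketch can be exercised outside a CVA.

```shell
# redeploy_cvp: extract disk1.qcow2 from the downloaded package into a new
# directory, then invoke the redeploy script on it (hedged sketch; see the
# backup warning above before running anything like this for real).
redeploy_cvp() {
  local tgz=$1 workdir=$2 script=${3:-/cva/scripts/redeployCvpKvmVm.py}
  mkdir -p "$workdir" &&
  ( cd "$workdir" &&
    tar -zxf "$tgz" disk1.qcow2 &&   # extract only the root disk image
    "$script" --disk1 disk1.qcow2 )
}
```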
Disk Expansion and Conversion For CloudVision VM
CloudVision VM virtual disks are shipped in qcow2 format with, by default, a 1TB data disk. For better scale and performance, it is recommended that the virtual disks be converted to raw format and the data disk expanded to a size appropriate to the expected scale. At the maximum scale supported today, we recommend a 4.5TB /data disk.
- status - Display current CloudVision VM virtual disk information and information about the partition where the VM is placed.
/cva/scripts/cvaDiskUtil.py status
- makeraw - Convert qcow2 format virtual disks in CloudVision VM to raw format.
/cva/scripts/cvaDiskUtil.py makeraw [--quiet]
- expand - Increase the size of the /data virtual disk in the CloudVision VM.
/cva/scripts/cvaDiskUtil.py expand [--disk-size DISK_SIZE]
DISK_SIZE can be in KB, MB, GB, or TB.
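To illustrate the DISK_SIZE units accepted by expand, here is a small helper that converts such a string to bytes. It is not part of cvaDiskUtil.py, and the assumption that units are binary (1024-based) rather than decimal is mine.

```shell
# to_bytes: interpret a DISK_SIZE string such as 500GB or 4TB as a byte count,
# assuming binary (1024-based) units. Purely illustrative.
to_bytes() {
  local n=${1%??}                      # strip the two-letter unit suffix
  case $1 in
    *KB) echo $(( n * 1024 ));;
    *MB) echo $(( n * 1024 ** 2 ));;
    *GB) echo $(( n * 1024 ** 3 ));;
    *TB) echo $(( n * 1024 ** 4 ));;
    *)   echo "unknown unit in '$1'" >&2; return 1;;
  esac
}
```

For example, `to_bytes 4TB` shows how large the recommended 4.5TB-class /data disk is in raw bytes.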