Tools to Manage and Update Images

A number of tools are available to help you manage and update images and insert an ISO into the Virtual Machine (VM).

Upgrade the CVP Image

The easiest way to upgrade the CVP Image is to perform a CVP Fast Upgrade. This upgrade option does not require that the VMs be redeployed, and does not result in loss of logs.

 

Note: Fast upgrades are supported only when upgrading from CVP version 2016.1.1 or later; they cannot be used with CVP versions prior to 2016.1.1.

 

To use the CVP fast upgrade option, complete the procedure in the “Fast Upgrades” section of the Backup & Restore, Upgrades, DNS / NTP Server Migration chapter in the CVP Configuration Guide.

If you cannot use the CVP fast upgrade option, use the procedure in Redeploy CVP VM Tool.

Redeploy CVP VM Tool

 

This tool allows you to redeploy the CVP VM if:
  • Something goes wrong during deployment.
  • You want to perform a destructive upgrade, which deletes the virtual CVP disks.

     

 

Note: Back up the CVP data using the CVP tool before using this method.

 

  1. Locate the disks and tool package (cvp-<version>-kvm.tgz) in the CloudVision Portal folder for your version. (You can download the package from https://www.arista.com/en/.)
  2. SSH into the CV appliance Host OS.
  3. Back up CVP data using the CVP tool as documented in the CloudVision Configuration Guide under Upgrading CVP in the subsection titled Backup and Restore (recommended).
  4. Copy the cvp-<version>-kvm.tgz package into the CVA host OS under a new directory (for example, by using wget).
  5. Extract disk1.qcow2 from the package: tar -zxvf cvp-*-kvm.tgz disk1.qcow2
  6. Run it as follows:
    /cva/scripts/redeployCvpKvmVm.py --disk1 disk1.qcow2 [--mem <mem gb>] [--cpu <vcpu count>]
    [--data-size <data disk size>] [--cdrom <cvp config iso>]

    usage: redeployCvpKvmVm.py [-h] [-c CDROM] --disk1 DISK1 [--mem MEM] [--cpu CPU]
    [--data-size DATA_SIZE]

     

    This script helps redeploy a CVP VM. After the VM is deployed, log in to the CVP VM console shell as cvpadmin and follow Setup Steps for Single Node CVP or Setup Steps for Multi-node CVP Cluster.

     

     

    Note:

    Use caution before running redeployCvpKvmVm.py: it stops and restarts your VM and deletes all of your VM disks (that is, your data). Back up your VM data before running it, as described in step 3.
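    The command-line interface documented in the usage string above can be sketched with Python's argparse. This is an illustrative reconstruction of the documented options only, not the actual redeployCvpKvmVm.py script, and the argument values in the example are hypothetical.

    ```python
    import argparse

    def build_parser():
        """Reconstruct the documented CLI of redeployCvpKvmVm.py.

        Illustrative only: this mirrors the usage string shown in the
        procedure above, not the script shipped on the appliance.
        """
        parser = argparse.ArgumentParser(prog="redeployCvpKvmVm.py")
        parser.add_argument("-c", "--cdrom", help="Path to the CVP config ISO")
        parser.add_argument("--disk1", required=True, help="Path to disk1.qcow2")
        parser.add_argument("--mem", type=int, help="VM memory in GB")
        parser.add_argument("--cpu", type=int, help="vCPU count")
        parser.add_argument("--data-size", dest="data_size",
                            help="Size of the /data disk, e.g. 1TB")
        return parser

    # Parse a hypothetical invocation resembling step 6.
    args = build_parser().parse_args(
        ["--disk1", "disk1.qcow2", "--mem", "22", "--cpu", "8",
         "--data-size", "1TB"]
    )
    print(args.disk1, args.mem, args.data_size)
    ```

    Note that --disk1 is the only required argument; everything else falls back to the tool's defaults when omitted.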

Redeploy CVX VM Tool

 

This tool enables you to redeploy CVX VMs. You typically redeploy CVX VMs if:
  • Something goes wrong during deployment.
  • You need to perform a destructive upgrade, which deletes the virtual CVX disks.

     

  1. Go to https://www.arista.com/en/.
  2. Locate and download:
    • The CVX disk and the Aboot .iso for the version of CVX you are using.
    • The tool package (arista-cv-<version>-mfg.tgz), which is in the CloudVision Portal folder for your version.

       

  3. SSH into the CV appliance Host OS.
  4. Back up the CVX running configuration.
  5. Do one of the following:
    • Copy the package, disks, and .iso files you downloaded in Step 2 into the CVA host OS under a new directory.
    • Use wget to download the cvp-<version>-kvm.tgz package into the CVA host OS under a new directory.

       

  6. Extract the kvm.tgz to get the redeployCvxKvmVm.py script (tar -zxvf cvp-<version>-kvm.tgz).
  7. Copy the downloaded CVX disk and the Aboot disk to /data/cvx/ on the CVA host OS.
  8. Run the tool: ./redeployCvxKvmVm.py --name cvx --cvxDisk EOS.qcow2 --abootDisk Aboot-veos-serial.iso
    -bash-4.3$ ./redeployCvxKvmVm.py -h
    usage: redeployCvxKvmVm.py [-h] [-n NAME] --cvxDisk CVXDISK --abootDisk
    ABOOTDISK

    optional arguments:
    -h, --help            show this help message and exit
    -n NAME, --name NAME  Name of the CVX VM
    --cvxDisk CVXDISK     Path to the Cvx/Eos disk
    --abootDisk ABOOTDISK
                          Path to the Aboot disk

     

    This script redeploys the CVX VM on the CloudVision Appliance. It takes the CVX disk image and the Aboot disk image as arguments if they are not found locally. The CVX/EOS disks and the Aboot images are available from https://www.arista.com/en/.
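    Because the script expects both disk images to be present locally, it can help to verify them before invoking the tool. The sketch below, under the assumption that you run it from the directory holding the images, checks both paths and assembles the documented command line; it does not replace the actual script.

    ```python
    import os

    def build_redeploy_cmd(name, cvx_disk, aboot_disk):
        """Check that both disk images exist locally, then assemble the
        documented redeployCvxKvmVm.py invocation as an argument list.

        Sketch only: the real script lives on the CVA host OS; this
        helper just mirrors the usage shown in step 8 above.
        """
        for path in (cvx_disk, aboot_disk):
            if not os.path.isfile(path):
                raise FileNotFoundError(f"missing disk image: {path}")
        return ["./redeployCvxKvmVm.py", "--name", name,
                "--cvxDisk", cvx_disk, "--abootDisk", aboot_disk]
    ```

    The returned list can be passed to subprocess.run on the appliance; failing fast on a missing image avoids starting a redeploy that cannot complete.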

     

Disk Expansion and Conversion For CloudVision VM

CloudVision VM virtual disks are shipped in qcow2 format, and the data disk size is 1 TB by default. For better scale and performance, it is recommended that the virtual disks be converted to raw format and that the data disk be expanded to a size appropriate to the expected scale. At the maximum scale supported today, a 4.5 TB /data disk is recommended.

The tool can be run as /cva/scripts/cvaDiskUtil.py. It supports three commands:

status - Displays the CloudVision VM's current virtual disk information and information about the partition where the VM is placed.
/cva/scripts/cvaDiskUtil.py [status]

 

makeraw - Converts the CloudVision VM's qcow2 virtual disks to raw format.
/cva/scripts/cvaDiskUtil.py makeraw [--quiet]

 

expand - Increases the size of the /data virtual disk in the CloudVision VM.
/cva/scripts/cvaDiskUtil.py expand [--disk-size DISK_SIZE]
DISK_SIZE can be specified in KB, MB, GB, or TB.
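A unit-suffixed DISK_SIZE string can be normalized to bytes as sketched below. The exact parsing rules cvaDiskUtil.py applies are not documented here, so the 1024-based interpretation is an assumption for illustration.

```python
def parse_disk_size(size):
    """Convert a unit-suffixed size string (KB, MB, GB, TB) to bytes.

    Assumed interpretation using 1024-based units; cvaDiskUtil.py's
    actual parsing rules are not shown in this document.
    """
    units = {"KB": 1024, "MB": 1024**2, "GB": 1024**3, "TB": 1024**4}
    size = size.strip().upper()
    for suffix, factor in units.items():
        if size.endswith(suffix):
            return int(float(size[:-len(suffix)]) * factor)
    raise ValueError(f"size must end in one of {sorted(units)}: {size!r}")

# The 4.5 TB /data disk recommended above, expressed in bytes.
print(parse_disk_size("4.5TB"))
```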

 

 

Note: The makeraw and expand commands stop and restart the CVP VM.

 

Upgrade the Host Image

Arista provides an ISO containing all updated packages, along with a tool that mounts the image ISO and upgrades the system.

Make sure you use the correct upgrade procedure based on your current CV Appliance configuration. The two basic upgrade procedures are:

Single-node Configurations

Use the following procedure to upgrade a single-node CV Appliance configuration.

Note: Allow 20 minutes for the CVP application to become accessible again after the CV Appliance host comes up. The host comes up after the system reboots (the upgrade script, run near the end of the procedure, automatically reboots the system).

Complete the following steps to upgrade single-node CV Appliance configurations.

  1. Go to https://www.arista.com/en/.
  2. Download the mfg tgz tools (arista-cv-<version>-mfg.tgz).
  3. Extract the package: tar -xvf arista-cv-<version>-mfg.tgz. This ensures you have the new version of upgradeCva.py.
  4. Download the update ISO.
  5. Run the upgrade CV appliance tool.
    ./upgradeCva.py -i <Arista Cva Update Iso>
    $ ./upgradeCva.py -h
    usage: upgradeCva.py [-h] [-i ISO] [--fixNw] [-vm] [-f FORCE]

    Upgrade CVA

    optional arguments:
    -h, --help         show this help message and exit
    -i ISO, --iso ISO  Path to ISO
    --fixNw            Fixes CVA network config to what is expected. Does not
                       touch devicebr config.
    -vm, --vm          Used for CVA VM emulation - NOT for HW CVA
    -f, --force        Do what I say. Used for bypassing yes/no question for
                       reboot

     

  6. Run the version command to verify that the upgrade was successful.
     

Multi-node Configurations

Perform a rolling upgrade when upgrading multi-node CV Appliance configurations.

The steps you use are the same as those used for single-node configurations, except that you must repeat the procedure for each node.

The basic steps involved in performing the rolling upgrade are:
  • Log in to one of the CV Appliance hosts.
  • Complete the upgrade using the steps in the procedure. (Make sure you follow the rules in the Important! notice below when performing the upgrade.)
  • Wait until all CVX and CVP VMs are up and running before you begin the upgrade on the next host.

     

 

Important: To complete the rolling upgrade, you must:
  • Upgrade only one CV Appliance host (machine) at a time.
  • Wait until each host machine is upgraded and all CVX VMs and CVP VMs are fully up and running before upgrading the next host in the cluster.

CVP takes approximately 20 minutes to become fully accessible after the system reboot (the upgrade script, run near the end of the procedure, automatically reboots the system). Verify that CVP is accessible before upgrading the next CV Appliance host in your multi-node cluster.
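The "wait until CVP is accessible" step between hosts can be automated with a simple polling loop. The check function below is a placeholder (a real probe might be an HTTPS request to the CVP UI), and the 25-minute default timeout is an assumed cushion over the roughly 20 minutes noted above.

```python
import time

def wait_until(check, timeout_s=25 * 60, interval_s=30, sleep=time.sleep):
    """Poll `check` until it returns True or `timeout_s` elapses.

    Sketch of the wait-between-hosts step in a rolling upgrade.
    `check` is a placeholder callable supplied by the caller, e.g.
    an HTTPS probe of the CVP UI; `sleep` is injectable for testing.
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if check():
            return True
        sleep(interval_s)
    return False
```

If the loop returns False, stop the rolling upgrade and investigate before touching the next host, since upgrading more than one host at a time is not supported.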

 

 

Complete the following steps to upgrade multi-node CV Appliance configurations.

  1. Go to https://www.arista.com/en/.
  2. Download the mfg tgz tools (arista-cv-<version>-mfg.tgz).
  3. Extract the package: tar -xvf arista-cv-<version>-mfg.tgz. This ensures you have the new version of upgradeCva.py.
  4. Download the updated ISO.
  5. Run the upgrade CV Appliance tool (see the example below).
    upgradeCva.py -i <Arista Cva Update Iso>
    $ ./upgradeCva.py -h
    usage: upgradeCva.py [-h] [-i ISO] [--fixNw] [--useLacp] [--reboot] [-f] [-r]

    Upgrade base image

    optional arguments:
    -h, --help         show this help message and exit
    -i ISO, --iso ISO  Path to ISO
    --fixNw            Fixes CVA network config to what is expected. Does not
                       touch devicebr config.
    -vm, --vm          Used for CVA VM emulation - NOT for HW CVA
    -f, --force        Do what I say. Used for bypassing yes/no question for
                       reboot

     

  6. Wait until all CVX VMs and CVP VMs are fully up and running. (CVP takes approximately 20 minutes to be fully accessible after the system reboot.)
  7. Run the version command to verify that the upgrade was successful.
  8. Repeat Step 2 through Step 7 on the remaining CV Appliance hosts you want to upgrade.