Virtual Edge Deployment

The Virtual Edge is available as a virtual machine that can be installed on standard hypervisors. This section discusses the prerequisites and the installation procedure for deploying a VeloCloud Virtual Edge on KVM and ESXi hypervisors.

Deployment Prerequisites for Virtual Edge

This section describes the requirements for deploying a Virtual Edge.

Virtual Edge Requirements

Keep in mind the following requirements before you deploy a Virtual Edge:
  • Supports 2, 4, 8, and 10 vCPU assignment.

                                      2 vCPU   4 vCPU   8 vCPU   10 vCPU
      Minimum Memory (DRAM)           8 GB     16 GB    32 GB    32 GB
      Minimum Storage (Virtual Disk)  8 GB     8 GB     16 GB    16 GB


  • The AES-NI CPU capability must be passed through to the Virtual Edge appliance.
  • Up to 8 vNICs are supported (by default, GE1 and GE2 are LAN ports and GE3-GE8 are WAN ports).
    Note: Over-subscription of Virtual Edge resources such as CPU, memory, and storage is not supported.
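The sizing table above can be expressed as a small helper for provisioning or validation scripts. This is a minimal sketch; the function name and output format are illustrative, not part of the product:

```shell
# min_requirements VCPUS
# Prints "<min memory GB> <min storage GB>" for a supported vCPU count,
# per the sizing table above, or "unsupported" otherwise.
min_requirements() {
    case "$1" in
        2)  echo "8 8" ;;
        4)  echo "16 8" ;;
        8)  echo "32 16" ;;
        10) echo "32 16" ;;
        *)  echo "unsupported" ;;
    esac
}

min_requirements 4    # prints "16 8" (16 GB DRAM, 8 GB virtual disk)
```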

Recommended Server Specifications

 
  • Intel 82599/82599ES chipset - HP DL380G9 (refer to the HP DL380 datasheet)
  • Intel X710/XL710 chipset - Dell PowerEdge R640
    • CPU Model and Cores - Dual Socket Intel(R) Xeon(R) Gold 5218 CPU @ 2.30GHz with 16 cores each
    • Memory - 384 GB RAM
  • Intel X710/XL710 chipset - Supermicro SYS-6018U-TRTP+
    • CPU Model and Cores - Dual Socket Intel(R) Xeon(R) CPU E5-2630 v4 @ 2.20GHz with 10 cores each
    • Memory - 256 GB RAM

 

Recommended NIC Specifications

 
All of the following NICs use firmware version 7.10, host driver version 2.20.12 on Ubuntu 20.04.6 and Ubuntu 22.04.2, and host driver versions 1.11.2.5 and 1.11.3.5 on ESXi 7.0U3 and ESXi 8.0U1a:
  • Dual Port Intel Corporation Ethernet Controller XL710 for 40GbE QSFP+
  • Dual Port Intel Corporation Ethernet Controller X710 for 10GbE SFP+
  • Quad Port Intel Corporation Ethernet Controller X710 for 10GbE SFP+
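On a Linux host, `ethtool -i <interface>` reports the driver name, driver version, and NIC firmware version, which can be compared against the recommended versions above. A sketch; the interface name and sample output are illustrative:

```shell
# Extract driver name, driver version, and firmware version from
# "ethtool -i" output for comparison with the recommended NIC table.
parse_nic_versions() {
    printf '%s\n' "$1" | awk -F': ' '/^driver:|^version:|^firmware-version:/ { print $2 }'
}

# On a real host:  parse_nic_versions "$(ethtool -i eth2)"
sample='driver: i40e
version: 2.20.12
firmware-version: 7.10'
parse_nic_versions "$sample"
```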

Supported Operating Systems

  • Ubuntu Linux Distribution:
    • Ubuntu 20.04.6 LTS
    • Ubuntu 22.04.2 LTS
  • VMware ESXi:
    • VMware ESXi 7.0 U3 with vSphere Web Client 7.0.
    • VMware ESXi 8.0 U1a with vSphere Web Client 8.0.

Firewall/NAT Requirements

If the Virtual Edge is deployed behind a firewall or a NAT device, the following requirements apply:
  • The firewall must allow outbound traffic from the Virtual Edge to the Orchestrator on TCP/443.
  • The firewall must allow outbound traffic to the Internet on UDP/2426 (VCMP).
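Outbound TCP/443 reachability to the Orchestrator can be spot-checked from a Linux host on the same network. A hedged sketch using bash's /dev/tcp; the host name is a placeholder, and UDP/2426 cannot be verified this way because VCMP expects an SD-WAN peer to answer:

```shell
# check_tcp HOST PORT - prints "reachable" if a TCP connection succeeds
# within 5 seconds, "blocked" otherwise. Requires bash and coreutils.
check_tcp() {
    if timeout 5 bash -c "cat < /dev/null > /dev/tcp/$1/$2" 2>/dev/null; then
        echo reachable
    else
        echo blocked
    fi
}

# Example (vco.example.com is a placeholder for your Orchestrator):
# check_tcp vco.example.com 443
```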

CPU Flags Requirements

For detailed information about CPU flags requirements to deploy Virtual Edge, see Special Considerations for Virtual Edge Deployment.
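The instruction-set requirement can be checked from the host shell by comparing the flags in /proc/cpuinfo against the required set. A sketch; flag names below use the /proc/cpuinfo spelling (RDTSC support appears as the "tsc" flag):

```shell
# missing_flags FLAGS_STRING - prints the required CPU flags (as named
# in /proc/cpuinfo) that are absent from the given flags string.
missing_flags() {
    required="aes ssse3 sse4_1 sse4_2 tsc rdrand rdseed"
    missing=""
    for f in $required; do
        case " $1 " in
            *" $f "*) ;;                     # flag present
            *) missing="$missing$f " ;;      # flag absent
        esac
    done
    printf '%s' "$missing"
}

# On a real host:
# missing_flags "$(grep -m1 '^flags' /proc/cpuinfo | cut -d: -f2)"
```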

Special Considerations for Virtual Edge Deployment

This section describes the special considerations for Virtual Edge deployment.
  • The SD-WAN Edge is a latency-sensitive application. Refer to the Arista Documentation to adjust the Virtual Machine (VM) as a latency-sensitive application.
  • Recommended Host settings:
    • BIOS settings to achieve highest performance:
      • CPUs at 2.0 GHz or higher.
      • Enable Intel Virtualization Technology (Intel VT).
      • Deactivate Hyper-threading.
      • The Virtual Edge supports the paravirtualized vNIC VMXNET 3 and the passthrough vNIC SR-IOV.
      • Deactivate power savings on CPU BIOS for maximum performance.
      • Activate CPU turbo.
      • CPU must support the AES-NI, SSSE3, SSE4, RDTSC, RDSEED, RDRAND instruction sets.
      • Recommend reserving 2 cores for Hypervisor workloads. For example, for a 10-core CPU system, recommend running one 8-core virtual edge or two 4-core virtual edges and reserve 2 cores for Hypervisor processes.
    • For a dual socket host system, ensure the Hypervisor assigns network adapters, memory, and CPU resources within the same socket (NUMA) boundary as the vCPUs assigned.
  • Recommended VM settings:
    • CPU should be 100% reserved.
    • CPU shares should be High.
    • Memory should be 100% reserved.
    • Latency sensitivity should be High.
  • The default username for the SD-WAN Edge SSH console is root.

Cloud-init Creation

cloud-init is a Linux package responsible for handling the early initialization of instances. If available in the distribution, it allows many common parameters of an instance to be configured directly after installation, producing a fully functional instance from a series of inputs. The cloud-init configuration is composed of two main files: the meta-data file and the user-data file. The meta-data file contains the network configuration for the Edge, and the user-data file contains the Edge software configuration. Together, these files identify the instance of the Virtual Edge being installed.

The cloud-init behavior is configured through the user-data, which is supplied when launching the instance. This is typically done by attaching a secondary disk in ISO format that cloud-init looks for at first boot. This disk contains all the early configuration data to be applied at that time.

The Virtual Edge supports cloud-init with all essential configurations packaged in an ISO image.

Create the Cloud-init Metadata and User-data Files

The final installation configuration options are set with a pair of cloud-init configuration files. The first installation configuration file contains the metadata. Create this file with a text editor and name it meta-data. This file provides information that identifies the instance of the Virtual Edge being installed. The instance-id can be any identifying name, and the local-hostname should be a host name that follows your site standards.

  1. Create the meta-data file that contains the instance name and host name:
    • instance-id: vedge1
    • local-hostname: vedge1
  2. Add the network-interfaces section, shown below, to specify the WAN configuration. By default, all Edge WAN interfaces are configured for DHCP. Multiple interfaces can be specified.
    
    root@ubuntu# cat meta-data
    instance-id: Virtual-Edge
    local-hostname: Virtual-Edge
    network-interfaces:
      GE1:
        mac_address: 52:54:00:79:19:3d
      GE2:
        mac_address: 52:54:00:67:a2:53
      GE3:
        type: static
        ipaddr: 11.32.33.1
        mac_address: 52:54:00:e4:a4:3d
        netmask: 255.255.255.0
        gateway: 11.32.33.254
      GE4:
        type: static
        ipaddr: 11.32.34.1
        mac_address: 52:54:00:14:e5:bd
        netmask: 255.255.255.0
        gateway: 11.32.34.254
  3. Create the user-data file. This file contains three main modules: Orchestrator, Activation Code, and Ignore Certificates Errors.
     
    Module                  Description
    vco                     IP address/URL of the Orchestrator.
    activation_code         Activation code for the Virtual Edge, generated while creating an Edge instance on the Orchestrator.
    vco_ignore_cert_errors  Option to verify or ignore any certificate validity errors.


    Important: There is no default password in the Edge image. The password must be provided in the cloud-config:
    #cloud-config
    password: passw0rd
    chpasswd: { expire: False }
    ssh_pwauth: True
    velocloud:
      vce:
        vco: 10.32.0.3
        activation_code: F54F-GG4S-XGFI
        vco_ignore_cert_errors: true

Create the ISO File

Once you have completed your files, package them into an ISO image. This ISO image is used as a virtual configuration CD with the virtual machine. The ISO image (called seed.iso in the example below) is created with the following command on a Linux system:
genisoimage -output seed.iso -volid cidata -joliet -rock user-data meta-data network-data

Including the network-interfaces section is optional. If the section is not present, the DHCP option is used by default.

After the ISO image is generated, transfer the image to a datastore on the host machine.
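The steps above can be scripted end to end. The sketch below writes the two files using the example values from this section and shows the packaging command; genisoimage must be installed for the final (commented) step, and you would substitute your own Orchestrator address and activation code:

```shell
#!/bin/sh
# Write minimal meta-data and user-data files for a Virtual Edge,
# then package them into seed.iso.
mkdir -p cloud-init-demo

cat > cloud-init-demo/meta-data <<'EOF'
instance-id: Virtual-Edge
local-hostname: Virtual-Edge
EOF

cat > cloud-init-demo/user-data <<'EOF'
#cloud-config
password: passw0rd
chpasswd: { expire: False }
ssh_pwauth: True
velocloud:
  vce:
    vco: 10.32.0.3
    activation_code: F54F-GG4S-XGFI
    vco_ignore_cert_errors: true
EOF

# Package the files; cloud-init requires the volume ID "cidata":
# ( cd cloud-init-demo && \
#   genisoimage -output seed.iso -volid cidata -joliet -rock user-data meta-data )
```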

Install Virtual Edge

You can install Virtual Edge on KVM and ESXi using a cloud-init config file. The cloud-init config contains interface configurations and the activation key of the Edge.

Ensure you created the cloud-init meta-data and user-data files and packaged the files into an ISO image file. For steps, see Cloud-init Creation.

KVM provides multiple ways to provide networking to virtual machines. Arista recommends the following options:
  • SR-IOV
  • Linux Bridge
  • OpenVSwitch Bridge
If you decide to use SR-IOV mode, enable SR-IOV on the hypervisor first. For steps, see Activate SR-IOV on KVM and Activate SR-IOV on ESXi.

Activate SR-IOV on KVM

To enable the SR-IOV mode on KVM, perform the following steps.

Before you begin: This requires a specific NIC card. The following chipsets are certified to work with the Gateway and Edge.
  • Intel 82599/82599ES
  • Intel X710/XL710
Note: Before using the Intel X710/XL710 cards in SR-IOV mode on KVM, make sure the supported Firmware and Driver versions specified in the Deployment Prerequisites section are installed correctly.
Note: SR-IOV mode is not supported if the KVM Virtual Edge is deployed with a High-Availability topology. For High-Availability deployments, ensure that SR-IOV is not enabled for that KVM Edge pair.
To enable SR-IOV on KVM:
  1. Enable SR-IOV in the BIOS. The exact procedure depends on your BIOS: log in to the BIOS console and look for SR-IOV Support/DMA. From the host shell, you can verify that the CPU supports Intel virtualization by checking for the vmx flag:
    cat /proc/cpuinfo | grep vmx
  2. Add the options on Boot (in /etc/default/grub).
    GRUB_CMDLINE_LINUX="intel_iommu=on"
    1. Run the following commands: update-grub and update-initramfs -u.
    2. Reboot the server.
    3. Verify that IOMMU is enabled:
      velocloud@KVMperf3:~$ dmesg | grep -i IOMMU
       [ 0.000000] Command line: BOOT_IMAGE=/vmlinuz-3.13.0-107-generic root=/dev/mapper/qa--multiboot--002--vg-root ro intel_iommu=on splash quiet vt.handoff=7 
       [ 0.000000] Kernel command line: BOOT_IMAGE=/vmlinuz-3.13.0-107-generic root=/dev/mapper/qa--multiboot--002--vg-root ro intel_iommu=on splash quiet vt.handoff=7 
       [ 0.000000] Intel-IOMMU: enabled
       ….
       velocloud@KVMperf3:~$ 
  3. Based on the NIC chip set used, add a driver as follows:
    • For the Intel 82599/82599ES cards in SR-IOV mode:
      1. Download and install ixgbe driver from the Intel website.
      2. Extract the driver archive (tar) and install the driver (sudo make install). Then check the ixgbe configuration file:
        velocloud@KVMperf1:~$ cat /etc/modprobe.d/ixgbe.conf
      3. If the ixgbe configuration file does not exist, create it with the following contents:
        options ixgbe max_vfs=32,32
        options ixgbe allow_unsupported_sfp=1
        options ixgbe MDD=0,0
        blacklist ixgbevf
      4. Run the update-initramfs -u command and reboot the Server.
      5. Use the modinfo command to verify that the installation is successful.
        velocloud@KVMperf1:~$ modinfo ixgbe
         filename: /lib/modules/4.4.0-62-generic/updates/drivers/net/ethernet/intel/ixgbe/ixgbe.ko
         version: 5.0.4
         license: GPL
         description: Intel(R) 10GbE PCI Express Linux Network Driver
         author: Intel Corporation
         srcversion: BA7E024DFE57A92C4F1DC93
    • For the Intel X710/XL710 cards in SR-IOV mode:
      1. Download and install i40e driver from the Intel website.
      2. Create the Virtual Functions (VFs).
        echo 4 > /sys/class/net/<device name>/device/sriov_numvfs
      3. To make the VFs persistent after a reboot, add the command from the previous step to the "/etc/rc.d/rc.local" file.
      4. Deactivate the VF driver.
        echo "blacklist i40evf" >> /etc/modprobe.d/blacklist.conf
      5. Run the update-initramfs -u command and reboot the Server.

Validating SR-IOV (Optional)

You can quickly verify whether your host machine has SR-IOV enabled by using the following command:
lspci | grep -i Ethernet
Verify that Virtual Functions are present:
01:10.0 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function(rev
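Counting the Virtual Function lines can be scripted for use in health checks. A sketch with sample lspci output; the PCI addresses shown are illustrative:

```shell
# count_vfs LSPCI_OUTPUT - prints how many SR-IOV Virtual Functions
# appear in the given lspci listing.
count_vfs() {
    printf '%s\n' "$1" | grep -c 'Virtual Function'
}

# On a real host:  count_vfs "$(lspci | grep -i Ethernet)"
sample='01:00.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection
01:10.0 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function
01:10.2 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function'
count_vfs "$sample"   # prints 2
```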

Install Virtual Edge on KVM

This section describes how to install and activate the Virtual Edge on KVM using a cloud-init config file.

If you decide to use SR-IOV mode, enable SR-IOV on KVM. For steps, see Activate SR-IOV on KVM.

Note: SR-IOV mode is not supported if the KVM Virtual Edge is deployed with a High-Availability topology. For High-Availability deployments, ensure that SR-IOV is not enabled for that KVM Edge pair.

To run Virtual Edge on KVM using libvirt:

  1. Use gunzip to extract the qcow2 file to the image location (for example, /var/lib/libvirt/images).
  2. Create the network pools that you are going to use for the device, using SR-IOV or OpenVSwitch.

    Using SR-IOV:

    The following is a sample network interface template specific to Intel X710/XL710 NIC cards using SR-IOV.
    
    <interface type='hostdev' managed='yes'>
    <mac address='52:54:00:79:19:3d'/>
    <driver name='vfio'/>
    <source>
    <address type='pci' domain='0x0000' bus='0x83' slot='0x0a' function='0x0'/>
    </source>
    <model type='virtio'/>
    </interface>
    
    Using OpenVSwitch:
    <network>
    <name>passthrough</name>
    <model type='virtio'/>
    <forward mode="bridge"/>
    <bridge name="passthrough"/>
    <virtualport type='openvswitch'/>
    <vlan trunk='yes'>
    <tag id='33' nativeMode='untagged'/>
    <tag id='200'/>
    <tag id='201'/>
    <tag id='202'/>
    </vlan>
    </network>
    
    <network>
    <name>passthrough</name>
    <model type='virtio'/>
    <forward mode="bridge"/>
    </network>
    
    The following is a sample domain XML file that defines the Virtual Edge VM:
    
    <domain type='kvm'>
    <name>vedge1</name>
    <memory unit='KiB'>4194304</memory>
    <currentMemory unit='KiB'>4194304</currentMemory>
    <vcpu placement='static'>2</vcpu>
    <resource>
    <partition>/machine</partition>
    </resource>
    <os>
    <type arch='x86_64' machine='pc-i440fx-trusty'>hvm</type>
    <boot dev='hd'/>
    </os>
    <features>
    <acpi/>
    <apic/>
    <pae/>
    </features>
    <!-- Set the CPU mode to host model to leverage all the available features on the host CPU -->
    <cpu mode='host-model'>
    <model fallback='allow'/>
    </cpu>
    <clock offset='utc'/>
    <on_poweroff>destroy</on_poweroff>
    <on_reboot>restart</on_reboot>
    <on_crash>restart</on_crash>
    <devices>
    <emulator>/usr/bin/kvm-spice</emulator>
    <!-- Below is the location of the qcow2 disk image -->
    <disk type='file' device='disk'>
    <driver name='qemu' type='qcow2'/>
    <source file='/var/lib/libvirt/images/edge-VC_KVM_GUEST-x86_64-2.3.0-18-R23-20161114-GA-updatable-ext4.qcow2'/>
    <target dev='sda' bus='sata'/>
    <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>
    <!-- If using cloud-init to boot up virtual edge, attach the 2nd disk as CD-ROM -->
    <disk type='file' device='cdrom'>
    <driver name='qemu' type='raw'/>
    <source file='/home/vcadmin/cloud-init/vedge1/seed.iso'/>
    <target dev='sdb' bus='sata'/>
    <readonly/>
    <address type='drive' controller='1' bus='0' target='0' unit='0'/>
    </disk>
    <controller type='usb' index='0'>
    <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
    </controller>
    <controller type='pci' index='0' model='pci-root'/>
    <controller type='sata' index='0'>
    <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </controller>
    <controller type='ide' index='0'>
    <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
    </controller>
    <!-- The first two interfaces are for the default L2 interfaces, NOTE VLAN support just for SR-IOV and OpenvSwitch -->
    <interface type='network'>
    <model type='virtio'/>
    <source network='LAN1'/>
    <vlan><tag id='#hole2_vlan#'/></vlan>
    <alias name='LAN1'/>
    <address type='pci' domain='0x0000' bus='0x00' slot='0x12' function='0x0'/>
    </interface>
    <interface type='network'>
    <model type='virtio'/>
    <source network='LAN2'/>
    <vlan><tag id='#LAN2_VLAN#'/></vlan>
    <alias name='hostdev1'/>
    <address type='pci' domain='0x0000' bus='0x00' slot='0x13' function='0x0'/>
    </interface>
    <!-- The next two interfaces are for the default L3 interfaces. Note that 6 additional routed interfaces are supported, for a combination of 8 interfaces total -->
    <interface type='network'>
    <model type='virtio'/>
    <source network='WAN1'/>
    <vlan><tag id='#WAN1_VLAN#'/></vlan>
    <alias name='WAN1'/>
    <address type='pci' domain='0x0000' bus='0x00' slot='0x14' function='0x0'/>
    </interface>
    <interface type='network'>
    <model type='virtio'/>
    <source network='WAN2'/>
    <vlan><tag id='#WAN2_VLAN#'/></vlan>
    <alias name='WAN2'/>
    <address type='pci' domain='0x0000' bus='0x00' slot='0x15' function='0x0'/>
    </interface>
    <serial type='pty'>
    <target port='0'/>
    </serial>
    <console type='pty'>
    <target type='serial' port='0'/>
    </console>
    <input type='mouse' bus='ps2'/>
    <input type='keyboard' bus='ps2'/>
    <graphics type='vnc' port='-1' autoport='yes' listen='127.0.0.1'>
    <listen type='address' address='127.0.0.1'/>
    </graphics>
    <sound model='ich6'>
    <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </sound>
    <video>
    <model type='cirrus' vram='9216' heads='1'/>
    <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </video>
    <memballoon model='virtio'>
    <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </memballoon>
    </devices>
    </domain>
  3. Save the domain XML file that defines the VM (for example, vedge1.xml created in Step 2).
  4. Launch the VM by performing the following steps:
    1. Create VM.
      virsh define vedge1.xml
    2. Start VM.
      virsh start vedge1
    Note: vedge1 is the name of the VM defined in the name element of the domain XML file. Replace vedge1 with the name you specify in the name element.
  5. If you are using SR-IOV mode, after launching the VM, set the following on the Virtual Functions (VFs) used:
    1. Set the spoofchk off.
      ip link set eth1 vf 0 spoofchk off
    2. Set the Trusted mode on.
      ip link set dev eth1 vf 0 trust on
    3. Set the VLAN, if required.
      ip link set eth1 vf 0 vlan 3500
    Note: The Virtual Functions configuration step is not applicable for OpenVSwitch (OVS) mode.
  6. Console into the VM.
    virsh list
    Id Name State
    ----------------------------------------------------
    25 test_vcg running
    velocloud@KVMperf2$ virsh console 25
    Connected to domain test_vcg
    Escape character is ^]
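Steps 4 and 5 above can be collected into one launch script. The sketch below echoes each command instead of executing it; replace the run wrapper's body with "$@" to execute for real on a KVM host (vedge1 and eth1 are the example names used in this section):

```shell
#!/bin/sh
# Dry-run of the Virtual Edge launch sequence from steps 4-5 above.
run() { echo "+ $*"; }   # replace body with "$@" to execute for real

run virsh define vedge1.xml           # create the VM
run virsh start vedge1                # start the VM
# SR-IOV only - skip these three in OpenVSwitch (OVS) mode:
run ip link set eth1 vf 0 spoofchk off
run ip link set dev eth1 vf 0 trust on
run ip link set eth1 vf 0 vlan 3500   # optional VLAN tag
```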

The cloud-init file already includes the Orchestrator IP address and the activation key, which was generated while creating the Virtual Edge on the Orchestrator. As the Virtual Edge powers up, its interfaces are configured from the cloud-init settings. Once online, the Virtual Edge activates with the Orchestrator using the activation key.

Activate SR-IOV on ESXi

Enabling SR-IOV on ESXi is an optional configuration.

Prerequisites: This requires a specific NIC card. The following chipsets are certified to work with the Gateway.
  • Intel 82599/82599ES
  • Intel X710/XL710
Note: Before using the Intel X710/XL710 cards in SR-IOV mode, make sure the supported Firmware and Driver versions described in the Deployment Prerequisites section are installed correctly.
To enable SR-IOV on ESXi:
  1. Make sure that your NIC card supports SR-IOV. Check the Hardware Compatibility List (HCL) at Arista Documentation.
    • Brand Name: Intel
    • I/O Device Type: Network
    • Features: SR-IOV
    Figure 1. Compatibility Guide
  2. Once you have a supported NIC card, go to the specific ESXi host, select the Configure tab, and then choose Physical adapters.
    Figure 2. Configure Physical Adapter
  3. Select Edit Settings. Change Status to Enabled and specify the number of virtual functions required. This number varies by the type of NIC card.
    Figure 3. Edit Settings
  4. Reboot the hypervisor.
  5. If SR-IOV is successfully enabled, the number of Virtual Functions (VFs) will show under the particular NIC after ESXi reboots.
    Figure 4. Virtual Functions List
Note: To support VLAN tagging on SR-IOV interfaces, you must configure VLAN ID 4095 (Allow All) on the Port Group connected to the SR-IOV interface. For more information, see VLAN Configuration.

Install Virtual Edge on ESXi

This section describes how to install Virtual Edge on ESXi.

If you decide to use SR-IOV mode, enable SR-IOV on ESXi. For steps, see Activate SR-IOV on ESXi.

To install Virtual Edge on ESXi:
  1. Use the vSphere client to deploy an OVF template, and then select the Edge OVA file.
    Figure 5. vSphere Web Client
  2. Select an OVF template from a URL or a local file.
    Figure 6. Deploy OVF Template
  3. Select a name and location for the virtual machine.
  4. Select a resource.
  5. Verify the template details.
    Figure 7. Review Details
  6. Select the storage location to store the files for the deployment template.
    Figure 8. Select Storage
  7. Configure the networks for each of the interfaces.
    Note: Skip this step if you are using a cloud-init file to provision the Virtual Edge on ESXi.
    Figure 9. Select Networks
  8. Customize the template by specifying the deployment properties as follows:
    1. From the Orchestrator UI, retrieve the URL/IP Address. You will need this address for Step c below.
    2. Create a new Virtual Edge for the Enterprise. Once the Edge is created, copy the Activation Key. You will need the Activation Key for Step c below.
      Figure 10. Create Virtual Edge
    3. On the customize template page shown in the image below, type in the Activation Code that you retrieved in Step b above, and the Orchestrator URL/IP Address retrieved in Step a above, into the corresponding fields.
      Figure 11. Customize Template

       

      Figure 12. Customize Template - Edit Interface
  9. Review the configuration data.
    Figure 13. Deployment Completion
  10. Power on the Virtual Edge.
    Figure 14. Virtual Edge Installation

Once the Edge powers up, it establishes connectivity to the Orchestrator.