Install Partner Gateway

This document describes the steps needed to install and deploy a VeloCloud Gateway as a Partner Gateway. It also covers the VRF/VLAN and BGP configuration required on the Orchestrator for the Partner Gateway.

Installation Overview

This section provides an overview of Partner Gateway installation.

About Partner Gateways

Partner Gateways are Gateways tailored to an on-premises operation in which the Gateway is installed and deployed with two interfaces.
  • One interface is facing the private and/or public WAN network and is dedicated to receiving VCMP encapsulated traffic from the remote edges, as well as standard IPsec traffic from Non SD-WAN Destinations.
  • Another interface is facing the data center and provides access to resources or networks attached to a PE router, which the Partner Gateway is connected to. The PE router typically affords access to shared managed services that are extended to the branches, or access to a private (MPLS / IP-VPN) core network in which individual customers are separated.

The following distributions are provided:

Table 1. Distributions and Examples
Provided | Description | Example
Arista | Gateway OVA package | velocloud-vcg-X.X.X-GA.ova
KVM | Gateway qcow2 disk image | velocloud-vcg-X.X.X-GA.qcow2

Minimum Hypervisor Hardware Requirements

The Gateway runs on a standard hypervisor (KVM or ESXi).

Note: Starting from the 6.0.0 release, the Intel E810 NIC is supported on a Gateway using SR-IOV on KVM (Ubuntu 22.04) for high-performance data plane throughput.

Minimum Server Requirements

To run the hypervisor:
  • CPU: Intel XEON (10 cores minimum to run a single 8-core gateway VM) with a minimum clock speed of 2.0 GHz is required to achieve maximum performance.
    • ESXi vmxnet3 network scheduling functions must have 2 cores reserved per Gateway virtual machine (VM), regardless of the number of cores assigned to the Gateway.
      • Example: Assume there is a 24-core server running ESXi with vmxnet3. You can deploy two 8-core Gateways: 2 gateways multiplied by 8 cores requires 16 cores reserved for the gateway application and leaves 8 cores free. Applying the rule above, supporting these two Gateways at peak performance requires an additional 4 cores (2 cores for each of the two Gateways deployed), for a total of 20 cores to run 2 gateways on a 24-core system.
        Note: When using SR-IOV, the network scheduling function is offloaded to the pNIC to achieve higher performance. However, the hypervisor must still perform other scheduling functions like CPU, memory, NUMA allocation management. It is required to always keep two free cores for hypervisor usage.
  • The CPU must support and have the following instruction sets activated: AES-NI, SSSE3, SSE4, RDTSC, RDSEED, RDRAND, AVX/AVX2/AVX512. A quick flag check is shown after this list.
  • A minimum of 4GB free RAM must be available to the server system aside from the memory assigned to the PGW VMs. One Gateway VM requires 16GB RAM, or 32GB RAM if certificate-based authentication is activated.
  • Minimum of 150GB magnetic or SSD-based persistent disk volume (one Gateway VM requires a 64GB disk volume, or 96GB if certificate-based authentication is activated).
  • Minimum required IOPS performance: 200 IOPS.
  • Minimum of one 10GE network interface port; two ports are preferred when enabling the Gateway partner hand-off interface (1GE NICs are supported, but will bottleneck performance). The physical NIC cards supporting SR-IOV are the Intel 82599/82599ES and Intel X710/XL710 chipsets. (See the ‘Enable SR-IOV’ guide).
    Note: SR-IOV does not support NIC bonding. For redundant uplinks, use ESXi vSwitch.
  • VeloCloud Gateway is a data-plane intensive workload that requires dedicated CPU cycles to ensure optimal performance and reliability. Meeting these defined settings is required to ensure the Gateway VM does not oversubscribe the underlying hardware and cause actions that can destabilize the Gateway service (e.g., NUMA boundary crossing, memory, and/or vCPU over-subscription).
  • Ensure that the SD-WAN Partner Gateway VM and the resources such as network interfaces, memory, physical CPUs used to support it fit within a single NUMA node.
  • Note: Configure the host BIOS settings as follows:
    • Hyper-threading- Turned off
    • Power Savings- Turned off
    • CPU Turbo- Enabled
    • AES-NI- Enabled
    • NUMA Node Interleaving- Turned off
    • Use ESXi host version: ESXi-6.7.0-14320388-standard or above
    • Upgrade VM compatibility should be set before starting the SD-WAN Gateway instance
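
A quick way to confirm the instruction-set requirement above is to inspect the CPU flags the host kernel exposes. The check below is a minimal sketch for a Linux host; the flag names are the common /proc/cpuinfo equivalents of the instruction sets listed in the requirements.

grep -m1 '^flags' /proc/cpuinfo | tr ' ' '\n' | grep -xE 'aes|ssse3|sse4_1|sse4_2|rdtscp|rdseed|rdrand|avx|avx2|avx512f'

Each expected flag should appear in the output; a missing flag means the CPU, or the BIOS configuration, does not expose that instruction set.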

Example Server Specifications

Table 2. Server Specifications
NIC Chipset | Hardware | Specification
Intel 82599/82599ES | HP DL380G9 | http://www.hp.com/hpinfo/newsroom/press_kits/2014/ComputeEra/HP_ProLiantDL380_DataSheet.pdf
Intel X710/XL710 | Dell PowerEdge R640 | https://www.dell.com/en-us/work/shop/povw/poweredge-r640
  • CPU Model and Cores- Dual Socket Intel(R) Xeon(R) Gold 5218 CPU @ 2.30GHz with 16 cores each
  • Memory- 384 GB RAM
Intel X710/XL710 | Supermicro SYS-6018U-TRTP+ | https://www.supermicro.com/en/products/system/1U/6018/SYS-6018U-TRTP_.cfm
  • CPU Model and Cores- Dual Socket Intel(R) Xeon(R) CPU E5-2630 v4 @ 2.20GHz with 10 cores each
  • Memory- 256 GB RAM
Intel E810-CQDA2 | Dell PowerEdge R640 | https://www.dell.com/en-us/work/shop/povw/poweredge-r640
  • CPU Model and Cores- Dual Socket Intel(R) Xeon(R) Gold 5218 CPU @ 2.30GHz with 16 cores each
  • Memory- 384 GB RAM

Required NIC Specifications for SR-IOV Support

Table 3. NIC Specifications for SR-IOV Support
Hardware | Manufacturer Firmware Version | Host Driver for Ubuntu 20.04.6 | Host Driver for Ubuntu 22.04.2 | Host Driver for ESXi 7.0U3 | Host Driver for ESXi 8.0U1a
Dual Port Intel Corporation Ethernet Controller XL710 for 40GbE QSFP+ | 7.10 | 2.20.12 | 2.20.12 | 1.11.2.5 and 1.11.3.5 | 1.11.2.5 and 1.11.3.5
Dual Port Intel Corporation Ethernet Controller X710 for 10GbE SFP+ | 7.10 | 2.20.12 | 2.20.12 | 1.11.2.5 and 1.11.3.5 | 1.11.2.5 and 1.11.3.5
Quad Port Intel Corporation Ethernet Controller X710 for 10GbE SFP+ | 7.10 | 2.20.12 | 2.20.12 | 1.11.2.5 and 1.11.3.5 | 1.11.2.5 and 1.11.3.5
Dell rNDC X710/350 card | nvm 7.10 and FW 19.0.12 | 2.20.12 | 2.20.12 | 1.11.2.5 and 1.11.3.5 | 1.11.2.5 and 1.11.3.5
Dual Port Intel Corporation Ethernet Controller E810-CQDA2 for 100GbE QSFP | 4.20 | ICE 1.11.14 | ICE 1.11.14 | Not supported yet | Not supported yet

Supported Hypervisor Versions

Table 4. Hypervisor Versions
Hypervisor Supported Versions
Arista
  • Intel 82599/82599ES- ESXi 6.7 U3, ESXi 7.0U3, ESXi 8.0U1a. To use SR-IOV, the vCenter and the vSphere Enterprise Plus license are required.
  • Intel X710/XL710- ESXi 6.7 U3 with Arista vSphere Web Client 6.7.0 up to ESXi 8.0 U1a with Arista vSphere Web Client 8.0.
KVM
  • Intel 82599/82599ES- Ubuntu 20.04.6 LTS, Ubuntu 22.04.2
  • Intel X710/XL710- Ubuntu 20.04.6 LTS, Ubuntu 22.04.2
  • Intel E810-CQDA2- Ubuntu 22.04.2

Gateway Virtual Machine (VM) Specification

For Arista, the OVA already specifies the minimum virtual hardware specification. For KVM, an example XML file is provided. The minimum virtual hardware specifications are:
  • If using Arista ESXi:
    • Latency Sensitivity must be set to 'High'.
      • Procedure (Adjust Latency Sensitivity)
        1. Browse to the virtual machine in the vSphere Client.
          1. To find a virtual machine, select a data center, folder, cluster, resource pool, or host.
          2. Select the VMs tab.
        2. Right-click the virtual machine, and then select Edit Settings.
        3. Select VM Options and select Advanced.
        4. Select a setting from the Latency Sensitivity drop-down menu.
        5. Select OK.
      • CPU reservation set to 100%.
      • CPU shares set to high.
      • CPU Limit must be set to Unlimited.
      • 8 vCPUs (4vCPUs are supported but expect lower performance).
        Important: All vCPU cores should be mapped to the same socket with the Cores per Socket parameter set to either 8 with 8 vCPUs, or 4 where 4 vCPUs are used.
        Note: Hyper-threading must be deactivated to achieve maximum performance.
      • Procedure for Allocate CPU Resources:
        1. Select Virtual Machines in the Arista Host Client inventory.
        2. Right-click a virtual machine from the list and select Edit settings from the pop-up menu.
        3. On the Virtual Hardware tab, expand CPU, and allocate CPU capacity for the virtual machine.
        Table 5. Virtual Machines Option Descriptions
        Option | Description
        Reservation | Guaranteed CPU allocation for this virtual machine.
        Limit | Upper limit for this virtual machine’s CPU allocation. Select Unlimited to specify no upper limit.
        Shares | CPU shares for this virtual machine in relation to the parent’s total. Sibling virtual machines share resources according to their relative share values, bounded by the reservation and limit. Select Low, Normal, or High, which specify share values in a 1:2:4 ratio, respectively. Select Custom to give each virtual machine a specific number of shares, which express a proportional weight.

         

    • CPU affinity must be activated. Follow the steps below:
      1. In the vSphere Web Client, go to the VM Settings tab.
      2. Choose the Options tab and select Advanced General > Configuration Parameters.
      3. Add entries for numa.nodeAffinity=0, 1, ..., where 0 and 1 are the processor socket numbers.
    • vNIC must be of type 'vmxnet3' (or SR-IOV, see SR-IOV section for support details).
    • Minimum of one of the following vNICs:
      • The First vNIC is the public (outside) interface, which must be an untagged interface.
      • The Second vNIC is optional and acts as the private (inside) interface that can support VLAN tagging dot1q and Q-in-Q. This interface typically faces the PE router or L3 switch.
    • Optional vNIC (if a separate management/OAM interface is required).
    • Memory reservation is set to ‘maximum.’
      • 16GB of memory (32GB RAM is required when enabling certificate-based authentication).
    • 64 GB of virtual disk (a 96GB disk is required when enabling certificate-based authentication).
      Note: Arista uses the above defined settings to obtain scale and performance numbers. Settings that do not align with the above requirements are not tested by Arista and can yield unpredictable performance and scale results.
  • If using KVM:
    • vNIC must be of 'Linux Bridge' type. (SR-IOV is required for high performance, see SR-IOV section for support details).
    • 8 vCPUs (4vCPUs are supported but expect lower performance).
      Important: All vCPU cores should be mapped to the same socket with the Cores per Socket parameter set to either 8 with 8 vCPUs, or 4 where 4 vCPUs are used.
      Note: Hyper-threading must be deactivated to achieve maximum performance.
    • 16GB of memory (32GB RAM is required when enabling certificate-based authentication)
    • Minimum of any one of the following vNICs:
      • The First vNIC is the public (outside) interface, which must be an untagged interface.
      • The Second vNIC is optional and acts as the private (inside) interface that can support VLAN tagging dot1q and Q-in-Q. This interface typically faces the PE router or L3 switch.
    • Optional vNIC (if a separate management/OAM interface is required).
    • 64 GB of virtual disk (a 96GB disk is required when enabling certificate-based authentication).

Firewall/NAT Requirements

Note: These requirements apply if the Gateway is deployed behind a Firewall and/or NAT device.
  • The firewall needs to allow outbound traffic from the Gateway to TCP/443 (for communication with Orchestrator).
  • The firewall needs to allow inbound traffic from the Internet to UDP/2426 (VCMP), UDP/4500, and UDP/500. If NAT is not used, the firewall also needs to allow IP protocol 50 (ESP).
  • If NAT is used, the above ports must be translated to an externally reachable IP address. Both 1:1 NAT and port translation are supported.
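
As an illustration of these requirements, the sketch below shows equivalent rules for a Linux firewall using iptables. It is an assumption-laden example: the Gateway address 203.0.113.10 is a placeholder, and the FORWARD chain placement depends on your firewall design.

GW=203.0.113.10                                              # hypothetical Gateway IP
iptables -A FORWARD -s "$GW" -p tcp --dport 443 -j ACCEPT    # Gateway to Orchestrator
iptables -A FORWARD -d "$GW" -p udp --dport 2426 -j ACCEPT   # VCMP
iptables -A FORWARD -d "$GW" -p udp --dport 4500 -j ACCEPT   # IPsec NAT-T
iptables -A FORWARD -d "$GW" -p udp --dport 500 -j ACCEPT    # IKE
iptables -A FORWARD -d "$GW" -p esp -j ACCEPT                # IP protocol 50 (ESP), only needed without NAT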

Use of DPDK on Gateways

To improve packet throughput performance, Gateways take advantage of Data Plane Development Kit (DPDK) technology. DPDK is a set of data plane libraries and drivers, originally developed by Intel, for offloading packet processing from the operating system kernel to processes running in user space, which results in higher packet throughput. For additional details, see https://www.dpdk.org.

On Arista hosted Gateways and Partner Gateways, DPDK is used on interfaces that manage data plane traffic and is not used on interfaces reserved for management plane traffic. For example, on a typical Arista hosted Gateway, eth0 is used for management plane traffic and would not use DPDK. In contrast, eth1, eth2, and eth3 are used for data plane traffic and use DPDK.

Gateway Installation Procedures

This section discusses the Gateway installation procedures.

In general, installing the Gateway involves the following steps:

  1. Create Gateway on Orchestrator and make a note of the activation key.
  2. Configure Gateway on Orchestrator.
  3. Create the cloud-init file.
  4. Create the VM in ESXi or KVM.
  5. Boot the Gateway VM and ensure the Gateway cloud-init initializes properly. At this stage, the Gateway should already activate itself against the Orchestrator.
  6. Verify connectivity and deactivate cloud-init.
    Important: Gateway supports both the virtual switch and SR-IOV. This guide specifies the SR-IOV as an optional configuration step.

Pre-Installation Considerations

The Partner Gateway provides different configuration options. A worksheet should be prepared before the installation of the Gateway.

Table 6. Worksheet
Gateway
  • Version
  • OVA/QCOW2 file location
  • Activation Key
  • Orchestrator (IP ADDRESS/vco-fqdn-hostname)
  • Hostname
Hypervisor Address/Cluster name
Storage Root volume data store (>40GB recommended)
CPU Allocation CPU Allocation for KVM/Arista.
Installation Selections DPDK—This is optional and enabled by default for higher throughput. If you choose to deactivate DPDK, contact Arista Customer Support.
OAM Network
  • DHCP
  • OAM IPv4 Address
  • OAM IPv4 Netmask
  • DNS server- primary
  • DNS server- secondary
  • Static Routes
ETH0 – Internet Facing Network
  • IPv4 Address
  • IPv4 Netmask
  • IPv4 Default gateway
  • DNS server- primary
  • DNS server- secondary
Handoff (ETH1)- Network
  • MGMT VRF IPv4 Address
  • MGMT VRF IPv4 Netmask
  • MGMT VRF IPv4 Default gateway
  • DNS server- primary
  • DNS server- secondary
  • Handoff (QinQ (0x8100), QinQ (0x9100), none, 802.1Q, 802.1ad)
  • C-TAG
  • S-TAG
Console access
  • Console_Password
  • SSH:
    • Enabled (yes/no)
    • SSH public key
NTP
  • Public NTP:
    • server 0.ubuntu.pool.ntp.org
    • server 1.ubuntu.pool.ntp.org
    • server 2.ubuntu.pool.ntp.org
    • server 3.ubuntu.pool.ntp.org
  • Internal NTP server- 1
  • Internal NTP server- 2

Gateway Section

Most of the Gateway section is self-explanatory.

Table 7. Gateway Description
Gateway
  • Version- Should be the same as or lower than the Orchestrator version
  • OVA/QCOW2 file location- Plan the file location and disk allocation ahead of time
  • Activation Key
  • Orchestrator (IP ADDRESS/vco-fqdn-hostname)
  • Hostname- Valid Linux Hostname “RFC 1123”

Creating a Gateway and Getting the Activation Key

  1. In the Operator portal, select the Gateway Management tab and go to Gateway Pools in the left navigation pane. The Gateway Pools page appears. Create a new Gateway pool. To run the Gateway in a Service Provider network, check the Allow Partner Gateway check box. This enables the option to include the Partner Gateway in this gateway pool.
    Figure 1. Gateway Pool
  2. In the Operator portal, select Gateway Management > Gateways and create a new gateway and assign it to the pool. The IP address of the gateway entered here must match the public IP address of the gateway. If unsure, you can run curl ipinfo.io/ip from the Gateway which will return the public IP of the Gateway.
    Figure 2. New Gateway
  3. Make a note of the activation key and add it to the worksheet.

Activate Partner Gateway Mode

  1. In the Operator portal, select Gateway Management > Gateways and select the Gateway. Check the Partner Gateway check box to activate the Partner Gateway.
    Figure 3. Partner Gateway
  2. There are additional parameters that can be configured. The most common are the following:
    1. Advertise 0.0.0.0/0 with no encrypt – This option enables the Partner Gateway to advertise a path for cloud-bound (SaaS application) traffic. Because the Encrypt flag is off, the customer's business policy configuration determines whether this path is used.
    2. The second recommended option is to advertise the Orchestrator IP as a /32 with encrypt.

      This will force the traffic that is sent from the Edge to the Orchestrator to take the Gateway Path. This is recommended since it introduces predictability to the behavior that the Edge takes to reach the Orchestrator.

Networking

Important: The following procedure and screenshots focus on the most common deployment, which is the 2-ARM installation for the Gateway. The addition of an OAM network is considered in the section titled, OAM Interface and Static Routes.
Figure 4. Gateway in a 2-ARM Deployment

The diagram above is a representation of the Gateway in a 2-ARM deployment. In this example, we assume eth0 is the interface facing the public network (Internet) and eth1 is the interface facing the internal network (handoff or VRF interface).

Note: A Management VRF is created on the Gateway and is used to send a periodic ARP refresh to the default gateway IP to check that the handoff interface is physically up, which speeds up the failover time. It is recommended that a dedicated VRF is set up on the PE router for this purpose. Optionally, the same Management VRF can also be used by the PE router to send an IP SLA probe to the Gateway to check the Gateway status (the Gateway has a stateful ICMP responder that responds to ping only when its service is up). If a dedicated Management VRF is not set up, you can use one of the customer VRFs as a Management VRF, although this is not recommended.

For the Internet Facing network, you only need the basic network configuration.

Table 8. Network Configuration
ETH0 – Internet Facing Network
  • IPv4_Address
  • IPv4_Netmask
  • IPv4_Default_gateway
  • DNS_server_primary
  • DNS_server_secondary

For the Handoff interface, you must know which type of handoff you want to configure and the Handoff configuration for the Management VRF.

Table 9. Handoff Configuration
ETH1 – HANDOFF Network
  • MGMT_IPv4_Address
  • MGMT_IPv4_Netmask
  • MGMT_IPv4_Default gateway
  • DNS_Server_Primary
  • DNS_Server_Secondary
  • Handoff (QinQ (0x8100), QinQ (0x9100), none, 802.1Q, 802.1ad)
  • C_TAG_FOR_MGMT_VRF
  • S_TAG_FOR_MGMT_VRF

Console Access

Table 10. Console Access
Console access
  • Console_Password
  • SSH:
    • Enabled (yes/no)
    • SSH public key

In order to access the Gateway, a console password and/or an SSH public key must be created.

Cloud-Init Creation

The configuration options for the gateway that we defined in the worksheet are used in the cloud-init configuration. The cloud-init config is composed of two main configuration files, the meta-data file and the user-data file, plus an optional network-config file. The meta-data file provides information that identifies the instance of the Gateway being installed, the network-config file contains the network configuration for the Gateway, and the user-data file contains the Gateway software configuration.

Below are the templates for the meta-data, user-data, and network-config files. network-config can be omitted, in which case the network interfaces are configured via DHCP by default.

Fill the templates with the information in the worksheet. All #_VARIABLE_# placeholders must be replaced; also check any #ACTION# markers.

Important: The template assumes you are using static configuration for the interfaces. It also assumes that you are either using SR-IOV for all interfaces or none. For additional information, see OAM - SR-IOV with vmxnet3 or SR-IOV with VIRTIO.
meta-data file:
instance-id: #_Hostname_#
local-hostname: #_Hostname_#
network-config file (leading spaces are important!)
Note: The network-config examples below describe configuring the virtual machine with two network interfaces, eth0 and eth1, with static IP addresses. eth0 is the primary interface with a default route and a metric of 1. eth1 is the secondary interface with a default route and a metric of 13. The system will be configured with password authentication for the default user (vcadmin). In addition, the SSH authorized key will be added for the vcadmin user. The SD-WAN Gateway will be automatically activated to the Orchestrator with the provided activation_code.
version: 2
ethernets:
  eth0:
    addresses:
      - #_IPv4_Address_/mask#
    gateway4: #_IPv4_Gateway_#
    nameservers:
      addresses:
        - #_DNS_server_primary_#
        - #_DNS_server_secondary_#
      search: []
    routes:
      - to: 0.0.0.0/0
        via: #_IPv4_Gateway_#
        metric: 1
  eth1:
    addresses:
      - #_MGMT_IPv4_Address_/Mask#
    gateway4: 192.168.152.1
    nameservers:
      addresses:
        - #_DNS_server_primary_#
        - #_DNS_server_secondary_#
      search: []
    routes:
      - to: 0.0.0.0/0
        via: #_MGMT_IPv4_Gateway_#
        metric: 13
user-data file:
#cloud-config
hostname: #_Hostname_#
password: #_Console_Password_#
chpasswd: {expire: False}
ssh_pwauth: True
ssh_authorized_keys:
  - #_SSH_public_Key_#
velocloud:
  vcg:
    vco: #_VCO_#
    activation_code: #_Activation_Key#
    vco_ignore_cert_errors: false

The default username for the password that is configured in the user-data file is 'vcadmin'. Use this default username to log in to the Gateway for the first time.

Important: Always validate the user-data and meta-data files using http://www.yamllint.com/. The network-config file should also be a valid network configuration (https://cloudinit.readthedocs.io/en/19.4/topics/network-config.html). When working with the Windows/Mac copy-paste feature, there is sometimes an issue of introducing smart quotes, which can corrupt the files. Run the following command to make sure you are smart-quote free.
sed s/[”“]/'"'/g /tmp/user-data > /tmp/user-data_new
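
If you prefer a local check, the same validation can be scripted. The sketch below assumes python3 with the PyYAML module is available on your workstation; it is an illustration, not part of the product tooling.

for f in user-data meta-data network-config; do
  python3 -c 'import sys, yaml; yaml.safe_load(open(sys.argv[1]))' "$f" \
    && echo "$f: valid YAML" || echo "$f: FAILED"
done
grep -n '[“”]' user-data meta-data network-config || echo "no smart quotes found"   # catch copy-paste smart quotes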

Create ISO File

Once you have completed your files, they need to be packaged into an ISO image. This ISO image is used as a virtual configuration CD with the virtual machine. This ISO image, called vcg01-cidata.iso, is created with the following command on a Linux system:

genisoimage -output vcg01-cidata.iso -volid cidata -joliet -rock user-data meta-data network-config

If you are on Mac OS X, use the command below instead:

mkisofs -output vcg01-cidata.iso -volid cidata -joliet -rock {user-data,meta-data,network-config}

This ISO file will be used in both the Arista (OVA) and KVM installations.

Install Gateway

You can install Gateway on Arista and KVM.

KVM provides multiple ways to provide networking to virtual machines. Arista recommends the following options:
  • SR-IOV
  • Linux Bridge
  • OpenVSwitch Bridge
If you decide to use SR-IOV mode, first enable SR-IOV on the hypervisor. For steps, see Enable SR-IOV on Arista and Activate SR-IOV on KVM. Then follow the installation procedure for your hypervisor below.

Enable SR-IOV on Arista

Enabling SR-IOV on Arista is an optional configuration.

Prerequisites

This requires a specific NIC card. The following chipsets are certified by Arista to work with the Gateway.
  • Intel 82599/82599ES
  • Intel X710/XL710
Note: Before using the Intel X710/XL710 cards in SR-IOV mode on Arista, make sure the supported Firmware and Driver versions described in the Deployment Prerequisites section are installed correctly.

To enable SR-IOV on Arista, perform the following steps:

  1. Make sure that your NIC card supports SR-IOV. Check the Arista Hardware Compatibility List (HCL):

    Brand Name: Intel

    I/O Device Type: Network

    Features: SR-IOV

    Figure 5. Arista Compatibility
    Refer to the Arista KB article for details on how to enable SR-IOV on the supported NIC.
  2. Once you have a supported NIC card, go to the specific Arista host, select the Configure tab, and then choose Physical adapters.
    Figure 6. Physical Adapters
  3. Select Edit Settings. Change Status to Enabled and specify the number of virtual functions required. This number varies by the type of NIC card.
  4. Reboot the hypervisor.
    Figure 7. Edit Settings
  5. If SR-IOV is successfully enabled, the number of Virtual Functions (VFs) will show under the particular NIC after ESXi reboots.
    Figure 8. Adapters
    Note: To support VLAN tagging on SR-IOV interfaces, you must configure VLAN ID 4095 (Allow All) on the Port Group connected to the SR-IOV interface. For additional information, see VLAN Configuration.
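
Where CLI access is preferred over the vSphere Client UI, the VF count can typically also be set through the NIC driver's module parameter. The following is a sketch only — it assumes the ixgben driver and 8 VFs per port; confirm the exact parameter name against your driver's documentation.

esxcli system module parameters set -m ixgben -p "max_vfs=8,8"   # 8 VFs on each of two ports
# Reboot the host, then confirm the Virtual Functions are visible:
lspci | grep -i "virtual function"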

Install Gateway on Arista

Discusses how to install the Gateway OVA on Arista.
Note: This deployment is tested on ESXi versions 6.7, 6.7U3, 7.0, 7.0U3 and 8.0.1.
Important: When you are done with the OVA installation, do not start the VM until you have created the cloud-init ISO file and mounted it as a CD-ROM on the Gateway VM. Otherwise, you will need to re-deploy the VM.

If you decide to use SR-IOV mode, then you can optionally enable SR-IOV on Arista. To enable the SR-IOV on Arista, see Enable SR-IOV on Arista.

To install the Gateway OVA on Arista, perform the following steps:

  1. Select the ESXi host, go to Actions, and then Deploy OVF Template. Select the Gateway OVA file provided by Arista and select Next.
    Figure 9. Select Templates

    Review the template details in Step 4 (Review details) of the Deploy OVA/OVF Template wizard, as shown in the image below.

    Figure 10. Review Details
  2. For the Select networks step, the OVA comes with two pre-defined networks (vNICs).
    Table 11. Select Networks vNIC Descriptions
    vNIC | Description
    Inside | This is the vNIC facing the PE router and is used for handoff traffic to the MPLS PE or L3 switch. This vNIC is normally bound to a port group that does a VLAN pass-through (VLAN=4095 in the vSwitch configuration).
    Outside | This is the vNIC facing the Internet. This vNIC expects a non-tagged L2 frame and is normally bound to a different port group from the Inside vNIC.

     

    Figure 11. Select Networks
  3. For the Customize template step, do not change anything. This step is where vApp would be used to configure the VM; we will not use vApp in this example. Click Next to continue with deploying the OVA.
    Figure 12. Customize Template
  4. Once the VM is successfully deployed, return to the VM and select Edit Settings. Two vNICs are created with adapter type = vmxnet3.
    Figure 13. Edit settings
  5. (Optional for SR-IOV) This step is required only if you plan to use SR-IOV. Because the OVA by default creates the two vNICs as vmxnet3, we will need to remove the two vNICs and re-add them as SR-IOV.
    Figure 14. Re-add vNICs

    When adding the two new SR-IOV vNICs, use the same port group as the original two vmxnet3 vNICs. Make sure the Adapter Type is SR-IOV passthrough. Select the correct physical port to use and set the Guest OS MTU Change to Allow. After you add the two vNICs, select OK.

    Figure 15. SR-IOV Passthrough
  6. As Gateway is a real-time application, you need to configure the Latency Sensitivity to High.
Figure 16. Latency Sensitivity
  7. Refer to Cloud-init Creation. The Cloud-init file is packaged as a CD-ROM (ISO) file. You need to mount this file as a CD-ROM.
    Note: You must upload this file to the data store.
    Figure 17. CD-ROM File Mount
  8. Start the VM.

Activate SR-IOV on KVM

Enabling SR-IOV on KVM requires a specific NIC card. The following chipsets are certified by Arista to work with the Gateway and Edge.
  • Intel 82599/82599ES
  • Intel X710/XL710
Note:
  • Before using the Intel X710/XL710 cards in SR-IOV mode on KVM, make sure the supported Firmware and Driver versions specified in the Deployment Prerequisites section are installed correctly.
  • SR-IOV mode is not supported if the KVM Virtual Edge is deployed with a High-Availability topology. For High-Availability deployments, ensure that SR-IOV is not enabled for that KVM Edge pair.

To enable SR-IOV on KVM, perform the following steps:

  1. Enable SR-IOV in the BIOS. This is dependent on your BIOS. Log in to the BIOS console and look for SR-IOV Support/DMA. You can verify support from the prompt by checking that the Intel virtualization CPU flag is present:
    cat /proc/cpuinfo | grep vmx
  2. Add the options at boot (in /etc/default/grub).
    GRUB_CMDLINE_LINUX="intel_iommu=on"
    1. Run the following commands: update-grub and update-initramfs -u.
    2. Reboot
    3. Make sure iommu is enabled.
      velocloud@KVMperf3:~$ dmesg | grep -i IOMMU
      [ 0.000000] Command line: BOOT_IMAGE=/vmlinuz-3.13.0-107-generic root=/dev/mapper/qa--multiboot--002--vg-root ro intel_iommu=on splash quiet vt.handoff=7
      [ 0.000000] Kernel command line: BOOT_IMAGE=/vmlinuz-3.13.0-107-generic root=/dev/mapper/qa--multiboot--002--vg-root ro intel_iommu=on splash quiet vt.handoff=7
      [ 0.000000] Intel-IOMMU: enabled
      ….
      velocloud@KVMperf3:~$
  3. Based on the NIC chipset used, add a driver as follows:

    For the Intel 82599/82599ES cards in SR-IOV mode:

    1. Download and install ixgbe driver from the Intel website.
    2. Extract the driver archive (tar) and install it (sudo make install), then check the ixgbe configuration file:
      velocloud@KVMperf1:~$ cat /etc/modprobe.d/ixgbe.conf
    3. If the ixgbe config file does not exist, you must create the file as follows.
      options ixgbe max_vfs=32,32
      options ixgbe allow_unsupported_sfp=1
      options ixgbe MDD=0,0
      blacklist ixgbevf
    4. Use the modinfo command to verify if the installation is successful.
      velocloud@KVMperf1:~$ modinfo ixgbe
      filename: /lib/modules/4.4.0-62-generic/updates/drivers/net/ethernet/intel/ixgbe/ixgbe.ko
      version: 5.0.4
      license: GPL
      description: Intel(R) 10GbE PCI Express Linux Network Driver
      author: Intel Corporation
      srcversion: BA7E024DFE57A92C4F1DC93
      For the Intel X710/XL710 cards in SR-IOV mode:
    5. Download and install the i40e driver from the Intel website.
    6. Create the Virtual Functions (VFs).
      echo 4 > /sys/class/net/<device name>/device/sriov_numvfs
    7. To make the VFs persistent after a reboot, add the command from the previous step to the /etc/rc.d/rc.local file.
    8. Deactivate the VF driver.
      echo "blacklist i40evf" >> /etc/modprobe.d/blacklist.conf
    9. Run the update-initramfs -u command and reboot the server.
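
As an illustration of the persistence step above, /etc/rc.d/rc.local could look like the following sketch; the PF name ens1f0 is a placeholder for your X710/XL710 device name.

#!/bin/sh
# Recreate four Virtual Functions on the X710/XL710 PF at boot (placeholder device name)
echo 4 > /sys/class/net/ens1f0/device/sriov_numvfs
exit 0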
Validating SR-IOV (Optional)
You can quickly verify if your host machine has SR-IOV enabled by using the following command:
lspci | grep -i Ethernet

Verify if you have Virtual Functions:

01:10.0 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01)

Install Gateway on KVM

Discusses how to install the Gateway qcow on KVM.

Note: This deployment is tested on KVM Ubuntu 16.04 and 18.04.

Pre-Installation Considerations

KVM provides multiple ways to provide networking to virtual machines, and the networking in libvirt should be provisioned before the VM configuration. For the full set of options on how to configure networks in libvirt, see https://libvirt.org/formatnetwork.html.

From the full list of options, Arista recommends the following modes:
  • SR-IOV (This mode is required for the Gateway to deliver the maximum throughput specified by Arista)
  • OpenVSwitch Bridge

If you decide to use SR-IOV mode, enable SR-IOV on KVM. To enable the SR-IOV on KVM, see Activate SR-IOV on KVM.

Gateway Installation Steps on KVM
  1. Copy the QCOW and the Cloud-init files created in the Cloud-Init Creation section to a new empty directory.
  2. Create the Network interfaces that you are going to use for the device.

    Using SR-IOV: The following is a sample network interface template specific to Intel X710/XL710 NIC cards using SR-IOV.

    <interface type='hostdev' managed='yes'>
      <mac address='52:54:00:79:19:3d'/>
      <driver name='vfio'/>
      <source>
        <address type='pci' domain='0x0000' bus='0x83' slot='0x0a' function='0x0'/>
      </source>
      <model type='virtio'/>
    </interface>
    Using OpenVSwitch: The following are the sample templates of a network interface using OpenVSwitch.
    Template: ./vcg/templates/KVM_NETWORKING_SAMPLES/template_outside_openvswitch.xml
    <?xml version="1.0" encoding="UTF-8"?>
    <network>
      <name>public_interface</name> <!-- This is the network name -->
      <model type="virtio" />
      <forward mode="bridge" />
      <bridge name="publicinterface" />
      <virtualport type="openvswitch" />
      <vlan trunk="yes">
        <tag id="50" /> <!-- Define all the VLANs for this bridge -->
        <tag id="51" />
      </vlan>
    </network>

    Create a network for inside_interface:

    Template: ./vcg/templates/KVM_NETWORKING_SAMPLES/template_inside_openvswitch.xml
    <network>
      <name>inside_interface</name> <!-- This is the network name -->
      <model type='virtio'/>
      <forward mode="bridge"/>
      <bridge name="insideinterface"/>
      <virtualport type='openvswitch'/>
      <vlan trunk='yes'>
        <tag id='200'/> <!-- Define all the VLANs for this bridge -->
        <tag id='201'/>
        <tag id='202'/>
      </vlan>
    </network>
    If you are using OpenVSwitch mode, verify that the basic networks are created and active before launching the VM; a verification sketch follows the note below.
    Note: This validation step is not applicable for SR-IOV mode as you do not create any network before the VM is launched.
    Figure 18. Validation Step
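    A minimal verification sketch using virsh, assuming the two network template files shown above:

    virsh net-define template_outside_openvswitch.xml
    virsh net-define template_inside_openvswitch.xml
    virsh net-start public_interface && virsh net-autostart public_interface
    virsh net-start inside_interface && virsh net-autostart inside_interface
    virsh net-list --all   # both networks should show as active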
  3. Edit the VM XML file. There are multiple ways to create a virtual machine in KVM. You can define the VM in an XML file and create it using libvirt, starting from the sample VM XML templates below for OpenVSwitch mode and SR-IOV mode.
    vi my_vm.xml
    The following is a sample template of a VM which uses OpenVSwitch interfaces. Use this template by making edits, wherever applicable.
    <?xml version="1.0" encoding="UTF-8"?>
    <domain type="kvm">
      <name>#domain_name#</name>
      <memory unit="KiB">8388608</memory>
      <currentMemory unit="KiB">8388608</currentMemory>
      <vcpu>8</vcpu>
      <cputune>
        <vcpupin vcpu="0" cpuset="0" />
        <vcpupin vcpu="1" cpuset="1" />
        <vcpupin vcpu="2" cpuset="2" />
        <vcpupin vcpu="3" cpuset="3" />
        <vcpupin vcpu="4" cpuset="4" />
        <vcpupin vcpu="5" cpuset="5" />
        <vcpupin vcpu="6" cpuset="6" />
        <vcpupin vcpu="7" cpuset="7" />
      </cputune>
      <resource>
        <partition>/machine</partition>
      </resource>
      <os>
        <type>hvm</type>
      </os>
      <features>
        <acpi />
        <apic />
        <pae />
      </features>
      <cpu mode="host-passthrough" />
      <clock offset="utc" />
      <on_poweroff>destroy</on_poweroff>
      <on_reboot>restart</on_reboot>
      <on_crash>restart</on_crash>
      <devices>
        <emulator>/usr/bin/kvm-spice</emulator>
        <disk type="file" device="disk">
          <driver name="qemu" type="qcow2" />
          <source file="#folder#/#qcow_root#" />
          <target dev="hda" bus="ide" />
          <alias name="ide0-0-0" />
          <address type="drive" controller="0" bus="0" target="0" unit="0" />
        </disk>
        <disk type="file" device="cdrom">
          <driver name="qemu" type="raw" />
          <source file="#folder#/#Cloud_INIT_ISO#" />
          <target dev="sdb" bus="sata" />
          <readonly />
          <alias name="sata1-0-0" />
          <address type="drive" controller="1" bus="0" target="0" unit="0" />
        </disk>
        <controller type="usb" index="0">
          <alias name="usb0" />
          <address type="pci" domain="0x0000" bus="0x00" slot="0x01" function="0x2" />
        </controller>
        <controller type="pci" index="0" model="pci-root">
          <alias name="pci.0" />
        </controller>
        <controller type="ide" index="0">
          <alias name="ide0" />
          <address type="pci" domain="0x0000" bus="0x00" slot="0x01" function="0x1" />
        </controller>
        <interface type="network">
          <source network="public_interface" />
          <vlan>
            <tag id="#public_vlan#" />
          </vlan>
          <alias name="hostdev1" />
          <address type="pci" domain="0x0000" bus="0x00" slot="0x11" function="0x0" />
        </interface>
        <interface type="network">
          <source network="inside_interface" />
          <alias name="hostdev2" />
          <address type="pci" domain="0x0000" bus="0x00" slot="0x12" function="0x0" />
        </interface>
        <serial type="pty">
          <source path="/dev/pts/3" />
          <target port="0" />
          <alias name="serial0" />
        </serial>
        <console type="pty" tty="/dev/pts/3">
          <source path="/dev/pts/3" />
          <target type="serial" port="0" />
          <alias name="serial0" />
        </console>
        <memballoon model="none" />
      </devices>
      <seclabel type="none" />
    </domain>
    The following is a sample template of a VM which uses SR-IOV interfaces. Use this template by making edits, wherever applicable.
    <?xml version="1.0" encoding="UTF-8"?>
    <domain type="kvm">
      <name>#domain_name#</name>
      <memory unit="KiB">8388608</memory>
      <currentMemory unit="KiB">8388608</currentMemory>
      <vcpu>8</vcpu>
      <cputune>
        <vcpupin vcpu="0" cpuset="0" />
        <vcpupin vcpu="1" cpuset="1" />
        <vcpupin vcpu="2" cpuset="2" />
        <vcpupin vcpu="3" cpuset="3" />
        <vcpupin vcpu="4" cpuset="4" />
        <vcpupin vcpu="5" cpuset="5" />
        <vcpupin vcpu="6" cpuset="6" />
        <vcpupin vcpu="7" cpuset="7" />
      </cputune>
      <resource>
        <partition>/machine</partition>
      </resource>
      <os>
        <type>hvm</type>
      </os>
      <features>
        <acpi />
        <apic />
        <pae />
      </features>
      <cpu mode="host-passthrough" />
      <clock offset="utc" />
      <on_poweroff>destroy</on_poweroff>
      <on_reboot>restart</on_reboot>
      <on_crash>restart</on_crash>
      <devices>
        <emulator>/usr/bin/kvm-spice</emulator>
        <disk type="file" device="disk">
          <driver name="qemu" type="qcow2" />
          <source file="#folder#/#qcow_root#" />
          <target dev="hda" bus="ide" />
          <alias name="ide0-0-0" />
          <address type="drive" controller="0" bus="0" target="0" unit="0" />
        </disk>
        <disk type="file" device="cdrom">
          <driver name="qemu" type="raw" />
          <source file="#folder#/#Cloud_INIT_ISO#" />
          <target dev="sdb" bus="sata" />
          <readonly />
          <alias name="sata1-0-0" />
          <address type="drive" controller="1" bus="0" target="0" unit="0" />
        </disk>
        <controller type="usb" index="0">
          <alias name="usb0" />
          <address type="pci" domain="0x0000" bus="0x00" slot="0x01" function="0x2" />
        </controller>
        <controller type="pci" index="0" model="pci-root">
          <alias name="pci.0" />
        </controller>
        <controller type="ide" index="0">
          <alias name="ide0" />
          <address type="pci" domain="0x0000" bus="0x00" slot="0x01" function="0x1" />
        </controller>
        <interface type='hostdev' managed='yes'>
          <mac address='52:54:00:79:19:3d'/>
          <driver name='vfio'/>
          <source>
            <address type='pci' domain='0x0000' bus='0x83' slot='0x0a' function='0x0'/>
          </source>
          <model type='virtio'/>
        </interface>
        <interface type='hostdev' managed='yes'>
          <mac address='52:54:00:74:69:4d'/>
          <driver name='vfio'/>
          <source>
            <address type='pci' domain='0x0000' bus='0x83' slot='0x0a' function='0x1'/>
          </source>
          <model type='virtio'/>
        </interface>
        <serial type="pty">
          <source path="/dev/pts/3" />
          <target port="0" />
          <alias name="serial0" />
        </serial>
        <console type="pty" tty="/dev/pts/3">
          <source path="/dev/pts/3" />
          <target type="serial" port="0" />
          <alias name="serial0" />
        </console>
        <memballoon model="none" />
      </devices>
      <seclabel type="none" />
    </domain>
  4. Launch the VM by performing the following steps:
    1. Ensure you have the following three files in your directory as shown in the following sample screenshot:
      • qcow file- vcg-root
      • cloud-init- vcg-test.iso
      • Domain XML file that defines the VM- test_vcg.xml, where test_vcg is the domain name
      Figure 19. Directory Files
    2. Define VM.
      velocloud@KVMperf2:/tmp/VeloCloudGateway$ virsh define test_vcg.xml
      Domain test_vcg defined from test_vcg.xml
    3. Set VM to autostart.
      velocloud@KVMperf2:/tmp/VeloCloudGateway$ virsh autostart test_vcg
    4. Start VM.
      velocloud@KVMperf2:/tmp/VeloCloudGateway$ virsh start test_vcg
  5. If you are using SR-IOV mode, after launching the VM, set the following on the Virtual Functions (VFs) used:
    1. Set the spoofcheck off.
      ip link set eth1 vf 0 spoofchk off
    2. Set the Trusted mode on.
      ip link set dev eth1 vf 0 trust on
    3. Set the VLAN, if required.
      ip link set eth1 vf 0 vlan 3500
    Note: The Virtual Functions configuration step is not applicable for OpenVSwitch (OVS) mode.
  6. Console into the VM.
    virsh list
     Id    Name                           State
    ----------------------------------------------------
     25    test_vcg                       running

    velocloud@KVMperf2$ virsh console 25
    Connected to domain test_vcg
    Escape character is ^]
    Special Consideration for KVM Host
    • Deactivate GRO (Generic Receive Offload) on physical interfaces (to avoid unnecessary re-fragmentation in Gateway).
      ethtool -K <interface> gro off tx off
    • Deactivate CPU C-states (power states affect real-time performance). Typically, this can be done as part of the kernel boot options by appending processor.max_cstate=1, or by deactivating C-states in the BIOS.
    • For production deployment, vCPUs must be pinned to the instance. No oversubscription on the cores should be allowed to take place.
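    A quick verification sketch for the host tuning above; eth2 and test_vcg are placeholders for your physical interface and VM domain name.

    ethtool -k eth2 | grep generic-receive-offload    # expect: generic-receive-offload: off
    virsh vcpupin test_vcg                            # each vCPU should be pinned to a dedicated pCPU
    cat /sys/module/intel_idle/parameters/max_cstate  # expect 1 when C-states are capped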

Post-Installation Tasks

This section discusses post-installation and installation verification steps.

If the installation worked as expected, you can now log in to the VM.

  1. You should see the login prompt on the console, with the prompt name as specified in cloud-init.
    Figure 20. Login Prompt
  2. You can also refer to /run/cloud-init/result.json. If you see the message below, cloud-init likely ran successfully.
    Figure 21. cloud init Successful Message
  3. Verify that the Gateway is registered with Orchestrator.
    Figure 22. Verify Registered Gateway
  4. Verify Outside Connectivity.
    Figure 23. Verify Outside Connectivity
  5. Verify that the MGMT VRF is responding to ARPs.
    Figure 24. Verify MGMT VRF
  6. Optional: Deactivate cloud-init so it does not run on every boot.
    Note: If you have deployed OVA on vSphere with vAPP properties, you must deactivate cloud-init prior to upgrading to versions 4.0.1 or 4.1.0. This is to ensure that the customization settings such as network configuration or password are not lost during the upgrade.
    touch /etc/cloud/cloud-init.disabled
  7. Associate the new gateway pool with the customer.
    Figure 25. Associate Gateway Pool
  8. Associate the Gateway with an Edge. For additional information, see the Assign Partner Gateway Handoff section in the VeloCloud SD-WAN Administration Guide.
  9. Verify that the Edge is able to establish a tunnel with the Gateway on the Internet side. From the Orchestrator, go to Monitor > Edges > [Edge] > Overview.

    From the Orchestrator, go to Diagnostics > Remote Diagnostics > [Edge] > List Paths, and select Run to view the list of active paths.

    Figure 26. List of Active Paths
  10. Configure the Handoff interface. See Configure Partner Handoff.
  11. Verify that the BGP session is up.
  12. Change the network configuration.
    • Network configuration files are located under /etc/netplan.

      Example network configuration (whitespace is important!)- /etc/netplan/50-cloud-init.yaml:
      network:
        version: 2
        ethernets:
          eth0:
            addresses:
              - 192.168.151.253/24
            gateway4: 192.168.151.1
            nameservers:
              addresses:
                - 8.8.8.8
                - 8.8.4.4
              search: []
            routes:
              - to: 192.168.0.0/16
                via: 192.168.151.254
                metric: 100
          eth1:
            addresses:
              - 192.168.152.251/24
            gateway4: 192.168.152.1
            nameservers:
              addresses:
                - 8.8.8.8
              search: []
    Important: When cloud-init is enabled, the network configuration is regenerated on every boot. To make changes to the local configuration, deactivate cloud-init or deactivate its network configuration component:
    echo 'network: {config: disabled}' > /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg
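    Once the network configuration component is deactivated, edits under /etc/netplan can be applied in place. A short sketch:

    sudo netplan try    # validates the YAML and rolls back automatically if connectivity is lost
    sudo netplan apply  # applies the change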

Configure Handoff Interface in Data Plane

VeloCloud Gateway Network Configuration.

In the example shown in the figure below (VRF/VLAN Hand Off to PE), we assume eth0 is the interface facing the public network (Internet) and eth1 is the interface facing the internal network (customer VRF through the PE). BGP peering configuration is managed on the VCO on a per customer/VRF basis under Configure > Customer. Note that the IP address of each VRF is configurable per customer. The IP address of the management VRF inherits the IP address configured on the SD-WAN Gateway interface in Linux.

Figure 27. VRF/VLAN Hand Off to PE Router

A management VRF is created on the SD-WAN Gateway and is used to send a periodic ARP refresh to the default Gateway IP to determine the next-hop MAC. It is recommended that a dedicated VRF is set up on the PE router for this purpose. The same management VRF can also be used by the PE router to send an IP SLA probe to the SD-WAN Gateway to check the SD-WAN Gateway status (the SD-WAN Gateway has a stateful ICMP responder that responds to ping only when its service is up). BGP peering is not required on the Management VRF. If a Management VRF is not set up, you can use one of the customer VRFs as the Management VRF, although this is not recommended.

  1. Edit /etc/config/gatewayd and specify the correct VCMP and WAN interfaces. The VCMP interface is the public interface that terminates the overlay tunnels. The WAN interface in this context is the handoff interface facing the PE.
    "vcmp.interfaces":[ "eth0" ], (..snip..) "wan": [ "eth1" ],
  2. Configure the Management VRF. This VRF is used by the SD-WAN Gateway to ARP for the next-hop MAC (PE router). The same next-hop MAC is used by all the VRFs created by the SD-WAN Gateway. You need to configure the Management VRF parameter in /etc/config/gatewayd.

    The Management VRF is the same VRF used by the PE router to send IP SLA probes to. The SD-WAN Gateway only responds to the ICMP probe if the service is up and there are edges connected to it. The table below explains each parameter that needs to be defined. This example has the Management VRF on 802.1Q VLAN ID 1000.

    Table 12. Management VRF Parameters to be Defined
    Parameter | Description
    mode | QinQ (0x8100), QinQ (0x9100), none, 802.1Q, 802.1ad
    c_tag | C-Tag value for QinQ encapsulation, or the 802.1Q VLAN ID for 802.1Q encapsulation
    s_tag | S-Tag value for QinQ encapsulation
    interface | Handoff interface, typically eth1
    "vrf_vlan": { "tag_info": [ { "resp_mode": 0, "proxy_arp": 0, "c_tag": 1000, "mode": "802.1Q", "interface": "eth1", "s_tag": 0 } ] },
  3. Edit /etc/config/gatewayd-tunnel to include both interfaces in the wan parameter. Save the change.

    wan="eth0 eth1"

Remove Blocked Subnets

By default, the SD-WAN Gateway blocks traffic to 10.0.0.0/8 and 172.16.0.0/16. Remove these entries before using this SD-WAN Gateway, because the SD-WAN Gateway is expected to send traffic to private subnets as well. If you do not edit this file, attempts to send traffic to blocked subnets produce the following messages in /var/log/gwd.log:

2015-12-18T12:49:55.639 ERR [NET] proto_ip_recv_handler:494 Dropping packet destined for 10.10.150.254, which is a blocked subnet.
2015-12-18T12:52:27.764 ERR [NET] proto_ip_recv_handler:494 Dropping packet destined for 10.10.150.254, which is a blocked subnet. [message repeated 48 times]
2015-12-18T12:52:27.764 ERR [NET] proto_ip_recv_handler:494 Dropping packet destined for 10.10.150.10, which is a blocked subnet.
  1. On the SD-WAN Gateway, edit the /opt/vc/etc/vc_blocked_subnets.json file. Initially, the file contains the following.
    [ { "network_addr": "10.0.0.0", "subnet_mask": "255.0.0.0" }, { "network_addr": "172.16.0.0", "subnet_mask": "255.255.0.0" } ]
  2. Remove the two networks. The file should look like below after editing. Save the change.
    [ ]
  3. Restart the SD-WAN Gateway process by running sudo /opt/vc/bin/vc_procmon restart.

Upgrade Gateway

This section discusses how to upgrade a Gateway installation.

Note: This procedure will not work for upgrading a Gateway image version from 3.x to 4.x due to significant platform changes. Upgrading from a 3.x to a 4.x image requires a new Gateway deployment and reactivation. Refer to Partner Gateway Upgrade and Migration for upgrade information.
Note: Currently, Arista does not support downgrading the Edge Cloud Orchestrator and VeloCloud Gateway. Before upgrading the Orchestrator or Gateway, Arista recommends that you back up the system for easy recovery in the event the upgrade does not complete successfully.

Authenticate Software Update Package Via Digital Signature

The software installer in Orchestrator version 4.3.0 and higher can authenticate the software update package using a digital signature.

Prior to upgrading to a newer version of the software, make sure the public key used to verify the package exists. The known public key location is /var/lib/velocloud/software_update/keys/software.key. Alternatively, the key can be provided on the command line using the --pubkey parameter.

The current release public key is:

-----BEGIN PUBLIC KEY-----
MHYwEAYHKoZIzj0CAQYFK4EEACIDYgAEbjZ08w3RNJvuOICBp8fysU/3opLejsrP
pArA1IyKeUzU0U31MU4kPcLdggojobNfs3i1kvyvGvprEmfGYWzc3dXUyT9Tv73C
lVgYPLNd/nOxJsXomROKogfvJdYFuy4/
-----END PUBLIC KEY-----

If the key is missing or the signature cannot be verified, the Operator will be notified that the package is untrusted with an option to proceed or not proceed.

To skip verification, use the "--untrusted" parameter.

If running in batch mode or not on the terminal, the installation is aborted unless the "--untrusted" option is specified on the command line.

By default, the installer runs in interactive mode and may issue prompts. For automated scripts, use the --batch parameter to suppress prompts.

Upgrade Procedures

To upgrade a Gateway installation:
  1. Download the Gateway update package.
  2. Upload the image to the Gateway system (using, for example, the scp command). Copy the image to the following location on the system:

    /var/lib/velocloud/software_update/vcg_update.tar

  3. Connect to the Gateway console and run:
    sudo /opt/vc/bin/vcg_software_update
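
    For reference, the authentication parameters described in the previous section combine with this command as follows. Treat this as a sketch and confirm the flags against your installer version.

    # Interactive run with an explicitly supplied public key
    sudo /opt/vc/bin/vcg_software_update --pubkey /var/lib/velocloud/software_update/keys/software.key
    # Unattended run; aborts on an untrusted package unless --untrusted is also given
    sudo /opt/vc/bin/vcg_software_update --batch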

Activate Replacement Partner Gateway

This section discusses activating a replacement Partner Gateway.

Overview

Gateway activation keys do not have the same default 30-day lifetime as Edge activation keys; a Gateway activation key has an infinite lifespan. If an on-premises Gateway fails and you wish to replace it with a newly built Gateway using the same name and IP address, you can use the same activation key that was used on the original Gateway.

As a result, for most Gateway issues, the quickest method of recovery is to spin up a new VM and register it to the Orchestrator using the failed Gateway’s activation key. This saves a lot of time because the Orchestrator pushes the existing configuration onto the new instance. Most Partners prefer this approach over configuring a new Gateway from scratch.

Prerequisites

Before you can use this Gateway replacement method, you must adjust the System Property gateway.activation.validate.deviceID and set its value to false. To do this, you or another Operator with a Superuser role must go to Orchestrator > System Properties, search for gateway.activation, and inspect gateway.activation.validate.deviceID. If the value is already false, as in the screenshot below, you are ready for the next steps. If the value is true, a Gateway reactivation will not work, and you need to modify this System Property by selecting it.

Figure 28. System Properties

 

Figure 29. Modify System Property Value
You must be an Operator with a Superuser role to make this change. By default, the Orchestrator performs a deviceID verification, and with this System Property set to true, activating a replacement Gateway would fail because the deviceID would not be the same as the original Gateway. Setting this property to false disables the verification process on the Orchestrator.
Note: There are no adverse effects to changing this value. You may leave it as false, since the Gateway activation keys are indefinitely valid.
Important: If you are on a Hosted Shared Orchestrator and do not know whether the gateway.activation.validate.deviceID System Property is set to False and find that you cannot reactivate your Partner Gateway, you can reach out to Arista VeloCloud SD-WAN Support and they will assist you in changing that System Property on your Orchestrator.

Replacement Partner Gateway Workflow

These are the steps to activate a replacement Partner Gateway:
  1. Locate the original activation key. This key is found by going to Gateway Management > Gateways and selecting the name of the Gateway you are replacing. Click the down arrow beside the name and note the activation key.
    Figure 30. Gateway Management
  2. Use the activation key to activate the replacement Gateway on your newly spun up VM: /opt/vc/bin/activate.py -s vco_name_or_ip activation_key.

Custom Configurations

This section discusses custom configurations.

NTP Configuration

NTP configuration involves editing the /etc/ntpd.conf file.

OAM Interface and Static Routes

If Gateways are to be deployed with an OAM interface, perform the following steps.
  1. Add an additional interface to the VM (ETH2).

    Arista: If a dedicated vNIC for Management/OAM is desired, add another vNIC of type vmxnet3. Select OK, and then open Edit Settings again so you can make a note of the new vNIC's MAC address.

    Figure 31. vNIC MAC Address

    KVM: If a dedicated vNIC for Management/OAM is desired, make sure you have a libvirt network named oam_interface. Then add the following lines to your VM XML structure:

    …..
    </controller>
    <interface type='network'>
      <source network='public_interface'/>
      <vlan><tag id='#public_vlan#'/></vlan>
      <alias name='hostdev1'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x11' function='0x0'/>
    </interface>
    <interface type='network'>
      <source network='inside_interface'/>
      <alias name='hostdev2'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x12' function='0x0'/>
    </interface>
    <interface type='network'>
      <source network='oam_interface'/>
      <vlan><tag id='#oam_vlan#'/></vlan>
      <alias name='hostdev3'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x13' function='0x0'/>
    </interface>
    <serial type='pty'>
      <source path='/dev/pts/3'/>
      <target port='0'/>
      <alias name='serial0'/>
    </serial>
  2. Configure the network-config file with the additional interface.
    version: 2
    ethernets:
      eth0:
        addresses:
          - #_IPv4_Address_/mask#
        mac_address: #_mac_Address_#
        gateway4: #_IPv4_Gateway_#
        nameservers:
          addresses:
            - #_DNS_server_primary_#
            - #_DNS_server_secondary_#
          search: []
        routes:
          - to: 0.0.0.0/0
            via: #_IPv4_Gateway_#
            metric: 1
      eth1:
        addresses:
          - #_MGMT_IPv4_Address_/Mask#
        mac_address: #_MGMT_mac_Address_#
        nameservers:
          addresses:
            - #_DNS_server_primary_#
            - #_DNS_server_secondary_#
          search: []
        routes:
          - to: 0.0.0.0/0
            via: #_MGMT_IPv4_Gateway_#
            metric: 13
      eth2:
        addresses:
          - #_OAM_IPv4_Address_/Mask#
        nameservers:
          addresses:
            - #_DNS_server_primary_#
            - #_DNS_server_secondary_#
          search: []
        routes:
          - to: 10.0.0.0/8
            via: #_OAM_IPv4_Gateway_#
          - to: 192.168.0.0/16
            via: #_OAM_IPv4_Gateway_#

OAM - SR-IOV with vmxnet3 or SR-IOV with VIRTIO

It is possible in some installations to mix and match interface types for the Gateway. This generally happens if you have an OAM interface without SR-IOV. This custom configuration requires additional steps, since it causes the interfaces to come up out of order.

Record the MAC address of each interface.

Arista: After creating the machine, go to Edit Settings and copy the Mac address.

Figure 32. MAC Address

KVM: After defining the VM, run the following command:

Figure 33. KVM Command Prompt

Special Consideration When Using 802.1ad Encapsulation

Certain 802.1ad devices do not populate the outer tag EtherType with 0x88A8. A special change is required in the user data to interoperate with these devices.

Assuming a Management VRF is configured with S-Tag: 20 and C-Tag: 100, edit the vrf_vlan section in /etc/config/gatewayd as follows. Also, set resp_mode to 1 so that the Gateway relaxes its check to allow Ethernet frames that have the incorrect EtherType of 0x8100 in the outer header.
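
Based on the vrf_vlan block shown in Configure Handoff Interface in Data Plane, the edited section would look like the following sketch (values from the assumption above: S-Tag 20, C-Tag 100, relaxed EtherType check):

"vrf_vlan": {
  "tag_info": [
    {
      "resp_mode": 1,
      "proxy_arp": 0,
      "c_tag": 100,
      "mode": "802.1ad",
      "interface": "eth1",
      "s_tag": 20
    }
  ]
},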

SNMP Integration

This section discusses how to configure SNMP integration.

For additional information on SNMP configuration, see Net-SNMP documentation. To configure SNMP integration:

  1. Edit /etc/snmp/snmpd.conf.
  2. Add the following lines to the config file, with the source IP addresses of the systems that will connect to the SNMP service. You can configure either SNMPv2c or SNMPv3.
    • The following example will configure access to all counters from localhost via community string vc-vcg and from 10.0.0.0/8 with community string myentprisecommunity using SNMPv2c version.
      agentAddress udp:161
      # com2sec sec.name source community
      com2sec local localhost vc-vcg
      com2sec myenterprise 10.0.0.0/8 myentprisecommunity
      # group access.name sec.model sec.name
      group rogroup v2c local
      group rogroup v2c myenterprise
      view all included .1 80
      # access access.name context sec.model sec.level match read write notif
      access rogroup "" any noauth exact all none none
      #sysLocation Sitting on the Dock of the Bay
      #sysContact Me <me@example.org>
      sysServices 72
      master agentx
      #
      # Process Monitoring
      #
      # At least one 'gwd' process
      proc gwd
      # At least one 'mgd' process
      proc mgd
      #
      # Disk Monitoring
      #
      # 100MBs required on root disk, 5% free on /var, 10% free on all other disks
      disk / 100000
      disk /var 5%
      includeAllDisks 10%
      #
      # System Load
      #
      # Unacceptable 1-, 5-, and 15-minute load averages
      load 12 10 5
      # "Pass-through" MIB extension command
      pass_persist .1.3.6.1.4.1.45346 /opt/vc/bin/snmpagent.py veloGateway
      Note: In the above example, the gwd process comprises the entire data plane and control plane of the Gateway. The management plane daemon (mgd) is responsible for communication with the Orchestrator. It is kept isolated from gwd so that, in the event of a total failure of the gwd process, the Orchestrator is still reachable for the configuration changes or software updates required to resolve the failure.
    • The following example shows the configuration using SNMPv3.
      vcadmin:~$ cat /etc/snmp/snmpd.conf
      ###############################################################################
      #
      # EXAMPLE.conf:
      #   An example configuration file for configuring the Net-SNMP agent ('snmpd')
      #   See the 'snmpd.conf(5)' man page for details
      #
      #   Some entries are deliberately commented out, and will need to be
      #   explicitly activated
      #
      ###############################################################################
      #
      # AGENT BEHAVIOUR
      #
      # Listen for connections from the local system only
      # agentAddress udp:127.0.0.1:161
      # Listen for connections on all interfaces (both IPv4 *and* IPv6)
      agentAddress udp:161

      ###############################################################################
      #
      # SNMPv3 AUTHENTICATION
      #
      # Note that these particular settings don't actually belong here.
      # They should be copied to the file /var/lib/snmp/snmpd.conf
      #   and the passwords changed, before being uncommented in that file *only*.
      # Then restart the agent

      # createUser authOnlyUser MD5 "remember to change this password"
      # createUser authPrivUser SHA "remember to change this one too" DES
      # createUser internalUser MD5 "this is only ever used internally, but still change the password"

      # If you also change the usernames (which might be sensible),
      #   then remember to update the other occurrences in this example config file to match.

      ###############################################################################
      #
      # ACCESS CONTROL
      #
      # system + hrSystem groups only
      view systemonly included .1.3.6.1.4.1.45346

      # Full access from the local host
      # rocommunity public localhost
      # Default access to basic system info
      rocommunity public default -V systemonly
      # Full access from an example network
      #   Adjust this network address to match your local settings,
      #   change the community string,
      #   and check the 'agentAddress' setting above
      rocommunity secret 10.0.0.0/16

      # Full read-only access for SNMPv3
      rouser authOnlyUser
      # Full write access for encrypted requests
      #   Remember to activate the 'createUser' lines above
      rwuser authPrivUser priv

      # It's no longer typically necessary to use the full 'com2sec/group/access' configuration
      # r[ow]user and r[ow]community, together with suitable views, should cover most requirements

      ###############################################################################
      #
      # SYSTEM INFORMATION
      #
      # Note that setting these values here, results in the corresponding MIB objects being 'read-only'
      # See snmpd.conf(5) for more details
      sysLocation Sitting on the Dock of the Bay
      sysContact Me <me@example.org>
      # Application + End-to-End layers
      sysServices 72

      #
      # Process Monitoring
      #
      # At least one 'mountd' process
      proc mountd
      # No more than 4 'ntalkd' processes - 0 is OK
      proc ntalkd 4
      # At least one 'sendmail' process, but no more than 10
      proc sendmail 10 1

      # Walk the UCD-SNMP-MIB::prTable to see the resulting output
      # Note that this table will be empty if there are no "proc" entries in the snmpd.conf file

      #
      # Disk Monitoring
      #
      # 10MBs required on root disk, 5% free on /var, 10% free on all other disks
      disk / 10000
      disk /var 5%
      includeAllDisks 10%

      # Walk the UCD-SNMP-MIB::dskTable to see the resulting output
      # Note that this table will be empty if there are no "disk" entries in the snmpd.conf file

      #
      # System Load
      #
      # Unacceptable 1-, 5-, and 15-minute load averages
      load 12 10 5

      # Walk the UCD-SNMP-MIB::laTable to see the resulting output
      # Note that this table *will* be populated, even without a "load" entry in the snmpd.conf file

      ###############################################################################
      #
      # ACTIVE MONITORING
      #
      # send SNMPv1 traps
      trapsink localhost public
      # send SNMPv2c traps
      trap2sink localhost public
      # send SNMPv2c INFORMs
      informsink localhost public

      # Note that you typically only want *one* of these three lines
      # Uncommenting two (or all three) will result in multiple copies of each notification.

      #
      # Event MIB - automatically generate alerts
      #
      # Remember to activate the 'createUser' lines above
      iquerySecName internalUser
      rouser internalUser
      # generate traps on UCD error conditions
      defaultMonitors yes
      # generate traps on linkUp/Down
      linkUpDownNotifications yes

      ###############################################################################
      #
      # EXTENDING THE AGENT
      #
      # Arbitrary extension commands
      extend test1 /bin/echo Hello, world!
      extend-sh test2 echo Hello, world! ; echo Hi there ; exit 35
      #extend-sh test3 /bin/sh /tmp/shtest

      # Note that this last entry requires the script '/tmp/shtest' to be created first,
      #   containing the same three shell commands, before the line is uncommented

      # Walk the NET-SNMP-EXTEND-MIB tables (nsExtendConfigTable, nsExtendOutput1Table
      #   and nsExtendOutput2Table) to see the resulting output

      # Note that the "extend" directive supersedes the previous "exec" and "sh" directives
      # However, walking the UCD-SNMP-MIB::extTable should still return the same output,
      #   as well as the fuller results in the above tables.

      #
      # "Pass-through" MIB extension command
      #
      #pass .1.3.6.1.4.1.8072.2.255 /bin/sh PREFIX/local/passtest
      #pass .1.3.6.1.4.1.8072.2.255 /usr/bin/perl PREFIX/local/passtest.pl
      rocommunity velocloud localhost
      #pass .1.3.6.1.4.1.45346 /opt/vc/bin/snmpagent.py veloGateway
      pass_persist .1.3.6.1.4.1.45346 /opt/vc/bin/snmpagent.py veloGateway

      # Note that this requires one of the two 'passtest' scripts to be installed first,
      #   before the appropriate line is uncommented.
      #   These scripts can be found in the 'local' directory of the source distribution,
      #   and are not installed automatically.

      # Walk the NET-SNMP-PASS-MIB::netSnmpPassExamples subtree to see the resulting output

      #
      # AgentX Sub-agents
      #
      # Run as an AgentX master agent
      master agentx
      # Listen for network connections (from localhost)
      #   rather than the default named socket /var/agentx/master
  3. Edit /etc/iptables/rules.v4. Add the following lines to the config file, specifying the source IP addresses of the systems that will connect to the SNMP service:
    # WARNING: only add targeted rules for addresses and ports
    # do not add blanket drop or accept rules since Gateway will append its own rules
    # and that may prevent it from functioning properly
    *filter
    :INPUT ACCEPT [0:0]
    :FORWARD ACCEPT [0:0]
    :OUTPUT ACCEPT [0:0]
    -A INPUT -p udp -m udp --source 127.0.0.1 --dport 161 -m comment --comment "allow SNMP port" -j ACCEPT
    -A INPUT -p udp -m udp --source 10.0.0.0/8 --dport 161 -m comment --comment "allow SNMP port" -j ACCEPT
    COMMIT
  4. Restart the snmp and iptables services:
    /etc/init.d/snmpd restart
    /etc/init.d/firewall restart
    service vc_process_monitor restart
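Once the services are back up, the configuration can be spot-checked from an allowed host. A minimal sketch, assuming the Net-SNMP client tools are installed and the SNMPv2c community vc-vcg from the first example:

# Walk the Gateway enterprise subtree served by the pass_persist agent
snmpwalk -v2c -c vc-vcg localhost .1.3.6.1.4.1.45346

# Confirm the monitored gwd/mgd processes via the UCD-SNMP-MIB process table
snmpwalk -v2c -c vc-vcg localhost .1.3.6.1.4.1.2021.2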

Custom Firewall Rules

This section discusses how to modify custom firewall rules.

To modify local firewall rules, edit the following file: /etc/iptables/rules.v4.

Important: Add only targeted rules for addresses and ports. Do not add blanket drop or accept rules. The Gateway will append its own rules to the table and, because rules are evaluated in order, a blanket rule may prevent the Gateway software from functioning properly.
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -p udp -m udp --source 127.0.0.1 --dport 161 -m comment --comment "allow SNMP port" -j ACCEPT
COMMIT
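Before restarting, the edited ruleset can be checked for syntax errors without applying it, for example:

# Parse the rules file without committing it to the kernel
iptables-restore --test < /etc/iptables/rules.v4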

Restart the netfilter service:

service netfilter-persistent restart
service vc_process_monitor restart
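To confirm the custom rules survived the restart, list the INPUT chain; the SNMP rule added above should appear with its comment:

# -n skips DNS lookups so the listing returns immediately
iptables -nL INPUT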