Shell-based Configuration

The shell-based configuration can be used to set up either a single-node CVP instance or multi-node CVP instances. The steps you use vary depending on whether you are setting up a single-node instance or a multi-node instance.

Cluster and device interfaces

A cluster interface is the interface that is able to reach the other two nodes in a multi-node installation. A device interface is the interface used by managed devices to connect to CVP. The ZTP configuration file is served over this interface. These two parameters are optional and default to eth0. Configuring these two interfaces is useful in deployments where a private network is used between the managed devices and a public-facing network is used to reach the other two cluster nodes and the GUI.
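
For example, in a deployment where the managed devices sit on a private network attached to eth1 while eth0 carries cluster and GUI traffic, the two prompts might be answered as follows (this split is illustrative; both parameters default to eth0):

    Cluster Interface name: eth0
    Device Interface name: eth1

With this split, the ZTP configuration file is served to devices over eth1, while node-to-node and GUI traffic stays on eth0.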

Configuring a Single-Node CVP Instance using CVP Shell

After initial bootup, CVP can be configured at the VM's console using the CVP config shell. At points during the configuration, you must start the network, NTPD, and CVP services. Starting these services may take some time to complete before moving on to the next step in the process.
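
The shell drives these service transitions itself, as the transcripts below show (for example, /bin/sudo /bin/systemctl is-active network). If a step appears to hang, the same standard systemd commands can be run as root from another console as an optional check on progress; this is not part of the procedure itself:

    /bin/systemctl is-active network
    /bin/systemctl is-active ntpd
    /bin/systemctl is-active cvpi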

Prerequisites:

Before you begin the configuration process, make sure that you:

To configure CVP using the CVP config shell:

  1. Log in at the VM console as cvpadmin.
  2. Enter your configuration and apply it (see the following example).
    In this example of a CVP shell, the root password has not been set (it is not set by default), and the bold text is entered by the cvpadmin user.
    Note: To skip static routes, press Enter when prompted for the number of static routes.
    localhost login: cvpadmin
    Changing password for user root.
    New password:
    Retype new password:
    passwd: all authentication tokens updated successfully.
    Enter a command
    [q]uit [p]rint [s]inglenode [m]ultinode [r]eplace [u]pgrade
    >s
    
    Enter the configuration for CloudVision Portal and apply it when done.
    Entries marked with '*' are required.
    
    common configuration:
    dns: 172.22.22.40
    DNS domains: sjc.aristanetworks.com, ire.aristanetworks.com
    ntp: ntp.aristanetworks.com
    Telemetry Ingest Key: arista
    CloudVision WiFi Enabled: yes
    CloudVision WiFi HA cluster IP:
    Cluster Interface name: eth0
    Device Interface name: eth0
    node configuration:
     *hostname (fqdn): cvp80.sjc.aristanetworks.com
     *default route: 172.31.0.1
    DNS domains: sjc.aristanetworks.com, ire.aristanetworks.com
    Number of Static Routes: 1
    Route for Static Route #1: 1.1.1.0
    Netmask for Static Route #1: 255.255.255.0
    Interface for Static Route #1: eth0
    TACACS server ip address:
     *IP address of eth0: 172.31.0.168
     *Netmask of eth0: 255.255.0.0
    [q]uit [p]rint [e]dit [v]erify [s]ave [a]pply [h]elp ve[r]bose
    >v
    Valid config format.
    Applying proposed config for network verification.
    saved config to /cvpi/cvp-config.yaml
    Running : cvpConfig.py tool...
    [189.568543] vmxnet3 0000:0b:00.0 eth0: intr type 3, mode 0, 9 vectors allocated
    [189.576571] vmxnet3 0000:0b:00.0 eth0: NIC Link is Up 10000 Mbps
    [203.860624] vmxnet3 0000:13:00.0 eth1: intr type 3, mode 0, 9 vectors allocated
    [203.863878] vmxnet3 0000:13:00.0 eth1: NIC Link is Up 10000 Mbps
    [204.865253] Ebtables v2.0 unregistered
    [205.312888] ip_tables: (C) 2000-2006 Netfilter Core Team
    [205.331703] ip6_tables: (C) 2000-2006 Netfilter Core Team
    [205.355522] Ebtables v2.0 registered
    [205.398575] nf_conntrack version 0.5.0 (65536 buckets, 262144 max)
    Stopping: network
    Running : /bin/sudo /sbin/service network stop
    Running : /bin/sudo /bin/systemctl is-active network
    Starting: network
    Running : /bin/sudo /bin/systemctl start network.service
    [206.856170] vmxnet3 0000:0b:00.0 eth0: intr type 3, mode 0, 9 vectors allocated
    [206.858797] vmxnet3 0000:0b:00.0 eth0: NIC Link is Up 10000 Mbps
    [206.860627] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
    [207.096883] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
    [211.086390] vmxnet3 0000:13:00.0 eth1: intr type 3, mode 0, 9 vectors allocated
    [211.089157] vmxnet3 0000:13:00.0 eth1: NIC Link is Up 10000 Mbps
    [211.091084] IPv6: ADDRCONF(NETDEV_UP): eth1: link is not ready
    [211.092424] IPv6: ADDRCONF(NETDEV_CHANGE): eth1: link becomes ready
    [211.245437] warning: `/bin/ping' has both setuid-root and effective capabilities. Therefore not raising all capabilities.
    Warning: External interfaces, ['eth1'], are discovered under /etc/sysconfig/network-scripts
    These interfaces are not managed by CVP.
    Please ensure that the configurations for these interfaces are correct.
    Otherwise, actions from the CVP shell may fail.
    
    Valid config.
    [q]uit [p]rint [e]dit [v]erify [s]ave [a]pply [h]elp ve[r]bose
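
    After the configuration is applied, the static route entered above can be compared against the kernel routing table. This is an optional check using standard Linux tooling rather than the CVP shell, and the exact output format varies:

    ip route show
    # expect an entry covering 1.1.1.0/24 on eth0, matching Static Route #1 above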

Configuring Multi-node CVP Instances Using the CVP Shell

Use this procedure to configure multi-node CVP instances using the CVP shell. This procedure includes the steps to set up a primary, secondary, and tertiary node, which is the number of nodes required for redundancy. It also includes the steps to verify and apply the configuration of each node.

The sequence of steps in this procedure follows the process described in The Basic Steps in the Process, below.
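
As an informal pre-check (not part of the CVP shell itself), you can confirm from each VM console that the DNS and NTP servers you plan to enter are reachable. The addresses below are the ones used in the examples in this section, and the second command assumes the node can already resolve names:

    ping -c 3 172.22.22.40
    ping -c 3 ntp.aristanetworks.com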

Prerequisites:

Before you begin the configuration process, make sure that you:

Complete the following steps to configure multi-node CVP instances:

  1. Log in at the VM console for the primary node as cvpadmin.
  2. At the cvp installation mode prompt, type m to select a multi-node configuration.
  3. At the prompt to select a role for the node, type p to select primary node.
    Note: You must select primary first. You cannot configure one of the other nodes before you configure the primary node.
  4. Follow the CloudVision Portal prompts to specify the configuration options for the primary node. (All options with an asterisk (*) are required.) The options include:
    • Root password (*)
    • Default route (*)
    • DNS (*)
    • NTP (*)
    • Telemetry Ingest Key
    • Cluster interface name (*)
    • Device interface name (*)
    • Hostname (*)
    • IP address (*)
    • Netmask (*)
    • Number of static routes
    • Route for each static route
    • Interface for static route
    • TACACS server ip address
    • TACACS server key/port
    • IP address of primary (*) (secondary and tertiary nodes only)
    Note: If there are separate cluster and device interfaces (the interfaces have different IP addresses), make sure that you enter the hostname of the cluster interface. If the cluster and device interface are the same (for example, they are eth0), make sure you enter the IP address of eth0 for the hostname.
  5. At the following prompt, type v to verify the configuration.
    [q]uit [p]rint [e]dit [v]erify [s]ave [a]pply [h]elp ve[r]bose

    If the configuration is valid, the system shows a Valid config status message.

  6. Type a to apply the configuration for the primary node, and wait for the line Waiting for other nodes to send their hostname and ip to appear with a spinning-wheel indicator.

    The system automatically saves the configuration as a YAML document and shows the configuration settings in pane 1 of the shell.

  7. When the primary node prints the Waiting for other nodes to send their hostname and ip line, go to the shell for the secondary node and specify the configuration settings for the secondary node. (All options with an asterisk (*) are required, including the primary node IP address.)
  8. At the following prompt, type v to verify the configuration.
    [q]uit [p]rint [e]dit [v]erify [s]ave [a]pply [h]elp ve[r]bose

    If the configuration is valid, the system shows a Valid config status message.

  9. At the Primary's root password prompt, type the password for the primary node, and then press Enter.
  10. Go to the shell for the tertiary node, and specify the configuration settings for the node. (All options with an asterisk (*) are required.)
  11. At the following prompt, type v to verify the configuration.
    [q]uit [p]rint [e]dit [v]erify [s]ave [a]pply [h]elp ve[r]bose

    If the configuration is valid, the system shows a Valid config status message.

  12. At the Primary IP prompt, type the IP address of the primary node.
  13. At the Primary's root password prompt, press Enter.

    The system automatically completes the CVP installation for all nodes (this is done by the primary node). A message appears indicating that the other nodes are waiting for the primary node to complete the CVP installation.

    When the CVP installation is successfully completed for a particular node, a message appears in the appropriate pane to indicate the installation was successful. (This message is repeated in each pane.)

  14. Go to the shell for the primary node, and type q to quit the installation.
  15. At the cvp login prompt, log in as root.
  16. At the [root@cvplogin]# prompt, switch to the cvp user account by typing su cvp and pressing Enter.
  17. Run the cvpi status all command, and press Enter.

    The system automatically checks the status of the installation for each node and provides status information in each pane for CVP. The information shown includes some of the configuration settings for each node.
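
    For reference, steps 15 through 17 look like the following at the console. The prompt strings here are illustrative, and the status output itself is omitted:

    cvp login: root
    Password:
    [root@cvplogin]# su cvp
    [cvp@cvplogin]$ cvpi status all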

Rules for the Number and Type of Nodes

Three nodes are required for multi-node CVP instances, where a node is identified as either the primary, secondary, or tertiary. You define the node type (primary, secondary, or tertiary) for each node during the configuration.

The Basic Steps in the Process

All multi-node configurations follow the same basic process. The basic steps are:

  1. Specify the settings for the nodes in the following sequence (you apply the configuration later in the process):
    • Primary node
    • Secondary node
    • Tertiary node
  2. Verify and then apply the configuration for the primary node. (During this step, the system automatically saves the configuration for the primary node as a YAML document. In addition, the system shows the configuration settings.)

    Once the system applies the configuration for the primary node, the other nodes need to send their hostname and IP address to the primary node.

  3. Verify and then apply the configuration for the secondary node.

    As part of this step, the system automatically pushes the hostname, IP address, and public key of the secondary node to the primary node. The primary node also sends a consolidated YAML to the secondary node, which is required to complete the configuration of the secondary node.

  4. The previous step (verifying and applying the configuration) is repeated for the tertiary node. (The automated processing of data described for the secondary node is also repeated for the tertiary node.)

    Once the configuration for all nodes has been applied (steps 1 through 4 above), the system automatically attempts to complete the CVP installation for all nodes (this is done by the primary node). A message appears indicating that the other nodes are waiting for the primary node to complete the CVP installation.

  5. You quit the installation, then log in as root and check the status of CVP.

    The system automatically checks the status and provides status information in each pane for the CVP service.

The CVP Shell

For multi-node configurations, you need to open three CVP consoles (one for each node). Each console is shown in its own pane. You use each console to configure one of the nodes (primary, secondary, or tertiary).

The system also provides status messages and all of the options required to complete the multi-node configuration. The status messages and options are presented in the panes of the shell that correspond to the node type.
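
How you arrange the three consoles is up to you. As one approach (ordinary terminal tooling, not part of CVP), a terminal multiplexer such as tmux can hold all three sessions in a single window, mirroring the layout in Figure 1:

    tmux new-session \; split-window -h \; split-window -v
    # pane 0: primary node console
    # pane 1: secondary node console
    # pane 2: tertiary node console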

Figure 1 shows three CVP Console shells for multi-node configurations. Each shell corresponds to a CVP Console for each node being configured.

 

Figure 1: CVP Console Shells for Multi-node Configurations

Examples

 

The following examples show the commands used to configure (set up) the primary, secondary, and tertiary nodes and to apply the configurations to the nodes. Also included are examples of the system output shown as CVP completes the installation for each of the nodes.

Primary Node Configuration

This example shows the commands used to configure (set up) the primary node.

localhost login: cvpadmin
Changing password for user root.
New password:
Retype new password:
passwd: all authentication tokens updated successfully.
Enter a command
[q]uit [p]rint [s]inglenode [m]ultinode [r]eplace [u]pgrade
>m
Choose a role for the node, roles should be mutually exclusive
[p]rimary [s]econdary [t]ertiary
>p

Enter the configuration for CloudVision Portal and apply it when done.
Entries marked with '*' are required.

common configuration:
dns: 172.22.22.40, 172.22.22.10
DNS domains: sjc.aristanetworks.com, ire.aristanetworks.com
ntp: ntp.aristanetworks.com
Telemetry Ingest Key: arista
CloudVision WiFi Enabled: no
CloudVision WiFi HA cluster IP:
Cluster Interface name: eth0
Device Interface name: eth0
node configuration:
 *hostname (fqdn): cvp57.sjc.aristanetworks.com
 *default route: 172.31.0.1
Number of Static Routes:
TACACS server ip address:
 *IP address of eth0: 172.31.0.186
 *Netmask of eth0: 255.255.0.0
[q]uit [p]rint [e]dit [v]erify [s]ave [a]pply [h]elp ve[r]bose
>

Secondary Node Configuration

This example shows the commands used to configure (set up) the secondary node.

localhost login: cvpadmin
Changing password for user root.
New password:
Retype new password:
passwd: all authentication tokens updated successfully.
Enter a command
[q]uit [p]rint [s]inglenode [m]ultinode [r]eplace [u]pgrade
>m
Choose a role for the node, roles should be mutually exclusive
[p]rimary [s]econdary [t]ertiary
>s

Enter the configuration for CloudVision Portal and apply it when done.
Entries marked with '*' are required.

common configuration:
dns: 172.22.22.40, 172.22.22.10
DNS domains: sjc.aristanetworks.com, ire.aristanetworks.com
ntp: ntp.aristanetworks.com
Telemetry Ingest Key: arista
CloudVision WiFi Enabled: no
CloudVision WiFi HA cluster IP:
Cluster Interface name: eth0
Device Interface name: eth0
 *IP address of primary: 172.31.0.186
node configuration:
 *hostname (fqdn): cvp65.sjc.aristanetworks.com
 *default route: 172.31.0.1
Number of Static Routes:
TACACS server ip address:
 *IP address of eth0: 172.31.0.153
 *Netmask of eth0: 255.255.0.0
>

Tertiary Node Configuration

This example shows the commands used to configure (set up) the tertiary node.

Changing password for user root.
New password:
Retype new password:
passwd: all authentication tokens updated successfully.
Enter a command
[q]uit [p]rint [s]inglenode [m]ultinode [r]eplace [u]pgrade
>m
Choose a role for the node, roles should be mutually exclusive
[p]rimary [s]econdary [t]ertiary
>t

Enter the configuration for CloudVision Portal and apply it when done.
Entries marked with '*' are required.

common configuration:
dns: 172.22.22.40, 172.22.22.10
DNS domains: sjc.aristanetworks.com, ire.aristanetworks.com
ntp: ntp.aristanetworks.com
Telemetry Ingest Key: arista
Cluster Interface name: eth0
Device Interface name: eth0
 *IP address of primary: 172.31.0.186
node configuration:
 *hostname (fqdn): cvp84.sjc.aristanetworks.com
 *default route: 172.31.0.1
Number of Static Routes:
TACACS server ip address:
 *IP address of eth0: 172.31.0.213
 *Netmask of eth0: 255.255.0.0
[q]uit [p]rint [e]dit [v]erify [s]ave [a]pply [h]elp ve[r]bose
>

Verifying the Primary Node Configuration and Applying it to the Node

This example shows the commands used to verify the configuration of the primary node and apply the configuration to the node.

[q]uit [p]rint [e]dit [v]erify [s]ave [a]pply [h]elp ve[r]bose
>v
Valid config format.
Applying proposed config for network verification.
saved config to /cvpi/cvp-config.yaml
Running : cvpConfig.py tool...
[ 8608.509056] vmxnet3 0000:0b:00.0 eth0: intr type 3, mode 0, 9 vectors allocated
[ 8608.520693] vmxnet3 0000:0b:00.0 eth0: NIC Link is Up 10000 Mbps
[ 8622.807169] vmxnet3 0000:13:00.0 eth1: intr type 3, mode 0, 9 vectors allocated
[ 8622.810214] vmxnet3 0000:13:00.0 eth1: NIC Link is Up 10000 Mbps
Stopping: network
Running : /bin/sudo /sbin/service network stop
Running : /bin/sudo /bin/systemctl is-active network
Starting: network
Running : /bin/sudo /bin/systemctl start network.service
[ 8624.027029] vmxnet3 0000:0b:00.0 eth0: intr type 3, mode 0, 9 vectors allocated
[ 8624.030254] vmxnet3 0000:0b:00.0 eth0: NIC Link is Up 10000 Mbps
[ 8624.032643] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
[ 8624.238995] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
[ 8638.294690] vmxnet3 0000:13:00.0 eth1: intr type 3, mode 0, 9 vectors allocated
[ 8638.297973] vmxnet3 0000:13:00.0 eth1: NIC Link is Up 10000 Mbps
[ 8638.300454] IPv6: ADDRCONF(NETDEV_UP): eth1: link is not ready
[ 8638.302186] IPv6: ADDRCONF(NETDEV_CHANGE): eth1: link becomes ready
[ 8638.489266] warning: `/bin/ping' has both setuid-root and effective capabilities. Therefore not raising all capabilities.
Warning: External interfaces, ['eth1'], are discovered under /etc/sysconfig/network-scripts
These interfaces are not managed by CVP.
Please ensure that the configurations for these interfaces are correct.
Otherwise, actions from the CVP shell may fail.

Valid config.
[q]uit [p]rint [e]dit [v]erify [s]ave [a]pply [h]elp ve[r]bose
>

Verifying the Tertiary Node Configuration and Applying it to the Node

This example shows the commands used to verify the configuration of the tertiary node and apply the configuration to the node.

[q]uit [p]rint [e]dit [v]erify [s]ave [a]pply [h]elp ve[r]bose
>v
Valid config format.
Applying proposed config for network verification.
saved config to /cvpi/cvp-config.yaml
Running : cvpConfig.py tool...
[ 9195.362192] vmxnet3 0000:0b:00.0 eth0: intr type 3, mode 0, 9 vectors allocated
[ 9195.365069] vmxnet3 0000:0b:00.0 eth0: NIC Link is Up 10000 Mbps
[ 9195.367043] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
[ 9195.652382] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
[ 9209.588173] vmxnet3 0000:13:00.0 eth1: intr type 3, mode 0, 9 vectors allocated
[ 9209.590896] vmxnet3 0000:13:00.0 eth1: NIC Link is Up 10000 Mbps
[ 9209.592887] IPv6: ADDRCONF(NETDEV_UP): eth1: link is not ready
[ 9209.594222] IPv6: ADDRCONF(NETDEV_CHANGE): eth1: link becomes ready
Stopping: network
Running : /bin/sudo /sbin/service network stop
Running : /bin/sudo /bin/systemctl is-active network
Starting: network
Running : /bin/sudo /bin/systemctl start network.service
[ 9210.561940] vmxnet3 0000:0b:00.0 eth0: intr type 3, mode 0, 9 vectors allocated
[ 9210.564602] vmxnet3 0000:0b:00.0 eth0: NIC Link is Up 10000 Mbps
[ 9224.805267] vmxnet3 0000:13:00.0 eth1: intr type 3, mode 0, 9 vectors allocated
[ 9224.808891] vmxnet3 0000:13:00.0 eth1: NIC Link is Up 10000 Mbps
[ 9224.811150] IPv6: ADDRCONF(NETDEV_UP): eth1: link is not ready
[ 9224.812899] IPv6: ADDRCONF(NETDEV_CHANGE): eth1: link becomes ready
Warning: External interfaces, ['eth1'], are discovered under /etc/sysconfig/network-scripts
These interfaces are not managed by CVP.
Please ensure that the configurations for these interfaces are correct.
Otherwise, actions from the CVP shell may fail.

Valid config.
[q]uit [p]rint [e]dit [v]erify [s]ave [a]pply [h]elp ve[r]bose
>

Waiting for the Primary Node Installation to Finish

These examples show the system output shown as CVP completes the installation for the primary node.
  • Waiting for the primary node installation to pause until the other nodes send their files
    [q]uit [p]rint [e]dit [v]erify [s]ave [a]pply [h]elp ve[r]bose
    >a
    Valid config format.
    saved config to /cvpi/cvp-config.yaml
    Applying proposed config for network verification.
    saved config to /cvpi/cvp-config.yaml
    Running : cvpConfig.py tool...
    [15266.575899] vmxnet3 0000:0b:00.0 eth0: intr type 3, mode 0, 9 vectors allocated
    [15266.588500] vmxnet3 0000:0b:00.0 eth0: NIC Link is Up 10000 Mbps
    [15266.591751] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
    [15266.672644] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
    [15280.937599] vmxnet3 0000:13:00.0 eth1: intr type 3, mode 0, 9 vectors allocated
    [15280.941764] vmxnet3 0000:13:00.0 eth1: NIC Link is Up 10000 Mbps
    [15280.944883] IPv6: ADDRCONF(NETDEV_UP): eth1: link is not ready
    [15280.947038] IPv6: ADDRCONF(NETDEV_CHANGE): eth1: link becomes ready
    Stopping: network
    Running : /bin/sudo /sbin/service network stop
    Running : /bin/sudo /bin/systemctl is-active network
    Starting: network
    Running : /bin/sudo /bin/systemctl start network.service
    [15282.581713] vmxnet3 0000:0b:00.0 eth0: intr type 3, mode 0, 9 vectors allocated
    [15282.585367] vmxnet3 0000:0b:00.0 eth0: NIC Link is Up 10000 Mbps
    [15282.588072] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
    [15282.948613] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
    [15296.871658] vmxnet3 0000:13:00.0 eth1: intr type 3, mode 0, 9 vectors allocated
    [15296.875871] vmxnet3 0000:13:00.0 eth1: NIC Link is Up 10000 Mbps
    [15296.879003] IPv6: ADDRCONF(NETDEV_UP): eth1: link is not ready
    [15296.881456] IPv6: ADDRCONF(NETDEV_CHANGE): eth1: link becomes ready
    Warning: External interfaces, ['eth1'], are discovered under /etc/sysconfig/network-scripts
    These interfaces are not managed by CVP.
    Please ensure that the configurations for these interfaces are correct.
    Otherwise, actions from the CVP shell may fail.
    
    Valid config.
    Running : cvpConfig.py tool...
    [15324.884887] vmxnet3 0000:0b:00.0 eth0: intr type 3, mode 0, 9 vectors allocated
    [15324.889169] vmxnet3 0000:0b:00.0 eth0: NIC Link is Up 10000 Mbps
    [15324.893217] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
    [15324.981682] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
    [15339.240237] vmxnet3 0000:13:00.0 eth1: intr type 3, mode 0, 9 vectors allocated
    [15339.243999] vmxnet3 0000:13:00.0 eth1: NIC Link is Up 10000 Mbps
    [15339.247119] IPv6: ADDRCONF(NETDEV_UP): eth1: link is not ready
    [15339.249370] IPv6: ADDRCONF(NETDEV_CHANGE): eth1: link becomes ready
    Stopping: network
    Running : /bin/sudo /sbin/service network stop
    Running : /bin/sudo /bin/systemctl is-active network
    Starting: network
    Running : /bin/sudo /bin/systemctl start network.service
    [15340.946583] vmxnet3 0000:0b:00.0 eth0: intr type 3, mode 0, 9 vectors allocated
    [15340.950891] vmxnet3 0000:0b:00.0 eth0: NIC Link is Up 10000 Mbps
    [15340.953786] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
    [15341.251648] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
    [15355.225649] vmxnet3 0000:13:00.0 eth1: intr type 3, mode 0, 9 vectors allocated
    [15355.229400] vmxnet3 0000:13:00.0 eth1: NIC Link is Up 10000 Mbps
    [15355.232674] IPv6: ADDRCONF(NETDEV_UP): eth1: link is not ready
    [15355.234725] IPv6: ADDRCONF(NETDEV_CHANGE): eth1: link becomes ready
    Waiting for other nodes to send their hostname and ip
    \
  • Waiting for the primary node installation to finish
    Waiting for other nodes to send their hostname and ip
    -
    Running : cvpConfig.py tool...
    [15707.665618] vmxnet3 0000:0b:00.0 eth0: intr type 3, mode 0, 9 vectors allocated
    [15707.669167] vmxnet3 0000:0b:00.0 eth0: NIC Link is Up 10000 Mbps
    [15707.672109] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
    [15708.643628] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
    [15722.985876] vmxnet3 0000:13:00.0 eth1: intr type 3, mode 0, 9 vectors allocated
    [15722.990116] vmxnet3 0000:13:00.0 eth1: NIC Link is Up 10000 Mbps
    [15722.993221] IPv6: ADDRCONF(NETDEV_UP): eth1: link is not ready
    [15722.995325] IPv6: ADDRCONF(NETDEV_CHANGE): eth1: link becomes ready
    [15724.245523] Ebtables v2.0 unregistered
    [15724.940390] ip_tables: (C) 2000-2006 Netfilter Core Team
    [15724.971820] ip6_tables: (C) 2000-2006 Netfilter Core Team
    [15725.011963] Ebtables v2.0 registered
    [15725.077660] nf_conntrack version 0.5.0 (65536 buckets, 262144 max)
    Stopping: ntpd
    Running : /bin/sudo /sbin/service ntpd stop
    Running : /bin/sudo /bin/systemctl is-active ntpd
    Starting: ntpd
    Running : /bin/sudo /bin/systemctl start ntpd.service
    --
    Verifying configuration on the secondary node
    Verifying configuration on the tertiary node
    Starting: systemd services
    Starting: cvpi-check
    Running : /bin/sudo /bin/systemctl start cvpi-check.service
    Starting: zookeeper
    Running : /bin/sudo /bin/systemctl start zookeeper.service
    Starting: cvpi-config
    Running : /bin/sudo /bin/systemctl start cvpi-config.service
    Starting: cvpi
    Running : /bin/sudo /bin/systemctl start cvpi.service
    Running : /bin/sudo /bin/systemctl enable zookeeper
    Running : /bin/sudo /bin/systemctl start cvpi-watchdog.timer
    Running : /bin/sudo /bin/systemctl enable docker
    Running : /bin/sudo /bin/systemctl start docker
    Running : /bin/sudo /bin/systemctl enable kube-cluster.path
    Running : /bin/sudo /bin/systemctl start kube-cluster.path
    Waiting for all components to start. This may take few minutes.
    Still waiting for aaa aerisdiskmonitor alertmanager-multinode-service ambassador apiserver apiserver-www apiserver-www apiserver-www audit aware ... (total 271)
    Still waiting for aaa aerisdiskmonitor alertmanager-multinode-service ambassador apiserver apiserver-www apiserver-www apiserver-www audit bgpmaintmode ... (total 235)
    Still waiting for aaa aerisdiskmonitor alertmanager-multinode-service ambassador apiserver apiserver-www apiserver-www apiserver-www audit bgpmaintmode ... (total 236)
    Still waiting for aaa aerisdiskmonitor alertmanager-multinode-service ambassador apiserver apiserver-www apiserver-www apiserver-www audit bgpmaintmode ... (total 235)
    Still waiting for aaa aerisdiskmonitor alertmanager-multinode-service ambassador apiserver apiserver-www apiserver-www apiserver-www audit bgpmaintmode ... (total 235)
    Still waiting for aaa aerisdiskmonitor alertmanager-multinode-service ambassador apiserver apiserver-www apiserver-www apiserver-www audit bgpmaintmode ... (total 235)
    Still waiting for aaa aerisdiskmonitor alertmanager-multinode-service ambassador apiserver apiserver-www apiserver-www apiserver-www audit bgpmaintmode ... (total 236)
    Still waiting for aaa aerisdiskmonitor alertmanager-multinode-service ambassador apiserver apiserver-www apiserver-www apiserver-www audit bgpmaintmode ... (total 229)
    Still waiting for aaa aerisdiskmonitor alertmanager-multinode-service ambassador apiserver apiserver-www apiserver-www apiserver-www audit bgpmaintmode ... (total 228)
    Still waiting for aaa aerisdiskmonitor alertmanager-multinode-service ambassador apiserver apiserver-www apiserver-www apiserver-www audit bgpmaintmode ... (total 213)
    Still waiting for aaa alertmanager-multinode-service ambassador apiserver apiserver-www apiserver-www apiserver-www audit bgpmaintmode bugalerts-query-tagger ... (total 199)
    Still waiting for aaa alertmanager-multinode-service ambassador apiserver apiserver apiserver apiserver-www apiserver-www apiserver-www audit ... (total 181)
    Still waiting for aaa ambassador apiserver-www apiserver-www apiserver-www audit bgpmaintmode bugalerts-update ccapi ccmgr ... (total 121)
    Still waiting for aaa ambassador apiserver-www apiserver-www apiserver-www audit bgpmaintmode ccapi ccmgr certs ... (total 78)
    Still waiting for aaa ambassador apiserver-www apiserver-www apiserver-www audit certs cloudmanager compliance cvp-backend ... (total 44)
    Still waiting for aaa ambassador apiserver-www apiserver-www apiserver-www certs cloudmanager cloudmanager cloudmanager compliance ... (total 35)
    Still waiting for aaa cvp-frontend cvp-frontend cvp-frontend cvp-www cvp-www cvp-www inventory ztp
    Still waiting for aaa cvp-frontend cvp-frontend cvp-frontend cvp-www cvp-www cvp-www inventory ztp
    Still waiting for aaa cvp-frontend cvp-frontend cvp-frontend cvp-www cvp-www cvp-www inventory ztp
    Still waiting for aaa cvp-frontend cvp-frontend cvp-frontend cvp-www cvp-www cvp-www inventory ztp
    Still waiting for aaa cvp-frontend cvp-frontend cvp-frontend cvp-www cvp-www cvp-www inventory ztp
    Still waiting for aaa cvp-frontend cvp-frontend cvp-frontend cvp-www cvp-www cvp-www inventory ztp
    Still waiting for aaa cvp-frontend cvp-frontend cvp-frontend cvp-www cvp-www cvp-www inventory ztp
    Still waiting for cvp-frontend cvp-frontend cvp-frontend
    CVP installation successful
    Running : cvpConfig.py tool...
    Stopping wifimanager
    Running : su - cvp -c "cvpi stop wifimanager"
    Stopping aware
    Running : su - cvp -c "cvpi stop aware"
    Disabling wifimanager
    Running : su - cvp -c "cvpi disable wifimanager"
    Disabling aware
    Running : su - cvp -c "cvpi disable aware"
    
    [q]uit [p]rint [e]dit [v]erify [s]ave [a]pply [h]elp ve[r]bose

Waiting for the Secondary and Tertiary Node Installation to Finish

This example shows the system output displayed as CVP completes the installation for the secondary and tertiary nodes.

[q]uit [p]rint [e]dit [v]erify [s]ave [a]pply [h]elp ve[r]bose
>a
Valid config format.
saved config to /cvpi/cvp-config.yaml
Applying proposed config for network verification.
saved config to /cvpi/cvp-config.yaml
Running : cvpConfig.py tool...
[15492.903419] vmxnet3 0000:0b:00.0 eth0: intr type 3, mode 0, 9 vectors allocated
[15492.908473] vmxnet3 0000:0b:00.0 eth0: NIC Link is Up 10000 Mbps
[15492.910297] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
[15493.289569] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
[15507.118778] vmxnet3 0000:13:00.0 eth1: intr type 3, mode 0, 9 vectors allocated
[15507.121579] vmxnet3 0000:13:00.0 eth1: NIC Link is Up 10000 Mbps
[15507.123648] IPv6: ADDRCONF(NETDEV_UP): eth1: link is not ready
[15507.125051] IPv6: ADDRCONF(NETDEV_CHANGE): eth1: link becomes ready
Stopping: network
Running : /bin/sudo /sbin/service network stop
Running : /bin/sudo /bin/systemctl is-active network
Starting: network
Running : /bin/sudo /bin/systemctl start network.service
[15508.105909] vmxnet3 0000:0b:00.0 eth0: intr type 3, mode 0, 9 vectors allocated
[15508.108752] vmxnet3 0000:0b:00.0 eth0: NIC Link is Up 10000 Mbps
[15522.301114] vmxnet3 0000:13:00.0 eth1: intr type 3, mode 0, 9 vectors allocated
[15522.303766] vmxnet3 0000:13:00.0 eth1: NIC Link is Up 10000 Mbps
[15522.305580] IPv6: ADDRCONF(NETDEV_UP): eth1: link is not ready
[15522.306866] IPv6: ADDRCONF(NETDEV_CHANGE): eth1: link becomes ready
Warning: External interfaces, ['eth1'], are discovered under /etc/sysconfig/network-scripts
These interfaces are not managed by CVP.
Please ensure that the configurations for these interfaces are correct.
Otherwise, actions from the CVP shell may fail.

Valid config.
Running : cvpConfig.py tool...
[15549.664989] vmxnet3 0000:0b:00.0 eth0: intr type 3, mode 0, 9 vectors allocated
[15549.667899] vmxnet3 0000:0b:00.0 eth0: NIC Link is Up 10000 Mbps
[15549.669783] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
[15550.046552] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
[15563.933328] vmxnet3 0000:13:00.0 eth1: intr type 3, mode 0, 9 vectors allocated
[15563.937507] vmxnet3 0000:13:00.0 eth1: NIC Link is Up 10000 Mbps
[15563.940501] IPv6: ADDRCONF(NETDEV_UP): eth1: link is not ready
[15563.942113] IPv6: ADDRCONF(NETDEV_CHANGE): eth1: link becomes ready
Stopping: network
Running : /bin/sudo /sbin/service network stop
Running : /bin/sudo /bin/systemctl is-active network
Starting: network
Running : /bin/sudo /bin/systemctl start network.service
[15565.218666] vmxnet3 0000:0b:00.0 eth0: intr type 3, mode 0, 9 vectors allocated
[15565.222324] vmxnet3 0000:0b:00.0 eth0: NIC Link is Up 10000 Mbps
[15565.225193] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
[15565.945531] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
[15579.419911] vmxnet3 0000:13:00.0 eth1: intr type 3, mode 0, 9 vectors allocated
[15579.422707] vmxnet3 0000:13:00.0 eth1: NIC Link is Up 10000 Mbps
[15579.424636] IPv6: ADDRCONF(NETDEV_UP): eth1: link is not ready
[15579.425962] IPv6: ADDRCONF(NETDEV_CHANGE): eth1: link becomes ready
Running : cvpConfig.py tool...
[15600.608075] vmxnet3 0000:0b:00.0 eth0: intr type 3, mode 0, 9 vectors allocated
[15600.610946] vmxnet3 0000:0b:00.0 eth0: NIC Link is Up 10000 Mbps
[15600.613687] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
[15600.986529] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
[15615.840426] vmxnet3 0000:13:00.0 eth1: intr type 3, mode 0, 9 vectors allocated
[15615.843207] vmxnet3 0000:13:00.0 eth1: NIC Link is Up 10000 Mbps
[15615.845197] IPv6: ADDRCONF(NETDEV_UP): eth1: link is not ready
[15615.846633] IPv6: ADDRCONF(NETDEV_CHANGE): eth1: link becomes ready
[15616.732733] Ebtables v2.0 unregistered
[15617.213057] ip_tables: (C) 2000-2006 Netfilter Core Team
[15617.233688] ip6_tables: (C) 2000-2006 Netfilter Core Team
[15617.261149] Ebtables v2.0 registered
[15617.309743] nf_conntrack version 0.5.0 (65536 buckets, 262144 max)
Stopping: ntpd
Running : /bin/sudo /sbin/service ntpd stop
Running : /bin/sudo /bin/systemctl is-active ntpd
Starting: ntpd
Running : /bin/sudo /bin/systemctl start ntpd.service
Pushing hostname, ip address and public key to the primary node
Primary's root password:
Transferred files
Receiving public key of the primary node
-
Waiting for primary to send consolidated yaml
-
Received authorized keys and consolidated yaml files
Running : /bin/sudo /bin/systemctl start cvpi-watchdog.timer
Running : cvpConfig.py tool...
[15748.205170] vmxnet3 0000:0b:00.0 eth0: intr type 3, mode 0, 9 vectors allocated
[15748.208393] vmxnet3 0000:0b:00.0 eth0: NIC Link is Up 10000 Mbps
[15748.210206] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
[15748.591559] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
[15752.406867] vmxnet3 0000:13:00.0 eth1: intr type 3, mode 0, 9 vectors allocated
[15752.409789] vmxnet3 0000:13:00.0 eth1: NIC Link is Up 10000 Mbps
[15752.412015] IPv6: ADDRCONF(NETDEV_UP): eth1: link is not ready
[15752.413603] IPv6: ADDRCONF(NETDEV_CHANGE): eth1: link becomes ready
Stopping: zookeeper
Running : /bin/sudo /sbin/service zookeeper stop
Running : /bin/sudo /bin/systemctl is-active zookeeper
Stopping: cvpi-check
Running : /bin/sudo /sbin/service cvpi-check stop
Running : /bin/sudo /bin/systemctl is-active cvpi-check
Stopping: ntpd
Running : /bin/sudo /sbin/service ntpd stop
Running : /bin/sudo /bin/systemctl is-active ntpd
Starting: ntpd
Running : /bin/sudo /bin/systemctl start ntpd.service
Starting: cvpi-check
Running : /bin/sudo /bin/systemctl start cvpi-check.service
Starting: zookeeper
Running : /bin/sudo /bin/systemctl start zookeeper.service
Running : /bin/sudo /bin/systemctl enable docker
Running : /bin/sudo /bin/systemctl start docker
Running : /bin/sudo /bin/systemctl enable kube-cluster.path
Running : /bin/sudo /bin/systemctl start kube-cluster.path
Running : /bin/sudo /bin/systemctl enable zookeeper
Running : /bin/sudo /bin/systemctl enable cvpi
Waiting for primary to finish configuring cvp.
-
Please wait for primary to complete cvp installation.
[q]uit [p]rint [e]dit [v]erify [s]ave [a]pply [h]elp ve[r]bose
>