Installing and Configuring the DMF Controller

This chapter describes the basic procedure for installing the DANZ Monitoring Fabric (DMF) Controller software.


Connecting to the Controller

The DANZ Monitoring Fabric (DMF) Controller can be deployed as a hardware appliance (a one-RU hardware device containing preloaded software) or as a Virtual Machine (VM).

Complete the initial setup of the Controller in any of the following ways:
  • Attach a monitor and keyboard to the appliance console port directly.
  • Use a terminal server to connect to the appliance console port and telnet to the terminal server.
  • When deploying the Controller as a VM, connect to the VM console.
Note: Arista Networks recommends having an iDRAC connection to the Controller, Service Node, Analytics Node, and Recorder Node appliances. This connection simplifies troubleshooting. Refer to the chapter Using iDRAC with a Dell R440 or R740 Server later in this guide for more details.

Connecting to the Controller Appliance Using a Terminal Server

After you connect the DANZ Monitoring Fabric (DMF) Controller appliance serial console to a terminal server, you can use telnet (or SSH if supported by your terminal server) to connect to the hardware appliance through the terminal server.

To connect the serial connection on a Controller appliance to a terminal server, complete the following steps:

  1. Obtain a serial console cable with a DB-9 connector on one end and an RJ-45 connector on the other.
  2. Connect the DB-9 connector to the hardware appliance DB-9 port (not the VGA port) and the RJ-45 connector to the terminal server.
  3. Set the terminal server port baud rate to 115200.
Note:
  • On some terminal servers, the saved configuration must be reloaded after changing the baud rate.
  • You should now be able to telnet or SSH to the hardware appliance serial console through the terminal server.
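
For example, if the terminal server maps its serial lines to TCP ports, connect from a workstation as follows (the hostname and port number here are illustrative; consult the terminal server documentation for the actual mapping):

telnet terminal-server-1 7001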

Configuring the Active Controller Using the First Boot Script

After connecting to the Controller appliance to start the initial setup, the system runs the First Boot script. The following configuration information is required to configure the DANZ Monitoring Fabric (DMF) Controller:
  • IP address for the active and standby Controller
  • The subnet mask and the default gateway IP address
  • NTP server IP address
  • Host name
  • (Optional) DNS server IP address
  • (Optional) DNS search domain name
Note: The default serial console rate on DMF hardware appliances in releases before 6.1 was 9600 baud. When upgrading an existing Controller appliance with a serial interface set to 9600 baud, change the terminal setting to 115200 after the upgrade.
To perform the initial configuration of the DMF Controller, complete the following steps:
  1. Insert the bootable USB drive into the Controller appliance USB port.
    Refer to Appendix B: Creating a USB Boot Image to make a bootable USB drive.
    Note: Power on the hardware appliance for the active Controller; the system prompts you to press Enter to begin the installation.
  2. Press Enter to begin the installation.
    This product is governed by an End User License Agreement (EULA).
    You must accept this EULA to continue using this product.
    You can view this EULA from our website at:
    https://www.arista.com/en/eula
    Do you accept the EULA for this product? (Yes/No) [Yes] > Yes
    Running system pre-check
    Finished system pre-check
    Starting first-time setup
    Local Node Configuration
    ------------------------
  3. To accept the agreement, type Yes.
    Note: If you type No, the system prompts you to log out. Accepting the EULA is mandatory to install the product.
    After you accept the EULA, the system prompts for a password to be used for emergency recovery of the Controller node, as shown below.
    Emergency recovery user password >
  4. Set the recovery user password and confirm the entry.
    Local Node Configuration
    ------------------------
    Emergency recovery user password>
    Emergency recovery user password (retype to confirm)>
    Hostname> controller-1
  5. Choose the IP forwarding mode when prompted.
    [1] IPv4 only
    [2] IPv6 only
    [3] IPv4 and IPv6
    >3
  6. Choose the method for assigning IPv4 addresses.
    Management network (IPv4) options:
    [1] Manual
    [2] Automatic via DHCP
    [1] >
    Note: When using [2] Automatic via DHCP, Arista Networks recommends reserving the DHCP address for the MAC address of the Controller’s management port.
  7. Choose the method for assigning IPv6 addresses.
    Management network (IPv6) options:
    [1] Automatic via SLAAC & DHCPv6
    [2] Manual
    [1] >

    The DMF Controller appliance supports IPv6 Stateless Address Autoconfiguration (SLAAC) for Controller IPv6 management.

    Note: Support for Jumbo Ethernet frames is enabled by default.
    When using [1] Manual in Step 6 above, the system prompts the user to enter the IPv4 address and related configuration information for the Controller node.
    IPv4 address [0.0.0.0/0]> 10.106.8.2/24
    IPv4 gateway (Optional) > 10.106.8.1
    DNS server 1 (Optional) > 10.3.0.4
    DNS server 2 (Optional) > 10.1.5.200
    DNS search domain (Optional) > qa.aristanetworks.com

    The IPv4 address configuration is applied to the Controller Management Interface (white-labeled ports) on the Controller appliance.

  8. Enter the IP address and optional DNS information, as shown in the example.
    Note: When entering the IP address followed by a slash (/) and the number of bits in the subnet mask, the system does not prompt separately for the subnet mask.
    The system then prompts the user to create a new cluster or join an existing cluster.
    Clustering
    ----------
    Controller cluster options:
    [1] Start a new cluster
    [2] Join an existing cluster
    > 1
  9. Type 1 to configure the active Controller.
    The system prompts the user to enter the cluster name and description and to set the password.
    Cluster name > dmf-cluster
    Cluster description (Optional) >
    Cluster administrator password >
    Cluster administrator password (retype to confirm) >
  10. Enter a name for the cluster and an optional description, and set the password for administrative access to the cluster.
    The system then prompts for the NTP servers used to set the system time, which is required for synchronization between the nodes in the cluster and the fabric switches.
    System Time
    -----------
    Default NTP servers:
    - 0.bigswitch.pool.ntp.org
    - 1.bigswitch.pool.ntp.org
    - 2.bigswitch.pool.ntp.org
    - 3.bigswitch.pool.ntp.org
    NTP server options:
    [1] Use default NTP servers
    [2] Use custom NTP servers
    [1] > 1
    Note: Fabric switches and nodes obtain their NTP service from the active Controller.
  11. Use the default NTP servers, or enter the IP address or fully qualified domain name of the NTP servers for your network.
    Completing the initial configuration without specifying an NTP server is possible, but deploying the DMF Controller in a production environment requires a working NTP configuration.
    The system completes the configuration and displays the First Boot Menu.
    [ 1] Apply settings
    [ 2] Reset and start over
    [ 3] Update Recovery password (*****)
    [ 4] Update Hostname (controller-1)
    [ 5] Update IP Option (IPv4 and IPv6)
    [ 6] Update IPv6 Method (Automatic via SLAAC & DHCPv6)
    [ 7] Update IPv4 Address (10.106.8.2/24)
    [ 8] Update IPv4 Gateway (10.106.8.1)
    [ 9] Update DNS Server 1 (10.3.0.4)
    [10] Update DNS Server 2 (10.1.5.200)
    [11] Update DNS Search Domain (qa.aristanetworks.com)
    [12] Update Cluster Option (Start a new cluster)
    [13] Update Cluster Name (dmf-cluster)
    [14] Update Cluster Description (<none>)
    [15] Update Admin Password (*****)
    [16] Update NTP Option (Use default NTP servers)
    [1] >
  12. To apply the configuration, type 1.
    Note: To change any previous configurations, type the appropriate menu option, make the required changes, and then type 1 to apply the settings when the system returns to this menu.

    After entering the option to apply the settings, the system applies the configuration, displaying the progress of each stage as it completes. When successful, it starts the DMF Controller and displays the Controller banner and login prompt, as shown below.
    [Stage 1] Initializing system
    [Stage 2] Configuring controller
    Waiting for network configuration
    IP address on em1 is 10.106.8.2
    Generating cryptographic keys
    [Stage 3] Configuring system time
    Initializing the system time by polling the NTP servers:
    0.bigswitch.pool.ntp.org
    1.bigswitch.pool.ntp.org
    2.bigswitch.pool.ntp.org
    3.bigswitch.pool.ntp.org
    [Stage 4] Configuring cluster
    Cluster configured successfully.
    Current node ID is 10249
    All cluster nodes:
    Node 10249: [fe80::1618:77ff:fe67:3f0c]:6642
    First-time setup is complete!
    Press enter to continue >
    DANZ Monitoring Fabric 8.4.0 (dmf-8.4.0 #11)
    Log in as 'admin' to configure
    controller-1 login:
    To log in to the Controller, use the account name admin and the password set for the cluster during installation. Optionally, log in to the active Controller using SSH to the assigned IP address, or use a web browser to connect to the IP address using HTTPS.
    controller login: admin
    admin@10.106.8.2's password:
    Login: admin, on Thu 2020-11-19 21:39:28 UTC, from 10.95.66.14
    Last login: on Thu 2020-11-19 21:39:05 UTC, from 10.95.66.14
    controller-1>

Configuring the Standby Controller


Joining Standby Controller to Existing Cluster

Arista Networks recommends deploying the DANZ Monitoring Fabric (DMF) in a two-node cluster for operational resilience. Set up the first (active) cluster node as described above, then use the steps described here to configure a second Controller node and join it to the cluster as a standby node.
An active and a standby Controller can be deployed in different L3 subnets. The active and standby Controllers communicate using their Controller management IPv4 addresses, which enables provisioning the two Controllers in different IP subnets (L3 domains) or on the same subnet (L2 domain).
Note: Both nodes in a cluster must run the same version of the DMF Controller software; the standby node will not join the cluster if its version differs from the active node's. The latency between the active and standby Controllers must be less than 300 ms.
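To confirm that both nodes run the same version before joining, enter the show version command on each Controller (a quick check; the exact output fields vary by release):

controller-1# show version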

Powering on the hardware appliance or the VM with the pre-installed DMF software prompts the user to log in as admin for the first-time configuration.

  1. Power on the hardware appliance for the standby Controller.
    Note: The Controller-facing management switch ports must come up within 5 seconds.

    Powering on the hardware appliance for the standby Controller prompts the user to press Enter to begin the installation.

  2. Press Enter to begin the installation.
  3. Log in to the standby Controller using the admin account.
    DANZ Monitoring Fabric 8.4.0 (dmf-8.4.0 #11)

    Log in as 'admin' to configure

    controller login: admin

    Note: Use the default account (admin) without a password when the system is booting from the factory default image. The system displays the following prompt to accept the End User License Agreement (EULA).
    This product is governed by an End User License Agreement (EULA).
    You must accept this EULA to continue using this product.
    You can view this EULA from our website at:
    https://www.arista.com/en/eula
    Do you accept the EULA for this product? (Yes/No) [Yes] > Yes
    To read the agreement, type View.
  4. To accept the agreement, type Yes.
    Note: If you type No, the system prompts you to log out. Accepting the EULA is mandatory to install the product.

    After typing Yes to accept the EULA, the system prompts for a password to be used for emergency recovery of the Controller node, as shown below.

    Starting first-time setup
    Local Node Configuration
    ------------------------
    Password for emergency recovery user >
    Retype Password for emergency recovery user >
  5. Set the recovery user password and confirm the entry.
    Local Node Configuration
    ------------------------
    Emergency recovery user password >
    Emergency recovery user password (retype to confirm) >
    Hostname > controller-2
  6. Choose the IP forwarding mode when prompted.
    Management network options:
    [1] IPv4 only
    [2] IPv6 only
    [3] IPv4 and IPv6
    > 3
  7. Choose the method for assigning IPv4 addresses.
    Management network (IPv4) options:
    [1] Manual
    [2] Automatic via DHCP
    [1] >
    Note: When using [2] Automatic via DHCP, Arista Networks recommends reserving the DHCP address for the MAC address of the Controller’s management port.
  8. Choose the method for assigning IPv6 addresses.
    Management network (IPv6) options:
    [1] Automatic via SLAAC & DHCPv6
    [2] Manual
    [1] >

    The DMF Controller appliance supports IPv6 Stateless Address Autoconfiguration (SLAAC) for Controller IPv6 management.

  9. When using [1] Manual in Step 7, the system prompts the user to enter the IPv4 address and related configuration information for the Controller node. Enter the IP address and optional DNS information, as shown in this example.
    Note: When entering the IP address followed by a slash (/) and the number of bits in the subnet mask, the system does not prompt separately for the subnet mask.
    IPv4 address [0.0.0.0/0] > 10.106.8.3/24
    IPv4 gateway (Optional) > 10.106.8.1
    DNS server 1 (Optional) > 10.3.0.4
    DNS server 2 (Optional) > 10.1.5.200
    DNS search domain (Optional) > qa.aristanetworks.com
    Note: The IP address configuration is applied to the Controller Management Interface on the Controller appliance.
  10. The system prompts the user to create a new cluster or join an existing cluster.
    Controller Clustering
    ---------------------
    Controller cluster options:
    [1] Start a new cluster
    [2] Join an existing cluster
    > 2

    Type 2 to join the standby Controller to the existing cluster.

  11. The system prompts for the IPv4 address of the active Controller of the cluster the standby will join. Enter the IP address of the active Controller.
    Existing Controller IP > 10.106.8.2/24

  12. Enter and confirm the cluster administrator password when prompted.
    Cluster administrator password >
    Cluster administrator password (retype to confirm) >
  13. The system displays the First Boot Menu with all provided details for final verification. Type 1 to apply the settings and start the process of joining the standby Controller to the DMF cluster.
    :: Please choose an option: 
    [ 1] Apply settings 
    [ 2] Reset and start over 
    [ 3] Update Recovery Password 
    [ 4] Update Hostname 
    [ 5] Update IP Option 
    [ 6] Update IPv4 Address 
    [ 7] Update IPv4 Gateway 
    [ 8] Update DNS Server 1 
    [ 9] Update DNS Server 2 
    [10] Update DNS Search Domain 
    [11] Update Cluster Option 
    [12] Update Cluster to Join 
    [13] Update Admin Password 
    [1] > 1
  14. The system advises that the current DMF cluster does not have secure control enabled; enable this optional feature later, as required. Type 1 to resume connecting the standby Controller with the active Controller in the DMF cluster.
    [Stage 1] Initializing system
    [Stage 2] Configuring local node
    Waiting for network configuration
    IP address on ens4 is 10.2.0.185
    Generating cryptographic keys
    Please verify that:
    Secure control plane is NOT configured.
    You can verify the above by running "show secure control plane"
    on the existing controller 10.2.1.90.
    Options:
    [1] Continue connecting (the above info is correct)
    [2] Cancel and review parameters
    > 1
    [Stage 3] Configuring system time
    Initializing the system time by polling the NTP servers:
    0.bigswitch.pool.ntp.org
    1.bigswitch.pool.ntp.org
    2.bigswitch.pool.ntp.org
    3.bigswitch.pool.ntp.org
    [Stage 4] Configuring cluster
    Cluster configured successfully.
    Current node ID is 5302
    All cluster nodes:
    Node 5302: [fe80::5054:ff:fe6a:dd0f]:6642
    Node 15674: [fe80::5054:ff:fec4:d1b8]:6642
    First-time setup is complete!
    Press enter to continue >
    DANZ Monitoring Fabric 8.4.0 (dmf-8.4.0 #11)
    Log in as 'admin' to configure
    Note: It is possible to connect to the DMF Controller using the IP address assigned to either the active or standby Controller. However, you can make configuration changes only when connected to the active Controller. In addition, statistics and other information will be more accurate and up-to-date when viewed on the active Controller.
    DANZ Monitoring Fabric 8.4.0 (dmf-8.4.0 #11)
    Log in as 'admin' to configure
    admin@10.106.8.3's password:
    Login: admin, on Fri 2020-11-20 05:06:19 UTC, from 10.95.66.14
    Last login: on Fri 2020-11-20 05:06:09 UTC, from 10.95.66.14
    ======================== WARNING: STANDBY CONTROLLER =========================
    This controller node is in standby mode.
    This session should only be used for troubleshooting.
    Log in to the active to make configuration changes
    and to access up-to-date operational data.
    ================================================================================
    standby controller-2>

Moving an Existing Standby Controller to a Different IPv4 Subnet

To move an existing standby Controller to a different subnet, change the standby Controller management IPv4 address before moving the appliance. Before making any changes, ensure the underlying network configuration is set up correctly for connectivity.
  1. Change the management IP address on the standby Controller using one of the following two methods.
    Method A: Using iDRAC or the Controller console, remove the existing management IP address and add the new one. Enter the config > local node > interface management > ipv4 submodes to change the Controller management IP address.
    Controller-2(config-local-if-ipv4)# no ip 192.168.39.44/24 gateway 192.168.39.1
    192.168.39.44: currently in use: port 22: 192.168.39.1:54978, 192.168.39.1:54979
    192.168.39.44: proceed ("y" or "yes" to continue): y
    DMF-Controller-2(config-local-if-ipv4)# ip 192.168.39.45/24 gateway 192.168.39.1
    Method B: Use the REST API to replace the Controller management IP address instead of removing and adding it. This method requires no iDRAC or console access to the Controller. Log in as an admin user and use the REST API below to replace the Controller management IP address from bash.
    Controller-2# deb bash
    admin@DMF-Controller-2:~$ curl -g -H "Cookie: session_cookie=$FL_SESSION_COOKIE" \
    'http://localhost:8080/api/v1/data/controller/os/config/local/network/interface[name="management"]/ipv4/address' \
    -X PUT -d '[{"ip-address": "192.168.39.45", "prefix": 24, "gateway": "192.168.39.1"}]'
  2. Move the standby Controller to the new subnet.

Moving an Existing Standby Controller to a Different Controller Cluster

Follow the procedure below to move an existing standby Controller to a different Controller cluster.

  1. Remove the standby Controller from the existing cluster.
  2. Connect to the Controller using iDRAC or Controller console.
  3. Run the boot factory-default command.
    Controller-2# boot factory-default
  4. Perform the first-boot procedure described in Joining Standby Controller to Existing Cluster to join an existing Controller cluster, or in Configuring the Active Controller Using the First Boot Script to create a new Controller cluster.

Accessing the DANZ Monitoring Fabric Controller

This section describes connecting to the DANZ Monitoring Fabric (DMF) Controller.

To access the Active DMF Controller, use the IP address of the active Controller. If configured, use the cluster's Virtual IP (VIP) as described in the Configuring the Cluster Virtual IP section.

Refer to the Using Local Groups and Users to Manage DMF Controller Access section to manage administrative user accounts and passwords.

Using the DANZ Monitoring Fabric CLI

Once the DANZ Monitoring Fabric (DMF) Controllers are up and running, log in to the DMF Controller using the VM local console or SSH.

DMF divides CLI commands into modes and sub-modes, which restrict commands to the appropriate context. The main modes are as follows:
  • login mode: Commands available immediately after logging in, with the broadest possible context.
  • enable mode: Commands that are available only after entering the enable command.
  • config mode: Commands that significantly affect system configuration, available after entering the configure command. Access sub-modes from this mode.

Enter sub-modes from config mode to provision specific monitoring fabric objects. For example, the switch command changes the CLI prompt to (config-switch)# to configure the switch identified by the switch DPID or alias.

After logging in, the CLI appears in the login mode where the default prompt is the system name followed by a greater than sign (>), as in the following example.

controller-1>
To change the CLI to enable mode, enter the enable command. The default prompt for enable mode is the system name followed by a pound sign (#), as shown below.
controller-1> enable
controller-1#
To change to config mode, enter the configure command. The default prompt for config mode is the system name followed by (config)#, as shown below.
controller-1> enable
controller-1# configure
controller-1(config)#
controller-1(config)# switch filter-1
controller-1(config-switch)#
To return to the enable mode, enter the end command, as shown below.
controller-1(config)# end
controller-1#

To view a list of the commands available from the current mode or submode, enter the help command. To view detailed online help for the command, enter the help command followed by the command name.

To display the options available for a command or keyword, enter the command or keyword followed by a question mark (?).
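
For example, the following illustrative commands list the options available for the show command and display its online help (output omitted; the available commands vary by release and mode):

controller-1> show ?
controller-1> help show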

Capturing CLI Command Output

Pipe command output through other commands, such as grep, for analysis and troubleshooting of DANZ Monitoring Fabric (DMF) operations. For example, filter the output of the show running-config command, as in the example below.
controller-1> show running-config | grep <service unmanaged-service TSTS>
post-service pst2
pre-service pst
Copy files to an external server using SCP, as in the example below.
copy running-config scp://<username@scp_server>//<file>
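For example, the following command saves the running-config to a backup file on a remote SCP server (the credentials, server address, and path shown are illustrative):
copy running-config scp://admin:secret@10.1.1.100//backups/dmf-running.cfg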

Using the DANZ Monitoring Fabric GUI

The DANZ Monitoring Fabric (DMF) GUI performs the same operations as the CLI using a graphical user interface instead of text commands and options. DMF supports the DANZ Monitoring Fabric GUI on recent versions of any of the following browsers:
  • Firefox
  • Chrome
  • Internet Explorer
  • Safari
  • Microsoft Edge

To connect to the DANZ Monitoring Fabric GUI, use the IP address assigned to the DMF Controller. The following figure shows a browser connecting to the DANZ Monitoring Fabric GUI using HTTPS at the IP address 192.168.17.233.

Figure 1. Accessing the DANZ Monitoring Fabric GUI
When connecting to the Controller for the first time, a prompt requesting a security exception may appear because the Controller HTTPS server uses an unknown (self-signed) certificate authority.
Note: While using Internet Explorer, the login attempt may fail if the system time is different than the Controller time. To fix this, ensure the system used to log in to the Controller is in sync with the Controller time.

After accepting the prompts, the system displays the login prompt.

Use the admin username and password configured for DANZ Monitoring Fabric during installation or any user account and password configured with administrator privileges. A user in the read-only group will have access to options for monitoring fabric configuration and activity but cannot change the configuration.

The main menu for the DANZ Monitoring Fabric GUI appears in the following figure.
Figure 2. DANZ Monitoring Fabric GUI Main Menu
After logging in to the DANZ Monitoring Fabric GUI, a landing page appears with the Controller Dashboard and a menu bar at the top with sub-menus containing options for setting up DANZ Monitoring Fabric and monitoring network activity. The menu bar includes the following sub-menus:
  • Fabric: Manage DANZ Monitoring Fabric switches and interfaces.
  • DANZ Tap: Manage DANZ Tap policies, services, and interfaces.
  • Maintenance: Configure fabric-wide settings (clock, SNMP, AAA, sFlow, Logging, Analytics Configuration).
  • Integration: Manage integration of vCenter or NSX-V instances to allow monitoring traffic using DMF.
  • Security: Manage administrative access.
  • Profile: Display or change user preferences, password, or sign out.

Managing the DMF Controller Cluster

This section describes configuring settings for a DANZ Monitoring Fabric (DMF) cluster of active and standby Controllers. Most configuration occurs on the active Controller, which synchronizes the configuration to the standby Controller.

Verifying Cluster Configuration

A DANZ Monitoring Fabric (DMF) Out-of-Band HA cluster consists of two Controller nodes, one active and one standby. Keep the following conditions in mind when configuring the cluster:
  • Both active and standby must be in the same IP subnet.
  • Firewall configurations are separate for active and standby, so manually keep the configuration consistent between the two nodes.
  • NTP service is required to establish a cluster. Starting with DMF 7.0.0, the active Controller provides the NTP service for the cluster and connected switches.
  • When SNMP service is enabled, it must be manually configured to be consistent on both nodes.
To verify the cluster state, use the following commands:
  • Enter the show controller details command from either the active or standby Controller.
    controller-1(config)# show controller details
    Cluster Name : dmf-cluster
    Cluster UID : 8ef968f80bd72d20d30df3bc4cb6b271a44de970
    Cluster Virtual IP : 10.106.8.4
    Redundancy Status : redundant
    Redundancy Description : Cluster is Redundant
    Last Role Change Time : 2020-11-19 18:12:49.699000 UTC
    Cluster Uptime : 3 weeks, 1 day
    # IP          @ Node Id Domain Id State   Status    Uptime
    - |-----------|-|-------|---------|-------|---------|--------------------|
    1 10.106.8.3    5784    1         standby connected 11 hours, 6 minutes
    2 10.106.8.2  * 25377   1         active  connected 11 hours, 11 minutes
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Failover History ~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    # New Active Time completed                  Node  Reason                Description
    - |----------|------------------------------|-----|---------------------|---------------------------------------------------------|
    1 25377      2020-11-19 18:12:38.299000 UTC 25377 cluster-config-change Changed connection state: cluster configuration changed
    controller-1(config)#
  • Enter the show controller command to display the current Controller roles from either the active or standby node, as in the following example.
    controller-1(config)# show controller
    Cluster Name : dmf-cluster
    Cluster Virtual IP : 10.106.8.4
    Redundancy Status : redundant
    Last Role Change Time : 2020-11-19 18:12:49.699000 UTC
    Failover Reason : Changed connection state: cluster configuration changed
    Cluster Uptime : 3 weeks, 1 day
    # IP          @ State   Uptime
    - |-----------|-|-------|--------------------|
    1 10.106.8.3    standby 11 hours, 24 minutes
    2 10.106.8.2  * active  11 hours, 29 minutes
    controller-1(config)#

Configuring the Cluster Virtual IP

Setting up a Virtual IP (VIP) for the cluster is a best practice: it provides a way to connect to the management port of the active node using an IP address that does not change even if the active Controller fails over and the standby Controller becomes active.
Note: The VIP will not work if the active and standby Controllers are in different IPv4 subnets; the active and standby Controllers must be in the same L2 domain.
On the active Controller, enter the virtual-ip command from the config-controller submode.
controller-1> enable
controller-1# config
controller-1(config)# controller
controller-1(config-controller)# virtual-ip 10.106.8.4
controller-1(config-controller)#

Verify the VIP by entering the show controller command.

controller-1(config)# show controller
Cluster Name : dmf-cluster
Cluster Virtual IP : 10.106.8.4
Redundancy Status : redundant
Last Role Change Time : 2020-11-19 18:12:49.699000 UTC
Failover Reason : Changed connection state: cluster configuration changed
Cluster Uptime : 3 weeks, 1 day
# IP          @ State   Uptime
- |-----------|-|-------|--------------------|
1 10.106.8.3    standby 11 hours, 24 minutes
2 10.106.8.2  * active  11 hours, 29 minutes
controller-1(config)#
Note: Make sure to use a unique IP address for the VIP of the cluster. If you mistakenly use the IP address of the standby Controller, the nodes will form separate clusters. To resolve this problem, assign a unique IP address for the VIP.

Setting the Time Zone

It is essential that the switches, Controllers, hypervisors, VMs, and management systems in the DANZ Monitoring Fabric (DMF) have synchronized system clocks and all use the same time zone.
Note: If the hypervisor and the virtual machine running the Controller use different time zones, problems may occur with log files or with access to the DMF GUI.

To view or change the current time zone on the DMF Controller, complete the following steps.

GUI Procedure

Select Maintenance > Clock from the main menu.
Figure 3. Maintenance Clock

The page displays the time on the system used to access the DMF GUI and on the Controller, shows information about the currently configured NTP server, and provides an option to force immediate NTP synchronization with the NTP server, which, in DMF 8.0.0 and later, runs on the active DMF Controller.

CLI Procedure

To set the time zone for the Controller from the config-controller submode, use the clock timezone command:
[no] clock timezone <time-zone>

Replace time-zone with the string for the desired time zone. To see a list of supported values, press Tab after entering the clock timezone command. Certain values, such as America/, are followed by a slash (/) and must be followed by a keyword for the specific time zone. View the supported time zones by pressing Tab again after selecting the first keyword.

For example, the following commands set the time zone for the current Controller to Pacific Standard Time (PST):
controller-1(config)# ntp time-zone America/Los_Angeles
Warning: Active REST API sessions will not be informed of updates to time-zone.
Warning: Please logout and login to any other active CLI sessions to
Warning: update the time-zone for those sessions.
controller-1(config)#
Note: Starting in DANZ Monitoring Fabric Release 8.1.0, changes in time zone are logged to the floodlight.log.
The following command removes the manually configured time zone setting and resets the Controller to the default (UTC).
controller-1(config-controller)# no clock timezone
Warning: Manually setting the clock for the DMF Controller using the clock set command may affect database reconciliation between the nodes in a Controller cluster. Because time stamps are used to identify records that should be updated, Arista Networks recommends adjusting any time skew using the ntpdate command. Using an NTP server ensures accurate synchronization between the Controller clocks.

Viewing Controller and Cluster Status

To view the overall controller status, click the DANZ Monitoring Fabric (DMF) logo in the left corner of the GUI Main menu. The system displays the DMF GUI landing page, shown in the following figure.
Figure 4. Cluster and Controller Status

The number of warnings and errors is listed and highlighted in yellow in the upper-left corner of the page. This page also provides configuration options to view and change fabric-wide settings.

CLI Procedure

View the overall Controller status by entering the show controller details command.
controller-1(config)# show controller details
Cluster Name : dmf-cluster
Cluster UID : 8ef968f80bd72d20d30df3bc4cb6b271a44de970
Cluster Virtual IP : 10.106.8.4
Redundancy Status : redundant
Redundancy Description : Cluster is Redundant
Last Role Change Time : 2020-11-19 18:12:49.699000 UTC
Cluster Uptime : 3 weeks, 1 day
# IP          @ Node Id Domain Id State   Status    Uptime
- |-----------|-|-------|---------|-------|---------|--------------------|
1 10.106.8.3    5784    1         standby connected 11 hours, 6 minutes
2 10.106.8.2  * 25377   1         active  connected 11 hours, 11 minutes
~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Failover History ~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# New Active Time completed                  Node  Reason                Description
- |----------|------------------------------|-----|---------------------|----------------------------|
1 25377      2020-11-19 18:12:38.299000 UTC 25377 cluster-config-change Changed connection state: cluster configuration changed
controller-1(config)#
Use the following commands to modify the Controller configuration:
  • access-control: Configure access control of the Controller
  • cluster-name: Configure cluster name
  • description: Configure cluster description
  • virtual-ip: Configure management virtual IP
To modify the hostname, IPv4 or v6 addresses, or SNMP server configuration, use the following commands:
  • hostname: Configure hostname for this host
  • interface: Configure controller network interface
  • snmp-server: Configure local SNMP attributes
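For example, the following sketch sets the cluster name and description from the config-controller submode (the values shown are illustrative):
controller-1(config)# controller
controller-1(config-controller)# cluster-name dmf-cluster
controller-1(config-controller)# description "Production monitoring fabric"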

Saving and Restoring Controller Configuration

A properly configured Controller requires no regular maintenance. However, Arista Networks recommends periodically saving a copy of the Controller running-config to a location outside the Controller. Use the copy command to copy the running-config from the Controller to a remote SCP server.

# copy running-config scp://admin:admin@myserver/configs

The format for the remote server user name, password, and path name is as follows: scp://<user>:<password>@<host>/<path_to_saved_config>

To restore the Controller configuration from a remote server, remove the standby Controller, then boot factory default on the active node to start a new cluster.

CLI Procedure

  1. Remove the standby node, if present.
    # show controller
    Cluster Name : desktop-DMF
    Cluster Virtual IP : 10.100.6.4
    Redundancy Status : redundant
    Last Role Change Time : 2018-10-01 10:43:51.582000 CDT
    Failover Reason : Changed connection state: connected to node 15444
    Cluster Uptime : 3 months, 2 weeks
    # IP @ State Uptime
    -|----------|-|-------|---------------|
    1 10.100.6.3 * active 4 days, 2 hours
    2 10.100.6.2 standby 4 days, 2 hours
    # system remove node 10.100.6.2
  2. Reboot the DANZ Monitoring Fabric (DMF) Controller to the factory default by entering the following commands:
    controller-1> enable
    controller-1# boot factory-default
    Re-setting controller to factory defaults...
    Warning: This will reset your controller to factory-default state and reboot it.
    You will lose all node/controller configuration and the logs
  3. Confirm the operation and enter the administrator password.
    Do you want to continue [no]? yes ...
    Removing existing log files ... Resetting system state ...
    Current default time zone: 'Etc/UTC' ...
    Enter NEW admin password:
    ...
    boot: ...
    localhost login:
  4. Rerun the first-time setup to reconfigure the Controller.
  5. Copy the saved running config from the remote server to the DMF Controller by entering the following command.
    # copy scp://<user>:<password>@<host>/<path_to_saved_config> running-config
  6. Restore the cluster.
    Rejoin the standby Controller by following the procedure in Joining Standby Controller to Existing Cluster.
Note: With the introduction of the Command Line Interface (CLI) on the managed appliances (Service Node / Packet Recorder), the boot factory-default CLI command has been extended to launch the first-boot setup process.

Copying Files Between a Workstation and a DMF Controller

Use the scp command with the keywords shown below, from a workstation connected to the DANZ Monitoring Fabric (DMF) Controller management network, to copy the following types of files to the Controller:
  • Certificate (//cert)
  • Private key (//private-key/<name>)
  • Running configuration (//running-config)
  • Snapshots (//snapshot)
  • Controller image files (//image)
Copying into //snapshot on the Controller overwrites the current running-config, except for the local node configuration.
Copying into //running-config merges the copied configuration into the running-config on the Controller.
To copy, use the following syntax:
scp <filename> admin@<controller-ip>://<keyword>
For example, the following command copies a Controller ISO image file from a workstation to the image file partition on the Controller, ready to install.
scp DMF-8.0.0-Controller-Appliance-2020-12-21.iso admin@10.2.1.183://image

This example copies the DMF 8.0.0 ISO file from the local workstation or server to the image partition of the DMF Controller running at 10.2.1.183.

Use any user account belonging to the admin group. Replace the admin username in the above example with any other admin-group user, as in the following example.
c:\>pscp -P 22 -scp DMF-8.0.0-Controller-Appliance-2020-12-21.iso admin-upgrade@10.2.1.183://image
This example uses the user account admin-upgrade, which should be a member of the admin group. Use the PSCP command on a Windows workstation to copy the file to the Controller.
c:\>pscp -P 22 -scp <filename> admin@<controller-ip>://<keyword>
Use the SCP command to get the following files from the Controller and copy them to the local file system of the server or workstation.
  • Running configuration (copy running-config <dest>)
  • Snapshots (copy snapshot:// <dest>)
Use the following commands to copy running-config/snapshot to the remote location:
controller-1# copy snapshot:// scp://<user>@<remote-server>://anet/DMF-720.snp
controller-1# copy running-config scp://<user>@<remote-server>://anet/DMF-720.cfg

Snapshot File Management Using REST API

Manage snapshots using the REST API. The API actions and their CLI equivalents are as follows:
  • Take: copy running-config snapshot://my-snapshot
  • Fetch: copy http[s]://snapshot-location snapshot://my-snapshot
  • Read: show snapshot my-snapshot
  • Apply: copy snapshot://my-snapshot running-config
  • List: show snapshot
  • Delete: delete snapshot my-snapshot

Take:

The example below saves the current running-config to snapshot://<file name>.
curl -g -k -H "Cookie: session_cookie=<session-cookie>" https://<Controller IP>:8443/api/v1/rpc/controller/snapshot/take -d '{"name": "snapshot://testsnap"}'

Fetch:

Retrieve a snapshot found at source-url and save it locally as testsnap.
curl -g -k -H "Cookie: session_cookie=<session-cookie>" https://<Controller IP>:8443/api/v1/rpc/controller/snapshot/fetch -d '{"source-url": "https://...", "name": "snapshot://testsnap"}'

Read:

View the contents of the snapshot named testsnap. The API action is not identical to the CLI command: the CLI renders the snapshot contents as running-config style CLI commands, while the API returns the raw, untranslated snapshot data.
Note: Using this API, the user can save the snapshot to a local file on a server. This is equivalent to copy snapshot://<filename> scp://<remote file location>.
To dump the contents of the snapshot file:
curl -g -k -H "Cookie: session_cookie=<session-cookie>" https://<Controller IP>:8443/api/v1/snapshot/testsnap/data
To save the contents of the snapshot to another file:
curl -g -k -H "Cookie: session_cookie=<session-cookie>" https://<Controller IP>:8443/api/v1/snapshot/testsnap/data --output testfile.snp

The above curl example reads the testsnap snapshot file from the Controller and writes it to a local file named testfile.snp.

Apply:

Apply the snapshot named testsnap to the controller.
curl -g -k -H "Cookie: session_cookie=<session-cookie>" https://<Controller IP>:8443/api/v1/rpc/controller/snapshot/apply -d '{"name": "snapshot://testsnap"}'

List:

List all snapshots on a controller.
curl -g -k -H "Cookie: session_cookie=<session-cookie>" https://<Controller IP>:8443/api/v1/data/controller/snapshot/available

Delete:

Delete a snapshot named testsnap from the controller.
curl -g -k -H "Cookie: session_cookie=<session-cookie>" -X DELETE 'https://<Controller IP>:8443/api/v1/data/controller/snapshot/available[name="testsnap"]'

Convert a Text-based running-config into a Snapshot

A keyword is added to the copy running-config snapshot://sample command to convert text running-config commands into a JSON snapshot.

Use the keyword transaction to perform the conversion. The keyword can also create a snapshot with specific commands included.

While similar, the following workflows describe two applications of the copy running-config snapshot://sample command and the new transaction keyword.

Workflow - Create a Snapshot

Create a snapshot using the following command:

> copy file://text-commands snapshot://new-snapshot

Use this choice to convert a collection of text commands into a working snapshot. The resulting snapshot has several advantages over the collection of text commands:

  • Snapshots have version details as part of the file format, allowing the configuration to be rewritten based on changes in the new versions of the product.
  • REST APIs post the configuration in large chunks, so applying a snapshot is faster. A single text command typically updates only a few configuration parameters, writing the resulting modification to persistent storage each time.

The conversion process will:

  • Create a new transaction with all the configuration parameters removed.
  • Replay each command and apply the configuration changes to the active transaction.
  • Invoke the snapshot REST API, which builds a snapshot from the current transaction.
  • Delete the current transaction, preventing any of the applied configuration changes from the replayed command from becoming active.
Note: The conversion requires that the syntax of each replayed text command be compatible with the currently supported syntax of the release where it is applied.

Workflow - Create a Snapshot containing a Specific Configuration

Manually create a snapshot that contains a specific configuration using the following steps.

  1. Enter the configuration mode that supports changes to the configuration.
  2. Create a new transaction using an empty or the current configuration by running one of the following command options.
# begin transaction erase
# begin transaction append
Configuration changes applied while the transaction is active are performed against the transaction and do not update the system configuration until the transaction is committed (using the commit transaction command). Several validation checks are postponed until the changes are committed; the commit does not post if validation errors are present.
Note: In this sample workflow, the objective is to create a new snapshot and not to update the system's configuration.
Changes made within the transaction may include:
  • Adding new configuration, for example, new switches, new policies, or new users.
  • Modifying existing configuration.
  • Deleting configuration.
Note: The local configuration should not be changed, as transactions do not currently manage it; the local system configuration (for example, the hostname) is updated immediately.

The transaction keyword can be used with the copy command to request that the configuration within the transaction be copied to the snapshot rather than applied to the current system configuration. For example, use the following command:

# copy running-config snapshot://sample transaction
Delete the transaction using the following command:
# delete transaction
Note: The snapshot is updated; the system configuration is not.

To check the active transaction on the system, use the following command:

# show transaction
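
By contrast, committing the transaction applies its changes to the running configuration instead of discarding them. A minimal sketch (the policy name is illustrative):

# begin transaction append
# policy policy17
# action forward
# exit
# commit transaction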

Sample Sessions

The examples below will familiarize the reader with converting a text-based running-config into a snapshot.

Example One

Convert a collection of text commands into a working snapshot using the following command:

> copy file://text-commands snapshot://new-snapshot
Controller1(config)# show file
# Name Size Created
-|--------------------|----|-----------------------|
1 textcommands 3560 2023-08-15 21:37:44 UTC
Controller1(config)# copy file://textcommands snapshot://snap_textcommands
Lines applied: 175, snap_textcommands: Snapshot updated
Controller1(config)# show snapshot snap_textcommands
Note: Some command options listed below are hidden in the CLI since they are in Beta in DMF 8.4.0.

Example Two

Manually create a snapshot containing a specific configuration to be added to the existing running configuration. The begin transaction and begin transaction append commands add the commands executed during the transaction to the existing running configuration.

Controller1(config)# begin transaction
id : 5gMoJ6uu7mcA3j3Gs5fGyLRT8ZJW7CI9
Controller1(config)#
Controller1(config)#
Controller1(config)# policy policy15
Controller1(config-policy)# action forward
Controller1(config-policy)#
Controller1(config-policy)# exit
Controller1(config)# copy running-config snapshot://snap_policy15 transaction
Controller1(config)#
Controller1(config)# delete transaction
Controller1(config)# show snapshot snap_policy15
!
! Saved-Config snap_policy15
! Saved-Config version: 1.0
! Version: 8.4.0
! Appliance: bmf/dmf-8.4.x
! Build-Number 133
!
version 1.0
! ntp
ntp server ntp.aristanetworks.com
! snmp-server
snmp-server enable traps
snmp-server community ro 7 02161159070f0c
! local
local node
hostname Controller1
interface management
!
ipv4
ip 10.93.194.145/22 gateway 10.93.192.1
method manual
!
ipv6
method manual
! deployment-mode
deployment-mode pre-configured
! user
user admin
full-name 'Default admin'
hashed-password method=PBKDF2WithHmacSHA512,salt=qV-1YyqWIZsYc_SK1ajniQ,rounds=25000,ph=true,0vtPyup3h5JThGFLff-1zw-42-BV7tG7Sm99ROT1OmZCZjlzcWLJj9Lc28mxkQI1-assfW2e-OPDhZbu9qCE2Q
! group
group admin
associate user admin
group read-only
! aaa
aaa accounting exec default start-stop local
! controller
controller
cluster-name dmf204
virtual-ip 10.93.194.147
access-control
!
access-list api
10 permit from 10.93.194.145/32
15 permit from 10.93.194.146/32
!
access-list gui
1 permit from ::/0
2 permit from 0.0.0.0/0
!
access-list ntp
1 permit from ::/0
2 permit from 0.0.0.0/0
!
access-list snmp
1 permit from 0.0.0.0/0
!
access-list ssh
1 permit from ::/0
2 permit from 0.0.0.0/0
!
access-list sync
1 permit from ::/0
2 permit from 0.0.0.0/0
!
access-list vce-api
1 permit from ::/0
2 permit from 0.0.0.0/0
! auto-vlan-mode
auto-vlan-mode push-per-filter
! service-node
service-node dmf204-sn
mac e4:43:4b:48:58:ac
! switch
switch gt160
mac c4:ca:2b:47:97:bf
admin hashed-password $6$ppXOyA92$0hibVW63R0t1T3f7NRUFxPEWUb4b64l4dTGEayrrXcw5or/ZDxm/ZNvotQ7AQfVMo7OZ1I.yDLwrnlVXrZkV3.
!
interface Ethernet1
speed 10G
!
interface Ethernet5
speed 25G
switch hs160
mac c4:ca:2b:b7:44:83
admin hashed-password $6$McgvJd94$vRxDNkr2OSz3kiZSYPFCfuIbcaBuIcoC7ywlVeFFd7oAgLn1eVcV6NyEFZnykje4ILUjmJPWdWeu3LaF4sWzd/
!
interface Ethernet4
speed 10G
!
interface Ethernet47
role delivery interface-name veos2-delivery
!
interface Ethernet48
loopback-mode mac
speed 10G
role both-filter-and-delivery interface-name veos1-filter strip-no-vlan
switch smv160
mac 2c:dd:e9:4e:5e:f5
admin hashed-password $6$RyahYdXx$bUXeQCZ1bHNLcJBA9ZmoH/RmErwpDXvJE20UnEXKoLQffodjsyIlnZ1nG54X5Cq5qgb6uTGXs1TMYkqBWurLh1
!
interface Ethernet31/1
rate-limit 100
speed 10G
role delivery interface-name veos6-delivery ip-address 10.0.1.11 nexthop-ip 10.0.1.10 255.255.255.0
! crypto
crypto
!
http
cipher 1 ECDHE-ECDSA-AES128-GCM-SHA256
!
ssh
cipher 1 aes192-ctr
mac 1 hmac-sha1
! policy
policy policy15
action forward

Example Three

Manually create a snapshot that contains only a specific configuration, starting from an empty transaction (begin transaction erase).

Controller1(config)# begin transaction erase
id : GAOGEuLS26I67bqJ7J2NcpOtORfflUn_
Controller1(config)#
Controller1(config)# policy policy16
Controller1(config-policy)# action forward
Controller1(config-policy)#
Controller1(config-policy)# exit
Controller1(config)#
Controller1(config)# copy running-config snapshot://snap_policy16 transaction
Controller1(config)#
Controller1(config)#
Controller1(config)# delete transaction
Controller1(config)#
Controller1(config)# show snapshot snap_policy16
!
! Saved-Config snap_policy16
! Saved-Config version: 1.0
! Version: 8.4.0
! Appliance: bmf/dmf-8.4.x
! Build-Number 133
!
version 1.0
! local
local node
hostname Controller1
interface management
!
ipv4
ip 10.93.194.145/22 gateway 10.93.192.1
method manual
!
ipv6
method manual
! policy
policy policy16
action forward

Limitations

  • The text-command-to-snapshot conversion process requires that the syntax of the replayed text commands be compatible with the currently supported syntax of the release where they are applied.
  • Only the global (i.e., cluster-wide) configuration can be managed with snapshots and transactions. View the local (non-global) configuration with the show running-config local command.
  • An error is displayed if the copy running-config snapshot://sample transaction command is run without first starting a transaction.

Managing DMF Sessions

To view and configure settings for remote sessions established to the DANZ Monitoring Fabric (DMF) Controller, switches, and managed appliances, complete the following steps:
  1. Select Security > Sessions from the DMF main menu.
    Figure 5. Security Sessions

    This page displays the sessions currently established. To forcibly end a session, select Delete from the menu control for the session.

  2. Click Settings to configure the default settings for remote connections to the Controller.
    Figure 6. Security Sessions Configurations

    This dialog lets you set an expiration time for sessions and the maximum number of sessions allowed for each user.

  3. Make any changes required and click Submit.
    CLI Commands
    To limit the number of concurrent sessions allowed for each user, use the following command:
    aaa concurrent-limit <number>
    For example, the following command limits the number of concurrent sessions to 5 for each user account.
    controller-1(config)# aaa concurrent-limit 5

    This limit causes the sixth session connection attempt by the same user account to fail. If more than five sessions are already open, the oldest excess sessions are closed.

    This limit applies to sessions established through the GUI, CLI, or REST API, whether directed to the DANZ Monitoring Fabric switches, managed appliances, active or standby Controller, or the cluster virtual IP address.
    Note: All users should log out when finished to avoid blocked access. No new sessions are allowed if the number of existing sessions equals the configured limit.
    To set the expiration time for an AAA session, use the following command from config mode:
    [no] aaa session-expiration <minutes>
    Replace minutes with an integer specifying the number of minutes before a session expires. For example, the following command sets the AAA session expiration to 10 minutes:
    controller-1(config)# aaa session-expiration 10

Managing and Viewing Logs

By default, the DANZ Monitoring Fabric (DMF) fabric switches send syslog messages to both the active and standby Controllers. Syslog messages are disk persistent and are removed only according to time and rotation policy.

After configuring an external syslog server, the Controllers send syslog messages to the external server and keep a local copy on the Controller. When multiple external syslog servers are configured, DMF sends the syslog messages to every server. Physical switch logs can be sent directly to an external syslog server instead of to the DMF Controller.

Sending Logs to a Remote Syslog Server

With an external syslog server configured and the logging switch-remote option enabled, the fabric switches send syslog messages only to the configured external syslog servers, not to the Controllers. When the logging switch-remote option is enabled, the Controller does not keep a local copy of the switch logs.

The Controllers do not share their syslogs with each other: the active Controller does not send its syslogs to the standby Controller, and the standby does not send its syslogs to the active. Access the logs of each Controller from that node directly.
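
For example, the following sketch enables direct switch logging to an external server, assuming logging switch-remote is entered as a standalone config command alongside the logging remote command described below (the server address is illustrative):
controller-1(config)# logging remote 192.168.100.1
controller-1(config)# logging switch-remote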

Using the GUI to Configure Remote Logging

To manage logging, complete the following steps:
  1. Select Maintenance > Logging from the main menu and click Remote Logging.
    Figure 7. Maintenance Logging
  2. To enable remote logging, identify a remote syslog server by clicking the Provision control (+) in the Remote Servers table.
    Note: Enabling remote logging causes syslog messages to be sent directly from the fabric switches to the specified server, bypassing the Controller. As a result, the Switch Light log files will not be available on the local Controller for analysis using the Analytics option.
    Figure 8. Create Remote Server
  3. Enter the IP address of the remote server. If you are not using the default port (514), specify the port to use and click Save. The server appears on the Logging page.

Using the CLI to Configure Remote Logging

To configure the syslog server for a Controller node, enter the logging remote command using the following syntax:

logging remote <ip-address>

For example, the following command sets the syslog server address to 192.168.100.1:
controller-1(config)# logging remote 192.168.100.1

This example exports the syslog entries from the Controller node to the syslog server at IP address 192.168.100.1.
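
When multiple external syslog servers are needed, the logging remote command can presumably be repeated once per server, as in this sketch (the addresses are illustrative):
controller-1(config)# logging remote 192.168.100.1
controller-1(config)# logging remote 192.168.100.2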

Viewing Log Files on the Controller

Use the show logging command to view the log files for the different components of the DANZ Monitoring Fabric (DMF) Controller. The options for this command are as follows:
  • audit: Show audit file contents
  • controller: Show log contents for the Controller floodlight process
  • remote: Show remote logs
  • switch <switch>: Show logs for the specified switch
  • syslog: Show syslog file contents
  • web-access: Show content of the web server access log
  • web-error: Show content of the web server error log
For example, the following command shows the logs for the DMF Controller.
controller-1> show logging controller
floodlight: WARN [MdnsResponder:Network Queue Processing Thread] ZTN4093: 1CC38M2._gm_idrac._tcp.
local.
2018-03-26T04:03:17.900-07:00 invalid/unrecognized SRV name 1CC38M2._gm_idrac._tcp.local.
floodlight: WARN [MdnsResponder:Network Queue Processing Thread] ZTN4093: 1CC38M2._gm_idrac._tcp.
local.
2018-03-26T04:03:17.900-07:00 invalid/unrecognized SRV name 1CC38M2._gm_idrac._tcp.local.
floodlight: WARN [MdnsResponder:Network Queue Processing Thread] ZTN4093: 1CC38M2._gm_idrac._tcp.
local.
2018-03-26T04:03:17.900-07:00 invalid/unrecognized SRV name 1CC38M2._gm_idrac._tcp.local.
...
controller-1>

Administrative Activity Logs

The DANZ Monitoring Fabric (DMF) Controller logs user activities to the floodlight log file, logging the following events:
  • CLI commands entered
  • Login and logout events
  • Queries to the REST server
  • RPC message summaries between components in the Controller

CLI Commands

Use grep with the show logging command to see the local accounting logs, as in the following example:
controller-1# show logging controller | grep "cmd_args"
Sep 4 21:23:10 BT2 floodlight: INFO [net.bigdb.auth.AuditServer:Dispatcher-3] AUDIT EVENT: bigcli.
command application_id=bigcli cmd_args=enable
Sep 4 21:23:10 BT2 floodlight: INFO [net.bigdb.auth.AuditServer:Dispatcher-4] AUDIT EVENT: bigcli.
command application_id=bigcli cmd_args=configure
Sep 4 21:23:16 BT2 floodlight: INFO [net.bigdb.auth.AuditServer:Dispatcher-3] AUDIT EVENT: bigcli.
command application_id=bigcli cmd_args=bigtap policy policy3
Sep 4 21:23:20 BT2 floodlight: INFO [net.bigdb.auth.AuditServer:Dispatcher-6] AUDIT EVENT: bigcli.
command application_id=bigcli cmd_args=bigchain chain hohoh
Sep 4 21:23:22 BT2 floodlight: INFO [net.bigdb.auth.AuditServer:Dispatcher-3] AUDIT EVENT: bigcli.
command application_id=bigcli cmd_args=ext
Sep 4 21:23:24 BT2 floodlight: INFO [net.bigdb.auth.AuditServer:Dispatcher-3] AUDIT EVENT: bigcli.
command application_id=bigcli cmd_args=configure
Sep 4 21:23:30 BT2 floodlight: INFO [net.bigdb.auth.AuditServer:Dispatcher-5] AUDIT EVENT: bigcli.
command
To modify the accounting configuration, use the following commands. To start local accounting only, enter the following command:
controller-1(config)# aaa accounting exec default start-stop local
To enable remote accounting, either alone or combined with local accounting, enter the following command:
controller-1(config)# aaa accounting exec default start-stop {local [group tacacs+] | [group radius]}
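For example, to record commands both locally and to a TACACS+ server group, one instance of the syntax above would be:

controller-1(config)# aaa accounting exec default start-stop local group tacacs+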
To view audit records, enter the following command if local accounting is enabled:
controller-1# show logging controller | grep AUDIT

If accounting is remote only, consult the administrator for the TACACS+ server for more information.

REST API Logging

Starting with the DANZ Monitoring Fabric 8.4 release, the body of a REST API request can be logged to the audit.log file. Before the 8.4 release, REST API calls from the GUI or REST clients were logged in the audit.log file without the request's data (body). Starting with this release, the data (body) of each REST API call can also be logged.

To enable the audit logging, use the configuration below:
aaa audit-logging log-request-leaf-values record-all-request-values
controller-1# config
controller-1(config)# aaa audit-logging log-request-leaf-values record-all-request-values
controller-1(config)#
To disable, use the no form of the command:
no aaa audit-logging log-request-leaf-values record-all-request-values
The following example audit.log entry shows an SNMP configuration change made from the GUI:
2022-11-16T12:18:05.106-08:00 floodlight: INFO LOCLAUD1001: AUDIT EVENT: DB_QUERY auth-
description="session/9d0a66315f7d9e0df8f2478fe7c0b3d77cec25e865e4e135f0f9e28237570b70"
user="admin" remote-address="fd7a:629f:52a4:20d0:1ca8:28ed:6f59:cd47" session-id=
"9d0a66315f7d9e0df8f2478fe7c0b3d77cec25e865e4e135f0f9e28237570b70" operation="REPLACE"
uri="https://[fdfd:5c41:712d:d080:5054:ff:fe57:5dba]/api/v1/data/controller/os/config/
global/snmp" http-method="PUT" request-leaf-values="{"contact":"Arista","location":"HQ",
"trap-enabled":false,"user[name=\"cvml\"]/auth-passphrase":"AUTH-STRING","user[name=\
"cvml\"]/name":"cvml","user[name=\"cvml\"]/priv-passphrase":"PRIV-STRING","user[name=\
"cvml\"]/priv-protocol":"aes"}" code="204"
To view the configuration:
controller-1# show run aaa
! aaa
aaa accounting exec default start-stop local
aaa audit-logging log-request-leaf-values record-all-request-values
controller-1#

Restricting Size of REST API Body

Users can restrict the maximum size of the body passed in a REST API call using the following command. Unless configured, the max-body-size is 9223372036854775807 bytes (effectively unlimited).
rest-api max-body-size <max-body-size>
In the example below, the configuration restricts the body of a REST API call to 16K bytes; the call is rejected if the body size exceeds 16K bytes. This feature can help mitigate Denial of Service (DoS) attacks in which an intruder repeatedly sends a very large configuration, attempting to keep the Controller busy parsing the data before applying it. When needed, enable this configuration to close the attack vector quickly before investigating its source.
controller-1(config)# rest-api max-body-size
<max-body-size> Integer between 16384 to max integer size
controller-1(config)# rest-api max-body-size
controller-1(config)# rest-api max-body-size 16384
controller-1(config)#
To disable, use the no form of the command:
no rest-api max-body-size <max-body-size>
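To confirm the configured limit, the running config can be filtered with grep, as shown earlier in this chapter; the output line below is illustrative:

controller-1# show running-config | grep max-body-size
rest-api max-body-size 16384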

Syslog Over TLS

This section describes how Syslog over TLS is implemented in DANZ Monitoring Fabric (DMF) and the configuration required to implement this feature.

Overview

Transport Layer Security (TLS) is used in DANZ Monitoring Fabric (DMF) to secure syslog messages, which may originate or traverse a non-secure network in transit to the syslog server. Using TLS helps mitigate the following primary threats:
  • Impersonation: An unauthorized sender may send messages to an authorized receiver, or an unauthorized receiver may try to deceive a sender.
  • Modification: An attacker may modify a syslog message in transit to the receiver.
  • Disclosure: An unauthorized entity could examine the contents of a syslog message.
TLS, when used as a secure transport, reduces these threats by providing the following functionality:
  • Authentication counters impersonation.
  • Integrity checking counters modification of a message on a hop-by-hop basis.
  • Confidentiality works to prevent disclosure of message contents.

Starting from DMF Release 8.4, syslog over TLS is supported only for communication from Controllers to remote syslog servers. Switches and managed appliances, such as Recorder Nodes and Service Nodes, support only plain, unencrypted UDP-based syslog messages. To enable syslog over TLS on the Controller, refer to the next section.

Configuration

To enable TLS for syslog messaging, complete the following steps:
Note: The steps below provide mutual authentication between client and server. For non-mutual authentication, skip importing the Controller certificate and private key and the steps that configure them.
  1. Generate the certificates for the remote syslog servers and the DANZ Monitoring Fabric (DMF) Controller using the same trusted public or private CA (see the OpenSSL sketch after these steps).
  2. Copy the trusted CA root certificate, the controller certificate, and the controller private key to the DMF Controller.
    controller-1# copy scp://user@x.x.x.x:/.../cacert.pem cert://
    controller-1# copy scp://user@x.x.x.x:/.../dmf-controller-cert.pem cert://
    controller-1# copy scp://user@x.x.x.x:/.../dmf-controller-privkey.pem private-key://dmf-privkey
  3. Copy the CA certificate and the syslog server certificates and keys to the respective syslog servers. Enable TLS support on the servers by following the prescribed steps for the syslog application they are running (see the rsyslog sketch after these steps).
  4. On the active DMF Controller, identify the CA certificate to use with remote syslog servers.
    controller-1(config)# logging secure ca <capath>
    Note: In the example in Step 2, it is cacert.pem.
  5. Identify a certificate for mutual authentication.
    controller-1(config)# logging secure cert <certpath>
    Note: In the example in Step 2, it is dmf-controller-cert.pem.
  6. On the active DMF Controller, identify the key for mutual authentication.
    controller-1(config)# logging secure private-key <keypath>
    Note: In the example in Step 2, it is dmf-privkey, the name given to the key during the copy procedure.
  7. On the active DMF Controller, enable secure logging.
    controller-1(config)# logging secure tls
  8. To view the syslog messages, enter the following command.
    controller-1# show logging
    Note: Syslog applications typically send logs to /var/log/messages, /var/log/dmesg, and /var/log/journal by default.
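The certificates in step 1 can come from any CA workflow. The following is one illustrative sketch using OpenSSL with a private CA; the file names match the examples above, while the key sizes, lifetimes, and subject names are arbitrary choices:

# Create a private CA, then issue a controller certificate signed by it
openssl genrsa -out ca-key.pem 4096
openssl req -x509 -new -key ca-key.pem -days 3650 -subj "/CN=dmf-syslog-ca" -out cacert.pem
openssl genrsa -out dmf-controller-privkey.pem 2048
openssl req -new -key dmf-controller-privkey.pem -subj "/CN=dmf-controller" -out dmf-controller.csr
openssl x509 -req -in dmf-controller.csr -CA cacert.pem -CAkey ca-key.pem -CAcreateserial -days 825 -out dmf-controller-cert.pem

On the server side (step 3), the exact procedure depends on the syslog application. The following is a minimal receiver sketch for rsyslog 8.x using the gtls stream driver; the certificate paths are placeholders, and 6514 is the conventional syslog-over-TLS port:

# /etc/rsyslog.d/tls-receiver.conf
global(
  DefaultNetstreamDriver="gtls"
  DefaultNetstreamDriverCAFile="/etc/rsyslog.d/certs/cacert.pem"
  DefaultNetstreamDriverCertFile="/etc/rsyslog.d/certs/server-cert.pem"
  DefaultNetstreamDriverKeyFile="/etc/rsyslog.d/certs/server-key.pem"
)
# Require TLS and validate the peer certificate against the CA
module(load="imtcp" StreamDriver.Name="gtls" StreamDriver.Mode="1" StreamDriver.AuthMode="x509/certvalid")
input(type="imtcp" port="6514")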

Creating a Support Bundle

A collection of running configuration and log files is critical to understanding the fabric behavior when the fabric is in a faulty state.

The DANZ Monitoring Fabric (DMF) CLI provides the commands to automate the collecting, archiving, and uploading of critical data. These commands cover all devices of the DMF fabric, such as Controllers, switches, DMF Service Node, and DMF Recorder Node.

The following are the commands to configure the Support Bundle data upload:
controller-1> enable
controller-1# configure
controller-1(config)# service
controller-1(config-service)# support auto-upload
Enabled diagnostic data bundle upload
Use "diagnose upload support" to verify upload server connectivity
To check if the auto-upload is enabled:
controller-1# show run service
! service
service
support auto-upload
controller-1#
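To turn automatic upload off again, the no form presumably applies, as with other commands in this chapter (a sketch, not a confirmed command):

controller-1(config-service)# no support auto-upload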
The following command launches the Support Bundle collection. Once the support bundle is collected, it is automatically uploaded, as seen in the output. Provide the bundle ID to support personnel.
controller-1# support
Generating diagnostic data bundle for technical support. This may take several minutes...
Support Bundle ID: SGPVW-BZ3MM
Switchlight collection completed after 14.2s. Collected 1 switch (8.56 MB)
Local cli collection completed after 38.2s. Collected 75 commands (0.32 MB)
Local rest collection completed after 0.1s. Collected 3 endpoints (0.43 MB)
Local bash collection completed after 10.0s. Collected 127 commands (4.74 MB)
Local file collection completed after 15.5s. Collected 39 paths (1753.31 MB)
Cluster collection completed after 138.2s. Collected 1 node (1764.50 MB)
Collection completed. Signing and compressing bundle...
Support bundle created successfully
00:04:03: Completed
Generated Support Bundle Information:
Name : anet-support--DMF-Controller--2020-11-24--18-31-39Z--SGPVW-BZ3MM.tar.gz
Size : 893MB
File System Path : /var/lib/floodlight/support/anet-support--DMF-Controller--2020-11-24--18-31-39Z--
SGPVW-BZ3MM.tar.gz
Url : https://10.2.1.103:8443/api/v1/support/anet-support--DMF-Controller--2020-11-24--
18-31-39Z--SGPVW-BZ3MM.tar.gz
Bundle id : SGPVW-BZ3MM
Auto-uploading support anet-support--DMF-Controller--2020-11-24--18-31-39Z--SGPVW-BZ3MM.tar.gz
Transfer complete, finalizing upload
Please provide the bundle ID SGPVW-BZ3MM to your support representative.
00:00:48: Completed
controller-1#
The show support command shows the status of the automatic upload.
controller-1# show support
# Bundle                                                                    Bundle id    Size   Last modified                   Upload status
- |------------------------------------------------------------------------|------------|------|--------------------------------|-----------------
1 anet-support--DMF-Controller--2020-11-24--18-31-39Z--SGPVW-BZ3MM.tar.gz   SGPVW-BZ3MM  893MB  2020-11-24 18:35:46.400000 UTC   upload-completed
Use the diagnose upload support command to verify the reachability and health of the server used to receive the support bundle. Below is example output of the checks performed when running the command. When a support bundle upload fails, use this command to identify the possible causes.
controller-1# diagnose upload support
Upload server version: diagus-master-43
Upload diagnostics completed successfully
00:00:02: Completed
Check : Resolving upload server hostname
Outcome : ok
Check : Communicating with upload server diagnostics endpoint
Outcome : ok
Check : Upload server healthcheck status
Outcome : ok
Check : Upload server trusts authority key
Outcome : ok
Check : Signature verification test
Outcome : ok
Check : Resolving objectstore-accelerate hostname
Outcome : ok
Check : Resolving objectstore-direct hostname
Outcome : ok
Check : Communicating with objectstore-accelerate
Outcome : ok
Check : Communicating with objectstore-direct
Outcome : ok
controller-1#
After resolving the issues, retry the upload using the upload support <support-bundle-file-name> command. Use the same command if auto-upload was not configured before generating the support bundle.
controller-1# upload support anet-support--DMF-Controller--2020-11-24--18-31-39Z--SGPVW-BZ3MM.tar.gz

NIST 800-63b Password Compliance

This feature activates password compliance for local accounts on DANZ Monitoring Fabric (DMF) devices (Controller, switches, DMF Service Node, Arista Analytics Node, and DMF Recorder Node). The NIST 800-63b feature enforces that any newly chosen password fulfills the following requirements:

  • The password needs to be at least 8 characters long.
  • The password is not a known compromised password.
Note: Enabling NIST 800-63b compliance enforces the rules only for passwords chosen in the future; it does not affect existing passwords. Updating all passwords after enabling NIST 800-63b compliance is strongly recommended.

Configuration

The NIST-800-63b compliance mode needs to be set separately on the Controller and the Arista Analytics Node to enforce password compliance for the entire DANZ Monitoring Fabric (DMF) cluster.

Enable NIST 800-63b password compliance
Controller(config)# aaa authentication password-compliance nist-800-63b
Warning: A password compliance check has been enabled. This enforces compliance
Warning: rules for all newly chosen passwords, but it doesn't retroactively
Warning: apply to existing passwords. Please choose new passwords for
Warning: all local users, managed appliances, switches, and the recovery user.
Disable NIST 800-63b password compliance (non-FIPS)
Controller(config)# no aaa authentication password-compliance

FIPS versions always enforce NIST 800-63b password compliance by default unless explicitly configured not to do so.

Disable NIST 800-63b password compliance (FIPS)
Controller(config)# aaa authentication password-compliance no-check
View the NIST 800-63b password compliance configuration
Controller(config)# show running-config aaa
! aaa
aaa authentication password-compliance nist-800-63b
Note: Upon activation of NIST 800-63b password compliance, updating local account passwords for DMF devices is recommended. Refer to the commands below to update the passwords.
Controller admin password update
Controller# configure
Controller(config)# user admin
Controller(config-user)# password <nist-compliant-password>
Service Node admin password update
Controller(config)# service-node <service-node-name>
Controller(config-service-node)# admin password <nist-compliant-password>
Packet Recorder admin password update
Controller(config)# packet-recorder <packet-recorder-name>
Controller(config-packet-recorder)# admin password <nist-compliant-password>
Switch admin password update
Controller(config)# switch <switch-name>
Controller(config-switch)# admin password <nist-compliant-password>
Note: After upgrading to DMF-7.2.1 and above, existing user credentials continue to work. In the case of a FIPS image, password compliance is enabled by default, and this does not affect existing user accounts. Changing the recovery and local account passwords is recommended after upgrading from a previous FIPS image. On a first-time installation of a FIPS image, the first-boot process enforces the NIST compliance standards for the recovery and admin accounts.

Custom Password Compliance

DMF supports custom password requirements for local user accounts on DANZ Monitoring Fabric (DMF) devices (Controller, switches, DMF Service Node, Arista Analytics Node, and DMF Recorder Node).

To enable the custom password compliance method, use the following command:
aaa authentication password-compliance custom-check
controller-1(config)# aaa authentication password-compliance custom-check
Warning: A password compliance check has been enabled. This enforces compliance
Warning: rules for all newly chosen passwords, but it doesn't retroactively
Warning: apply to existing passwords. Please choose new passwords for
Warning: all local users, managed appliances, switches, and the recovery user.
controller-1(config)#
To disable, use the no form of the command:
no aaa authentication password-compliance custom
Note: Enabling custom password compliance does not apply to already configured user passwords. Arista Networks recommends resetting or reconfiguring user passwords to adhere to the password requirements. Configuring the requirements has no effect unless custom password compliance is enabled; to use the feature, configure the custom compliance method and then set the password requirements.
Once the custom password compliance is enabled, configure the requirements for a password using the following commands:
controller-1(config)# aaa authentication password-requirement
max-repeated-characters the maximum number of repeated characters allowed
max-sequential-characters the maximum number of sequential characters allowed
minimum-length the minimum required length for passwords
minimum-lowercase-letter the minimum number of lowercase characters required
minimum-numbers the minimum number of numerical characters required
minimum-special-characters the minimum number of special characters required
minimum-uppercase-letter the minimum number of uppercase characters required
reject-known-exposed-passwords check the password against known exposed passwords
controller-1(config)# aaa authentication password-requirement
For example, to set a password that requires 10 minimum characters, 1 minimum number, and 2 maximum repeated characters, do the following:
controller-1(config)# aaa authentication password-requirement minimum-length 10
controller-1(config)# aaa authentication password-requirement minimum-numbers 1
controller-1(config)# aaa authentication password-requirement max-repeated-characters 2
Configuring or resetting a password that does not meet the requirements results in the following error message:
controller-1# conf
controller-1(config)# user customPW
controller-1(config-user)# password admin
Error: the password needs to be at least 10 characters long
controller-1(config-user)#
To view the configured password compliance and requirements, use the show run aaa authentication command:
controller-1# show run aaa authentication
! aaa
aaa authentication password-compliance custom-check
aaa authentication password-requirement max-repeated-characters 2
aaa authentication password-requirement minimum-length 10
aaa authentication password-requirement minimum-numbers 1
controller-1#
To remove the configured requirements, use the no form of the commands:
controller-1(config)# no aaa authentication password-requirement minimum-length 10
controller-1(config)# no aaa authentication password-requirement minimum-numbers 1
controller-1(config)# no aaa authentication password-requirement max-repeated-characters 2

Switch Management Interfaces not Mirroring Controller Management Interface ACLs

Use this feature to configure Access Control Lists (ACLs) on a managed device that do not directly reflect the ACLs configured on the Controller.

Specifically, a user can override the user-configured ACLs on the Controller (generally inherited by the managed devices) so that the ACLs pushed to managed devices allow only specific types of traffic from the Controller.

The user performs this action on a per-managed-device basis or globally for all managed devices on the CLI. The Controller and Analytics Node are excluded from receiving this type of configuration (when it is applied globally).

The feature introduces a new CLI mode on the Controller for this type of configuration. Configuration entered in this mode is pushed to all managed devices (excluding the Controller) unless device-specific overrides exist.

A managed device is a device whose life cycle ZTN manages.

The total set of managed devices is as follows:

  • Managed Appliances
    • Service Node
    • Recorder Node
  • Switches
    • SWL
    • EOS

Note: The analytics node is not in this set since the ZTN mechanisms on the Controller do not manage its life cycle.

Configuration using the CLI

Configure the following on the Controller to enforce Intra-Fabric Only Access (IFOA) for the API service (i.e., port 8443) on all managed devices.

C1> en
C1# conf
C1(config)# managed-devices
C1(config-managed-devices)# access-control
C1(config-managed-devices-access)# service api
C1(config-managed-devices-access-service)# intra-fabric-only-access
Reminder: IP address/method of the management interface cannot be changed,
when a service has intra-fabric only access enforced.
Note: A warning is displayed outlining that the management interface's IP address cannot be changed once any of the services have IFOA enforced for any managed devices. To change the management interface's IP address, disable IFOA for all services.

Several services can have IFOA enforced on them. The table below lists the services and their corresponding ports. An ❌ means enforcing IFOA on that port is not possible for that managed device type; a ✅ means it is possible, and any text beside the ✅ indicates what runs on that port on the managed device.

Protocol          | Service Node | Packet Recorder                                          | SWL                     | EOS
SSH (22, TCP)     | ❌           | ❌                                                       | ❌                      | ❌
WEB (80, TCP)     | ❌           | ❌                                                       | ✅ (SLRest, plain-text) | ✅ (cAPI)
HTTPS (443, TCP)  | ❌           | ✅ (nginx reverse proxies to 1234, for the stenographer) | ✅ (SLRest, encrypted)  | ❌
API (8443, TCP)   | ✅ (BigDB)   | ✅ (BigDB)                                               | ✅ (BigDB)              | ❌
To override this global/default configuration (i.e., the configuration applied to all managed devices) for a specific managed device, use the following configuration and push it to the Controller.
C1(config)# switch <switch-name>
C1(config-switch)# access-control override-global
C1(config-switch)# access-control
C1(config-switch-access)# service api
C1(config-switch-access-service)# no intra-fabric-only-access

As illustrated below, push a similar configuration for the managed appliances, i.e., the Recorder and Service nodes.

Recorder Node

C1(config)# recorder-node device rn1
C1(config-recorder-node)#
C1(config-recorder-node)# access-control override-global
C1(config-recorder-node)# access-control
C1(config-recorder-node-access)# service api
C1(config-recorder-node-access-service)# no intra-fabric-only-access

Service Node

C1(config)# service-node sn1
C1(config-service-node)#
C1(config-service-node)# access-control override-global
C1(config-service-node)# access-control
C1(config-service-node-access)# service api
C1(config-service-node-access-service)# no intra-fabric-only-access

It is also possible to push a configuration that does not override the entire configuration under managed devices but instead merges with it on a per-service basis, for example:

C1(config)# switch core1
C1(config-switch)# access-control merge-global
C1(config-switch)# access-control
C1(config-switch-access)# service api
C1(config-switch-access-service)# no intra-fabric-only-access

This action will merge the global/default config specified under the config-managed-devices CLI submode with the config set for this specific managed device (in this case, the device is a switch, and its name on the Controller is core1).

CLI Show Commands

There are several helpful show commands.

When the global/default access-control configuration is merged with a device-specific configuration, the effective configuration (the configuration used to generate the appropriate ACLs) may not be obvious. To see the effective configuration for a specific device, enter the following command:

C1(config)# show effective-config switch core1
! switch
switch core1
access-control
!
service api
intra-fabric-only-access

In addition to displaying the managed device's effective configuration, you can check the running config generated by ZTN (the configuration sent to the device) to confirm what was pushed to the managed device.

C1(config)# show service-node sn1 running-config
.
.
.
interface ma1 acl subnet 10.243.254.20/32 proto tcp port 8443 accept
interface ma1 acl subnet fe80::5054:ff:fef8:b844/128 proto tcp port 8443 accept
interface ma1 acl subnet 0.0.0.0/0 proto tcp port 8443 drop
interface ma1 acl subnet ::/0 proto tcp port 8443 drop
interface ma1 acl subnet ::/0 proto udp port 161 accept
interface ma1 acl subnet 0.0.0.0/0 proto udp port 161 accept
interface ma1 acl subnet 0.0.0.0/0 proto udp port 161 drop
interface ma1 acl subnet ::/0 proto udp port 161 drop
interface ma1 acl subnet 10.243.254.20/32 proto tcp port 22 accept
interface ma1 acl subnet fe80::5054:ff:fef8:b844/128 proto tcp port 22 accept
interface ma1 acl subnet ::/0 proto tcp port 22 accept
interface ma1 acl subnet 0.0.0.0/0 proto tcp port 22 accept
interface ma1 acl subnet 0.0.0.0/0 proto tcp port 22 drop
interface ma1 acl subnet ::/0 proto tcp port 22 drop
interface ma1 acl default accept
.
.
.
Note: For API (port 8443), the system only pushes ACLs that permit IPv4/LLv6 traffic from the Controller and drops everything else. Other ACLs (SSH/SNMP) are not generated from the managed devices' access control configuration on the CLI. They are generated from access-rules configuration for the Controller that gets used or inherited by the managed devices.
Note: The same show commands exist for recorder nodes and service nodes, i.e.,

Recorder Nodes
  • show effective-config recorder-node rn1
  • show recorder-node rn1 running-config
Service Nodes
  • show effective-config service-node sn1
  • show service-node sn1 running-config

Limitations

The main limitation of this feature is that the management interface's IP address cannot be changed (on the CLI) once IFOA is enforced for any service on any managed device. This restriction prevents the Controller from inadvertently being locked out of the managed devices.

Note: Changing the management IP address via the REST API without getting blocked is possible. However, Arista Networks advises against doing so when enforcing IFOA for any service on any managed device.

Recovery Procedure

This section describes the recovery procedure when one or both Controllers go down.

Recovery from a Single Controller Failure

Procedure
  1. Log in to the remaining (online) Controller and enter the system remove-node <failed-controller> command.
  2. Start a new Controller and complete the first boot process.
  3. When prompted, join the existing/remaining Controller (the configuration syncs to the new Controller as soon as it joins the cluster).
    Note: This step restores only the running configuration. It does not restore user files from the failed Controller.

Recovery from a Dual Controller Failure

The following procedure assumes that the new Controllers use the same IP addresses as the failed Controllers.

Procedure

  1. Archive the current running configuration as a database snapshot and save it to the SCP server. Enter the following command from enable mode.
    controller-1# copy running-config snapshot://current.snp
    controller-1# copy snapshot://current.snp scp://<user>@<scp-server>:/home/anetadmin/controller.snp
    <user>@<scp-server>'s password:
    controller.snp 5.05KB - 00:00
    controller-1#
  2. Install the first Controller and complete the first boot process.
  3. Restore the archived version of the running configuration by entering the following command from enable mode.
    new-controller-1# copy scp://<user>@<scp-server>:/home/anetadmin/controller.snp snapshot://controller.snp
    <user>@<scp-server>'s password:
    controller.snp 5.05KB - 00:00
    new-controller-1# copy snapshot://controller.snp running-config
    new-controller-1#
    Note: This step only restores the running configuration. It does not restore all user files from the previously running Controller.
  4. Install the second Controller and complete the first boot process.
  5. When prompted, join the first Controller to form a cluster.
  6. The configuration between Controllers synchronizes as soon as the standby Controller joins the cluster.
    Note: This step only restores the running configuration. It does not restore user files from the failed Controller.
  7. If the new Controllers are running a different version than the previous Controllers, reboot the switches to obtain a compatible switch image from the Controller. Enter the following command from enable mode:
    new-controller-1# system reboot switch <switch-name>