DANZ Monitoring Fabric Verified Scale

This document describes the DANZ Monitoring Fabric (DMF) multi-dimensional scale testing performed with DMF Controllers.

Overview

Network visibility is a growing concern in data centers due to increasing virtualization, service-oriented architecture, and cloud-based IT. Traditional monitoring infrastructure, however, provides only limited visibility into network traffic. Expensive monitoring tools, including application performance monitoring tools, Intrusion Detection Systems (IDS), and forensic tools, are often underutilized because the traffic delivered to them is not managed efficiently.

DANZ Monitoring Fabric (DMF) is an advanced network monitoring solution that alleviates this problem dramatically. DMF leverages high-performance bare metal Ethernet switches to provide the most scalable, flexible, and cost-effective monitoring fabric. Using an SDN-centric architecture, DMF enables tapping traffic everywhere in the network and delivers it to any troubleshooting, network monitoring, application performance monitoring, or security tool.

At its core is the centralized DMF Controller software that converts user-defined policies into highly optimized flows programmed into the forwarding ASICs of bare metal Ethernet switches running the production-grade Switch Light™ Operating System from Arista Networks. DMF delivers unprecedented network visibility with bare-metal economics, getting the right traffic to the right tool at the right time. With its open and published Application Programming Interfaces (APIs), the DMF Controller allows customers to deploy integrated network monitoring solutions alongside DMF.

Note: This document's scale and performance numbers came from a DMF hardware controller.
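
For example, the Controller's published APIs can be scripted to integrate DMF with existing tooling. The sketch below is illustrative only: the URL, port, resource path, and session header are hypothetical placeholders, not the documented DMF API; consult the DMF API documentation for real paths and authentication.

```python
# Hypothetical example of querying a DMF Controller over REST; the URL, port,
# path, and auth header below are placeholders, not the documented DMF API.
import requests

CONTROLLER = "https://dmf-controller.example.com:8443"  # placeholder address

session = requests.Session()
session.verify = "/path/to/controller-ca.pem"           # placeholder CA bundle
session.headers["Cookie"] = "session_cookie=<token>"    # placeholder auth token

# Placeholder resource path: list configured policies.
resp = session.get(f"{CONTROLLER}/api/v1/data/controller/policies")
resp.raise_for_status()
for policy in resp.json():
    print(policy)
```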

DMF Verified Scale Values

TCAM Rule Limits

The following tables list the scalability limits tested and verified for the DANZ Monitoring Fabric.

Table 1. Verified Jericho Series of Switches
Broadcom Switch Series | Broadcom Chipset | Switch Name
Jericho Series | Jericho | Arista DCS-7280SR-48C6
Jericho Series | Jericho Plus | Arista DCS-7280SR2-48YC6
Jericho Series | Jericho 2 | Arista DCS-7280CR3-32P4
Jericho Series | Jericho 2C | Arista DCS-7280SR3-48YC8
Note: The table above displays the switches used to verify the scale and performance for each supported Broadcom Chipset. Refer to the DMF Hardware Compatibility Guide for a complete list of supported switches.
Table 2. Verified TCAM Rule Limits: Jericho Series of Switches
Limit | Match Mode | Broadcom Jericho Switches | Broadcom Jericho Plus Switches | Broadcom Jericho 2 Switches | Broadcom Jericho 2C Switches
IPv4 TCAM rules per switch (Verified Limit/Max Limit) | Full | 6140/6144 | 6140/6144 | 6140/6144 | 6140/6144
 | L3-L4 | 6140/6144 | 6140/6144 | 6140/6144 | 6140/6144
 | Offset | Not Supported | Not Supported | Not Supported | Not Supported
IPv6 TCAM rules per switch (Verified Limit/Max Limit) | Full | 6140/6144 | 6140/6144 | 6140/6144 | 6140/6140
 | L3-L4 | 6140/6144 | 6140/6144 | 6140/6144 | 6140/6140
 | Offset | Not Supported | Not Supported | Not Supported | Not Supported
Match conditions per policy | Full IPv4/IPv6 | 6140/6140 | 6140/6140 | 6140/6140 | 6140/6140
 | L3-L4 IPv4/IPv6 | 6140/6140 | 6140/6140 | 6140/6140 | 6140/6140
 | L3-L4 Offset IPv4/IPv6 | Not Supported | Not Supported | Not Supported | Not Supported
Table 3. Supported Trident Series of Switches
Broadcom Switch Series | Broadcom Chipset | Switch Name
Trident Series | Trident 2 | Dell S4048F-ON, Dell S6000F-ON
Trident Series | Trident 2 Plus | Dell S4048-48T, Dell S6010F-ON
Trident Series | Trident 3 | Arista DCS-7050CX3-32S, Arista DCS-7050SX3-48YC8, Arista DCS-7050SX3-48YC12, Dell S5248F-ON, Dell S5232F-ON, Arista DCS-7050SX3-96YC8
Table 4. Verified TCAM Rule Limits: Trident Series of Switches
Limit | Match Mode | Broadcom Trident 2 Switches | Broadcom Trident 2 Plus Switches | Broadcom Trident 3 Switches
IPv4 TCAM rules per switch (Verified Limit/Max Limit) | Full | 2040/2044 | 8100/8188 | 3055/3068
 | L3-L4 | 4088/4092 | 8100/8188 | 3055/3068
 | Offset | 2040/2044 | 8100/8188 | 3055/3068
IPv6 TCAM rules per switch (Verified Limit/Max Limit) | Full | 1535/2044 | 6100/8188 | 2300/3068
 | L3-L4 | 1535/4092 | 6100/8188 | 2300/3068
 | Offset | 1535/2044 | 6100/8188 | 2300/3068
Match conditions per policy | Full IPv4/IPv6 | 2040/1535 | 8100/6100 | 3055/2300
 | L3-L4 IPv4/IPv6 | 4088/1535 | 8100/6100 | 3055/2300
 | L3-L4 Offset IPv4/IPv6 | 2040/1535 | 8100/6100 | 3055/2300
Table 5. Supported Tomahawk Series of Switches
Broadcom Switch Series | Broadcom Chipset | Switch Name
Tomahawk Series | Tomahawk | Dell Z9100F-ON, Dell S6100F-ON
Tomahawk Series | Tomahawk Plus | Dell S5048F-ON
Tomahawk Series | Tomahawk 2 | Arista DCS-7260CX3-64E, Dell Z9264F-ON
Table 6. Verified TCAM Rule Limits: Tomahawk Series of Switches
Limit | Match Mode | Broadcom Tomahawk | Broadcom Tomahawk Plus | Broadcom Tomahawk 2
IPv4 TCAM rules per switch (Verified Limit/Max Limit) | Full | 1015/1020 | 1015/1020 | 1015/1020
 | L3-L4 | 1015/1020 | 1015/1020 | 1015/1020
 | Offset | 1015/1020 | 1015/1020 | 1015/1020
IPv6 TCAM rules per switch (Verified Limit/Max Limit) | Full | 760/1020 | 760/1020 | 760/1020
 | L3-L4 | 760/1020 | 760/1020 | 760/1020
 | Offset | 760/1020 | 760/1020 | 760/1020
Match conditions per policy | Full IPv4/IPv6 | 1015/760 | 1015/760 | 1015/760
 | L3-L4 IPv4/IPv6 | 1015/760 | 1015/760 | 1015/760
 | L3-L4 Offset IPv4/IPv6 | 1015/760 | 1015/760 | 1015/760
Table 7. Supported Maverick Series of Switches
Broadcom Switch Series | Broadcom Chipset | Switch Name
Maverick Series | Maverick | Dell S4112F-ON
Table 8. Verified TCAM Rule Limits: Maverick Series of Switches
Limit | Match Mode | Broadcom Maverick Switches
IPv4 TCAM rules per switch (Verified Limit/Max Limit) | Full | 4088/4092
 | L3-L4 | 8100/8188
 | Offset | 4088/4092
IPv6 TCAM rules per switch (Verified Limit/Max Limit) | Full | 3060/4092
 | L3-L4 | 3060/8188
 | Offset | 3060/4092
Match conditions per policy | Full IPv4/IPv6 | 4088/3060
 | L3-L4 IPv4/IPv6 | 8100/3060
 | L3-L4 Offset IPv4/IPv6 | 4088/3060

The DMF 8.5 release supports the following EOS switches based on Broadcom Qumran chipsets.

Table 9. Verified Qumran-based Series of Switches
Broadcom Switch Series | Broadcom Chipset | Switch Name
Qumran-based Series | QumranAX | DCS-7020SR-24C2
Qumran-based Series | Qumran2C | DCS-7280CR3K-36S
Qumran-based Series | Qumran2A | DCS-7280SR3-40YC6
Note: The table above displays the switch models used to verify the scale and performance for each supported Broadcom chipset. Please refer to the DMF 8.5 Hardware Compatibility List for a complete list of supported switches.
Table 10. Verified TCAM Rule Limits: Qumran Series of Switches
Limit | Match Mode | Broadcom QumranAX Switches | Broadcom Qumran2C Switches | Broadcom Qumran2A Switches
IPv4 TCAM rules per switch (Verified Limit/Max Limit) | Full | 4084/4088 | 6140/6144 | 6140/6144
 | L3-L4 | 4084/4088 | 6140/6144 | 6140/6144
 | Offset | Not Supported | Not Supported | Not Supported
IPv6 TCAM rules per switch (Verified Limit/Max Limit) | Full | 4084/4088 | 6140/6144 | 6140/6144
 | L3-L4 | 4084/4088 | 6140/6144 | 6140/6144
 | Offset | Not Supported | Not Supported | Not Supported
Match conditions per policy | Full IPv4/IPv6 | 4084/4084 | 6140/6140 | 6140/6140
 | L3-L4 IPv4/IPv6 | 4084/4084 | 6140/6140 | 6140/6140
 | L3-L4 Offset IPv4/IPv6 | Not Supported | Not Supported | Not Supported
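
When planning policies, the verified limits above can be checked programmatically before configuration is pushed. A minimal Python sketch that restates the verified IPv4 Full-match limits from Tables 2, 4, 6, 8, and 10 as a plain dictionary (the chipset keys are illustrative labels, not DMF identifiers):

```python
# Verified IPv4 TCAM rule limits (Full match mode) restated from Tables 2, 4,
# 6, 8, and 10; the dictionary keys are illustrative, not DMF identifiers.
VERIFIED_IPV4_FULL = {
    "jericho": 6140, "jericho-plus": 6140, "jericho-2": 6140, "jericho-2c": 6140,
    "trident-2": 2040, "trident-2-plus": 8100, "trident-3": 3055,
    "tomahawk": 1015, "tomahawk-plus": 1015, "tomahawk-2": 1015,
    "maverick": 4088,
    "qumran-ax": 4084, "qumran-2c": 6140, "qumran-2a": 6140,
}

def fits(chipset: str, planned_rules: int) -> bool:
    """True if a planned IPv4 Full-match rule count stays within the verified limit."""
    return planned_rules <= VERIFIED_IPV4_FULL[chipset]

assert fits("trident-3", 3000)     # 3000 <= 3055
assert not fits("tomahawk", 2000)  # 2000 >  1015
```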

Port Channel Interface Limits

Table 11. Verified Port Channel Interface Limits on Trident/Tomahawk Series
 | Maximum Hardware/Software Limit | Verified Limits
Number of Port Channel Interfaces per Switch | 64 | 10
Number of Port Channel Member Interfaces | 32 | 32
Table 12. Verified Port Channel Interface Limits on Jericho Series
 | Maximum Hardware/Software Limit | Verified Limits
Number of Port Channel Interfaces per Switch | 1024 | 16
Number of Port Channel Member Interfaces | 32 | 32

Tunnel Interface Limits

Table 13. Verified VXLAN Tunnel Interface Limits on Trident/Tomahawk Series
 | Maximum Hardware/Software Limit | Verified Limits
VXLAN Rx Tunnels per Switch | 2000 | 2000
VXLAN Bidirectional / Tx Tunnels per Switch | Depends on available ports on the switch | 60
Note: DMF switches based on Trident 3 and Tomahawk 2 chipsets from Broadcom were used to verify the supported VXLAN tunnel scale values. Please refer to the DMF Hardware Compatibility Guide for details on the feature and supported switch platforms.
Table 14. Verified L2GRE Tunnel Interface Limits on Trident/Tomahawk Series
 | Maximum Hardware/Software Limit | Verified Limits
L2GRE Rx Tunnels per Switch | 2000 | 2000
L2GRE Bidirectional / Tx Tunnels per Switch | Depends on available ports on the switch | 60

Functional Limits

Table 15. Verified Functional Limits
Functionality | Verified Limits
Filter interfaces per switch | 128
Delivery interfaces per switch | 128
Services chained in a policy | 4
User-created policies per fabric (disable overlap to create more than 200 user policies) | 200
Maximum number of policies that can overlap | 10 (default is 4)
Maximum number of policies per fabric (user + dynamic policies) | 4000
Switches per fabric | 150
Filter interfaces per fabric | 1500
Delivery interfaces per fabric | 1000
Managed services per fabric | 40
Managed services per switch | 40
Number of Service Nodes per fabric | 5
Filter interfaces per policy per fabric | 1000
Connected devices per fabric | 100
IPv4 address groups | 170
IPv4 addresses per group | 20000
IPv6 address groups | 50
IPv6 addresses per group | 100
Maximum RTT between the active and standby Controllers, and between switches and Controllers | 300 ms
Maximum users | 500
Maximum groups | 500
Unmanaged service interfaces per switch | 44
Unmanaged services per switch | 22
Unmanaged service interfaces per fabric | 100
Unmanaged services per fabric | 50

Naming Conventions

Table 16. Naming Conventions
 | Minimum Length | Maximum Length | Allowed Pattern
Username | 1 | 255 | [a-zA-Z][-0-9a-zA-Z_]*
Password | 1 | 255 | [0-9a-zA-Z,./;[]<>?:{}|`~!@#$%^&*()_+-=]
Group Name | 1 | 255 | [a-zA-Z][-0-9a-zA-Z_]*
Filter Interface Name | 1 | 255 | [a-zA-Z][-.:0-9a-zA-Z_]*
Delivery Interface Name | 1 | 255 | [a-zA-Z][-.:0-9a-zA-Z_]*
Service Interface Name | 1 | 255 | [a-zA-Z][-.:0-9a-zA-Z_]*
Service Name | 1 | 255 | [a-zA-Z][-.:0-9a-zA-Z_]*
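
These rules are simple to enforce before pushing configuration. A small Python sketch (illustrative, not a DMF utility) that anchors the patterns from Table 16 and applies the 1-255 length bounds:

```python
# Illustrative validator for the naming rules in Table 16 (not a DMF tool).
import re

NAME_PATTERNS = {
    "username": r"[a-zA-Z][-0-9a-zA-Z_]*",
    "group": r"[a-zA-Z][-0-9a-zA-Z_]*",
    "interface": r"[a-zA-Z][-.:0-9a-zA-Z_]*",  # filter/delivery/service interfaces
}

def is_valid_name(kind: str, name: str) -> bool:
    """Check the 1-255 length bounds and the allowed pattern for a name kind."""
    if not 1 <= len(name) <= 255:
        return False
    return re.fullmatch(NAME_PATTERNS[kind], name) is not None

assert is_valid_name("interface", "tap-port.1:eth0")
assert not is_valid_name("username", "1admin")  # must start with a letter
```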

DMF Service Node Verified Scale Values

NetFlow Scale Values

Table 17. Verified NetFlow Scale Values
DMF Service Node: NetFlow | Verified Limits
Service Node Throughput per port
  (DCA-DM-SC, DCA-DM-SDL)
  • 10 Gbps for IMIX traffic.
  (DCA-DM-SEL)
  • 20 Gbps for IMIX traffic.
Max Packets processed per port
  (DCA-DM-SC)
  • 6.0 million pps per port when 1 port is used.
  • 5.5 million pps per port when 2 ports on the same NIC are used.
  (DCA-DM-SDL)
  • 5.5 million pps per port when 1 port is used.
  • 5.0 million pps per port when 4 ports on the same NIC are used.
  • 4.0 million pps per port when 16 ports are used.
  (DCA-DM-SEL)
  • 7.5 million pps per port when 1 port is used.
  • 7.0 million pps per port when 2 ports on the same NIC are used.
  • 6.0 million pps per port when 16 ports are used.
Expected NetFlow traffic out of each service node port
  • 300 Mbps
Max Number of Flows supported
  • 1 million per port of supported managed appliances.
  • 16 million across 16 ports of supported managed appliances.
Note: All executed test cases send 10 Gbps of traffic to supported 10G service node ports with 1 million flows.
Note: All executed test cases send 20 Gbps of traffic to the DCA-DM-SEL.

IPFIX Scale Values

Table 18. IPFIX Template Used
IPv4 Template | IPv6 Template
key destination-ipv4-address | key destination-ipv6-address
key destination-transport-port | key destination-transport-port
key dot1q-vlan-id | key dot1q-vlan-id
key source-ipv4-address | key source-ipv6-address
key source-transport-port | key source-transport-port
field flow-end-milliseconds | field flow-end-milliseconds
field flow-end-reason | field flow-end-reason
field flow-start-milliseconds | field flow-start-milliseconds
field maximum-ttl | field maximum-ttl
field minimum-ttl | field minimum-ttl
field packet-delta-count | field packet-delta-count
Note: All executed test cases send 10 Gbps of traffic to all supported 10G service node ports with 1 million flows.
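
To make the template concrete: packets sharing all "key" values belong to the same flow, while each "field" entry is a statistic maintained per flow. A minimal Python sketch of that aggregation for the IPv4 template (the record attributes mirror the Table 18 field names; this is an illustration, not Service Node code):

```python
# Illustrative IPFIX-style aggregation: "key" fields identify a flow, "field"
# entries are per-flow statistics (names mirror the Table 18 template).
from dataclasses import dataclass

@dataclass
class FlowRecord:
    flow_start_ms: int
    flow_end_ms: int
    packet_delta_count: int
    minimum_ttl: int
    maximum_ttl: int

flows: dict[tuple, FlowRecord] = {}

def account(ts_ms, src_ip, dst_ip, src_port, dst_port, vlan_id, ttl):
    # IPv4 template keys: source/destination address, transport ports, VLAN.
    key = (src_ip, dst_ip, src_port, dst_port, vlan_id)
    rec = flows.get(key)
    if rec is None:
        flows[key] = FlowRecord(ts_ms, ts_ms, 1, ttl, ttl)
    else:
        rec.flow_end_ms = ts_ms
        rec.packet_delta_count += 1
        rec.minimum_ttl = min(rec.minimum_ttl, ttl)
        rec.maximum_ttl = max(rec.maximum_ttl, ttl)
```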
Table 19. Verified IPFIX Scale Values
DMF Service Node: IPFIX | IPv4 Verified Limits | IPv6 Verified Limits
Service Node Throughput per port
  IPv4:
  • (DCA-DM-SC) 10 Gbps for IMIX traffic.
  • (DCA-DM-SEL) 20 Gbps for IMIX traffic.
  IPv6:
  • (DCA-DM-SC) 10 Gbps for IMIX traffic.
  • (DCA-DM-SEL) 11 Gbps for IMIX traffic.
Max Packets processed per port
  IPv4 (DCA-DM-SC):
  • 7.5 million pps per port when 1 port is used.
  • 7.0 million pps per port when 2 ports on the same NIC are used.
  IPv4 (DCA-DM-SEL):
  • 9.5 million pps per port when 1 port is used.
  • 8.5 million pps per port when 2 ports on the same NIC are used.
  • 7.0 million pps per port when 16 ports are used.
  IPv6 (DCA-DM-SC):
  • 6.4 million pps per port when 1 port is used.
  • 6.0 million pps per port when 2 ports on the same NIC are used.
  IPv6 (DCA-DM-SEL):
  • 7.5 million pps per port when 1 port is used.
  • 7.5 million pps per port when 2 ports on the same NIC are used.
  • 6.5 million pps per port when 16 ports are used.
Expected IPFIX traffic out of each service node port
  • IPv4: 300 Mbps.
  • IPv6: 500 Mbps.
Max Number of Flows tested per port (IPv4 and IPv6)
  (DCA-DM-SC)
  • 1 million per port.
  • 4 million when 4 ports are used.
  (DCA-DM-SEL)
  • 16 million when 16 ports are used.

Deduplication Verified Scale Values

Table 20. Verified Scale for Deduplication Managed Services
Managed Service: Deduplication

Maximum Packet Rate Processed
  One Service Node port:
  (DCA-DM-SC)
  • 2 ms window: 14 million pps.
  • 4, 6 ms window: 13 million pps.
  • 8 ms window: 11 million pps.
  (DCA-DM-SDL)
  • 2 ms window: 14 million pps.
  • 4, 6 ms window: 13 million pps.
  • 8 ms window: 11 million pps.
  (DCA-DM-SEL)
  • 2 ms window: 19 million pps.
  • 4, 6 ms window: 18 million pps.
  • 8 ms window: 16 million pps.
  4 Service Node ports:
  (DCA-DM-SC)
  • 2 ms window: 13 million pps per port when 4 ports are used.
  • 4, 6 ms window: 13 million pps per port when 4 ports are used.
  • 8 ms window: 11 million pps per port when 4 ports are used.
  (DCA-DM-SEL)
  • 2 ms window: 17.5 million pps per port when 2 ports on the same NIC are used.
  • 4, 6 ms window: 16.5 million pps per port when 2 ports on the same NIC are used.
  • 8 ms window: 15.5 million pps per port when 2 ports on the same NIC are used.
  16 Service Node ports:
  (DCA-DM-SDL)
  • 2, 4, 6, 8 ms window: 8 million pps.
  (DCA-DM-SEL)
  • 2, 4, 6, 8 ms window: 15.5 million unique pps.
Maximum Bandwidth by Service Node Port
  One Service Node port:
  (DCA-DM-SC) 10 Gbps for IMIX traffic.
  • 2 ms window: handles 10 Gbps traffic per port with average packet size > 70 bytes.
  • 4, 6 ms window: handles 10 Gbps traffic per port with average packet size > 76 bytes.
  • 8 ms window: handles 10 Gbps traffic per port with average packet size > 94 bytes.
  (DCA-DM-SEL) 20 Gbps for IMIX traffic.
  • 2, 4, 6, and 8 ms window: handles 20 Gbps traffic per port with average packet size > 70 bytes.
  4 Service Node ports:
  (DCA-DM-SC) 40 Gbps for IMIX traffic.
  • 2 ms window: handles 10 Gbps traffic per port with average packet size > 76 bytes.
  • 4, 6 ms window: handles 10 Gbps traffic per port with average packet size > 76 bytes.
  • 8 ms window: handles 10 Gbps traffic per port with average packet size > 94 bytes.
  (DCA-DM-SEL) 40 Gbps for IMIX traffic.
  • 2, 4, 6, and 8 ms window: handles 40 Gbps traffic per port with average packet size > 70 bytes.
  16 Service Node ports:
  (DCA-DM-SC) 160 Gbps for IMIX traffic.
  • Service node ports handle 10 Gbps traffic per port with average packet size > 210 bytes.
  (DCA-DM-SEL) 320 Gbps for IMIX traffic.
  • Service node ports handle 20 Gbps traffic per port with average packet size > 210 bytes.
Note: Tested for 100%, 50%, 20%, and 0% deduplication by sending 10 Gbps of traffic with different packet sizes.
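
Conceptually, window-based deduplication drops a packet when an identical packet was already seen within the configured window (2, 4, 6, or 8 ms above). A simplified Python sketch of the idea (illustrative only; it is not the Service Node implementation and does not evict stale entries):

```python
# Simplified time-window deduplication: a packet is a duplicate if an
# identical packet arrived within the last `window_ms` milliseconds.
import hashlib
import time

class WindowDeduplicator:
    def __init__(self, window_ms: int = 8):
        self.window_s = window_ms / 1000.0
        self.last_seen = {}  # packet digest -> last arrival time (seconds)

    def is_duplicate(self, packet: bytes, now: float | None = None) -> bool:
        now = time.monotonic() if now is None else now
        digest = hashlib.sha1(packet).digest()
        last = self.last_seen.get(digest)
        self.last_seen[digest] = now
        return last is not None and (now - last) <= self.window_s

dedup = WindowDeduplicator(window_ms=2)
assert not dedup.is_duplicate(b"pkt", now=0.000)  # first copy passes
assert dedup.is_duplicate(b"pkt", now=0.001)      # duplicate inside 2 ms
assert not dedup.is_duplicate(b"pkt", now=0.010)  # outside the window
```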

Header Stripping Verified Scale Values

Table 21. Header Stripping Verified Scale Values
Managed Service: Header Stripping

Maximum Packet Rate Processed
  One Service Node port:
  • (DCA-DM-SC) 14 million pps per port.
  • (DCA-DM-SDL) 12 million pps per port.
  • (DCA-DM-SEL) 29 million pps per port.
  4 Service Node ports:
  • (DCA-DM-SC) 14 million pps per port.
  • (DCA-DM-SDL) 8 million pps per port.
  • (DCA-DM-SEL) 29 million pps per port.
  16 Service Node ports:
  • (DCA-DM-SDL) 7.5 million pps per port.
  • (DCA-DM-SEL) 14.5 million pps per port.
Maximum Bandwidth by Service Node Port
  One Service Node port:
  • (DCA-DM-SC) 10 Gbps for IMIX traffic; handles 10 Gbps per port with average packet size > 70 bytes.
  • (DCA-DM-SEL) 20 Gbps for IMIX traffic; handles 20 Gbps per port with average packet size > 70 bytes.
  4 Service Node ports:
  • (DCA-DM-SC) 40 Gbps for IMIX traffic; handles 10 Gbps per port with average packet size > 70 bytes.
  • (DCA-DM-SEL) 40 Gbps for IMIX traffic; handles 20 Gbps per port with average packet size > 70 bytes.
  16 Service Node ports:
  • (DCA-DM-SC) 160 Gbps for IMIX traffic; handles 10 Gbps per port with average packet size > 160 bytes.
  • (DCA-DM-SEL) 320 Gbps for IMIX traffic; handles 20 Gbps per port with average packet size > 140 bytes.
Note: Tested VXLAN, MPLS, ERSPAN, and LISP encapsulated packets of different sizes at line rate.
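
Header stripping removes an outer encapsulation and forwards the inner frame to the tool. A small sketch of the VXLAN case using the scapy library (an illustration of the operation, not the Service Node implementation):

```python
# Illustrative VXLAN header stripping with scapy: return the inner Ethernet
# frame carried after the outer Ethernet/IP/UDP/VXLAN headers.
from scapy.layers.inet import IP, UDP
from scapy.layers.l2 import Ether
from scapy.layers.vxlan import VXLAN

def strip_vxlan(pkt):
    if pkt.haslayer(VXLAN):
        return pkt[VXLAN].payload  # everything after the VXLAN header
    return pkt                     # not encapsulated; deliver unchanged

# Example frame (the VNI and addresses are illustrative).
outer = (Ether() / IP(src="10.0.0.1", dst="10.0.0.2") / UDP(dport=4789) /
         VXLAN(vni=42) /
         Ether() / IP(src="192.168.1.1", dst="192.168.1.2") / UDP(dport=80))
inner = strip_vxlan(outer)  # inner Ether/IP/UDP frame
```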

Slicing, Masking and Pattern Matching Verified Scale Values

This section summarizes the verified scale values for DMF Service Node managed services.
  • Slicing
  • Masking
  • Pattern Matching
Table 23. Verified Scale for Packet Slicing as a Managed Service

Managed Service: Packet Slicing

Maximum Packet Rate Processed
  One Service Node port:
  • (DCA-DM-SC) 14 million pps per port.
  • (DCA-DM-SDL) 14 million pps per port.
  • (DCA-DM-SEL) 29.5 million pps per port.
  4 Service Node ports:
  • (DCA-DM-SC) 13 million pps per port.
  • (DCA-DM-SDL) 8 million pps per port.
  • (DCA-DM-SEL) 17.5 million pps per port.
  16 Service Node ports:
  • (DCA-DM-SDL) 8 million pps per port.
  • (DCA-DM-SEL) 17.5 million pps per port.
Maximum Bandwidth by Service Node
  One Service Node port:
  • (DCA-DM-SC) 10 Gbps for IMIX traffic; handles 10 Gbps per port with average packet size > 70 bytes.
  • (DCA-DM-SEL) 20 Gbps for IMIX traffic; handles 20 Gbps per port with average packet size > 130 bytes.
  4 Service Node ports:
  • (DCA-DM-SC) 40 Gbps for IMIX traffic; handles 10 Gbps per port with average packet size > 70 bytes.
  • (DCA-DM-SEL) 40 Gbps for IMIX traffic; handles 20 Gbps per port with average packet size > 130 bytes.
  16 Service Node ports:
  • (DCA-DM-SC) 160 Gbps for IMIX traffic; handles 10 Gbps per port with average packet size > 70 bytes.
  • (DCA-DM-SEL) 320 Gbps for IMIX traffic; handles 20 Gbps per port with average packet size > 130 bytes.
Note: Tested different packet sizes with line-rate traffic.
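
Packet slicing itself is a simple operation: each packet is truncated to a configured snap length so tools receive headers without full payloads. A one-function Python sketch (the snap length is an arbitrary illustrative value):

```python
# Illustrative packet slicing: keep at most `snap_len` bytes of each packet.
SNAP_LEN = 128  # illustrative snap length in bytes

def slice_packet(packet: bytes, snap_len: int = SNAP_LEN) -> bytes:
    return packet[:snap_len]
```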
Table 24. Verified Scale for Packet Masking as a Managed Service
Processing Rate / Supported Bandwidth | One, 4, or 16 Service Node Ports
Maximum Packet Rate Processed | Depends on the regex pattern:
  • DCA-DM-SC supports 40% of 10 Gbps traffic or more per port.
  • DCA-DM-SEL supports 31% of 20 Gbps traffic or more per port.
Maximum Bandwidth by Service Node Port | Depends on the regex pattern:
  • One Service Node port handles about 40% of 10 Gbps traffic or more.
  • To achieve 10 Gbps performance, use a LAG with 2 or more Service Node ports.

Table 25. Verified Scale for Pattern Matching as a Managed Service
Processing Rate / Supported Bandwidth | One, 4, or 16 Service Node Ports
Maximum Packet Rate Processed | Depends on the regex pattern:
  • One Service Node port handles about 50% of 10 Gbps traffic or more.
  • DCA-DM-SEL supports 36% of 20 Gbps traffic or more per port.
Maximum Bandwidth by Service Node Port | Depends on the regex pattern:
  • One Service Node port handles about 50% of 10 Gbps traffic or more.
  • To achieve 10 Gbps performance, use a LAG with 2 or more Service Node ports.

Note: The performance of packet masking or pattern matching depends on the packet length and the complexity of the regular expression.
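
As the note says, the regular expression drives the cost of masking and pattern matching. A Python sketch of payload masking (the pattern below is an illustrative card-number matcher, not a DMF built-in):

```python
# Illustrative regex-based payload masking: replace each match with fill
# bytes of the same length so the packet size is preserved.
import re

PATTERN = re.compile(rb"\b\d{4}(?:[ -]?\d{4}){3}\b")  # illustrative pattern

def mask_payload(payload: bytes, fill: bytes = b"X") -> bytes:
    return PATTERN.sub(lambda m: fill * len(m.group(0)), payload)

masked = mask_payload(b"card=4111 1111 1111 1111;")  # digits become X's
```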

Analytics Node Verified Scale Values

This section displays the tested scalability values for the Analytics Node.

Table 26. Analytics Node Scale Performance Results
Protocol | Single Node Cluster | Three Node Cluster | Five Node Cluster
ARP | 20,000 pkts/sec | 60,000 pkts/sec | 100,000 pkts/sec
DHCP | 15,000 pkts/sec | 30,000 pkts/sec | 60,000 pkts/sec
ICMP | 15,000 pkts/sec | 40,000 pkts/sec | 80,000 pkts/sec
DNS | 8,000 pkts/sec | 20,000 pkts/sec | 32,000 pkts/sec
TCPFlow | 6,000 flows/sec | 18,000 flows/sec | 30,000 flows/sec
sFlow | 12,000 flows/sec | 30,000 flows/sec | 70,000 flows/sec
NetFlow v5 without optimization | 12,000 flows/sec | 32,000 flows/sec | 60,000 flows/sec
IPFIX without optimization | 9,000 flows/sec | 27,000 flows/sec | 45,000 flows/sec
NetFlow v9 without optimization | 9,000 flows/sec | 27,000 flows/sec | 45,000 flows/sec
All of the above cases combined | ARP: 800 pkts/sec; DHCP: 500 pkts/sec; ICMP: 300 pkts/sec; DNS: 3,000 pkts/sec; TCPFlow: 300 flows/sec; sFlow: 3,000 flows/sec; NetFlow v5: 5,000 flows/sec | ARP: 1,800 pkts/sec; DHCP: 900 pkts/sec; ICMP: 1,200 pkts/sec; DNS: 6,000 pkts/sec; TCPFlow: 400 flows/sec; sFlow: 6,000 flows/sec; NetFlow v5: 10,000 flows/sec | ARP: 2,000 pkts/sec; DHCP: 1,200 pkts/sec; ICMP: 2,000 pkts/sec; DNS: 8,000 pkts/sec; TCPFlow: 500 flows/sec; sFlow: 8,000 flows/sec; NetFlow v5: 13,000 flows/sec

Note: The above test measurements were performed at 60% average CPU utilization.

Recorder Node Verified Scale Values

This section displays the tested performance numbers for the Recorder Node with no-drop packet capture characteristics.

Table 27. Maximum packets recorded on a DCA-DM-RA3 Recorder Node
Packet Size (Bytes) | Packets per Second | Maximum Bandwidth (Gbps)
1500 or greater | ~1.98 million | 24
512 or greater | ~4.7 million | 20
IMIX | ~6.3 million | 19
256 or greater | ~8.6 million | 19
Note: IMIX here is a 7:4:1 distribution of 64-, 570-, and 1518-byte Ethernet-encapsulated packets, which yields an average packet size of approximately 353 bytes.
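
The stated average follows directly from the 7:4:1 mix:

```python
# Check of the IMIX average packet size stated in the note above.
sizes, weights = (64, 570, 1518), (7, 4, 1)
avg = sum(s * w for s, w in zip(sizes, weights)) / sum(weights)
print(avg)  # 353.83..., i.e. the ~353-byte average cited above
```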