DANZ Monitoring Fabric Verified Scale
This document describes the DANZ Monitoring Fabric (DMF) multi-dimension scale test performed with DMF Controllers.
Overview
Network visibility is a growing concern in data centers due to increasing virtualization, service-oriented architecture, and cloud-based IT. Traditional monitoring infrastructure, however, provides only limited visibility into network traffic, and expensive monitoring tools, including application performance monitoring tools, Intrusion Detection Systems (IDS), and forensic tools, are often underutilized because the traffic delivered to them is poorly managed.
DANZ Monitoring Fabric (DMF) is an advanced network monitoring solution that dramatically alleviates this problem. DMF leverages high-performance bare metal Ethernet switches to provide the most scalable, flexible, and cost-effective monitoring fabric. Using an SDN-centric architecture, DMF taps traffic anywhere in the network and delivers it to any troubleshooting, network monitoring, application performance monitoring, or security tool.
At its core is the centralized DMF Controller software that converts user-defined policies into highly optimized flows programmed into the forwarding ASICs of bare metal Ethernet switches running the production-grade switch operating system from Arista Networks. DMF delivers unprecedented network visibility with bare-metal economics, getting the right traffic to the right tool at the right time. With its open and published Application Programming Interfaces (APIs), the DMF Controller allows customers to deploy integrated network monitoring solutions along with the DMF.
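For example, a monitoring team could poll the Controller programmatically and reconcile the configured policies against the verified limits in this document. The sketch below is purely illustrative: the host, port, endpoint path, and cookie name are hypothetical placeholders, not the documented DMF API (it also assumes the third-party requests library); consult the DMF REST API documentation for the real interface.

```python
import requests

# Hypothetical sketch only: the host, port, endpoint path, and cookie name
# below are illustrative placeholders, not the documented DMF Controller API.
CONTROLLER = "https://dmf-controller.example.com:8443"  # placeholder address
SESSION_COOKIE = "session-token-obtained-at-login"      # placeholder credential

resp = requests.get(
    f"{CONTROLLER}/api/v1/policies",                    # placeholder endpoint
    cookies={"session_cookie": SESSION_COOKIE},
    timeout=10,
)
resp.raise_for_status()

# Print each policy's name, e.g. to reconcile against the per-fabric
# policy limits listed later in this document.
for policy in resp.json():
    print(policy["name"])
```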
Note: The scale and performance numbers in this document were obtained with a DMF hardware Controller.
DMF Verified Scale Values
TCAM Rule Limits
The following tables contain the data for the scalability limits tested and verified for the DANZ Monitoring Fabric (DMF).
Table 1. Verified TCAM Rule Limits
Note: The numbers in the following table are the maximum Ingress_flow2 TCAM values for the switch series listed. The achievable Ingress_flow2 TCAM scale depends on the features configured on the Controller and can be less than the maximum value.
Ingress_flow2 TCAM scale

| | Match Mode | 7280R Series Switches | 7280R2 Series Switches | 7280R3 Series Switches, 7289R3 Chassis Switch |
|---|---|---|---|---|
| IPv4 TCAM rules per switch (Verified Limit/Max Limit) | Full | 6140/6144 | 6140/6144 | 8180/8188 |
| | L3-L4 | 6140/6144 | 6140/6144 | 8180/8188 |
| | Offset | 6140/6144 | 6140/6144 | 8180/8188 |
| IPv6 TCAM rules per switch (Verified Limit/Max Limit) | Full | 6140/6144 | 6140/6144 | 8180/8188 |
| | L3-L4 | 6140/6144 | 6140/6144 | 8180/8188 |
| | Offset | 6140/6144 | 6140/6144 | 8180/8188 |
| Match conditions per policy | Full IPv4/IPv6 | 6140/6140 | 6140/6140 | 8180/8180 |
| | L3-L4 IPv4/IPv6 | 6140/6140 | 6140/6140 | 8180/8180 |
| | L3-L4 Offset IPv4/IPv6 | 6140/6140 | 6140/6140 | 8180/8180 |

Important: The 7280R3 column does not apply to the 7280R3 switches covered in Table 3.

| | 7280R Series Switches | 7280R2 Series Switches | 7280R3 Series Switches, 7289R3 Chassis Switch |
|---|---|---|---|
| Ingress_flow1 max TCAM scale | 1024 | 1024 | 1024 |
| Egress_flow1 max TCAM scale | 1024 | 1024 | 1024 |
Table 2. Verified TCAM Rule Limits
Note: The numbers in the following table are the maximum Ingress_flow2 TCAM values for the switch series listed. The achievable TCAM scale depends on the features configured on the Controller and can be less than the maximum value.
Ingress_flow2 TCAM scale

Note: The verified TCAM rule limit applies to the whole chassis, not per line card.

| | Match Mode | 7800R3 Series Switches |
|---|---|---|
| IPv4 TCAM rules per switch (Verified Limit/Max Limit) | Full | 8180/8188 |
| | L3-L4 | 8180/8188 |
| | Offset | 8180/8188 |
| IPv6 TCAM rules per switch (Verified Limit/Max Limit) | Full | 8180/8188 |
| | L3-L4 | 8180/8188 |
| | Offset | 8180/8188 |
| Match conditions per policy | Full IPv4/IPv6 | 8180/8180 |
| | L3-L4 IPv4/IPv6 | 8180/8180 |
| | L3-L4 Offset IPv4/IPv6 | 8180/8180 |

| | 7800R3 Series Switches |
|---|---|
| Ingress_flow1 max TCAM scale | 1024 × number of line cards |
| Egress_flow1 max TCAM scale | 1024 × number of line cards |
Table 3. Verified TCAM Rule Limits
Note: The numbers in the following table are the maximum Ingress_flow2 TCAM values for the switch series listed. The achievable TCAM scale depends on the features configured on the Controller and can be less than the maximum value.
Ingress_flow2 TCAM scale

| | Match Mode | 7020SR/TR Series Switches | 7280SR3E-40YC6 Series Switches, 7280SR3-40YC6 Switch, 7280TR3 Series Switches |
|---|---|---|---|
| IPv4 TCAM rules per switch (Verified Limit/Max Limit) | Full | 4084/4088 | 4084/4088 |
| | L3-L4 | 4084/4088 | 4084/4088 |
| | Offset | 4084/4088 | 4084/4088 |
| IPv6 TCAM rules per switch (Verified Limit/Max Limit) | Full | 4084/4088 | 4084/4088 |
| | L3-L4 | 4084/4088 | 4084/4088 |
| | Offset | 4084/4088 | 4084/4088 |
| Match conditions per policy | Full IPv4/IPv6 | 4084/4084 | 4084/4084 |
| | L3-L4 IPv4/IPv6 | 4084/4084 | 4084/4084 |
| | L3-L4 Offset IPv4/IPv6 | 4084/4084 | 4084/4084 |

| | 7020SR/TR Series Switches | 7280SR3E-40YC6 Series Switches, 7280SR3-40YC6 Switch, 7280TR3 Series Switches |
|---|---|---|
| Ingress_flow1 max TCAM scale | 512 | 512 |
| Egress_flow1 max TCAM scale | 512 | 512 |
Table 4. Verified TCAM Rule Limits
Ingress_flow2 TCAM scale

| | Match Mode | 7050X3 Series Switches |
|---|---|---|
| IPv4 TCAM rules per switch (Verified Limit/Max Limit) | Full | 3055/3068 |
| | L3-L4 | 3055/3068 |
| | Offset | 3055/3068 |
| IPv6 TCAM rules per switch (Verified Limit/Max Limit) | Full | 2300/3068 |
| | L3-L4 | 2300/3068 |
| | Offset | 2300/3068 |
| Match conditions per policy | Full IPv4/IPv6 | 3055/2300 |
| | L3-L4 IPv4/IPv6 | 3055/2300 |
| | L3-L4 Offset IPv4/IPv6 | 3055/2300 |

| | 7050X3 Series Switches |
|---|---|
| Ingress_flow1 max TCAM scale | 1024 |
| Egress_flow1 max TCAM scale | 1024 |
Table 5. Verified TCAM Rule Limits
Ingress_flow2 TCAM scale

| | Match Mode | 7260X3 Series Switches |
|---|---|---|
| IPv4 TCAM rules per switch (Verified Limit/Max Limit) | Full | 1015/1020 |
| | L3-L4 | 1015/1020 |
| | Offset | 1015/1020 |
| IPv6 TCAM rules per switch (Verified Limit/Max Limit) | Full | 760/1020 |
| | L3-L4 | 760/1020 |
| | Offset | 760/1020 |
| Match conditions per policy | Full IPv4/IPv6 | 1015/760 |
| | L3-L4 IPv4/IPv6 | 1015/760 |
| | L3-L4 Offset IPv4/IPv6 | 1015/760 |

| | 7260X3 Series Switches |
|---|---|
| Ingress_flow1 max TCAM scale | 1024 |
| Egress_flow1 max TCAM scale | 512 |
Table 6. Verified TCAM Rule Limits
Ingress_flow2 TCAM scale

| | Match Mode | 7050DX4 Series Switches, 7050PX4-32S Switch | 7050CX4 Series Switches, 7050SDX4 Series Switches, 7050SPX4 Switch |
|---|---|---|---|
| IPv4 TCAM rules per switch (Verified Limit/Max Limit) | Full | 4092/4095 | 6140/6143 |
| | L3-L4 | 4092/4095 | 6140/6143 |
| | Offset | 2044/2047 | 3068/3071 |
| IPv6 TCAM rules per switch (Verified Limit/Max Limit) | Full | 2044/2047 | 3068/3071 |
| | L3-L4 | 2044/2047 | 3068/3071 |
| | Offset | 2044/2047 | 3068/3071 |
| Match conditions per policy | Full IPv4/IPv6 | 4092/2044 | 6140/3068 |
| | L3-L4 IPv4/IPv6 | 4092/2044 | 6140/3068 |
| | L3-L4 Offset IPv4/IPv6 | 2044/2044 | 3068/3068 |

| | 7050DX4 Series Switches, 7050PX4-32S Switch | 7050CX4 Series Switches, 7050SDX4 Series Switches, 7050SPX4 Switch |
|---|---|---|
| Ingress_flow1 max TCAM scale | N/A | N/A |
| Egress_flow1 max TCAM scale | N/A | N/A |
Port Channel Interface Limits
Table 7. Verified Port Channel Interface Limits on Arista 7050X3 and 7260X3 Series Switches
| | Maximum Hardware/Software Limit | Verified Limits |
|---|---|---|
| Number of Port Channel Interfaces per Switch | 64 | 10 |
| Number of Port Channel Member Interfaces | 32 | 32 |
Table 8. Verified Port Channel Interface Limits on Arista 7280R, 7280R2, 7280R3 Series of Switches
| | Maximum Hardware/Software Limit | Verified Limits |
|---|---|---|
| Number of Port Channel Interfaces per Switch | 1024 | 16 |
| Number of Port Channel Member Interfaces | 32 | 32 |
Tunnel Interface Limits
The following VXLAN and L2GRE tunnel interface limits were verified on Arista 7050X3 and 7260X3 Series switches.
Table 9. Verified VXLAN Tunnel Interface Limits
| | Maximum Hardware/Software Limit | Verified Limits |
|---|---|---|
| VXLAN Rx Tunnels per Switch | 2000 | 2000 |
| VXLAN Bidirectional / Tx Tunnels per Switch | Depends on available ports on the switch [1] | 60 |
Table 10. Verified L2GRE Tunnel Interface Limits
| | Maximum Hardware/Software Limit | Verified Limits |
|---|---|---|
| L2GRE Rx Tunnels per Switch | 2000 | 2000 |
| L2GRE Bidirectional / Tx Tunnels per Switch | Depends on available ports on the switch [1] | 60 |
Functional Limits
Table 11. Verified Functional Limits
| Functionality | Verified Limits |
|---|---|
| Filter interfaces per switch | 128 |
| Delivery interfaces per switch | 128 |
| Services chained in a policy | 4 |
| User-created policies per fabric (disable overlap to create more than 200 user policies) | 200 |
| Maximum number of policies that can overlap | 10 (default is 4) |
| Maximum number of policies per fabric (user + dynamic policies) | 4000 |
| Switches per fabric | 150 |
| Filter interfaces per fabric | 1500 |
| Delivery interfaces per fabric | 1000 |
| Managed services per fabric | 40 |
| Managed services per switch | 40 |
| Number of Service Nodes per fabric | 5 |
| Filter interfaces per policy per fabric | 1000 |
| Connected devices per fabric | 100 |
| IPv4 address groups | 170 |
| IPv4 addresses per group | 20,000 |
| IPv6 address groups | 50 |
| IPv6 addresses per group | 100 |
| Maximum RTT between active and standby Controllers, and between switches and Controllers | 300 ms |
| Maximum users | 500 |
| Maximum groups | 500 |
| Unmanaged service interfaces per switch | 44 |
| Unmanaged services per switch | 22 |
| Unmanaged service interfaces per fabric | 100 |
| Unmanaged services per fabric | 50 |
Naming Conventions
Table 12. Naming Conventions
| | Minimum Length | Maximum Length | Allowed Pattern |
|---|---|---|---|
| Username | 1 | 255 | [a-zA-Z][-0-9a-zA-Z_]* |
| Password | 1 | 255 | [0-9a-zA-Z,./;[]<>?:{}\|`~!@#$%^&*()_+-=] |
| Group Name | 1 | 255 | [a-zA-Z][-0-9a-zA-Z_]* |
| Filter Interface Name | 1 | 255 | [a-zA-Z][-.:0-9a-zA-Z_]* |
| Delivery Interface Name | 1 | 255 | [a-zA-Z][-.:0-9a-zA-Z_]* |
| Service Interface Name | 1 | 255 | [a-zA-Z][-.:0-9a-zA-Z_]* |
| Service Name | 1 | 255 | [a-zA-Z][-.:0-9a-zA-Z_]* |
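As an illustration, these rules can be enforced before configuration is pushed to the Controller. The sketch below is a minimal validator using the username and interface-name patterns from Table 12; the function and dictionary names are ours, not part of DMF.

```python
import re

# Minimal validator for the rules in Table 12; names here are illustrative.
NAME_PATTERNS = {
    "username": r"[a-zA-Z][-0-9a-zA-Z_]*",
    "group": r"[a-zA-Z][-0-9a-zA-Z_]*",
    "interface": r"[a-zA-Z][-.:0-9a-zA-Z_]*",  # filter/delivery/service names
}

def is_valid_name(kind: str, name: str) -> bool:
    """Enforce the 1-255 length bound and the allowed pattern."""
    if not 1 <= len(name) <= 255:
        return False
    return re.fullmatch(NAME_PATTERNS[kind], name) is not None

assert is_valid_name("username", "admin_1")
assert not is_valid_name("username", "1admin")        # must start with a letter
assert is_valid_name("interface", "tap-rack1.eth0:1")
```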
DMF Service Node Verified Scale Values
NetFlow Scale Values
Table 13. Verified NetFlow Scale Values
Service Node throughput per port [2]:
- DCA-DM-SC, DCA-DM-SDL: 10 Gbps for IMIX traffic.
- DCA-DM-SEL: 20 Gbps for IMIX traffic.

Maximum packets processed per port:
- DCA-DM-SC [3]: 6.0 million pps per port when 1 port is used; 5.5 million pps per port when 2 ports on the same NIC are used.
- DCA-DM-SDL [4]: 5.5 million pps per port when 1 port is used; 5.0 million pps per port when 4 ports on the same NIC are used; 4.0 million pps per port when 16 ports are used.
- DCA-DM-SEL [5]: 7.5 million pps per port when 1 port is used; 7.0 million pps per port when 2 ports on the same NIC are used; 6.0 million pps per port when 16 ports are used.

Expected NetFlow traffic out of each service node port:
- 300 Mbps [6]

Maximum number of flows supported:
- 1 million per port of supported managed appliances.
- 16 million per 16 ports of supported managed appliances.
Note: All test cases send 10 Gbps of traffic, with 1 million flows, to the supported 10G service node ports.
Note: All test cases send 20 Gbps of traffic to the DCA-DM-SEL.
IPFIX Scale Values
Table 14. IPFIX Template Used
| IPv4 Template | IPv6 Template |
|---|---|
| key destination-ipv4-address | key destination-ipv6-address |
| key destination-transport-port | key destination-transport-port |
| key source-transport-port | key source-transport-port |
| field flow-end-milliseconds | field flow-end-milliseconds |
| field flow-start-milliseconds | field flow-start-milliseconds |
Note: All test cases send 10 Gbps of traffic, with 1 million flows, to all supported 10G service node ports.
Table 15. Verified IPFIX Scale Values
Service Node throughput per port [7]:
- IPv4: DCA-DM-SC: 10 Gbps for IMIX traffic; DCA-DM-SEL: 20 Gbps for IMIX traffic.
- IPv6: DCA-DM-SC: 10 Gbps for IMIX traffic; DCA-DM-SEL: 11 Gbps for IMIX traffic.

Maximum packets processed per port:
- IPv4, DCA-DM-SC [8]: 7.5 million pps per port when 1 port is used; 7.0 million pps per port when 2 ports on the same NIC are used.
- IPv4, DCA-DM-SEL [9]: 9.5 million pps per port when 1 port is used; 8.5 million pps per port when 2 ports on the same NIC are used; 7.0 million pps per port when 16 ports are used.
- IPv6, DCA-DM-SC [8]: 6.4 million pps per port when 1 port is used; 6.0 million pps per port when 2 ports on the same NIC are used.
- IPv6, DCA-DM-SEL [9]: 7.5 million pps per port when 1 port is used; 7.5 million pps per port when 2 ports on the same NIC are used; 6.5 million pps per port when 16 ports are used.

Expected IPFIX traffic out of each service node port:
- IPv4: 300 Mbps [10]
- IPv6: 500 Mbps [10]

Maximum number of flows tested per port (IPv4 and IPv6):
- DCA-DM-SC: 1 million per port; 4 million when 4 ports are used.
- DCA-DM-SEL: 16 million when 16 ports are used.
Deduplication Verified Scale Values
Table 16. Verified Scale for Deduplication Managed Services
Deduplication maximum packet rate processed:
- One service node port:
  - DCA-DM-SC: 2 ms window: 14 million pps; 4 and 6 ms windows: 13 million pps; 8 ms window: 11 million pps.
  - DCA-DM-SDL: 2 ms window: 14 million pps; 4 and 6 ms windows: 13 million pps; 8 ms window: 11 million pps.
  - DCA-DM-SEL: 2 ms window: 19 million pps; 4 and 6 ms windows: 18 million pps; 8 ms window: 16 million pps.
- 4 service node ports:
  - DCA-DM-SC: 2 ms window: 13 million pps per port when 4 ports are used; 4 and 6 ms windows: 13 million pps per port; 8 ms window: 11 million pps per port.
  - DCA-DM-SEL [11]: 2 ms window: 17.5 million pps per port when 2 ports on the same NIC are used; 4 and 6 ms windows: 16.5 million pps per port; 8 ms window: 15.5 million pps per port.
- 16 service node ports:
  - DCA-DM-SDL: 2, 4, 6, and 8 ms windows: 8 million pps.
  - DCA-DM-SEL: 2, 4, 6, and 8 ms windows: 15.5 million unique pps.

Deduplication maximum bandwidth per service node port:
- One service node port:
  - DCA-DM-SC: 10 Gbps for IMIX traffic. 2 ms window: handles 10 Gbps of traffic per port with an average packet size > 70 bytes; 4 and 6 ms windows: > 76 bytes; 8 ms window: > 94 bytes.
  - DCA-DM-SEL: 20 Gbps for IMIX traffic. 2, 4, 6, and 8 ms windows: handles 20 Gbps of traffic per port with an average packet size > 70 bytes.
- 4 service node ports:
  - DCA-DM-SC: 40 Gbps for IMIX traffic. 2, 4, and 6 ms windows: handles 10 Gbps of traffic per port with an average packet size > 76 bytes; 8 ms window: > 94 bytes.
  - DCA-DM-SEL [13]: 40 Gbps for IMIX traffic. 2, 4, 6, and 8 ms windows: handles 20 Gbps of traffic per port with an average packet size > 70 bytes.
- 16 service node ports:
  - DCA-DM-SC: 160 Gbps for IMIX traffic; service node ports handle 10 Gbps of traffic per port with an average packet size > 210 bytes.
  - DCA-DM-SEL [13]: 320 Gbps for IMIX traffic; service node ports handle 20 Gbps of traffic per port with an average packet size > 210 bytes.
Note: Tested for 100%, 50%, 20%, and 0% deduplication by sending 10 Gbps of traffic with different packet sizes.
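To make the window parameter concrete, the toy sketch below illustrates what a time-window deduplicator does: a packet is dropped only if an identical packet was already seen within the configured window (2 to 8 ms in the table above). This is a conceptual illustration of the technique, not DMF's data-plane implementation.

```python
import hashlib
from collections import OrderedDict

# Toy illustration of windowed deduplication (conceptual only, not DMF's
# data-plane implementation): a packet is a duplicate if an identical packet
# was already seen within the configured window (2-8 ms in Table 16).
class WindowDeduplicator:
    def __init__(self, window_ms: float):
        self.window_s = window_ms / 1000.0
        self.seen = OrderedDict()  # packet digest -> last-seen timestamp (s)

    def is_duplicate(self, packet: bytes, now: float) -> bool:
        # Evict digests whose last sighting fell outside the window.
        while self.seen:
            oldest, ts = next(iter(self.seen.items()))
            if now - ts > self.window_s:
                self.seen.popitem(last=False)
            else:
                break
        digest = hashlib.sha256(packet).digest()
        dup = digest in self.seen
        self.seen[digest] = now           # refresh the sighting time
        self.seen.move_to_end(digest)
        return dup

dedup = WindowDeduplicator(window_ms=2)
assert not dedup.is_duplicate(b"pkt-A", now=0.000)
assert dedup.is_duplicate(b"pkt-A", now=0.001)      # 1 ms later: duplicate
assert not dedup.is_duplicate(b"pkt-A", now=0.010)  # outside the 2 ms window
```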
Slicing, Masking and Pattern Matching Verified Scale Values
This section summarizes the verified scale values for DMF Service Node managed services.
- Slicing
- Masking
- Pattern Matching
Table 19. Verified Scale for Packet Slicing as a Managed Service
Processing rate and supported bandwidth 18 |
One Service Node Port |
4 Service Node Ports |
16 Service Node Ports |
Maximum Packet Rate Processed |
(DCA-DM-SEL)
- 29.5 million pps per port
|
(DCA-DM-SEL)
- 17.5 million pps per port19
|
(DCA-DM-SEL)
- 17.5 million pps per port.
|
Maximum Bandwidth by Service Node |
(DCA-DM-SC)
- 10 Gbps for IMIX traffic. It handles 10Gbps traffic per port with average packet size > 70 bytes.
(DCA-DM-SEL)
- 20 Gbps for IMIX traffic. It handles 20Gbps traffic per port with average packet size > 130 bytes.
|
(DCA-DM-SC)
- 40 Gbps for IMIX traffic. It handles 10Gbps traffic per port with average packet size > 70 bytes.
(DCA-DM-SEL)
- 40 Gbps for IMIX traffic. It handles 20 Gbps traffic per port with average packet size > 130 bytes.19
|
(DCA-DM-SC)
- 160 Gbps for IMIX traffic. It handles 10Gbps traffic per port with average packet size > 70 bytes.
(DCA-DM-SEL)
- 320 Gbps for IMIX traffic. It handles 20 Gbps traffic per port with average packet size > 130 bytes.
. |
Note: Tested with different packet sizes at line-rate traffic.
Table 20. Verified Scale for Packet Masking as a Managed Service
Processing rate/bandwidth supported 20 |
One Service Node Port |
4 Service Node Ports |
16 Service Node Ports |
Maximum Packet Rate Processed |
Depending on regex pattern
DCA-DM-SC supports 40%19 of 10 Gbps traffic or more per port.
DCA-DM-SEL supports 31%21 of 20 Gbps 22 traffic or more per port.
|
Maximum Bandwidth by Service Node Port |
Depending on regex pattern
One Service Node port handles about 40% of 10 Gbps traffic or more.
To get 10 Gbps performance, use LAG with 2 or more Service Node ports.
|
Table 21. Verified Scale for Pattern Matching as a Managed Service
Processing rate/bandwidth supported 23 |
One Service Node Port |
4 Service Node Ports |
16 Service Node Ports |
Maximum Packet Rate Processed |
Depending on regex pattern
One Service Node port handles about 50%19 of 10 Gbps traffic or more.
DCA-DM-SEL supports 36%21 of 20 Gbps22 traffic or more per port.
|
Maximum Bandwidth by Service Node Port |
Depending on regex pattern
One Service Node port handles about 50% of 10 Gbps traffic or more.
To get 10 Gbps performance, use LAG with 2 or more Service Node ports.
|
Note: The performance of packet masking and pattern matching depends on the packet length and the complexity of the regular expression.
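To illustrate the kind of work these services perform, the sketch below applies the Social Security number pattern cited in the footnotes (\d{3}-\d{2}-\d{4}) to a payload buffer. It is a conceptual example, not DMF's implementation, but it shows why regex complexity, packet length, and match position drive the percentages above.

```python
import re

# Conceptual sketch only (not DMF's implementation): mask every substring
# matching the Social Security number pattern cited in the footnotes.
SSN = re.compile(rb"\d{3}-\d{2}-\d{4}")

def mask_payload(payload: bytes, fill: bytes = b"X") -> bytes:
    """Replace each match with an equal-length run of the fill byte."""
    return SSN.sub(lambda m: fill * len(m.group()), payload)

print(mask_payload(b"user=jdoe ssn=123-45-6789 end"))
# b'user=jdoe ssn=XXXXXXXXXXX end'
```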
Session Slice Scale Values
This section summarizes the verified scale values for TCP and UDP session-slicing configured as a managed service action.
Session-Slice Scale Values for UDP
| Service Node Ports | IPv4 UDP Sessions | IPv6 UDP Sessions | IPv4/IPv6 UDP Sessions |
|---|---|---|---|
| One | 524,000 max sessions | 524,000 max sessions | 1 million max sessions |
| Four | 2 million max sessions | 2 million max sessions | 4 million max sessions |
Session-Slice Scale Values for TCP
| Service Node Ports | IPv4 TCP Sessions | IPv6 TCP Sessions | IPv4/IPv6 TCP Sessions |
|---|---|---|---|
| One | 524,000 max sessions | 524,000 max sessions | 1 million max sessions |
| Four | 2 million max sessions | 2 million max sessions | 4 million max sessions |
Each service node port supports a maximum of 524,000 sessions for each traffic type (TCP, UDP, TCP6, UDP6). With mixed traffic (TCP, TCP6, UDP, UDP6), each service node port supports a maximum of 2 million sessions.
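As a conceptual illustration of the per-session state these limits bound, the sketch below tracks 5-tuples and forwards only the first N packets of each session. The packet threshold and the pass-through behavior when the table is full are arbitrary placeholder choices, not DMF's implementation.

```python
from collections import defaultdict

# Conceptual sketch of session slicing (not DMF's implementation): forward
# only the first N packets of each TCP/UDP session, identified by 5-tuple.
# The per-port session ceilings in the tables above bound how many 5-tuples
# such a state table can track.
FlowKey = tuple  # (proto, src_ip, src_port, dst_ip, dst_port)

class SessionSlicer:
    def __init__(self, max_packets_per_session: int, max_sessions: int = 524_000):
        self.limit = max_packets_per_session
        self.max_sessions = max_sessions
        self.counts: dict[FlowKey, int] = defaultdict(int)

    def forward(self, key: FlowKey) -> bool:
        if key not in self.counts and len(self.counts) >= self.max_sessions:
            return True  # placeholder choice: table full, pass through unsliced
        self.counts[key] += 1
        return self.counts[key] <= self.limit

slicer = SessionSlicer(max_packets_per_session=2)
key = ("tcp", "10.0.0.1", 1234, "10.0.0.2", 80)
assert [slicer.forward(key) for _ in range(4)] == [True, True, False, False]
```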
Analytics Node Verified Scale Values
This section displays the tested scalability values for the Analytics Node.
Table 22. Analytics Node Scale Performance Results
| Protocol | Single Node Cluster | Three Node Cluster | Five Node Cluster |
|---|---|---|---|
| ARP | 20,000 pkts/sec | 60,000 pkts/sec | 100,000 pkts/sec |
| DHCP | 15,000 pkts/sec | 30,000 pkts/sec | 60,000 pkts/sec |
| ICMP | 15,000 pkts/sec | 40,000 pkts/sec | 80,000 pkts/sec |
| DNS | 8,000 pkts/sec | 20,000 pkts/sec | 32,000 pkts/sec |
| TCPFlow | 6,000 flows/sec | 18,000 flows/sec | 30,000 flows/sec |
| sFlow®* | 12,000 flows/sec | 30,000 flows/sec | 70,000 flows/sec |
| NetFlow v5 without optimization [25] | 12,000 flows/sec | 32,000 flows/sec | 60,000 flows/sec |
| IPFIX without optimization [25] | 9,000 flows/sec | 27,000 flows/sec | 45,000 flows/sec |
| NetFlow v9 without optimization [25] | 9,000 flows/sec | 27,000 flows/sec | 45,000 flows/sec |

All of the above cases combined [26]:
- Single node cluster: ARP 800 pkts/sec; DHCP 500 pkts/sec; ICMP 300 pkts/sec; DNS 3,000 pkts/sec; TCPFlow 300 flows/sec; sFlow 3,000 flows/sec; NetFlow v5 5,000 flows/sec.
- Three node cluster: ARP 1,800 pkts/sec; DHCP 900 pkts/sec; ICMP 1,200 pkts/sec; DNS 6,000 pkts/sec; TCPFlow 400 flows/sec; sFlow 6,000 flows/sec; NetFlow v5 10,000 flows/sec.
- Five node cluster: ARP 2,000 pkts/sec; DHCP 1,200 pkts/sec; ICMP 2,000 pkts/sec; DNS 8,000 pkts/sec; TCPFlow 500 flows/sec; sFlow 8,000 flows/sec; NetFlow v5 13,000 flows/sec.
Note: The above test measurements were performed at 60% average CPU utilization.
Recorder Node Verified Scale Values
This section displays the tested performance numbers for the Recorder Node with no-drop packet capture characteristics.
Table 23. Maximum packets recorded on a DCA-DM-RA3 Recorder Node
| Packet Size (Bytes) | Packets per Second | Maximum Bandwidth (Gbps) |
|---|---|---|
| 1500 or greater | ~1.98 million | 24 |
| 512 or greater | ~4.7 million | 20 |
| IMIX | ~6.3 million | 19 |
| 256 or greater | ~8.6 million | 19 |
Note: IMIX is a 7:4:1 distribution of 64-, 570-, and 1518-byte Ethernet-encapsulated packets, which yields an average packet size of about 353 bytes.
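As a quick sanity check of these figures, the arithmetic below recomputes the IMIX average and the implied line rate for the IMIX row of Table 23. The only assumption is that the bandwidth column counts the 20 bytes of Ethernet preamble and inter-frame gap per packet in addition to the frame itself.

```python
# Sanity check of the IMIX note above (assumption: the bandwidth column
# includes 20 bytes/packet of Ethernet preamble and inter-frame gap).
sizes, weights = [64, 570, 1518], [7, 4, 1]
avg = sum(s * w for s, w in zip(sizes, weights)) / sum(weights)
print(f"IMIX average frame size: {avg:.1f} bytes")  # ~353.8 bytes

pps = 6.3e6                        # IMIX row of Table 23
gbps = pps * (avg + 20) * 8 / 1e9  # add 20 bytes/packet of line overhead
print(f"Implied line rate: {gbps:.1f} Gbps")  # ~18.8 Gbps, consistent with 19
```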
[1] Configuring bidirectional / Tx tunnels requires an additional port, so the maximum number of supported bidirectional / Tx tunnels is limited by the number of free ports available on the switch.
[2] In push-per-policy mode, a 4-byte internal VLAN tag is added to the traffic, which reduces the maximum supported bandwidth.
[3] The DCA-DM-SC Service Node (4x10G) handles 10 Gbps per port with an average packet size >= 210 bytes.
[4] The DCA-DM-SDL Service Node (16x10G) handles 10 Gbps of traffic per port with an average packet size >= 285 bytes.
[5] The DCA-DM-SEL Service Node (16x25G) handles 20 Gbps of traffic per port with an average packet size >= 68 bytes.
[6] Measured when each service node port sent 1 million flow records at the same time.
[7] In push-per-policy mode, a 4-byte internal VLAN tag is added to the traffic, which reduces the maximum supported and recommended bandwidth.
[8] The DCA-DM-SC (4x10G) handles 10 Gbps of traffic per port with an average packet size >= 160 bytes for IPv4 and >= 190 bytes for IPv6.
[9] The DCA-DM-SEL (16x25G) handles 20 Gbps of traffic per port with an average packet size >= 68 bytes for IPv4, and 10 Gbps of traffic per port with an average packet size >= 218 bytes for IPv6.
[10] Measured when the service node exports IPFIX data packets representing 1 million unique flows with default eviction timers.
[11] The DCA-DM-SEL NIC hardware configuration is 2 ports; the published numbers represent NIC card performance.
[12] In push-per-policy mode, 4-byte internal VLAN tags are added to the traffic, which reduces the maximum supported bandwidth to 9.7 Gbps.
[13] The DCA-DM-SEL maximum supported bandwidth per port is 20 Gbps.
[14] In push-per-policy mode, 4-byte internal VLAN tags are added to the traffic, which reduces the maximum supported bandwidth to 9.7 Gbps.
[15] The DCA-DM-SEL NIC hardware configuration is 2 ports; the published numbers represent NIC card performance.
[16] In push-per-policy mode, 4-byte internal VLAN tags are added to the traffic, which reduces the maximum supported bandwidth to 9.7 Gbps.
[17] DCA-DM-SEL support of the ERSPAN managed service has limitations.
[18] In push-per-policy mode, 4-byte internal VLAN tags are added to the traffic, which reduces the maximum supported bandwidth to 9.7 Gbps.
[19] With the regex \d{3}-\d{2}-\d{4} used to match, mask, or drop packets containing Social Security numbers in a 64-byte packet, the DCA-DM-SC can handle 10 million pps; performance drops to 5 million pps with 131-byte packets. With the regex \d{4}[\s\-]*\d{4}[\s\-]*\d{4}[\s\-]*\d{4} used to match, mask, or drop packets containing credit card numbers in a 68-byte packet, the DCA-DM-SC can handle 7 million pps. Packet size and the position of the match string within the packet influence performance, which can be optimized by setting an appropriate l4-payload offset value.
[20] In push-per-policy mode, 4-byte internal VLAN tags are added to the traffic, which reduces the maximum supported bandwidth to 9.7 Gbps.
[21] With the regex \d{3}-\d{2}-\d{4} used to match, mask, or drop packets containing Social Security numbers in a 64-byte packet, the DCA-DM-SEL can handle 11 million pps. With the regex \d{4}[\s\-]*\d{4}[\s\-]*\d{4}[\s\-]*\d{4} used to match, mask, or drop packets containing credit card numbers in a 68-byte packet, the DCA-DM-SEL masking service supports 11 million pps. Packet size and the position of the match string within the packet influence performance, which can be optimized by setting an appropriate l4-payload offset value.
[22] The two ports belong to a single NIC card.
[23] In push-per-policy mode, 4-byte internal VLAN tags are added to the traffic, which reduces the maximum supported bandwidth to 9.7 Gbps.
* sFlow® is a registered trademark of InMon Corp.
[25] The NetFlow-with-optimization test cases yield 100,000 flows/sec for a single Analytics Node cluster. For more details about NetFlow optimization, refer to the Arista Analytics User Guide.
[26] The traffic rates chosen are for testing purposes only; in a production network, the rate of traffic for each protocol may vary.