Setting Up the Arista Analytics Node
- Requirements
- Arista Analytics Node First Boot Configuration
- Using the Arista Analytics Server CLI
- Enabling Access Control to the Analytics Server
- Configuring sFlow
- Managing the Arista Analytics Server Software
- Accessing and Configuring Arista Analytics
- Configuring Advanced Features
- Integrating Analytics with Infoblox
- Configuring SMTP Server to Send Email Alerts via Watcher
Requirements
- IP address and netmask to assign to the Analytics server
- Default IP gateway
- DNS server IP address (optional)
- DNS Search Domain (optional)
- Admin password for the Analytics server
- NTP server IPv4 address
- Password for Analytics GUI admin user (optional)
- TACACS+ Server IPv4 Address (optional)
- TACACS+ secret (optional)
- TACACS+ Server Service (optional)
- IP addresses for the DMF Controllers
Note: If the Arista Analytics node is deployed along with DMF, ensure that the version on the Arista Analytics node is the same as the version running on the DMF Controllers. Running different versions on the Arista Analytics node and the DMF Controllers is not supported.
The ports in the following table should be open on security devices between the Controller or switches and the Arista Analytics server, as noted in the table.
Monitoring | Port Requirement | Explanation |
---|---|---|
NetFlow | UDP 2055 | Flow data exported to the Analytics node in NetFlow v5 format, either from the production network or the DANZ Monitoring Fabric. |
IPFIX | UDP 4739 | Flow data exported to the Analytics node in IPFIX/NetFlow v10 format, either from the production network or the DANZ Monitoring Fabric. |
sFlow | UDP 6343 between switches and Analytics server | Packets are sampled on filter interfaces, and the SwitchLight OS sFlow agent constructs the sFlow header and forwards it to the Analytics server and other sFlow collectors for processing. |
Host-tracker information | UDP 6380 between switches and Analytics server | ARP, DNS, and other control traffic is forwarded from each switch to the Analytics server. A private header is prepended with a timestamp in the process. The Analytics server processes packets and maintains the host tracking database. The Controller queries the Analytics server for the latest host table. |
DMF Statistics and Events | UDP 9379 (redis) between Controller and Analytics server | Statistics gathered by the Controller from switches and service nodes are sent to the Analytics server through a Redis database. |
DMF Statistics and Events (cluster) | UDP 6379 (replicated redis) between Controller and Analytics server | Replicated redis is used to gather information with a DMF Controller cluster. |
Monitoring Active Directory or OpenVPN | UDP 5043 | Required only if you are using Analytics to monitor Active Directory or OpenVPN. |
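As an illustration (not an Arista-provided script), the UDP port requirements above can be collected into firewall-rule form. The iptables-style syntax and the Analytics node address are assumptions for the sketch:

```python
# Sketch: generate iptables-style ACCEPT rules for the UDP ports the
# Analytics node needs, per the table above. The rule syntax is generic
# iptables and the Analytics address is a placeholder.
ANALYTICS_UDP_PORTS = {
    "NetFlow v5": 2055,
    "IPFIX/NetFlow v10": 4739,
    "sFlow": 6343,
    "Host tracker": 6380,
    "DMF statistics (redis)": 9379,
    "DMF statistics, cluster (replicated redis)": 6379,
    "Active Directory / OpenVPN": 5043,
}

def udp_accept_rules(analytics_ip: str) -> list[str]:
    """Return one ACCEPT rule per service, destined to the Analytics node."""
    return [
        f"-A INPUT -p udp -d {analytics_ip} --dport {port} -j ACCEPT  # {name}"
        for name, port in ANALYTICS_UDP_PORTS.items()
    ]

for rule in udp_accept_rules("10.106.1.57"):
    print(rule)
```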
Arista Analytics Node First Boot Configuration
To perform the initial configuration of Arista Analytics, complete the following steps.
Using the Arista Analytics Server CLI
Starting in the DMF 7.0 release, administrative access to Arista Analytics and other server-level operations, such as configuring sFlow and creating a support bundle, are completed on the DMF Active Controller. For details, refer to the latest version of the DANZ Monitoring Fabric Deployment Guide, available here: https://www.arista.com/en/support/software-download/dmf-ccf-mcd.
Operations that are specific to Arista Analytics are performed by using the Analytics server CLI after logging in to the Analytics server at the address assigned during the first boot configuration.
The Analytics CLI provides a subset of the commands available on the DMF Controller. For details about any command, enter help <command> or press Tab to see the available options. You can refer to the DANZ Fabric Command Reference Guide for information about the DMF Controller commands, which are similar to the Analytics commands.
The following shows the commands available from Login mode.
analytics-1> <Tab>
debug exit logout ping6 show upload
echo help no reauth support watch
enable history ping set terminal whoami
The following shows the additional commands available from enable mode.
analytics-1> enable
analytics-1# <Tab>
boot compare copy diagnose sync upgrade
clear configure delete reset system
The following shows the additional commands available from Config mode.
analytics-1# config
analytics-1(config)# <Tab>
aaa crypto local radius snmp-server version
banner end logging secure tacacs
cluster group ntp service user
Enabling Access Control to the Analytics Server
To enable access to the Analytics server for Redis and replicated Redis, complete the following steps.
- Configure the Analytics node (virtual) IP address on the DMF Controller.
- Allow the DMF Controller physical IP addresses under the Redis/replicated-Redis ACL on the Analytics node.
Adding Access Control to GUI
This section describes adding an access control list (ACL) command to the DANZ Monitoring Fabric (DMF) supported commands family.
DMF-ANALYTICS-CLUSTER> enable
DMF-ANALYTICS-CLUSTER# configure
DMF-ANALYTICS-CLUSTER(config)# cluster
DMF-ANALYTICS-CLUSTER(config-cluster)# access-control
DMF-ANALYTICS-CLUSTER(config-cluster-access)# access-list
<Access list name>  Enter an access list name
active-directory    Configure access-list for active-directory
api                 Configure access-list for api
gui                 Configure access-list for gui
ipfix               Configure access-list for ipfix
netflow             Configure access-list for netflow
redis               Configure access-list for redis
replicated-redis    Configure access-list for replicated-redis
snmp                Configure access-list for snmp
ssh                 Configure access-list for ssh
DMF-ANALYTICS-CLUSTER(config-cluster-access)#
Refer to the DMF User Guide for more information on Analytics ACLs for the GUI.
Configuring sFlow
sFlow is an industry-standard technology, defined by RFC 3176, for monitoring high-speed switched networks. sFlow defines methods for sampling packets and counters in the data path and for forwarding the results to an sFlow collector for analysis and display. The DANZ Monitoring Fabric (DMF) supports sFlow to capture information about the production network and for troubleshooting the monitoring fabric.
For information about advanced search and analysis of historical sFlow messages using the Arista Analytics Graphical User Interface (GUI), refer to the latest edition of the Arista Analytics User Guide.
- IP address and port number of one or more sFlow collectors: identifies one or more sFlow collectors to which to send the sFlow packets. The default UDP port number is 6343.
- Sample rate: specifies the number of packets to transmit before sending an sFlow packet. Sampling is enabled on all filter interfaces and disabled on core interfaces and delivery interfaces. The default sample rate is 1 packet per 10,000 packets.
If the sFlow collector is on a device external to the DANZ Monitoring Fabric, a static route to the collector must be configured on the external tenant logical router.
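As a quick sanity check on sampling volume, the expected number of sFlow samples per second under the default 1-in-10,000 rate can be estimated with simple arithmetic; a minimal sketch:

```python
def expected_samples_per_sec(pps: float, sample_rate: int = 10_000) -> float:
    """Expected sFlow samples per second for a given packet rate,
    assuming 1-in-`sample_rate` sampling (default matches the doc)."""
    return pps / sample_rate

# A filter interface carrying 2 million packets/s at the default rate:
print(expected_samples_per_sec(2_000_000))  # -> 200.0 samples/s
```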
Using the DMF Controller GUI to Configure sFlow
To enable sFlow, add Analytics or other collectors, or change the default parameters, complete the following steps.
Using the DMF Controller CLI to Configure sFlow
dmf-Controller1(config)# sflow default
dmf-Controller1(config-sflow)# collector 10.106.1.57
This example configures the Analytics server with the IP address 10.106.1.57 and the default UDP port 6343 as an sFlow collector.
dmf-Controller-1(config-sflow)# collector 10.106.1.57 udp-port 6343
The default UDP port is 6343. Up to four collectors can be defined by entering the collector command for each collector.
dmf-Controller-1(config)# show running-config sflow
! sflow
sflow
collector 10.106.1.57
collector 10.106.1.58
collector 10.106.1.59
counter-interval 10
header-size 128
sample-rate 100
dmf-Controller-1(config)#
Managing the Arista Analytics Server Software
This section describes operations for managing the Arista Analytics server.
Verifying the Analytics Server Version
analytics-1# show version
Controller Version : DMF Analytics Node 8.0.0 (bigswitch/analytics/dmf-8.0.0 #28)
Resetting to the Factory Default Configuration
analytics-1(config)# boot factory-default
boot factory-default: alternate partition will be overwritten
boot factory-default: proceed ("y" or "yes" to continue):
Password Reset
Resetting the Analytics Server Administrative Password
analytics-1# config
analytics-1(config)# reset user-password
Changing password for: admin
Current password:
New password:
Re-enter:
analytics-1(config)#
Resetting Password for Recovery User
To reset the password for the recovery user, use one of the following procedures. Perform the steps on both Controllers of the cluster, because resetting the recovery user’s password on one Controller does not change it for the recovery user on the other Controller.
- Using the Controller’s Bash shell:
- Enter the Controller Bash shell by executing the debug bash command.
- Execute the sudo passwd recovery command.
admin@Controller-1:~$ sudo passwd recovery
New password:
Retype new password:
passwd: password updated successfully
admin@Controller-1:~$
- From recovery account login:
Note: For this to work, the customer needs to know the current password for the recovery user.
recovery@Controller-1:~$ passwd recovery
Changing password for recovery.
Current password:
New password:
Retype new password:
passwd: password updated successfully
recovery@Controller-1:~$
- Using the API /api/v1/rpc/Controller/os/action/system-user/reset-password:
The API call below resets the recovery user’s password to AdminAdmin. The example uses curl initiated from a Linux host, but any REST client can be used to call the API.
curl -g -H "Cookie: session_cookie=<session_cookie>" 'https://<Controller IP>:8443/api/v1/rpc/Controller/os/action/system-user/reset-password' -d '{"user-name" : "recovery","password" : "AdminAdmin"}' -X POST
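Because any REST client can call this API, the same request can be composed in Python. This sketch builds (but does not send) the call with the standard library; the Controller IP and session cookie are placeholders you must supply:

```python
import json
import urllib.request

# Placeholders: substitute your Controller IP and a session cookie
# obtained from an authenticated login.
controller_ip = "192.0.2.10"
session_cookie = "<session_cookie>"

body = json.dumps({"user-name": "recovery", "password": "AdminAdmin"}).encode()
req = urllib.request.Request(
    url=(f"https://{controller_ip}:8443"
         "/api/v1/rpc/Controller/os/action/system-user/reset-password"),
    data=body,
    method="POST",
    headers={
        "Cookie": f"session_cookie={session_cookie}",
        "Content-Type": "application/json",
    },
)
# To actually send it: urllib.request.urlopen(req)  (requires a reachable Controller)
```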
Resetting Password for Admin and Other Local Users
recovery@Controller-1:~$ floodlight-reset-password --user admin
Enter new admin password:
Re-enter new admin password:
Password updated for user admin
recovery@Controller-1:~$
recovery@Controller-1:~$ floodlight-reset-password --user guest
Enter new guest password:
Re-enter new guest password:
Password updated for user guest
recovery@Controller-1:~$
Restarting the Analytics Server
Checking the State of an Analytics Cluster
Accessing and Configuring Arista Analytics
http://<Analytics node IP address or domain name or Virtual IP in case of Analytics cluster>
Using the System Tab for Analytics Configuration
This page lets you configure the settings for sending alerts to an SMTP server, set the alert thresholds, and edit the mapping files used in the different dashboards.
Linking to a DMF Controller
To identify a specific DMF Controller, which is used for the Controller link in the lower left corner of the Analytics page, click the Edit control on the option.
Enter the IP address of the DMF Controller and click Save.
Configuring SMTP Settings
Enter the details for the SMTP server and other required information and click Apply & Test.
Configuring Alert Thresholds and Enabling Alerts
- Production Traffic Mix
- Monitoring Port Utilization Report
- New Host Report
When you click the Edit control for the Production Traffic Mix option, the system displays the following page.
To make changes to the threshold, edit the fields provided and click Save. To enable the alert, move the slider to the left. When you click the Edit control for the Monitoring Port Utilization Report option, the system displays the following page.
To make changes to the threshold, edit the fields provided and click Save. To enable the alert move the slider to the left. To enable the New Host Report option, move the slider to the left.
Sending Analytics SMTP Alerts to a Syslog Server
Configuring Collector Interface
analytics-1(config)# local node
analytics-1(config-local)# interface collector
analytics-1(config-local-if)# ipv4
analytics-1(config-local-if-ipv4)# ip 219.1.1.10/24
analytics-1(config-local-if-ipv4)#
The Arista Analytics node has two 10G interfaces in a bond (bond3) that acts as the collector interface.
Configuring Advanced Features
Machine Learning
X-Pack machine learning lets you specify activity that can be monitored over time so that changes from historical norms are flagged as discrepancies, which may indicate unauthorized network usage. For details about this feature, see the Kibana Guide: Machine learning.
To configure this feature, click the Machine Learning control in the left pane of the Kibana interface.
- Job Management: Create and manage jobs and associated data feeds.
- Anomaly Explorer: Display the results of machine learning jobs.
- Single Metric Viewer: Display the results of machine learning jobs.
- Settings: Add scheduled events to calendars and associate these calendars with your jobs.
Using Watch for Alerting
Elasticsearch alerting is a set of administrative features that enable you to watch for changes or anomalies in your data and perform the necessary actions in response. The Elasticsearch watch feature lets you generate an alert when specific network activity occurs. For details about configuring an advanced watch, refer to the Elasticsearch Reference: Alerting.
Elasticsearch provides an API for creating, managing and testing watches. A watch describes a single alert and can contain multiple notification actions.
- Schedule: A schedule for running a query and checking the condition.
- Query: The query to run as input to the condition. Watches support the full Elasticsearch query language, including aggregations.
- Condition: A condition that determines whether or not to execute the actions. You can use simple conditions (always true), or use scripting for more sophisticated scenarios.
- Actions: One or more actions, such as sending email, pushing data to 3rd party systems through a webhook, or indexing the results of the query.
A full history of all watches is maintained in an Elasticsearch index. This history keeps track of each time a watch is triggered and records the results from the query, whether the condition was met, and what actions were taken.
A watch is defined as a REST script in JSON format with the following elements:
- Trigger: schedules when the watch runs. This can be an interval, which causes the watch to run after the specified time elapses (for example, every 10 seconds).
- Input: identifies the information you want to evaluate. This can be search criteria that retrieves the required input.
- Condition: identifies the activity or other condition that determines whether the alert should be sent.
- Action: identifies the text of the alert and the webhook where the alert message will be sent.
{
"trigger": {
"schedule": {
"interval": "5s"
},
"input": {
"search": {
"request": {
"search_type": "query_then_fetch",
"indices": [
"flow-icmp*"
],
"types": [],
"body": {
"query": {
"match_all": {}
}
}
}
}
},
"condition": {
"compare": {
"ctx.payload.hits.total": {
"gte": 10
}
}
},
"actions": {
"my_webhook": {
"webhook": {
"scheme": "https",
"host": "hooks.slack.com",
"port": 443,
"method": "post",
"path": "/services/T029CQ2GE/B5NBNKMGR/uZjyLgVUqrQLvGl60yM9ANUP",
"params": {},
"headers": {
"Content-Type": "application/json"
},
"body": "{\"channel\": \"#office_bmf_test\", \"username\": \"webhookbot\", \"text\": \"icmp burst detected over the set limit\", \"icon_emoji\": \":exclamation:\"}"
}
}
}
}
}
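A watch document is uploaded with the Watcher REST API (PUT _watcher/watch/<id>). Before uploading, it can help to check that the document contains the four building blocks described above; a minimal sketch, with the inline document abridged from the example:

```python
import json

# Sketch: verify a watch document contains the four Watcher building
# blocks (trigger, input, condition, actions) before uploading it.
# The inline document is an abridged copy of the example above.
watch = json.loads("""
{
  "trigger": {"schedule": {"interval": "5s"}},
  "input": {"search": {"request": {"indices": ["flow-icmp*"],
            "body": {"query": {"match_all": {}}}}}},
  "condition": {"compare": {"ctx.payload.hits.total": {"gte": 10}}},
  "actions": {"my_webhook": {"webhook": {"host": "hooks.slack.com"}}}
}
""")

REQUIRED = ("trigger", "input", "condition", "actions")
missing = [part for part in REQUIRED if part not in watch]
assert not missing, f"watch is missing: {missing}"
print("watch OK:", ", ".join(REQUIRED))
```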
For information about configuring the Slack webhook, refer to the Slack documentation.
Application Dependency Mapping
This feature helps you identify how items in an Elasticsearch index are related, a process known as Application Dependency Mapping (ADM). You can explore the connections between indexed terms and see which connections are the most meaningful. For example, this feature lets you map the relationships between the Destination IP (DIP) and Source IP (SIP) for a specific application. For details about this feature, refer to the Kibana documentation.
- Can I build a map to show different client machines accessing services identified by a Layer 4 port?
- Can I build a map to view which DNS servers are accessed by all the clients?
- Can I build a map to show how different servers interact with each other?
Advanced options let you control how your data is sampled and summarized. You can also set timeouts to prevent graph queries from adversely affecting the cluster.
Using RBAC with Arista Analytics
- admin: Admin user accounts have full read and write access to the CLI as well as to the Kibana UI.
- non-admin: Non-admin users typically have read only access. They can be defined only by an admin user.
To create and enable new user accounts, complete the following steps.
Time-based User Lockout
Starting in the DMF 8.0 release, DANZ Monitoring Fabric supports time-based user lockout. Users are locked out for a duration t2 after making n incorrect password attempts within a time window t1.
Locked-out users must be cleared of the lockout, or wait for the lockout period to expire, before attempting to log in with the correct password. The feature is disabled by default.
Controller-1(config)# aaa authentication policy lockout failure <number of failed attempts> window <t1 time in seconds> duration <t2 time in seconds>
- The value of failure can range from 1 to 255.
- The values of window and duration can range from 1 to 4294967295 seconds (2^32 - 1).
Controller-1(config)# aaa authentication policy lockout failure 3 window 180 duration 900
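The lockout behavior can be sketched as a sliding-window counter. This illustrates the policy semantics (with the example values: 3 failures within 180 seconds lock the account for 900 seconds); it is not the Controller's actual implementation:

```python
import time

class LockoutPolicy:
    """Sketch of time-based lockout: lock for `duration` seconds after
    `failure` bad attempts within a `window` of seconds."""

    def __init__(self, failure=3, window=180, duration=900):
        self.failure, self.window, self.duration = failure, window, duration
        self.attempts = {}   # user -> timestamps of recent failures
        self.locked = {}     # user -> lockout expiration timestamp

    def record_failure(self, user, now=None):
        now = time.time() if now is None else now
        # Keep only failures that fall inside the sliding window.
        recent = [t for t in self.attempts.get(user, []) if now - t < self.window]
        recent.append(now)
        self.attempts[user] = recent
        if len(recent) >= self.failure:
            self.locked[user] = now + self.duration

    def is_locked(self, user, now=None):
        now = time.time() if now is None else now
        return self.locked.get(user, 0) > now

policy = LockoutPolicy()
for t in (0, 10, 20):              # three failures within the 180 s window
    policy.record_failure("admin", now=t)
print(policy.is_locked("admin", now=30))    # -> True: locked at t=20 for 900 s
print(policy.is_locked("admin", now=1000))  # -> False: lockout expired at t=920
```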
Controller-1# show aaa authentication lockout
User name | Host          | Failed Logins | Lockout Date                   | Lockout Expiration             |
----------|---------------|---------------|--------------------------------|--------------------------------|
admin     | 10.240.88.193 | 1             | 2020-09-08 16:07:36.283000 PDT | 2156-10-15 22:35:51.283000 PDT |
To clear the lockout for a user, admin-group users can issue the following command:
clear aaa authentication lockout user <username>
To clear all the locked out users, admin-group users can issue the following command:
clear aaa authentication lockout
Controller-1# clear aaa authentication lockout user admin
Controller-1# show aaa authentication lockout
None.
admin@Controller-1:~$ sudo pam_tally2 -u recovery
Login Failures Latest failure From
recovery 9 09/08/20 16:16:04 10.95.66.44
admin@Controller-1:~$ sudo pam_tally2 --reset --user recovery
Login Failures Latest failure From
recovery 9 09/08/20 16:16:04 10.95.66.44
admin@Controller-1:~$ sudo pam_tally2 -u recovery
Login Failures Latest failure From
recovery
Elasticsearch RBAC examples
Admin User and Group: The admin user is added by default to the admin group and the superuser role in Elasticsearch. No configuration is needed for it.
Read-only Access: By default, a BSN read-only role exists that also maps to Floodlight.
Dashboard Access Only:
Integrating Analytics with Infoblox
Infoblox provides DNS and IPAM services that can be integrated with Arista Analytics. To use the integration, associate a range of IP addresses in Infoblox with extensible attributes, then configure Analytics to map these attributes to the associated IP addresses. The attributes assigned in Infoblox then appear in place of the IP addresses in Analytics visualizations.
Configuring Infoblox for Integration
Configuring Arista Analytics
Adding Flow Enhancement via Infoblox IPAM Integration
This feature integrates subnets and corresponding extensible attributes from an Infoblox application into Arista Analytics’ collection of IP blocks and corresponding list of attributes.
Arista Analytics provides an enhanced display of incoming flow records using these extensible attributes from the Infoblox application.
Configuring the Flow enhancement
Configure the feature in Kibana by selecting the Analytics Configuration tab on the Fabric page and opening the integration panel. The list of IP blocks and associated extensible attributes from the Infoblox application appears under the integration tab. The columns shaded in gray represent the extensible attributes and their values.
Editing IPAM Integration
- Infoblox:
- host: The IP address or DNS hostname of the Infoblox application.
- user: Username for Infoblox application.
- password: Password for Infoblox application.
- keys_fetched:
- The list of extensible attributes from the connected Infoblox application to be added to the Analytics Node ip_block tags. If an entered extensible attribute matches the name of an existing ip_block tag, it is not added.
- keys_aliased:
- Mappings from default Analytics Node ip_block tags to extensible attributes in the Infoblox application. Add additional mappings from ip_block tags to extensible attributes as required. Empty field values are ignored. Each mapping from an ip_block tag to an extensible attribute indicates:
- The extensible attribute is added to the Analytics Node’s ip_block tags. If an extensible attribute appears both in the integration configuration keys_fetched list and as a value in the keys_aliased mapping, it is added only once to the Analytics Node ip_block tags list. It is not added if it is already in the ip_block tags.
- For IP addresses coming from the Infoblox application, the value of the extensible attribute replaces the value of the corresponding ip_block tag; the extensible attribute and the Analytics Node tag become aliases of each other.
For example, in the above example integration configuration, VPC is in keys_fetched, and segment is in the values of keys_aliased, but both are already in the ip_block tags list, so they are not added again, as seen below. However, Site and ASNUM are not already in the tags list and are added to the end of the tags list.
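The tag-merging rules in this example can be sketched as follows. The attribute names (VPC, segment, Site, ASNUM) mirror the example above, while the alias keys such as Segment and ASN are illustrative:

```python
# Sketch of the ip_block tag merge described above: extensible attributes
# from keys_fetched and the values of keys_aliased are appended to the
# Analytics Node ip_block tag list, each added at most once and only if
# not already present.
def merge_tags(ip_block_tags, keys_fetched, keys_aliased):
    merged = list(ip_block_tags)
    for attr in list(keys_fetched) + list(keys_aliased.values()):
        if attr not in merged:        # skip attributes already tagged
            merged.append(attr)       # new attributes go to the end
    return merged

tags = merge_tags(
    ip_block_tags=["VPC", "segment"],                 # existing tags
    keys_fetched=["VPC"],                             # VPC already present
    keys_aliased={"Desc": "Site",                     # Desc aliased to Site
                  "Segment": "segment",               # already present
                  "ASN": "ASNUM"},                    # new, added at the end
)
print(tags)  # -> ['VPC', 'segment', 'Site', 'ASNUM']
```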
As a result of these configuration changes, the flow records in the Flows by Time chart show the following enhancements.
Suppose the sFlow packet source and/or destination IP addresses fall within the IP subnets in the Infoblox IPAM dashboard. In that case, their flow records are augmented with the extensible attributes from Infoblox, as specified in the integration configuration.
For example, the source and destination IP address of the 10.240.155.0/HQ:54149 > 10.240.155.10/HQ/HTTPS flow fall within the 10.240.155.0/24 subnet in the Infoblox IPAM dashboard.
When expanding this flow in the Flows by Time chart, since VPC is in the integration keys_fetched, the sVPC value is VPC155.
Site is in the integration keys_aliased values, and a sSite value of HQ appears. Since Desc is aliased to Site (an extensible attribute), sDesc takes on the value of Site. Segment is in the keys_aliased values; hence, sSegment with a value S155 appears.
Observe similar attributes for the destination IP address in the flow record. All these values come from the Infoblox IPAM dashboard shown above. ASNUM does not appear as a field in the flow record below despite being in the integration keys_aliased values because it is not configured or associated as an extensible attribute to the subnets in the Infoblox IPAM dashboard.
Troubleshooting
If flow records that you expect to be augmented with Infoblox extensible attributes are missing those attributes, verify that the Infoblox credentials provided in the integration configuration are correct. If the credentials are correct and the relevant flow records are still missing the attributes, generate a support bundle and contact Arista Networks TAC.
Limitation
- When removing a tag in the middle of the ip_block tags list and saving the configuration, the relevant flow records may have incorrect values in their attributes during the minute following this change. After this brief period, the flow records will have the correct attributes and corresponding values.