
Product Updates

Latest features, improvements, and product updates on Kentik's Network Observability platform.

Improvement · Core · UI/UX · New feature · Flow · NMS
2 months ago

NMS+Flow=♥♥♥: Unified "Device Experience" makes them better together


Feature Overview

Together at last! We've made major improvements to how devices are viewed and managed on the Kentik platform by merging multiple device management, performance detail, and traffic analysis pages into a unified device experience. We've combined our three different "Device Listing/Admin" pages and two different "Device Details" pages, bringing forward the best of each.

Happy Valentine's Day, from Kentik!

Unified Device Administration

These three previously separate sections of the platform have been combined into one:

  • Settings > Network Devices: where we previously managed "flow source" devices
  • NMS > Devices: where we previously managed "NMS" device performance
  • Network Explorer > Devices: where we showed aggregate traffic from multiple devices

Unified Device Details

We also combined and refined these two previously separate Device Details pages into one:

  • NMS > Devices > (device_name): which provided performance information
  • Network Explorer > Devices > (device_name): which provided traffic analysis

Main Benefits

This is the initial step in a broad-reaching project to make our NMS and Flow experiences more cohesive, grounded in the reality that there aren't "Flow devices" and "NMS devices"; there are simply "devices we collect data from."

The key benefits are:

  • One single, centralized, collection-protocol-agnostic place to administer all devices, providing a more seamless experience when investigating network traffic and/or device performance
  • One single search result: instead of separate NMS Device and Network Device entries, universal search now returns one result per device, better aligning with reality
  • For each device, all of its data is now displayed in a single place

Key Workflows

The changes in this new set of features center on three workflows: navigation, administration, and details.

Navigation Changes

As part of this unification, we’ve re-wired multiple navigation links:

  • Top Talkers > Devices → now leads to /infrastructure/devices where it used to lead to /core/quick-views/devices/
  • Settings > Devices → now also leads to /infrastructure/devices
  • NMS > Devices  → now leads to /infrastructure/devices
  • Any Metrics Explorer or Data Explorer device link now leads to /infrastructure/devices/

These endpoints are not changing for now:

  • /settings/interfaces still exists, while /infrastructure/devices//interfaces now offers an improved (more filtering, more powerful) list of interfaces narrowed down to that device
  • /nms/interfaces remains, for now, as a single global interfaces screen for all NMS devices, while /infrastructure/devices//interfaces now offers an improved (more filtering, more powerful) list of interfaces narrowed down to that device

Unified Device Administration

This screen becomes the singular place where users browse their inventory and add/remove devices from their Kentik experience. It presents the following characteristics:

  • Two main tabs: “Traffic” and “Manage”

    • Traffic corresponds to our well-known Network Explorer /core/quick-views/devices traffic-related, top-talker screen
    • Manage corresponds to the merged and improved /nms/devices and /settings/devices screens
  • Three distinct “View Modes”, each corresponding to a column arrangement within the main Manage tab:
    • Monitor is a default column-set focused on performance monitoring
    • Admin is a default column-set focused on Kentik administration
    • Custom lets the user select and organize the specific columns they want 
  • More powerful filtering and grouping options

Unified Device Details

Our new and improved Device Details page makes navigating between "metric" and "flow" use cases much simpler. Spotted an issue with flow and want to check on device health? Instead of navigating the menu to a different page, users can simply change tabs. Devices show different tabs depending on whether they have NMS metric or Flow traffic data collection enabled.

  • Overview - performance and vitality summary of the device
  • Interfaces - filterable, searchable, data-rich list of all interfaces on the device 
  • Connections - filterable, searchable, data-rich list of LLDP/manual topology connections
  • Traffic - enriched traffic flow and "top talker" information for the device
  • Hardware - vitality information from device components, such as fans and power supplies 
  • BGP Neighbors - peer AS names, session states, local and remote IPs, and summary info
  • Telemetry - New! This tab highlights data collection methods and whether or not they're working

Feature Requests & Bugs

This is a new feature and we're actively seeking your feedback and ideas to make it better. Reach out through your customer success rep or directly to the Kentik NMS Product Manager (Jason Carrier, jcarrier@kentik.com) if you'd like to influence the future development of this feature.


Jason Carrier
Improvement · Core · Flow · NMS
5 months ago

Bringing NMS and Flow Telemetry together, one release at a time.

Today, we're sharing the first step in a journey to seamlessly integrate Kentik NMS with our Flow platform. This is just the beginning of a series of iterations that will bring them together in a more cohesive and powerful way.

Read on as we show you a new and easy way to visually correlate NMS charts with Traffic data.


Metrics Explorer vs. Data Explorer

A novel take on an existing type of product, Kentik NMS' approach relies on taking advantage of what made our Flow Telemetry platform a hit: open exploration using Metrics Explorer, the little brother of our award-winning approach you know and love in Data Explorer. In other words, while Data Explorer is the business intelligence (BI) platform to your Network Traffic data, Metrics Explorer is the BI platform to your SNMP or Streaming Telemetry data.

When we launched Kentik NMS, our goal was to marry an NMS with world-class traffic analysis to provide our customers with the most cutting-edge and useful network observability platform available. Since then, we've learned a lot about how our initial users were using it and took some notes:

  • Not everyone who's gained NMS Metrics Explorer expertise is comfortable with Data Explorer, especially given the latter is beyond feature-rich because of years of successive improvements
  • A lot of troubleshooting workflows follow the same pattern: identify a peak or a trough on a chart, then inspect traffic to investigate what factors might be contributing to this pattern –  rinse, repeat... – very often an iterative process

Correlation, Causation, AI, and the Network Engineer

Recent years have marked the rise of ML/AI, where every product (and Kentik is no exception) will show you machine-learned insights about things you did not know about your network.

Additionally, we are reminded more often than not that correlation does not equal causation, as absurdly illustrated in the meme below:

Yet, years of practitioner experience in this industry tell us that the vast majority of network troubleshooting activities end up as attempts to explain a bump or a trough on one chart by looking at other charts for a probable root cause.

In this process, the network engineer is always better equipped when they can leverage a UI/UX that makes it easy for them to quickly eyeball multiple charts on top of each other, with a perfectly aligned time range.

So, we took a few simple use cases and iterated to provide a UI that helps the network engineer correlate SNMP and Traffic charts together:

  • "A port on a device is running hot, what could be the reason?"
  • "What could be the reason behind the CPU of this router peaking?"

Often, what we noticed was that the right tool is more about allowing users to quickly iterate through hypotheses, going from one finding to the next and quickly ruling out dead ends. With this as the target user methodology, we came up with the small but powerful capability described in the next section.

Introducing the Metrics Explorer bottom drawer

In Metrics Explorer, you may now notice a little kebab menu at the end of each row. If your query yields a Site, Device, or Interface, you will be offered a contextual menu that lets you summon a traffic breakdown for that specific row:



Selecting any of these entries will summon the "Dimensions Selector", allowing the user to choose any set of up to 8 traffic dimensions to break traffic down for this Site, Device, or Interface – here's an example selecting Source IP and Destination IP for this device. As you can see:

  • a bottom drawer opens up with a nested Data Explorer traffic query that's perfectly lined up with the Metrics Explorer one to facilitate visual correlation
  • the drawer can be minimized or discarded, or a new tab can be opened with this very query pre-populated by clicking "Open in Data Explorer"
  • discarding the bottom drawer and replacing it with a new set of traffic query dimensions is also straightforward, allowing for fast-paced troubleshooting iteration


Tell us what you think! What's next?

This feature tested pretty well with our field teams, but we're curious what you think of it! Let us know how we can make it better in future iterations.

We've already started thinking of other areas where we want to bring this traffic inspection bottom drawer:

  • Add it to the Capacity Planning workflow so that users can directly look into why an interface is facing imminent congestion
  • Bringing it into the NMS Device screen: its "Interfaces" tab would definitely benefit from the ability to inspect a traffic breakdown for any interface
  • ... and then what about the reverse? What about being able to see the CPU chart for a traffic breakdown that has the device name in it?

Stay tuned for upcoming announcements about our plans to bridge our NMS and Flow Analytics worlds!

Greg Villain
New feature · Flow
a year ago

Enhanced Flow Ingest with IPFIX IE 315 Support

Kentik is excited to announce an enhancement to our telemetry ingest capabilities with the support for IPFIX Information Element (IE) 315. Let’s dive into how this Information Element is used and why users should care about it.


Understanding the Flow Collection and Export Process

The flow collection process on network devices consists of three stages:

  1. Packet sampling: selective capturing of network packets to reduce data volume
  2. Aggregation and caching: flow metadata and counters are stored in the device’s flow cache table
  3. Export of expired flows: flow data is sent to a flow collector

During packet sampling, information from the packet headers and the device’s interfaces is extracted to form a unique identifier (key) for each flow in the flow cache table. New packets that match an existing flow key increment that flow’s byte and packet counters, while unmatched packets trigger the creation of a new flow record.

Flows are exported to the collector when one of two conditions is met (see the sketch after this list):

  • the inactive-timeout period: no new packets have been detected for a flow, which means that the flow is inactive. 
  • the active-timeout period: current state of flow counters is exported even though the flow is still active.
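To make the caching and export behavior concrete, here is a minimal, illustrative sketch of a flow cache with inactive/active timeouts. The class and field names, and the timeout values, are assumptions for illustration only, not vendor or Kentik code.

```python
import time
from dataclasses import dataclass

INACTIVE_TIMEOUT = 15  # seconds without new packets before a flow is exported (illustrative)
ACTIVE_TIMEOUT = 60    # seconds after which a still-active flow is exported (illustrative)

@dataclass
class FlowRecord:
    key: tuple          # e.g. (src_ip, dst_ip, src_port, dst_port, proto, in_ifindex)
    first_seen: float
    last_seen: float
    packets: int = 0
    bytes: int = 0

class FlowCache:
    def __init__(self) -> None:
        self.table: dict[tuple, FlowRecord] = {}

    def add_sampled_packet(self, key: tuple, length: int) -> None:
        now = time.time()
        rec = self.table.get(key)
        if rec is None:
            # Unmatched packet: create a new flow record keyed on the packet headers.
            rec = FlowRecord(key=key, first_seen=now, last_seen=now)
            self.table[key] = rec
        # Matched packet: increment the flow's byte and packet counters.
        rec.packets += 1
        rec.bytes += length
        rec.last_seen = now

    def expire(self) -> list[FlowRecord]:
        """Return flows that hit the inactive or active timeout, removing them from the cache."""
        now = time.time()
        expired = [
            rec for rec in self.table.values()
            if now - rec.last_seen >= INACTIVE_TIMEOUT   # the flow went quiet
            or now - rec.first_seen >= ACTIVE_TIMEOUT    # long-lived flow: export current counters
        ]
        for rec in expired:
            del self.table[rec.key]
        return expired  # these records would be packed into NetFlow/IPFIX export packets
```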

The Evolution of Flow Collection in Modern Networks

When flow collection technology was developed, traffic volumes were significantly lower and network devices could “afford” to capture and export data for all traffic flows. In today's networks, however, where hundreds of devices handle traffic at gigabit or terabit-per-second throughputs, complete flow capture has become impractical. To manage these traffic volumes effectively, sampling has become the norm in most use cases, with typical sampling rates ranging from one in a few hundred to one in several thousand packets.

For near real-time traffic analysis, users generally set the inactive-timeout to 15 seconds and the active-timeout to 60 seconds. For DDoS detection use cases, these timers can be configured with even lower values. Given the high sampling rates and rapid flow expiration, the effectiveness of traditional flow caching on network devices is now questionable.

According to research performed by Juniper Networks, with flow sampling in the 1:1000 range and an active-timeout of 60 seconds, around 90% of the exported flows are expected to have only one packet matched. In such environments, the flow caching process on network devices does not bring much benefit. The alternative approach of directly exporting packet headers to a collector seems more effective, and this is where IPFIX IE 315 comes into play.

Introducing IPFIX IE 315

IPFIX IE 315, known as dataLinkFrameSection, carries a sample of the network packet headers, starting from the L2 protocol header up to a maximum sample length determined by the network device's capabilities. This capability varies by vendor and device model, but typically supports sampling of 64 to 160 bytes.

The typical IPFIX flow data record includes: ingress and egress interface index, flow direction, and the length and data of the sampled frame. 

Vendor support

Juniper Networks

Juniper Networks offers support for IPFIX IE 315 through its IMON (Inline Monitoring) feature. It is supported on the MPC10E and MPC11E linecards for MX-series devices, on the MX304, on the LC480 and LC9600 linecards for MX10K, and on certain line cards for PTX10K. The feature is implemented in hardware, so there is minimal delay and no restriction on the volume of exported data. It supports export of 64 to 126 bytes of the packet’s header. More information about the implementation, configuration, and device support can be found in Juniper’s technical documentation.

The flow data record includes the following IPFIX IEs:

IE Name | ID | Length (bytes) | Description
ingressInterface | 10 | 4 | SNMP index of the packet’s ingress interface
egressInterface | 14 | 4 | SNMP index of the packet’s egress interface
flowDirection | 61 | 1 | Direction of the packet sampling (0 - Ingress, 1 - Egress)
dataLinkFrameSize | 312 | 2 | Length of the sampled data link (L2) frame
dataLinkFrameSection | 315 | variable | Carries N octets from the selected data link frame


Cisco

Cisco supports IPFIX IE 315 on its NCS 5000 and ASR 9000 devices. It uses slightly different flow records, with IPFIX IE 410 (sectionExportedOctets) storing the length of the sampled frame section, and does not include the flow direction field. The exported frame section extends up to and including the L4 header, with a maximum of 160 bytes exported.

The flow data record includes the following IPFIX IEs:

IE Name | ID | Length (bytes) | Description
ingressInterface | 10 | 4 | SNMP index of the packet’s ingress interface
egressInterface | 14 | 4 | SNMP index of the packet’s egress interface
sectionExportedOctets | 410 | 2 | The observed length of the packet section
dataLinkFrameSection | 315 | variable | Carries N octets from the selected data link frame


Benefits of using IPFIX IE 315

The IPFIX IE 315 approach shifts the packet decoding and metadata extraction work from the network device to the flow collector. This reduces the processing requirements on network devices and eliminates the need to maintain a flow cache table, leading to lower CPU and memory usage and potentially simpler hardware designs. Moreover, the immediate export of packet samples improves DDoS attack detection time.
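As a rough illustration of the collector-side work, the sketch below pulls basic flow metadata (IPs, protocol, ports) out of a dataLinkFrameSection sample. It assumes an untagged Ethernet II frame carrying IPv4 and is not Kentik's ingest code; a real collector must also handle VLAN tags, MPLS, IPv6, and truncated headers.

```python
import struct
from typing import Optional

def decode_frame_section(section: bytes) -> Optional[dict]:
    """Extract basic flow metadata from an IE 315 dataLinkFrameSection sample.

    Illustrative only: assumes an untagged Ethernet II frame carrying IPv4.
    """
    if len(section) < 14 + 20:           # Ethernet header + minimal IPv4 header
        return None
    ethertype = struct.unpack_from("!H", section, 12)[0]
    if ethertype != 0x0800:               # not IPv4
        return None
    ip = section[14:]
    ihl = (ip[0] & 0x0F) * 4               # IPv4 header length in bytes
    proto = ip[9]
    src_ip = ".".join(str(b) for b in ip[12:16])
    dst_ip = ".".join(str(b) for b in ip[16:20])
    meta = {"src_ip": src_ip, "dst_ip": dst_ip, "protocol": proto}
    l4 = ip[ihl:]
    if proto in (6, 17) and len(l4) >= 4:  # TCP or UDP: first 4 bytes are the ports
        meta["src_port"], meta["dst_port"] = struct.unpack_from("!HH", l4, 0)
    return meta
```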

Support in Kentik

Kentik's SaaS flow ingest supports IPFIX IE 315 with Kentik's default device type “NetFlow-enabled device”. The feature is also available in Kentik kproxy starting with version 7.43.0, and it supports both the Juniper and Cisco implementations.

Dušan Pajin
Hybrid Cloud · New feature · Flow
a year ago

Azure Firewall logs as a new log data source (GA)

We are happy to announce the general availability of Azure Firewall logs as a data source, following its initial introduction in May.

Customers can consolidate flow record generation by using Azure Firewall as the primary source of flow information sent to Kentik. The ingest process is identical to the one used for NSG flow logs and requires customers to store records in their storage accounts.

Customers can filter traffic flows traveling through a particular firewall in the Data Explorer by using the “Logging Resource Category” and “Logging Resource Name” dimensions and then excluding NetworkSecurityGroupFlowEvent (NSG flow logs):

Note: Azure Firewall flow logs don’t have traffic throughput information, so users must choose flows/s as the metric instead of bits/s. This creates a visualization similar to the graph shown above and allows Kentik to see flows going through the firewall. With this metric, customers can determine the relative load of the Azure Firewall and attribute flows to specific firewalls.

For customers seeking traffic throughput information from Azure Firewall Logs, the Azure team has advised using “Fat Flows”. However, at the time of publishing this announcement, the Fat Flows feature is in preview and unavailable for API ingestion. Once it is fully supported in the API, Kentik will add “Fat Flows” support.

Ievgen Vakulenko
Core · Flow
2 years ago

Flow Ingest: Support for Flow timestamps

Kentik now supports the collection of the NetFlow and IPFIX timestamp fields (starting with kproxy v7.38.0). Three additional fields are available to view in the Raw Flow Viewer:

  • Flow Start - Timestamp of the flow start
  • Flow End - Timestamp of the flow end
  • Duration - Calculated duration of the flow as Flow End - Flow Start

NetFlow v5 and v9

In the case of NetFlow v5 and v9, the flow start and end timestamps are calculated using the System Uptime and Unix Seconds fields from the NetFlow header and the following fields from the flow records (a small calculation sketch follows the table):

Field Type | Value | Length (bytes) | Description
LAST_SWITCHED | 21 | 4 | System uptime at which the last packet of this flow was switched
FIRST_SWITCHED | 22 | 4 | System uptime at which the first packet of this flow was switched
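The usual derivation is sketched below: the header’s Unix Seconds gives the export wall-clock time, and the difference between the header’s System Uptime and FIRST_SWITCHED/LAST_SWITCHED gives how long before export the packets were switched. This is an illustrative sketch (it ignores sysUpTime wrap-around), not Kentik's implementation.

```python
def v9_flow_times(unix_secs: int, sys_uptime_ms: int,
                  first_switched_ms: int, last_switched_ms: int) -> tuple[float, float]:
    """Derive absolute flow start/end times (epoch seconds) from NetFlow v5/v9 fields.

    unix_secs and sys_uptime_ms come from the export packet header;
    first/last_switched_ms are the FIRST_SWITCHED/LAST_SWITCHED values from the flow record.
    """
    start = unix_secs - (sys_uptime_ms - first_switched_ms) / 1000.0
    end = unix_secs - (sys_uptime_ms - last_switched_ms) / 1000.0
    return start, end

# Example: first packet switched 30 s before export, last packet 5 s before export:
# v9_flow_times(1_700_000_000, 90_000, 60_000, 85_000) -> (1699999970.0, 1699999995.0)
```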

IPFIX

In the case of IPFIX, the timestamps are determined in one of two ways:

  1. Using flowStartSysUpTime, flowEndSysUpTime and systemInitTimeMilliseconds fields:

    ID | Name | Type | Description | Units
    21 | flowEndSysUpTime | unsigned32 | The relative timestamp of the last packet of this Flow. It indicates the number of milliseconds since the last (re-)initialization of the IPFIX Device (sysUpTime). sysUpTime can be calculated from systemInitTimeMilliseconds. | milliseconds
    22 | flowStartSysUpTime | unsigned32 | The relative timestamp of the first packet of this Flow. It indicates the number of milliseconds since the last (re-)initialization of the IPFIX Device (sysUpTime). sysUpTime can be calculated from systemInitTimeMilliseconds. | milliseconds

    Both descriptions refer to an additional field, which is equivalent to SysUpTime:

    ID | Name | Type | Description | Units
    160 | systemInitTimeMilliseconds | dateTimeMilliseconds | The absolute timestamp of the last (re-)initialization of the IPFIX Device. | milliseconds


  2. Using dedicated IPFIX fields for the flow start and flow end timestamps. Not all devices' IPFIX implementations use these fields, but when present they are preferred (see the sketch after this list):

    ID | Name | Type | Description | Units
    150 | flowStartSeconds | dateTimeSeconds | The absolute timestamp of the first packet of this Flow. | seconds
    151 | flowEndSeconds | dateTimeSeconds | The absolute timestamp of the last packet of this Flow. | seconds
    152 | flowStartMilliseconds | dateTimeMilliseconds | The absolute timestamp of the first packet of this Flow. | milliseconds
    153 | flowEndMilliseconds | dateTimeMilliseconds | The absolute timestamp of the last packet of this Flow. | milliseconds
    154 | flowStartMicroseconds | dateTimeMicroseconds | The absolute timestamp of the first packet of this Flow. | microseconds
    155 | flowEndMicroseconds | dateTimeMicroseconds | The absolute timestamp of the last packet of this Flow. | microseconds
    156 | flowStartNanoseconds | dateTimeNanoseconds | The absolute timestamp of the first packet of this Flow. | nanoseconds
    157 | flowEndNanoseconds | dateTimeNanoseconds | The absolute timestamp of the last packet of this Flow. | nanoseconds
    158 | flowStartDeltaMicroseconds | unsigned32 | A relative timestamp only valid within the scope of a single IPFIX Message. It contains the negative time offset of the first observed packet of this Flow relative to the export time specified in the IPFIX Message Header. | microseconds
    159 | flowEndDeltaMicroseconds | unsigned32 | A relative timestamp only valid within the scope of a single IPFIX Message. It contains the negative time offset of the last observed packet of this Flow relative to the export time specified in the IPFIX Message Header. | microseconds
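A small illustrative sketch of this preference logic is shown below. The record layout (a dict keyed by element ID, with values already decoded to integers in their stated units) and the exact preference ordering are assumptions for illustration, not Kentik's implementation.

```python
# Preference order for flow-start elements, most to least precise
# (the matching flow-end element IDs are one higher, e.g. 150 -> 151).
START_ELEMENTS = {
    156: 1e-9,   # flowStartNanoseconds
    154: 1e-6,   # flowStartMicroseconds
    152: 1e-3,   # flowStartMilliseconds
    150: 1.0,    # flowStartSeconds
}

def flow_start_epoch(record: dict[int, int]) -> float | None:
    """Pick a flow-start timestamp (epoch seconds) from an IPFIX record keyed by element ID.

    Prefers the dedicated flowStart* elements when present, otherwise falls back to
    flowStartSysUpTime (22) plus systemInitTimeMilliseconds (160).
    """
    for element_id, unit in START_ELEMENTS.items():
        if element_id in record:
            return record[element_id] * unit             # absolute timestamp, scaled to seconds
    if 22 in record and 160 in record:                   # sysUpTime-relative fallback
        return (record[160] + record[22]) / 1000.0
    return None
```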

sFlow

sFlow is a packet sampling technology, which means there is no flow, and hence no flow start or flow end time.

sFlow also does not contain any timestamp, so the behavior is the following:

  • Flow Start - set to the time of arrival/processing of the sFlow packet
  • Flow End - same as Flow Start
  • Duration - 0

Raw Flow Viewer

In the Raw Flow Viewer, these additional flow timestamps need to be selected as Dimensions to be shown in the output, as shown below:


Dušan Pajin
Hybrid Cloud · New feature · Flow
2 years ago

Flow Logs Sampling Configuration

We released a new configuration knob that allows customers to change the sampling rate for AWS and Azure flow logs on their own, without contacting the Kentik team.

This allows customers to consume flow logs at a rate that fits their licensing strategy, to prioritize certain types of traffic, and to stay flexible by changing the sampling rate at any time, separately for each flow log exporter.

Licensing is enforced after sampling, so customers can use heavier sampling in some cases and save the licensed FPS for other S3 buckets containing flow logs.

There is a slight difference in available options for AWS and Azure.



AWS flow log sampling

Historically, Kentik supported a “legacy” sampling mode where, for large flow log files, we randomly picked 10,000 flow records per file in the S3 bucket and ingested only those records into the Kentik Data Engine. Since the number of flows in a file can vary, this was considered “adaptive sampling”: larger files were sampled more heavily than smaller ones. The other option was no sampling, i.e. all records were consumed from the file.

Going forward, we support three options for AWS (a minimal sketch of these modes follows the list):

  • Legacy sampling - random 10,000 flow records per file.
  • Sampling rate - the user provides a sampling rate in 1:N format (meaning 1 out of N records is picked up for ingest into the Kentik Data Engine), where N must be between 2 and 2000.
  • Unsampled - all records in a flow log file are ingested. Effectively this is the same as a 1:1 sampling rate.
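Here is a minimal sketch of how these three modes could be applied to the records of one flow log file. The function name, mode labels, and random-sampling approach are illustrative assumptions, not Kentik's implementation.

```python
import random

def sample_records(records: list[str], mode: str, n: int = 1) -> list[str]:
    """Apply a flow-log sampling mode to the records of one file (illustrative sketch).

    mode: "legacy"    - random 10,000 records per file, regardless of file size
          "rate"      - 1:n sampling, i.e. roughly 1 out of every n records
          "unsampled" - keep everything (equivalent to 1:1)
    """
    if mode == "legacy":
        if len(records) <= 10_000:
            return list(records)
        return random.sample(records, 10_000)         # larger files are sampled more heavily
    if mode == "rate":
        if n < 2 or n > 2000:
            raise ValueError("sampling rate N must be between 2 and 2000")
        return [r for r in records if random.random() < 1.0 / n]
    return list(records)                               # "unsampled"
```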

The sampling rate can be configured when a new flow log source is added, or changed for an existing exporter.

Azure flow log sampling

Before this release, flow log exporters for Azure supported only the Unsampled mode, where all flows from the flow log file were processed by the Kentik Data Engine.

Since full flow log visibility may not be required in some situations, we added a sampling knob that allows users to configure a sampling rate in 1:N format (meaning 1 out of N records is picked up for ingest into the Kentik Data Engine), where N must be between 2 and 2000.


Ievgen Vakulenko
Core · Flow
2 years ago

Flow Ingest: Support for MPLS Label 3

Kentik now supports collection of the NetFlow and IPFIX fields for the position 3 MPLS label in the label stack, which were previously not collected from received flows. The related fields are shown in the tables below:

NetFlow v9 MPLS fields:

Field Type | Value | Length (bytes) | Description
MPLS_LABEL_3 | 72 | 3 | MPLS label at position 3 in the stack. This comprises 20 bits of MPLS label, 3 EXP (experimental) bits and 1 S (end-of-stack) bit.

Resource: https://www.cisco.com/en/US/technologies/tk648/tk362/technologies_white_paper09186a00800a3db9.html

IPFIX MPLS fields:

Element ID | Name | Abstract Data Type | Description
72 | mplsLabelStackSection3 | octetArray | The Label, Exp, and S fields from the label stack entry that was pushed immediately before the label stack entry that would be reported by mplsLabelStackSection2. See the definition of mplsTopLabelStackSection for further details.

Resource: https://www.iana.org/assignments/ipfix/ipfix.xhtml

This field is collected from the NetFlow/IPFIX protocols and stored in Kentik’s MPLS Label 3 and MPLS Label 3 EXP dimensions.
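For illustration, the 3-byte value splits into its sub-fields as in the sketch below; the helper is a hypothetical example for clarity, not Kentik's ingest code.

```python
def decode_mpls_entry(raw: bytes) -> dict:
    """Split a 3-byte MPLS label stack entry into its Label, EXP, and S fields.

    The 24 bits are laid out as: 20-bit label | 3 EXP bits | 1 end-of-stack bit.
    """
    if len(raw) != 3:
        raise ValueError("expected a 3-byte label stack entry")
    value = int.from_bytes(raw, "big")
    return {
        "label": value >> 4,          # top 20 bits
        "exp": (value >> 1) & 0x7,    # 3 experimental / traffic-class bits
        "s": value & 0x1,             # 1 = bottom of stack
    }

# Example: decode_mpls_entry(b"\x00\x01\x41") -> {"label": 20, "exp": 0, "s": 1}
```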

Support is available in kproxy starting with version v7.37.0. An example Data Explorer query is shown below:


Dušan Pajin
Improvement · Service Provider · Flow
2 years ago

6PE support for BGP-based Flow enrichment

Kentik has added support for the 6PE technology and the BGP IPv6 Labeled-Unicast address family for use in flow enrichment.

IPv6 Provider Edge (6PE) is a technology that enables isolated IPv6 networks to communicate using MPLS LSPs over an IPv4 MPLS backbone network.

The diagram below demonstrates the typical scenario for the use of 6PE. CE routers, which sit at the border of the IPv6 islands, advertise their IPv6 routes to the 6PE routers of the MPLS network. These PE routers are the only dual-stack routers, supporting both IPv4 and IPv6. The 6PE routers advertise the received IPv6 routes to other 6PE routers over MP-BGP sessions using the IPv6 Labeled-Unicast address family. These routes have:

  • Original IPv6 route received from CE
  • Inner MPLS label value, which would be used in the 6PE router’s data-plane to encapsulate packets toward IPv6 island’s networks.
  • A next-hop with an IPv4-mapped IPv6 address, in the form “::FFFF:” followed by an IPv4 address.
    • The mapped IPv4 address is the address of the advertising 6PE router.
    • It determines the Outer MPLS label which will encapsulate IPv6 packets inside the MPLS network

To perform enrichment based on 6PE information, a Kentik user should establish a BGP session with the IPv6 Labeled-Unicast AF between their 6PE router and Kentik. Based on the received IPv6 LU routes, Kentik’s ingest layer can enrich the flows received from those 6PE routers. All “standard BGP” dimensions will be populated, but more specifically (a small sketch follows this list):

  • The “Next-hop IP” dimension will be populated with the next-hop IPv4-mapped IPv6 address from the received route
  • Based on the IPv4-mapped address, Kentik ingest identifies the next-hop 6PE router and populates the “Ultimate-Exit Device” dimension and, based on that, the other “Ultimate-Exit” dimensions.
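For illustration, a minimal sketch of recognizing such a next-hop and extracting the embedded IPv4 address is shown below; the helper name is hypothetical and this is not Kentik's ingest code.

```python
import ipaddress

def exit_router_v4(next_hop: str) -> str | None:
    """If a 6PE next-hop is an IPv4-mapped IPv6 address (::FFFF:a.b.c.d),
    return the embedded IPv4 address of the advertising 6PE router."""
    addr = ipaddress.ip_address(next_hop)
    if isinstance(addr, ipaddress.IPv6Address) and addr.ipv4_mapped is not None:
        return str(addr.ipv4_mapped)   # address of the advertising 6PE router
    return None

# Example: exit_router_v4("::ffff:192.0.2.1") -> "192.0.2.1"
```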

Additionally, to instruct Kentik to use Labeled-Unicast routes, a user needs to apply additional configuration to 6PE routers in Kentik. On the Settings → Devices page, select the 6PE router device and click “Edit device”; then, on the BGP tab, in the “BGP Route Selection” drop-down menu, select the option “VPN table, fallback to Labeled-Unicast table, fallback to Unicast table”.

An example of the Data Explorer output with the 6PE BGP next-hop dimensions is shown below:


Dušan Pajin
Improvement · Service Provider · Flow
2 years ago

BGP route selection modes

Kentik has added a new configuration option that determines how BGP routes are selected for the flow enrichment process. To make the whole process clear, let’s start with the basics.

BGP sessions

A BGP session between a customer’s router and Kentik can be established over:

  • IPv4
  • IPv6

Since these are “Multiprotocol BGP” sessions, multiple Address Families can be enabled on each session, for example: Unicast, Multicast, Labeled-Unicast, L3VPN, Flowspec, etc.

These Address Families are defined with AFI (Address Family Identifiers) and SAFI (Subsequent Address Family Identifiers) attributes. They are regulated by IANA and the exact values can be found on the following links:

  • IANA AFI numbers: https://www.iana.org/assignments/address-family-numbers/address-family-numbers.xhtml
  • IANA SAFI numbers: https://www.iana.org/assignments/safi-namespace/safi-namespace.xhtml

The Kentik side of the BGP peering with customer devices is enabled with the Unicast, Labeled-Unicast, and L3VPN families by default. A BGP “IPvX” session from the Kentik side will have the following AFs enabled:

  • “IPvX” unicast
  • IPv4 and IPv6 labeled-unicast
  • “IPvX” L3VPN - (IPv6 L3VPN address family is not used)

Routes received for each of these address families are stored in a separate route table, which is checked during the flow enrichment process.

NOTE: IPv6 VPN routes are received, but not used for the enrichment

The Flowspec address family will be enabled only if the customer explicitly enables it in the device configuration on the Kentik portal.

BGP attributes enrichment process

Assignment of the “Route Prefix/LEN” dimension

The assignment of the Src and Dst Route Prefix is the following:

  • The Src and Dst Route Prefix dimensions are first populated from the flow information, using the Src and Dst Mask fields from the flows, if applicable.
  • The Src and Dst Route Prefix will be overwritten later in ingest processing if there is a matching BGP route.
  • The way to tell whether the Src or Dst Route Prefix comes from the flow or from BGP is by observing other BGP route attributes:
    • If the route prefix originates from the flow information, the “Next-hop AS Number” dimension will be “0 - -Reserved AS-,ZZ” and the “AS Path” dimension will be empty.
    • If the route prefix is overwritten by BGP information, the BGP-related dimensions such as “Next-hop AS Number” and “AS Path” will be populated.

VRF metadata collection

As part of the SNMP interface discovery process, Kentik SaaS or Kentik kproxy performs VRF discovery and interface association. This information about the VRFs is collected over SNMP using the MPLS-L3VPN-STD-MIB, if the device supports it. Devices from Cisco and Juniper Networks support this MIB. We have also developed support for Nokia’s proprietary MIBs.

For each VRF, Kentik collects:

  • Name
  • Description
  • Route Distinguisher (RD)
  • Route Target (RT)
  • Interface association

BGP route matching process

The enrichment of the BGP/route-related flow dimensions is performed by matching the flow’s IP address against the BGP routes received from the customer’s device over the BGP sessions. The default behavior of the matching process is the following:

  • The flow’s source interface is checked to see whether it is assigned to a VRF.
    • If the source interface is in a VRF, the flow’s destination IP address is looked up against the BGP VPNv4 routes with the RD associated with that VRF:
      • If there is a route match, the route is assigned to the flow.
      • If there is no match, or there is no BGP VPNv4 table at all, or no L3VPN AF was established as part of the BGP peering, no match is found and the BGP route dimensions are not populated.
    • If the source interface is not in a VRF, the flow’s destination IP address is looked up against the “global” BGP table containing Unicast IPv4/IPv6 AF routes.
  • The same process is performed for the flow’s source IP address lookup, based on the destination interface’s VRF association.

BGP route selection configuration

To address some additional scenarios that we have seen in customers’ networks, Kentik added a configurable option that influences which BGP routes are used in the matching process.

This configuration is available in the Settings → Devices → Edit Device dialog, on the BGP tab.

In the dialog, there is a new drop-down menu called “BGP Route Selection” with the following three options:

  • VPN table for VRF interface, Unicast table for non-VRF interface (default option)
  • VPN table, fallback to Unicast table
  • VPN table, fallback to Labeled-Unicast table, fallback to Unicast table

The following table describes the behavior of each configuration option (a minimal selection sketch follows the table):

Dropdown menu option | VRF interface | non-VRF interface
VPN table for VRF interface, Unicast table for non-VRF interface | use only L3VPN routes | use only Unicast routes
VPN table, fallback to Unicast table | use L3VPN; no match: use Unicast | use L3VPN; use Unicast
VPN table, fallback to Labeled-Unicast table, fallback to Unicast table | use L3VPN; no match: use Labeled-Unicast; no match: use Unicast | use L3VPN; use Labeled-Unicast; no match: use Unicast
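A minimal sketch of this selection logic is shown below. The option keys, table names, and the dictionary-based lookup (standing in for a longest-prefix match) are illustrative assumptions, not Kentik's implementation; the lookup orders simply follow the table above as written.

```python
# Lookup order per "BGP Route Selection" option, keyed by whether the flow's
# interface is associated with a VRF (illustrative reading of the table above).
LOOKUP_ORDER = {
    "vpn_or_unicast": {          # VPN table for VRF interface, Unicast table for non-VRF interface
        True:  ["l3vpn"],
        False: ["unicast"],
    },
    "vpn_fallback_unicast": {    # VPN table, fallback to Unicast table
        True:  ["l3vpn", "unicast"],
        False: ["l3vpn", "unicast"],
    },
    "vpn_fallback_lu_unicast": { # VPN table, fallback to Labeled-Unicast, fallback to Unicast
        True:  ["l3vpn", "labeled_unicast", "unicast"],
        False: ["l3vpn", "labeled_unicast", "unicast"],
    },
}

def match_route(tables: dict[str, dict], option: str, in_vrf: bool, ip: str) -> dict | None:
    """Return the first matching route for ip, trying route tables in the configured order.

    `tables` maps a table name to a simple ip -> route mapping, standing in for a
    real longest-prefix-match structure.
    """
    for table_name in LOOKUP_ORDER[option][in_vrf]:
        route = tables.get(table_name, {}).get(ip)
        if route is not None:
            return route             # first table with a match wins
    return None                      # no match: BGP dimensions stay unpopulated
```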


Dušan Pajin
Improvement · Core · Flow
2 years ago

Flow Ingest: Support for VLAN Fields in NetFlow/IPFIX

Kentik now supports collection of the NetFlow and IPFIX fields for source/destination VLAN, which were previously not collected from received flows.


The related VLAN fields are shown in tables below:

NetFlow v9 VLAN fields

Field Type | Value | Length (bytes) | Description
SRC_VLAN | 58 | 2 | Virtual LAN identifier associated with ingress interface
DST_VLAN | 59 | 2 | Virtual LAN identifier associated with egress interface

Resource: https://www.cisco.com/en/US/technologies/tk648/tk362/technologies_white_paper09186a00800a3db9.html

IPFIX VLAN fields

Element ID | Name | Abstract Data Type | Description | Reference
58 | vlanId | unsigned16 | Virtual LAN identifier associated with ingress interface. | [RFC5102]
59 | postVlanId | unsigned16 | Virtual LAN identifier associated with egress interface. | [RFC5102]

Resource: https://www.iana.org/assignments/ipfix/ipfix.xhtml

These two fields are collected from the NetFlow/IPFIX protocols and stored in Kentik’s Source VLAN and Destination VLAN dimensions.

Support is available in kproxy starting with version v7.36.0. An example Data Explorer query is shown below:


Dušan Pajin