
Product Updates

Latest features, improvements, and product updates on Kentik's Network Observability platform.

Improvement · Core
2 years ago

Streaming Telemetry: device configuration and status

Streaming Telemetry (ST) is now officially supported in the Kentik Portal. Users can enable Streaming Telemetry from the Device configuration dialog, shown in the screenshot below.

This enables Kentik SaaS ingest or Kentik kproxy to start receiving Streaming Telemetry from that device.

Details about configuring kproxy for Streaming Telemetry can be found in this KB article.

The Streaming Telemetry status is shown on the Settings → Devices page with an additional “ST” flag. This flag is shown only for devices that have ST enabled. The Streaming Telemetry status is also shown when the device is selected, and in the Network Explorer Device Quick View page under the “More Info” tab. Example screenshots are shown below.

Device Inventory page showing the Streaming Telemetry status with enabled filtering

Network Explorer Device Quick-view page showing Streaming Telemetry status

At the moment, Kentik supports the following ST formats:

  • Junos native JTI telemetry sent over UDP. Currently supported sensors are for interface statistics:
    • /junos/system/linecard/interface/
    • /junos/system/linecard/interface/logical/usage/
  • Cisco IOS XR native dial-out telemetry in self-describing GPB format, sent over TCP. The currently supported sensor path is for interface statistics:
    • Cisco-IOS-XR-infra-statsd-oper:infra-statistics/interfaces/interface/latest/generic-counters

To send ST directly to Kentik SaaS ingest, the device should be configured to send it to port 20023. If ST is sent to kproxy, any port can be used; the port is configurable as part of the kproxy configuration (port 9555 is used as an example in Kentik’s documentation).

More information about ST can be found in our Knowledge Base:

  • SNMP and ST
  • Kproxy configuration for ST
  • Example device configuration snippets for Juniper MX and Cisco XR

Please let us know if you are interested in enabling Streaming Telemetry from your devices, or if you would like support for additional sensors or other telemetry formats.

Dušan Pajin
Hybrid Cloud · New feature · Flow
2 years ago

Flow Logs Sampling Configuration

We released a new configuration knob that allows customers to change the flow log sampling rate for AWS and Azure on their own, without contacting the Kentik team.

This allows customers to consume flow logs at a rate that fits their licensing strategy, assign priority to certain types of traffic, and stay flexible by changing the sampling rate at any time, separately for each flow log exporter.

Licensing is enforced after sampling, so customers can sample more heavily in some cases and save the licensed FPS for other S3 buckets containing flow logs.

There is a slight difference in available options for AWS and Azure.



AWS flow log sampling

Historically, Kentik supported a “legacy” sampling mode: for large flow log files we randomly picked 10,000 flow records per file in the S3 bucket and ingested only those records into the Kentik Data Engine. Since the number of flows in a file can vary, this was considered “adaptive sampling”, where larger files were sampled more heavily than smaller ones. The other option was no sampling, i.e. all records were consumed from the file.

Moving forward, we now support three options for AWS:

  • Legacy sampling - a random 10,000 flow records per file.
  • Sampling rate - the user provides a sampling rate in 1:N format (meaning 1 out of N records is picked up for ingest into the Kentik Data Engine), where N can be between 2 and 2000.
  • Unsampled - all records in a flow log file are ingested. This is effectively the same as a 1:1 sampling rate.

The sampling rate can be configured when a new flow log is added, or changed for an existing exporter.
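To illustrate what the 1:N option means in practice, here is a minimal conceptual sketch in Python. It is not Kentik's ingest code, and the function name is made up for the example:

```python
import random

# Conceptual sketch of 1:N flow log sampling (not Kentik's ingest pipeline):
# on average, 1 out of every N records is kept for ingest into the Kentik
# Data Engine. N = 1 corresponds to the "Unsampled" option; the new knob
# accepts N between 2 and 2000.
def sample_flow_records(records, n):
    if not 1 <= n <= 2000:
        raise ValueError("N must be between 1 (unsampled) and 2000")
    for record in records:
        if random.randrange(n) == 0:  # keep with probability 1/N
            yield record

# Example: keep roughly 1 out of every 100 records.
kept = list(sample_flow_records(range(10_000), 100))
print(len(kept))  # ~100 on average
```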

Azure flow log sampling

Before this release, flow log exporters for Azure supported only Unsampled mode, where all flows from the flow log file were processed by the Kentik Data Engine.

Since full flow log visibility may not be required in some situations, we added a sampling knob that allows users to configure a sampling rate in 1:N format (meaning 1 out of N records is picked up for ingest into the Kentik Data Engine), where N can be between 2 and 2000.


Ievgen Vakulenko
Hybrid Cloud · Core · New feature
2 years ago

Data Explorer: Filtering Based on Tags for Azure

Customers who use Azure tags within their cloud infrastructure can import those tags into Kentik and use them as Custom Dimensions within Kentik Data Explorer.

This import is configured like any other Custom Dimension, under Settings - Custom Dimensions - Add Custom Dimension.

After that, the user can pick which Azure entities the Custom Dimensions will be generated from:

After Kentik populates the chosen custom dimensions (which can take up to 30 minutes), they become fully available in Data Explorer, under the “Custom” section.

Note: other custom dimensions are shown in the picture as well. The dimension generated from an Azure tag is highlighted.

We also want to take this opportunity to remind you that AWS tags are supported as Custom Dimensions as well 🙂.

Happy flow hunting!

Ievgen Vakulenko
Hybrid Cloud · Core · New feature · Kentik Map
2 years ago

Azure NSG denied traffic visibility

It is now possible to check for traffic flows that were denied by NSG rules configured at the Subnet or VNET level.

There are two ways to see that traffic:

  • It is available on the Kentik Map as a sidebar “Details” widget (similar to the existing AWS functionality).
  • You can search for it in Data Explorer using the source and destination Firewall Action dimensions, and changing the metric to flows/s.

This feature will be a significant aid in troubleshooting NSG firewall issues and will decrease mean time to resolution.



Ievgen Vakulenko
Hybrid Cloud · Core · New feature
2 years ago

Data Explorer: Filtering based on tags for AWS

Kentik Data Explorer has supported filtering based on instance tags for a while now, and we are expanding tag support to other cloud objects as well, such as VPCs, Subnets, ENIs, VPC endpoints, and Transit Gateway attachments.

No matter which object you assigned a tag to, you can use Data Explorer to filter traffic flows related to those tags.

To start using AWS tags, you need to create a custom dimension through Settings - Custom Dimensions - Add Custom Dimension.

After that, you can automatically populate your custom dimensions with AWS tags and pick which fields you want to use for filtering in Data Explorer.

After Kentik populates the chosen custom dimensions (which can take up to 30 minutes), they become fully available in Data Explorer, under the “Custom” section.

This allows you to filter traffic flows using your organization’s tagging and naming conventions, for instance to see the traffic of a particular team, business unit, or division.

Happy flow hunting!

Ievgen Vakulenko
Improvement · Synthetics
2 years ago

DNSSEC validation in DNS Monitor tests

DNS was designed in the 1980s when the Internet was much smaller, and security was not a primary consideration in its design. As a result, when a recursive resolver sends a query to an authoritative name server, the resolver has no way to verify the authenticity of the response. DNSSEC was designed to address this issue.

DNSSEC adds two important features to the DNS protocol:

  • Data origin authentication allows a resolver to cryptographically verify that the data it received actually came from the zone where it believes the data originated.
  • Data integrity protection allows the resolver to know that the data hasn't been modified in transit since it was originally signed by the zone owner with the zone's private key.

Until today, the DNS Server Monitor test only allowed a user to monitor DNS resolution for a given hostname from specified name servers. Users can be alerted if the resolution time crosses a particular threshold, if an unexpected DNS response code is received, or if a non-allowed IP is returned in the answer.
However, these tests previously did not validate the DNSSEC trust chain of the received record.
Enter DNSSEC Validation.

How can you configure DNSSEC validation?

When enabled for a given domain, the test will recursively check the validity of each signing entity in the chain from the authoritative name server up to the root server. The result will be either a positive or a negative response. The DNSSEC record is either fully verified or it isn’t.

When the option is active, the test results will show the DNSSEC validation status for each one of the Agents involved in the test.

DNSSEC validity is checked by querying DS and DNSKEY records for each of the successive parts of the domain name: for a DNS test target of subdomain.domain.tld, each of tld., domain.tld., subdomain.domain.tld. and . (root) will be tested.
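Conceptually, the chain walk resembles the sketch below, which uses the open-source dnspython library to enumerate the zone cuts and fetch the DNSKEY and DS records at each one. It only illustrates the idea (fetching the records, not verifying the signatures) and is not the implementation running on Kentik agents:

```python
# Illustrative sketch only: list the zone cuts for a DNS test target and
# fetch the DNSKEY/DS records at each one. Full DNSSEC validation also
# verifies the RRSIG signatures against these keys and digests.
import dns.name
import dns.resolver

def walk_dnssec_chain(target: str):
    name = dns.name.from_text(target)   # e.g. subdomain.domain.tld.
    labels = name.labels                 # (b'subdomain', b'domain', b'tld', b'')
    # Zone cuts from the root down: ".", "tld.", "domain.tld.", "subdomain.domain.tld."
    zones = [dns.name.Name(labels[i:]) for i in range(len(labels) - 1, -1, -1)]
    for zone in zones:
        for rtype in ("DNSKEY", "DS"):
            try:
                answer = dns.resolver.resolve(zone, rtype)
                print(f"{zone.to_text()} {rtype}: {len(answer.rrset)} record(s)")
            except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
                print(f"{zone.to_text()} {rtype}: none")

walk_dnssec_chain("subdomain.domain.tld")
```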

Traces of the DNSSEC validation for each agent are available by clicking its respective status icon in the previous screengrab.

DNSSEC validation impact on subtest health

Health options remain the same as for the DNS Server Monitor test. DNSSEC validation has a boolean result: if validation is successful, the result is healthy; if not, it is critical.

If enough agents have a critical result (see screenshot above) to meet the sub-test threshold condition, an alert will be triggered.

IMPORTANT NOTE: App Agents vs Network Agents

Be advised that DNSSEC validation is available on all agents except Private Network Agents. As a reminder, our new Private App Agents not only include all of the capabilities of the legacy Network Agents, but also include the capabilities required for advanced Web tests such as Page Load Tests and Transaction Tests.

If you currently run our legacy Network Agents, please consider replacing them with our new App Agents to gain access to all of the features we will add in the future. Kentik's entire fleet of Network Agents has already been migrated, and support for Network Agents will be phased out in 2023 (more to come on this soon).

Greg Villain
Improvement · Synthetics
2 years ago

DSCP QoS values for Network tests

As of now, users can set their own DSCP values in any Synthetic Monitoring Network test.


What's new?

While Network tests initially defaulted to the DiffServ codepoint "0: Best Effort", the DSCP can now be set to any value, allowing our users to test the behavior of their own network more realistically, across all the classes of service they implement.

To see it in action, edit or create any IP/Hostname Network test: in the Advanced Options, the Ping and Traceroute portions of the test can each be set to different DSCP values.
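As a rough illustration of what a DSCP setting means at the packet level, the sketch below marks an outgoing UDP probe with an example DSCP value on a Linux host. It is not the agent's implementation; the address, port, and DSCP value are placeholders:

```python
import socket

# Illustrative sketch (not the Kentik agent code): the 6-bit DSCP value
# occupies the upper bits of the 8-bit IP TOS / Traffic Class byte, so it
# is shifted left by 2 (the low 2 bits are used for ECN).
DSCP_EF = 46  # Expedited Forwarding, one example class of service

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, DSCP_EF << 2)
# 192.0.2.1 is a documentation address; 33434 is the usual traceroute base port.
sock.sendto(b"probe", ("192.0.2.1", 33434))
```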

IMPORTANT NOTE: App Agents vs Network Agents

Be advised that setting DSCP values for the aforementioned tests is available to all agents except Private Network Agents. As a reminder, our new Private App Agents not only include all of the capabilities of the legacy Network Agents, but also include the capabilities required for advanced Web tests such as Page Load Tests and Transaction Tests.

If you currently run our legacy Network Agents, please consider replacing them with our new App Agents to gain access to all of the features we will add in the future. Kentik's entire fleet of Network Agents has already been migrated, and support for Network Agents will be phased out in 2023 (more to come on this soon).


Greg Villain
Improvement · Core · SNMP
2 years ago

SNMP config strings now hidden in the device screen

This may be a minor but widely requested improvement to the device screen: all SNMP community strings and SNMPv3 passphrases are now obfuscated in the UI.

This complies with widespread company policies to never display passwords in a web UI.

See the screenshot below.


Greg Villain
Core · Flow
2 years ago

Flow Ingest: Support for MPLS Label 3

Kentik now supports collection of the NetFlow and IPFIX fields for the MPLS label at position 3 in the label stack, which we previously did not collect from received flows. The related fields are shown in the tables below:

NetFlow v9 MPLS field:

  • Field Type: MPLS_LABEL_3
  • Value: 72
  • Length (bytes): 3
  • Description: MPLS label at position 3 in the stack. This comprises 20 bits of MPLS label, 3 EXP (experimental) bits and 1 S (end-of-stack) bit.

Resource: https://www.cisco.com/en/US/technologies/tk648/tk362/technologies_white_paper09186a00800a3db9.html

IPFIX MPLS field:

  • ElementID: 72
  • Name: mplsLabelStackSection3
  • Abstract Data Type: octetArray
  • Description: The Label, Exp, and S fields from the label stack entry that was pushed immediately before the label stack entry that would be reported by mplsLabelStackSection2. See the definition of mplsTopLabelStackSection for further details.

Resource: https://www.iana.org/assignments/ipfix/ipfix.xhtml

This field is collected from the NetFlow/IPFIX protocols and stored in Kentik’s MPLS Label 3 and MPLS Label 3 EXP dimensions.
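As an illustration of the encoding described above, the following sketch decodes the 3-byte label stack entry into its label, EXP, and S components; it is an example only, not Kentik’s ingest code:

```python
# Decode a 3-byte MPLS label stack entry (NetFlow field 72 / IPFIX
# mplsLabelStackSection3): 20 bits of label, 3 EXP bits, 1 S (end-of-stack) bit.
def decode_mpls_stack_entry(raw: bytes):
    if len(raw) != 3:
        raise ValueError("expected exactly 3 bytes")
    value = int.from_bytes(raw, "big")
    label = value >> 4              # top 20 bits
    exp = (value >> 1) & 0b111      # next 3 bits
    bottom_of_stack = value & 0b1   # last bit
    return label, exp, bottom_of_stack

# Example: label 24005, EXP 0, S = 1 encodes as 0x05DC51.
print(decode_mpls_stack_entry(bytes.fromhex("05dc51")))  # (24005, 0, 1)
```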

Support is available in kproxy starting from version v7.37.0. An example Data Explorer query is shown below:


Dušan Pajin
Improvement · Core
2 years ago

Data Explorer: new "Compare over previous period" feature

When digging around in enriched network telemetry data, you'll find yourself noticing a bump or a trough in one of the displayed time series and thinking to yourself:

"Hmmm... is that peak an anomaly, or did it behave the same last day|week|... at the same time?"

Ask yourself no more, "Compare over previous period" is here!


In a nutshell, this new Data Explorer feature lets you look at the same time range you are currently viewing, but shifted into the past, and helps you visualize how each of your query result's time series differs between the two periods.

This feature comes with a streamlined redesign of the Time Range query panel in Data Explorer.

Hitting the Compare over previous period switch reveals options to compare the current time range with the same range at a configurable time in the past, such as "this hour with the same hour yesterday, or a week ago..."

 

Upon selecting this option, your data table will now include two new tabs:

  • A Previous Period tab showing the TopN time series for the same time range in the past
  • A Comparison Summary tab, outlining in a sortable fashion useful insights such as:
    • the current rank vs. previous rank for each time series,
    • the rank variation for each time series,
    • and the percentage of change (sortable) in the selected metrics between the Previous and Current periods (a small sketch of this calculation follows the list)
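For reference, the percentage of change is presumably the standard percent difference between the two periods' metric values; a minimal sketch under that assumption, with made-up numbers:

```python
# Sketch only: assumes "% change" is the standard percent difference between
# the Previous and Current period values of the selected metric.
def percent_change(previous: float, current: float) -> float:
    return (current - previous) / previous * 100.0

# Made-up example: a series averaging 80 Mbps in the previous period and
# 100 Mbps in the current period is up 25%.
print(f"{percent_change(80.0, 100.0):+.1f}%")  # +25.0%
```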

From any of these 3 tabs, you will then be able to select any time series and look at a combined line chart displaying both the current and previous period for the selected time series.

Switching between Current Period and Previous Period will display the full set of time series for either period, always leveraging the same convention:

  • Plain line: Current Period
  • Dashed line: Previous Period

Now go ahead, play around with the feature, and let us know what you think of it.
As always, you'll find a friendly ear in Kentik Product Management for suggestions to improve this feature (and any other, for that matter), so do let us know!

Greg Villain