
Product Updates

Latest features, improvements, and product updates on Kentik's Network Observability platform.

Improvement · Synthetics
2 years ago

DNSSEC validation in DNS Monitor tests

DNS was designed in the 1980s when the Internet was much smaller, and security was not a primary consideration in its design. As a result, when a recursive resolver sends a query to an authoritative name server, the resolver has no way to verify the authenticity of the response. DNSSEC was designed to address this issue.

DNSSEC adds two important features to the DNS protocol:

  • Data origin authentication allows a resolver to cryptographically verify that the data it received actually came from the zone where it believes the data originated.
  • Data integrity protection allows the resolver to know that the data hasn't been modified in transit since it was originally signed by the zone owner with the zone's private key.

Until now, the DNS Server Monitor test only allowed users to monitor DNS resolution for a given hostname from specified name servers. Users can be alerted if the resolution time crosses a threshold, if an unexpected DNS response code is received, or if a non-allowed IP is returned in the answer.
However, these tests previously did not validate the DNSSEC trust chain of the received record.
Enter DNSSEC Validation.

How can you configure DNSSEC validation?

When enabled for a given domain, the test will recursively check the validity of each signing entity in the chain from the authoritative name server up to the root server. The result will be either a positive or a negative response. The DNSSEC record is either fully verified or it isn’t.

When the option is active, the test results will show the DNSSEC validation status for each one of the Agents involved in the test.

DNSSEC validity is established by querying the DS and DNSKEY records for each successive part of the domain name: for a DNS test target of subdomain.domain.tld, each of . (root), tld., domain.tld., and subdomain.domain.tld. will be tested.
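As a sketch of which zone names get checked, the hypothetical helper below (illustrative, not Kentik code) enumerates the chain from the root down to the test target:

```python
def dnssec_chain(target: str) -> list[str]:
    """Return the zone names whose DS/DNSKEY records must be checked,
    from the root zone down to the full DNS test target."""
    labels = target.rstrip(".").split(".")
    # Start at the root zone, then append each successively longer suffix
    names = ["."]
    for i in range(len(labels) - 1, -1, -1):
        names.append(".".join(labels[i:]) + ".")
    return names
```

For `subdomain.domain.tld` this yields the four names listed above, in validation order from the root down.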

Traces of the DNSSEC validation for each agent are available by clicking that agent's status icon in the test results.

DNSSEC validation impact on subtest health

Health options remain the same as in the DNS Server Monitor test. DNSSEC validation has a boolean result: if validation succeeds, the result is healthy; if not, it is critical.

If enough agents report a critical result to meet the sub-test threshold condition, an alert will be triggered.

IMPORTANT NOTE: App Agents vs Network Agents

Be advised that DNSSEC validation is available on all agents except Private Network Agents. As a reminder, our new Private App Agents not only include all of the capabilities of the legacy Network Agents, but also the capabilities required for advanced Web tests such as Page Load Tests and Transaction Tests.

If you currently run our legacy Network Agents, please consider replacing them with our new App Agents to gain access to all of the features we will add in the future. Kentik's entire fleet of Network Agents has already been migrated, and support for the Network Agents will be phased out in 2023 (more to come on this soon).

Greg Villain
Improvement · Synthetics
2 years ago

DSCP QoS values for Network tests

As of now, users can set their own DSCP values in any Synthetic Monitoring Network test.


What's new?

While Network Tests initially defaulted to DiffServ code point "0: Best Effort", they can now be set to any value, allowing our users to more realistically test the behavior of their own network across all the classes of service they implement.

To see it in action, edit or create any IP Hostname Network test: in the Advanced Options, both Ping and Traceroute aspects of it can be set to different DSCPs.
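For readers who want to reproduce this marking behavior outside Kentik, here is a minimal sketch (illustrative, not Kentik's probe implementation; `dscp_to_tos` and `open_probe_socket` are hypothetical names). The key detail is that the DSCP value occupies the upper 6 bits of the IP TOS / Traffic Class byte:

```python
import socket

def dscp_to_tos(dscp: int) -> int:
    # DSCP is a 6-bit value; it sits in the upper 6 bits of the TOS byte
    if not 0 <= dscp <= 63:
        raise ValueError("DSCP must be in 0-63")
    return dscp << 2

def open_probe_socket(dscp: int) -> socket.socket:
    # A UDP probe socket marked with the requested DSCP value
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, dscp_to_tos(dscp))
    return s
```

For example, DSCP 46 (Expedited Forwarding) maps to a TOS byte of 184.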

IMPORTANT NOTE: App Agents vs Network Agents

Be advised that setting DSCP values for the aforementioned tests is available on all agents except Private Network Agents. As a reminder, our new Private App Agents not only include all of the capabilities of the legacy Network Agents, but also the capabilities required for advanced Web tests such as Page Load Tests and Transaction Tests.

If you currently run our legacy Network Agents, please consider replacing them with our new App Agents to gain access to all of the features we will add in the future. Kentik's entire fleet of Network Agents has already been migrated, and support for the Network Agents will be phased out in 2023 (more to come on this soon).


Greg Villain
Improvement · Core · SNMP
2 years ago

SNMP config strings now hidden in the device screen

This may be a minor improvement, but it was a widely requested one for the device screen: all SNMP community strings and SNMPv3 passphrases are now obfuscated in the UI.

This complies with widespread company policies to never display passwords in a web UI.

See screenshot below


Greg Villain
Core · Flow
2 years ago

Flow Ingest: Support for MPLS Label 3

Kentik now supports collection of the NetFlow and IPFIX fields for the MPLS label at position 3 in the label stack, which were previously not collected from received flows. The related fields are shown in the tables below:

NetFlow v9 MPLS field:

Field Type | Value | Length (bytes) | Description
MPLS_LABEL_3 | 72 | 3 | MPLS label at position 3 in the stack. This comprises 20 bits of MPLS label, 3 EXP (experimental) bits and 1 S (end-of-stack) bit.

Resource: https://www.cisco.com/en/US/technologies/tk648/tk362/technologies_white_paper09186a00800a3db9.html

IPFIX MPLS field:

ElementID | Name | Abstract Data Type | Description
72 | mplsLabelStackSection3 | octetArray | The Label, Exp, and S fields from the label stack entry that was pushed immediately before the label stack entry that would be reported by mplsLabelStackSection2. See the definition of mplsTopLabelStackSection for further details.

Resource: https://www.iana.org/assignments/ipfix/ipfix.xhtml

This field is collected from the NetFlow/IPFIX protocols and stored in Kentik's MPLS Label 3 and MPLS Label 3 EXP dimensions.

Support is available in kproxy starting from version v7.37.0. An example Data Explorer query is shown below:
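As an illustration of the field layout described above (20 bits of label, 3 EXP bits, 1 S bit packed into 3 bytes), here is a hypothetical decoder sketch; `decode_mpls_field` is an illustrative name, not Kentik's collector code:

```python
def decode_mpls_field(raw: bytes) -> dict:
    """Decode a 3-byte NetFlow MPLS label field (e.g. MPLS_LABEL_3):
    20-bit label, 3 EXP (experimental) bits, 1 S (end-of-stack) bit."""
    if len(raw) != 3:
        raise ValueError("expected exactly 3 bytes")
    value = int.from_bytes(raw, "big")
    return {
        "label": value >> 4,        # top 20 bits
        "exp": (value >> 1) & 0x7,  # next 3 bits
        "s": value & 0x1,           # bottom bit: end of stack
    }
```

For instance, the bytes `00 01 01` decode to label 16, EXP 0, with the end-of-stack bit set.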


Dušan Pajin
Improvement · Core
2 years ago

Data Explorer: new "Compare over previous period" feature

When digging around in enriched Network Telemetry data, you'll find yourself noticing a bump, or a trough in any of the displayed time series and think to yourself: 

"Hmmm... is that peak an anomaly, or did it behave the same last day|week|... at the same time?"

Ask yourself no more, "Compare over previous period" is here!


In a nutshell, this new Data Explorer feature lets you look at the same time range you are currently viewing, but in the past, and helps you visualize variations between each of your query result's time series and its counterpart in that earlier range.

This feature comes with a streamlined redesign of the Time Range query panel in Data Explorer.

Hitting the Compare over previous period switch reveals options to compare the current time range to the same one at a configurable offset in the past, such as "this hour vs. the same hour yesterday, or a week ago."

 

Upon selecting this option, your data-table will now include two new tabs:

  • A Previous Period tab showing you the TopN time-series for the same time range in the past
  • A Comparison Summary tab, outlining in a sortable fashion useful insights such as:
    • current rank vs. previous rank for each time series,
    • rank variation for the time series,
    • and the percentage of change (sortable) in the selected metrics between the Previous and Current periods
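The kind of summary shown in that tab can be sketched as follows; this is an illustrative computation, not Kentik's implementation, and `comparison_summary` is a hypothetical helper name:

```python
def comparison_summary(current: dict, previous: dict) -> dict:
    """For each series key, return (current rank, previous rank, % change).
    Ranks are 1-based, with the highest-value series ranked first."""
    def ranks(d: dict) -> dict:
        ordered = sorted(d, key=d.get, reverse=True)
        return {k: i + 1 for i, k in enumerate(ordered)}

    cur_rank, prev_rank = ranks(current), ranks(previous)
    out = {}
    for key, value in current.items():
        prev = previous.get(key)
        # Percentage change is undefined when there is no previous value
        pct = None if not prev else round(100.0 * (value - prev) / prev, 1)
        out[key] = (cur_rank[key], prev_rank.get(key), pct)
    return out
```

For example, a series that grew from 50 to 100 Mbps while another shrank from 80 to 50 would swap ranks and show +100.0% and -37.5% change respectively.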

From any of these 3 tabs, you will then be able to select any time series and look at a combined line chart displaying both the current and previous period for the selected time series.

Switching between Current Period and Previous Period will display the full set of time series for either period, always using the same convention:

  • Plain line: Current Period
  • Dashed line: Previous Period

Now go ahead, play around with the feature, and let us know what you think of it.
As always, you'll find a friendly ear in Kentik Product Management for suggested improvements to this feature (and any other, for that matter), so do let us know!

Greg Villain
Improvement · Core
2 years ago

Data Explorer: Advanced Time Series query options

Kentik's Data Explorer has been enriched with additional query options that influence how time series are calculated and visualized. These options are located in the Advanced section of the query panel, as shown in the screenshot below.


Query execution against the Kentik Data Engine consists of two steps:

  1. Initial flow aggregation into samples - Flow aggregation is performed on a configurable time window. For all flows matching the query filters, bytes or packets are summed, grouped (broken down) by all selected flow dimensions. The resulting flow aggregation samples always represent an average bit rate (or packet rate) across all summed flows, because the sum of bytes/packets is divided by the Flow Aggregation Window.
  2. Construction of the time series - From the flow aggregation samples returned by the initial aggregation, time series are constructed for the selected Top N series by volume. Time series buckets are built by aggregating multiple flow aggregation samples with the selected aggregation function.

Additional Data Explorer Time Series query options consist of the following 3 options:

  • Flow Aggregation Window: Configures the time window used for the initial flow aggregation into samples. Available options are 1, 5, 10, and 20 minutes for the Full dataseries and 1, 6, and 12 hours for the Fast dataseries. If Auto is selected, the behavior matches the usual Data Explorer behavior so far. Further limitations apply:
    • 1 minute available for up to 3 day wide queries
    • 5 minutes available for up to 14 day wide queries
    • 10 minutes available for up to 30 day wide queries
    • 20 minutes available for up to 60 day wide queries
    • no limitation for the use of Fast dataseries.
  • Time Series Bucket Width: Time Window used for the construction of the Time Series from Flow Aggregation Samples. The available options are 10 min, 20 min, 30 min, 1h, 6h, 12h, 1 day, 1 week.
    • This configuration option only takes effect if the Time Series Bucket Aggregation is not "None".
    • The value always has to be larger than the selected Flow Aggregation Window.
    • The option “Auto” always maps to 1h buckets.
  • Time Series Bucket Aggregation: This option defines the function applied when aggregating multiple Flow Aggregation Samples into Time Series Buckets. The available options are:
    • None - does not perform any aggregation to the Time Series Buckets, so only Flow Aggregation Samples are shown
    • Average - Calculates the average value of the Flow Aggregation samples in each Time Series Bucket
    • Count - Counts number of Flow Aggregation sample occurrences within the Time Series Bucket
    • Sum - Calculates the sum of all the Flow Aggregation samples in each Time Series Bucket
    • Maximum - Selects the Maximum value of the Flow Aggregation samples in each Time Series Bucket
    • Minimum - Selects the Minimum value of the Flow Aggregation samples in each Time Series Bucket
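The two-stage pipeline described above can be sketched in a few lines of Python; this is a simplified illustration of the mechanics, not the Kentik Data Engine itself, and `flow_aggregate`/`bucket_aggregate` are hypothetical names:

```python
from statistics import mean

def flow_aggregate(flows, window_s):
    """Stage 1: sum bytes per window, then divide by the window length
    so each sample is an average bit rate for that window."""
    sums = {}
    for ts, byte_count in flows:  # flows: iterable of (unix_ts, bytes)
        bucket = ts - ts % window_s
        sums[bucket] = sums.get(bucket, 0) + byte_count
    return {b: total * 8 / window_s for b, total in sorted(sums.items())}

def bucket_aggregate(samples, bucket_s, fn=mean):
    """Stage 2: roll flow-aggregation samples up into wider time-series
    buckets using the chosen aggregation function (mean, max, min, sum...)."""
    groups = {}
    for ts, rate in samples.items():
        groups.setdefault(ts - ts % bucket_s, []).append(rate)
    return {b: fn(rates) for b, rates in sorted(groups.items())}
```

For example, flows totaling 1500 bytes in the first 60-second window yield a 200 bps sample, and a second stage with `max` keeps the busiest sample per bucket.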
Dušan Pajin
Improvement · Core · Service Provider
2 years ago

Connectivity Cost: Major Holiday Update!

Connectivity Costs is one of the workflows in the Edge module that seems to be the most successful with our users. It allows users to centralize the financial modeling of their Transit, Peering and IX contracts in one place, and track costs for each of these throughout the month, and every past month.
For a refresher on this workflow, take a minute to read this blog post from 2021 on Why and How to Track Connectivity Costs.

Today, we add quite a few highly anticipated features to it, read on!


Landing page calendar drawer now collapsible

The drawer with the invoice and contract expiry schedule is now collapsible, which was a common request from users whose contract dates are always the same: it can now be hidden, and the preference will stick for each user.

Dynamic Interface Selection

In the previous iteration of Connectivity Costs, interfaces had to be manually selected and added to the relevant Cost Group in order to bind them to a contract and financial model. For networks with a vast footprint, these external-facing interfaces keep coming in and out of the picture, and users asked to manage this part of the configuration more efficiently. This is now done: from the already popular Capacity Planning workflow, we have ported the Dynamic and Static Interface Selector module into Connectivity Costs.

In addition, we've improved it significantly to make things even easier to maintain. Among other things, users can now assign interfaces to a cost group based on:

  • Interface Name and Description regular expressions
  • Device Names, including regular expressions (comes in handy if a portion of your device names is Site or Function related - think edge-01_ams01 for an Edge Router in Amsterdam)
  • Device Labels, allowing you to exclusively focus on specifically tagged devices
  • Last but not least: users can now choose between Logical Only interfaces, Physical Only interfaces, or both, depending for instance on whether the physical or the logical part of an aggregate logical interface is the one your SNMP accounting will come from

Interface or Group level additional costs expressed in any currency

Users can add Cost Group level (monthly or yearly) costs to their cost groups, but can add Interface Level costs as well. Until now, the currency of the parent Cost Group would be used to evaluate these additional costs into the daily computations.

With this release, users can now select the currency for both. Here's an example of where it comes in handy: optional interface charges are often used to document cross-connects. Our customers also tend to use local providers in whichever area they are sourcing bandwidth from. This often results in bandwidth, as documented in the Cost Group financial model, being expressed in the local currency, and data-center cross-connects (usually from a global data-center contract) expressed in another currency. Problem solved!

While we were at it, we made the interface-level UI for adding/editing costs more usable. See for yourself: in the example below, the cost group is in dollars and the interface charges for the cross-connect in euros.

New Computation Methods

Based on feedback from our users, we also added new flavors of computation methods governing how our engine computes monthly invoices. To the existing Peak of Sums and Sum of Peaks we added two new models:

  • Peak of Sums (Multiple interface sets)
  • Peak of Sums (Max of In and Out)
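As a sketch of the difference between the two base methods, here is an illustrative computation (not Kentik's billing engine; real invoicing commonly works on 95th-percentile values of 5-minute samples, while plain `max` is used here for brevity):

```python
def peak_of_sums(series):
    """Sum all interfaces at each sample time, then take the peak
    of that combined total (one shared commit across interfaces)."""
    return max(sum(samples) for samples in zip(*series))

def sum_of_peaks(series):
    """Take each interface's individual peak, then sum those peaks
    (each interface billed on its own busiest moment)."""
    return sum(max(s) for s in series)
```

With two interfaces whose peaks occur at different times, Sum of Peaks is always greater than or equal to Peak of Sums, which is why the choice of method matters for the invoice.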

And to make sure our users select the right one when configuring a cost group, we also added better in-UI explanations of how each computation method works behind the scenes:

What comes next?

In future quarters, we are going to extend the workflow to deal with non-edge costs, and also add functionality letting users who have documented their costs in the workflow price any slice of their traffic.

Stay tuned, and in the meantime do let us know what you'd like to see next in this workflow!

Greg Villain
API
2 years ago

Kentik APIv6 - endpoints reached stable release

Several of Kentik's APIv6 endpoints have reached Stable Release status:

  • BGP Monitoring API - Provides management of, and access to the data of, Kentik's BGP Monitoring functionality.

    • Admin service: provides management of the BGP Monitor objects of Kentik's Synthetics product
    • Data service: provides access to BGP-related data collected from the various Vantage Points used by Kentik. Users can retrieve data about global BGP prefixes, such as reachability, AS path changes, and routes for particular timeframes.
  • Cloud Export Configuration API - Provides management of Kentik's Cloud Export objects. These objects hold the configuration Kentik needs to retrieve and export flow logs and metadata from a user's public cloud infrastructure. The API endpoint provides configuration capabilities for these objects and basic status information about the active export processes. It currently supports the following public cloud providers:

    • Amazon Web Services (AWS)
    • Microsoft Azure
    • Google Cloud (GCP)
    • IBM Cloud
  • Label Management API - Provides CRUD operations on Label objects, which are used to tag certain objects in Kentik (for example devices, synthetic tests, and ksynth agents) in order to create logical groups. This API endpoint is used to manage Labels; however, applying a Label to a given object is done with the API corresponding to that object's type.

    Object type | API for attaching labels
    Device | Device Apply Labels
    Synthetic monitoring test | SyntheticsAdminService API
    Synthetic monitoring agent | SyntheticsAdminService API
    BGP monitor | BgpMonitoringAdminService API
  • Notification Channel API - Provides List and Search operations on Kentik's Notification objects. Each notification channel includes a Type (e.g. email, Slack, PagerDuty) and a set of Targets (recipients). The use of this API is currently subject to the following limitations:

    • Creation, modification, and deletion of channels is not supported.
    • There is no support for the notification channels created in Kentik's “v3” portal.
Dušan Pajin
Improvement · Synthetics · BGP Monitoring
2 years ago

BGP Monitor: Upstream Leak testing is out

BGP Monitor tests in Kentik Synthetic Monitoring came out including tests for the following elements:

  • Reachability: a threshold on the percentage of BGP Vantage Points that determines whether prefixes are visible "enough" through the public internet
  • Allowed Origin: whether detected originators are part of an allowed-list (manually specified, or via RPKI) - this is commonly referred to as “Origin Hijack Monitoring”

The health of a BGP Monitor test was then the worse of these two tests, across all prefixes registered in the test, with the specificity that “Allowed Origin” could only be healthy or critical.


The “Allowed ASNs” test has now been renamed “Origin Hijack detection” to match what the industry is calling it.

Additionally, we have added “Upstream Leak Detection” - here’s the practical use for it:

In a normal situation, you only want your Upstream IP Transit Providers to announce your prefixes to the rest of the world: under no circumstance do you usually want your peers to announce your prefixes to the rest of the world as if they were your transit provider. They should keep these routes to themselves, and only use them to go from their network to yours (announcing them to their peers will break that partition).

Enter step 4 of the updated BGP Monitor test, where you can now enter the ASNs of your "official" Upstream Transit Providers: we will inspect the 1st hop in the AS Path of all announcements of these prefixes (and of their more-specific children).

Remember that every BGP announcement collected from the BGP Vantage Points comes with an AS_PATH of the general form:

[vantage point ASN] … various ASNs … [upstream ASN] [origin ASN]

If the upstream ASN (the 1st hop next to the origin) is not part of your allowed list of Transit Providers for any of the prefixes (and their more specifics), the entire BGP Monitor test will be flagged as critical for "Upstream Leak".
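The check described above can be sketched as follows; this is an illustrative version, not Kentik's detection code, and `has_upstream_leak` is a hypothetical name:

```python
def has_upstream_leak(as_path, allowed_upstreams):
    """Flag a route whose first hop above the origin (the ASN adjacent
    to the origin ASN in the AS_PATH) is not an allowed transit provider."""
    deduped = []
    for asn in as_path:  # collapse AS-path prepending (repeated ASNs)
        if not deduped or deduped[-1] != asn:
            deduped.append(asn)
    if len(deduped) < 2:
        return False     # origin-only path: no upstream to check
    return deduped[-2] not in allowed_upstreams
```

A path ending in `… 64501 65000` is fine when AS64501 is a declared transit provider of origin AS65000, while the same prefix seen via an unlisted neighbor would raise the Upstream Leak condition.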

For further reference, the diagram below details Origin Hijack vs Upstream Leak


Greg Villain
Hybrid Cloud · Kentik Map · BETA
2 years ago

Kentik Kube (beta) has arrived

We’re excited to announce our beta launch of Kentik Kube, an industry-first solution that reveals how K8s traffic routes through an organization’s data center, cloud(s), and the internet.

With this launch, Kentik can observe the entire network — on prem, in the cloud, on physical hardware or virtual machines, and anywhere in between. Kentik Kube enables network, infrastructure, platform, and DevOps engineers to gain full visibility of network traffic within the context of their Kubernetes deployments — so they can quickly detect & solve network problems, and surface traffic volumes from pods to external services.


Kubernetes cluster running on AKS, displaying traffic and latency to the front end of an online shopping site.

Why we built Kentik Kube

Kubernetes has become the de facto standard for cloud-based applications. As companies migrate their workloads, ensuring reliability, connectivity, and performance, from user applications and their clusters down to the entire infrastructure and the internet, is critical.

Very often, pods and services experience network delays that degrade the user experience, and it is difficult to identify which Kubernetes services and pods are affected. The complexity of microservices leaves engineers wondering whether the network reality matches their design, who the top requesters consuming Kubernetes services are, which microservices are oversubscribed, and how the infrastructure is communicating, both within itself and across the internet.

Kentik Kube use cases

We built Kentik Kube to provide visibility for cloud-managed Kubernetes clusters (AKS, EKS, and GKE) as well as on-prem, self-managed clusters using the most widely implemented network models. Teams responsible for complex networks can:

Improve network performance

  • Discover which services and pods are experiencing network latency
  • Identify service misconfigurations without capturing packets
  • Configure alert policies to proactively find high latency impacting nodes, pods, workloads or services.

Gain end-to-end K8s visibility

  • Identify all clients and requesters consuming your Kubernetes services
  • Know exactly who was talking to which pod, and when.

Validate policies and security measures

  • See which pods, namespaces, and services are speaking with each other to ensure configured policy is working as expected.
  • Identify pods and services that are communicating with non-Kubernetes infrastructure or the internet — when they should not be.

How Kentik Kube works

Kentik Kube relies on data generated from a lightweight eBPF agent that is installed onto your Kubernetes cluster. It sends data back to the Kentik SaaS platform, allowing you to query, graph and alert on conditions in your data. This data coupled with our analytics engine, enables users to gain complete visibility and context for traffic performance inside and among Kubernetes clusters.

Mapping your network with Kentik Kube

Kentik Kube provides east-west and north-south traffic analytics inside and among Kubernetes clusters. 


Network map showing EKS clusters communicating between AWS regions.

Kentik Kube can display details so you can see if your route tables, NACLs, etc. are all configured correctly. You can drill down into a cluster to see if there are latency or other issues. Our eBPF telemetry agent deployed into these clusters lets you see the traffic between nodes and pods as well as any latency.


Kentik Kube showing latency


How to get started with Kentik Kube

Kentik Kube is now in beta. You can apply to trial the beta by clicking on the Kentik Kube section of the menu. Please share your feedback with us. We’d love to hear what you think.

Christoph Pfister