
Product Updates

Latest features, improvements, and product updates on Kentik's Network Observability platform.

Core · New feature
yesterday

Password strength constraints in Kentik Portal

As Kentik's user base grows, our engineering team spends a proportionally increasing amount of time hardening the platform's security.

As part of our ongoing effort toward SOC 2 compliance, we recently strengthened our password requirements. Read on to learn more.


Disclaimer: good passwords are not enough

While this feature update is largely focused on how we now incentivize users to select stronger passwords, Kentik recommends that security-focused users rely primarily on mechanisms stronger than plain password authentication:

  • Two-factor authentication (2FA), whether via TOTP authenticator apps or YubiKeys
  • Better yet, centralized SAML2 SSO (Single Sign-On) platforms that themselves require 2FA

Initial phase: managing password strength

Password strength is one of the more common measures to harden security around SaaS-based services: it isn't sufficient in itself, but it is necessary. Starting now, users will be required to choose a password that complies with a minimum strength level, which we evaluate using the publicly available zxcvbn library:

https://github.com/dropbox/zxcvbn

Note:
Our choice of password strength library was largely based on this USENIX Security presentation: https://www.usenix.org/conference/usenixsecurity16/technical-sessions/presentation/wheeler

This constraint for now applies to two types of users:

  • New users at account activation time
  • Existing users going through the "Forgot Password" steps

This password evaluation step will show up in the login interface as displayed below.

The user-submitted password is evaluated in real time and assigned one of the five levels below; a password is accepted at a minimum level of Good (see the sketch after the list).

  1. No strength
  2. Weak
  3. Fair
  4. Good
  5. Strong
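
For illustration, here is a minimal TypeScript sketch of how a zxcvbn score (0 to 4) could map onto the five levels above. The level names come from this post, but the function and the "Good" cutoff constant are assumptions for the example, not Kentik's actual implementation.

```typescript
// Illustrative only: maps zxcvbn's 0-4 score onto the five levels above.
// Kentik's exact thresholds and wording may differ.
import zxcvbn from 'zxcvbn';

const LEVELS = ['No strength', 'Weak', 'Fair', 'Good', 'Strong'] as const;
const MINIMUM_SCORE = 3; // "Good" -- assumed cutoff based on the text above

function evaluatePassword(candidate: string) {
  const result = zxcvbn(candidate); // score: 0 (weakest) .. 4 (strongest)
  return {
    level: LEVELS[result.score],
    acceptable: result.score >= MINIMUM_SCORE,
    // zxcvbn also returns human-readable advice, similar to the hints shown
    // under the password field (length, common words, l33tspeak substitutions...)
    suggestions: result.feedback.suggestions,
  };
}

console.log(evaluatePassword('correct horse battery staple'));
```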

As long as the minimum password strength is not met, the [Set Password] button will be disabled and will show the following alert tooltip:

Until a satisfactory password strength level is met, advice will be displayed below the input field guiding the user toward a stronger one.

This advice is based on a variety of factors, such as length, character variety, dictionaries of common words, l33tspeak substitutions, and more.

Additionally, when resetting your password, the new password cannot match any of your previous 5 passwords; otherwise, the following will be displayed:

Next up: password rotation

While requirements vary depending on the SOC 2 assessor, password rotation is another common demand around password security.

Therefore, the next phase will consist of having password-only users reset their passwords at regular intervals (90 days by default). At that point we will do the following:

  • Allow SuperAdmin users to set a different frequency
  • Exempt 2FA and SSO users from this requirement, as they already rely on stronger authentication mechanisms.

This change should happen before April 2023 and will also be signaled in-product to impacted users before being rolled out.

Greg Villain
Hybrid Cloud · New feature · Flow
2 months ago

Flow Logs Sampling Configuration

We have released a new configuration knob that allows customers to change the flow log sampling rate for AWS and Azure on their own, without contacting the Kentik team.

This allows customers to consume flow logs at a rate that fits their licensing strategy, prioritize certain types of traffic, and stay flexible by changing the sampling rate at any time, separately for each flow log exporter.

Licensing is enforced after sampling, so customers can apply heavier sampling in some cases and save licensed FPS for other S3 buckets containing flow logs.

There is a slight difference in available options for AWS and Azure.



AWS flow log sampling

Historically, Kentik supported a "legacy" sampling mode: for large flow log files, we randomly picked 10,000 flow records per file in the S3 bucket and ingested only those records into the Kentik Data Engine. Since the number of flows in a file can vary, this was effectively "adaptive sampling", with larger files sampled more heavily than smaller ones. The only other option was no sampling, i.e. all records were consumed from the file.

Moving forward, we now support three options for AWS:

  • Legacy sampling: a random 10,000 flow records per file.
  • Sampling rate: the user provides a sampling rate in 1:N format (meaning 1 out of N records is picked up for ingest into the Kentik Data Engine), where N must be between 2 and 2000 (see the sketch after this list).
  • Unsampled: all records in a flow log file are ingested. Effectively, this is the same as a 1:1 sampling rate.
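
For readers who want to see what a 1:N rate means in practice, here is a purely illustrative TypeScript sketch of 1-out-of-N random sampling over a list of parsed flow records. It is not Kentik's ingest code; the function name and shape are made up for the example.

```typescript
// Illustrative sketch of 1:N random sampling over flow-log records.
// Only demonstrates the semantics of the configurable rate
// (N between 2 and 2000; 1 behaves like "Unsampled").
function sampleFlowRecords<T>(records: T[], n: number): T[] {
  if (n <= 1) return records;                 // 1:1 keeps everything
  return records.filter(() => Math.random() < 1 / n);
}

// Example: keep roughly 1 out of every 100 records from a parsed flow-log file.
// const kept = sampleFlowRecords(parsedRecords, 100);
```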

The sampling rate can be configured when a new flow log export is added, or changed later for an existing exporter.

Azure flow log sampling

Before this release, flow log exporters for Azure supported only Unsampled mode, where all the flows from the flow log file were processed by the Kentik Data Engine.

Since full flow log visibility may not be required in some situations, we added a sampling knob that allows users to configure a sampling rate in 1:N format (meaning 1 out of N records is picked up for ingest into the Kentik Data Engine), where N must be between 2 and 2000.


Ievgen Vakulenko
Hybrid Cloud · Core · New feature
2 months ago

Data Explorer: Filtering Based on Tags for Azure

Customers who use Azure tags within their cloud infrastructure can import those tags into Kentik and use them as Custom Dimensions within Kentik Data Explorer.

This import is configured like any other Custom Dimension, under Settings - Custom Dimensions - Add Custom Dimension.

After that, the user can pick which Azure entities the Custom Dimensions will be generated from:

After Kentik populates the chosen custom dimensions (which can take up to 30 minutes), they become fully available in Data Explorer, under the "Custom" section.

Note: other custom dimensions are shown in the picture as well; the dimension generated from an Azure tag is highlighted.

We also want to take this opportunity to remind you that AWS tags are supported as Custom Dimensions as well 🙂.

Happy flow hunting!

Ievgen Vakulenko
Hybrid Cloud · Core · New feature · Kentik Map
3 months ago

Azure NSG denied traffic visibility

It is now possible to see traffic flows that were denied by NSG rules configured at the subnet or VNET level.

There are two ways to see that traffic:

  • It's available on the Kentik Map in the sidebar "Details" widget (similar to the existing AWS functionality)
  • You can search for it in Data Explorer using source and destination Firewall Action as dimensions, with the metric changed to flows/s.

This feature will be a significant aid in troubleshooting NSG firewall issues and will decrease mean time to resolution.



Ievgen Vakulenko
Hybrid Cloud · Core · New feature
3 months ago

Data Explorer: Filtering based on tags for AWS

Kentik Data Explorer has supported filtering based on instance tags for a while now, and we are expanding tag support to other cloud objects as well, such as VPCs, subnets, ENIs, VPC endpoints, and Transit Gateway attachments.

No matter which object you assign a tag to, you can use Data Explorer to filter traffic flows related to those tags.

To start using AWS tags, you need to create a custom dimension under Settings - Custom Dimensions - Add Custom Dimension.

After that, you can automatically populate your custom dimensions with AWS tags and pick which fields you want to use for filtering in Data Explorer.

After Kentik populates the chosen custom dimensions (which can take up to 30 minutes), they become fully available in Data Explorer, under the "Custom" section.

This allows you to filter traffic flows using your organization's tagging and naming conventions, for instance to see the traffic of a particular team, business unit, or division.

Happy flow hunting!

Ievgen Vakulenko
Improvement · New feature
7 months ago

New Experience for Product Feature Requests

This month, we are introducing a new experience for submitting product improvement feature requests for the Kentik platform.

In this completely new experience, anyone who uses Kentik can submit feature requests for product improvements, vote and comment on existing requests and get timely updates.



What’s New?

Navigation to the new feature request portal is found in the Contact Support window, where the feature request link automatically opens the portal in a new tab.

This link brings you to the portal dashboard, where you can click "Make a suggestion" to open the feature request form. The form consists of the problem you are trying to solve, a details section for in-depth criteria, and a current workaround section (if applicable).

As you begin typing the request in the first open text field, any existing feature requests that sound similar will appear. From here, you can vote or comment on any of them. Once your feature request is submitted, any status changes will be emailed directly to you, so you can follow your request from idea to release.


Voting on Existing Feature Requests

One of the most impactful advantages of this new feature request system is the ability to go and vote on feature requests that already exist. Voting on requests can be done in two ways.

  1. As mentioned above, relevant existing feature requests appear as you type on the submission form. Clicking the "I want this" button casts a vote for that feature request without having to resubmit the same request.
  2. On the dashboard landing page, there is a list of recently submitted feature requests. These can be read and voted on at any time by all members of the feature request portal.

Commenting is also available on existing feature requests. Comments describing your specific use case help other Kentik users ideate on how to use the Kentik system, and they also help us understand which feature requests are in high demand.

Feature Request Status Changes

As feature requests get submitted, they will be regularly evaluated by our teams. When a feature request changes status, the submitter and everyone who has voted or commented on the request will get an email about the status change. This level of transparency keeps those who need the new feature aware of when they can expect the work to be started and eventually completed.


Let us know what you think in the comments. Happy submitting!



Josh Jensen
Synthetics · New feature
7 months ago

Alert on Unexpected DNS IP Mappings

Ensuring that DNS is healthy and working as expected is fundamental to the performance and availability of any network service.

Kentik's DNS Server Monitor and DNS Server Grid tests already allow users to monitor and alert on the DNS Resolution Time - an important metric to track to ensure that no delays are introduced at this step of the interaction.

We've now enhanced these tests with the ability to monitor and alert on the IP mappings returned by the DNS servers being queried. By specifying an allowed list of IP addresses we can alert when an unexpected mapping is returned.


The Allowed DNS Results field under Advanced Options allows users to specify the expected IP results. If the test returns an IP that isn't in this allowed list, the test health is marked as Critical and an alert is triggered based on the alert activation settings.
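
Conceptually, the check works along the lines of the following TypeScript sketch, which resolves a hostname against a specific DNS server and flags any answer outside an allowed list. The function and its parameters are illustrative, not the Kentik agent's actual implementation.

```typescript
// Conceptual sketch only: resolve a hostname against a specific DNS server
// and flag any returned IP that is not in an allowed list. This mirrors the
// idea behind "Allowed DNS Results"; it is not Kentik's agent code.
import { promises as dns } from 'node:dns';

async function checkDnsMappings(
  hostname: string,
  dnsServer: string,
  allowedIps: string[],
): Promise<{ healthy: boolean; unexpected: string[] }> {
  const resolver = new dns.Resolver();
  resolver.setServers([dnsServer]);
  const answers = await resolver.resolve4(hostname); // A records returned by this server
  const unexpected = answers.filter((ip) => !allowedIps.includes(ip));
  return { healthy: unexpected.length === 0, unexpected };
}

// Example: mark Critical if the resolver returns an IP outside the allowed set.
// checkDnsMappings('www.example.com', '8.8.8.8', ['93.184.216.34']).then(console.log);
```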



Sunil Kodiyan
Synthetics · New feature
7 months ago

Synthetics - TLS Certificate Expiry Check

Websites and web applications rely on SSL certificates to demonstrate their authenticity to end users connecting to them. 

Whether the web application is one the customer owns or one they consume as a service from a provider, if the server certificate attached to the application expires or is invalid, users will not be able to connect to it securely. It is therefore important to monitor the health of these certificates and alert when one is about to expire or is no longer valid.

With the TLS Certificate Expiry Check feature, we have introduced a new column in the Results page of HTTP(S) and Page Load tests displaying the server certificate's expiry status, and we've expanded the test Health options to alert when the certificate is about to expire.

When the target website's certificate is valid, we display the expiry date of the server certificate, as seen below.


When the certificate is invalid (or expired), we display an error message indicating the cause of the failure.

Health options now include the option to alert the user when the certificate is nearing expiry, helping prevent site outages due to invalid certs.

If the website's certificate is within the specified threshold, the Certificate Expiry column values will be colored accordingly.
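
As a rough illustration of what this check measures, the following TypeScript sketch retrieves a site's server certificate and computes the days remaining until expiry. It is a conceptual example rather than the Kentik agent's implementation, and the 30-day threshold in the usage comment is just a placeholder for the user-configurable value.

```typescript
// Conceptual sketch: fetch a site's server certificate and compute the days
// remaining until expiry. Not the Kentik agent's actual implementation.
import tls from 'node:tls';

function daysUntilCertExpiry(host: string, port = 443): Promise<number> {
  return new Promise((resolve, reject) => {
    const socket = tls.connect({ host, port, servername: host }, () => {
      const cert = socket.getPeerCertificate();   // includes the valid_to date
      const msLeft = new Date(cert.valid_to).getTime() - Date.now();
      socket.end();
      resolve(msLeft / (1000 * 60 * 60 * 24));
    });
    socket.on('error', reject);                   // invalid/expired certs surface here
  });
}

// Example: warn when fewer than 30 days remain (the threshold is configurable in Kentik).
// daysUntilCertExpiry('www.example.com').then((d) => console.log(d < 30 ? 'warning' : 'ok'));
```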

To disable all certificate checks, enable the "Ignore TLS Errors" switch under Advanced Options > General Options. This will remove the "Certificate Expiry" column in the test results page.


Sunil Kodiyan
Synthetics · New feature · BGP Monitoring
7 months ago

BGP in HTTP(S) and Page Load Tests

In previous releases we provided the option to include network layer data in web tests by enabling ping and traceroute in HTTP(S) and Page Load tests. This allowed us to quickly correlate application and network metrics to identify the root cause of an application's performance or availability issues.

Expanding on our cross-layer visibility, we can now include BGP data in web tests as well, further reducing the time it takes to determine whether the root cause of an issue lies in the application, network, or routing domain.


Where previously you would need to create a separate BGP Monitor test, you can now examine the same BGP data in an adjacent tab on the results page itself. No need to jump between test views!

When included, BGP data will be visible as an additional tab in the test results page.

Putting it all together, we can now analyze all layers of information relevant to a target application's health on the same screen, including:

Application metrics and Network Ping metrics in the Results tab.

Network Trace metrics in the Path View tab.

and BGP metrics in the new BGP tab.

To add BGP data to an HTTP(S) or Page Load test, turn on the "Enable bgp monitoring" switch (disabled by default) and enter the necessary details as you would for a regular BGP Monitor test, including health options for Allowed ASN(s), RPKI status, and Reachability (under Advanced Options).

Refer to the KB for more details on configuring BGP Monitor tests.




Sunil Kodiyan
Synthetics · New feature
9 months ago

Synthetics: Transaction Test!

We're excited to announce that our initial launch of Synthetic Transaction Monitoring (STM) is now GA!

Transaction tests expand on our existing suite of web/app layer testing capabilities. While an HTTP/API test checks the availability of an application's front door (the hosting web server), and a Page Load test lets you measure how a website loads, a Transaction test goes much deeper and allows you to simulate a user's journey as they interact with an application: logging into an email application, checking out a shopping cart, or searching for a stock ticker symbol.

For instance, an e-commerce site has many user workflows which enable a customer to login, search through selections, select payment and delivery options and finalize a transaction. Delays or errors in any of these steps can impact the customer experience and lead to lost revenue. STM will enable you to benchmark the performance of each of these transaction stages and troubleshoot specific performance issues.

Steps to perform STM in Kentik:

  1. Mimic the actions of your users as they would log into your application and perform a transaction.
  2. Record these actions using the Chrome DevTools Recorder.
  3. Open your Kentik account and create a new Transaction Test.
  4. Paste the script exported from Chrome DevTools Recorder.
  5. Select the agents (private or public) that you’d like to run the test from.
  6. Run the test and analyze results.

Here is an example of a transaction recorded in Chrome DevTools.

We can now export the recording as a Puppeteer script and paste it into a new Kentik Transaction test, as shown below.
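
For reference, an exported recording is a plain Puppeteer script along these lines; the URL, selectors, and credentials below are placeholders rather than a real recording.

```typescript
// A minimal example of the kind of Puppeteer script the Chrome DevTools
// Recorder exports. The URL and selectors are placeholders.
import puppeteer from 'puppeteer';

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();

  // Step 1: open the application's login page.
  await page.goto('https://app.example.com/login');

  // Step 2: fill in credentials and submit, as a user would.
  await page.type('#email', 'user@example.com');
  await page.type('#password', 'example-password');
  await page.click('button[type=submit]');

  // Step 3: wait for the post-login page to confirm the transaction completed.
  await page.waitForSelector('#dashboard');

  await browser.close();
})();
```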

You can select any of our public Application agents to test from, and/or easily deploy your own private Application agents, which we supply for Docker, x86, and ARM. STM tests can be scheduled to run automatically at periodic time intervals.

Presentation of Synthetic Transaction Monitoring Results

Results are presented on a timeline that shows transaction completion time. Performance is measured against dynamically calculated baselines, and lags in performance are color-coded: orange is a warning, red is critical.

Selecting a point on the line will indicate total completion time and the rolling standard deviation baseline. 

Screenshots captured during the transaction process provide insight into the script execution flow and aid in troubleshooting. 

The Waterfall tab shows the load order and load duration of each element in the Document Object Model (DOM) of every page visited.

 

Analyzing the results gives us insight into how our users experience the performance of an application in real time from different geographies, allowing us to proactively respond to user experience issues before they are reported by real users. When used in conjunction with Kentik's suite of network tests, we can identify within minutes whether the root cause of a performance issue originates in the network or the application stack.

Sunil Kodiyan