OpsLyft Product Update Tour – August 2021

August flew by so quickly! We are sure we are not the only ones who feel this way.

But we are glad to share that over the past few months we have made daily progress on our vision of helping organizations manage their cloud more efficiently.

In August, we made fantastic strides on our product with some brand-new feature updates.

Let’s go on a quick August Feature Tour –

DevOps Timeline

DevOps Timeline solves the problem of limited and siloed visibility into infrastructure events. It brings all DevOps activities together in one place, with real-time data coupled with the cost impact of each feature, project, or product. As a result, you no longer have to waste time deep-diving into reports and different monitoring dashboards to figure out the root cause of events like unplanned downtime, security breaches, or unplanned infrastructure modifications.

In addition, DevOps Timeline offers a browsable user experience, making it extremely easy to find exhaustive information associated with any DevOps event. Know more about DevOps Timeline.

DevOps Insights

DevOps Insights consolidates all your resource data in a single dashboard instead of showing siloed analytics across multiple dashboards. In addition to insights on leakages, it publishes possible solutions or quick fixes that can be implemented right away.

With the help of the DevOps Insights dashboard, you can see every detail of the unused or underutilized resources contributing to higher costs, and identify resources that consistently run high on system resource consumption and could crash at any time if not supplied with more capacity. You can examine the performance of individual resources through a graph, along with a weekly/monthly timeline showing a breakdown of each resource's utilization.
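As a rough sketch of the kind of classification such a dashboard surfaces, the snippet below buckets resources as underutilized or overloaded based on average CPU utilization. The thresholds, field names, and data shape are illustrative assumptions, not OpsLyft's actual logic.

```python
# Hypothetical sketch: flagging unused/underutilized vs. overloaded resources
# from utilization metrics. Thresholds are illustrative assumptions.

UNDERUSED_CPU_PCT = 10.0   # avg CPU below this => likely wasted spend
OVERLOADED_CPU_PCT = 90.0  # avg CPU above this => risk of crashing

def classify_resources(resources):
    """Split resources into underutilized and overloaded buckets.

    `resources` is a list of dicts: {"id", "avg_cpu_pct", "monthly_cost"}.
    """
    underutilized, overloaded = [], []
    for r in resources:
        if r["avg_cpu_pct"] < UNDERUSED_CPU_PCT:
            underutilized.append(r)
        elif r["avg_cpu_pct"] > OVERLOADED_CPU_PCT:
            overloaded.append(r)
    return underutilized, overloaded

fleet = [
    {"id": "i-abc", "avg_cpu_pct": 3.2,  "monthly_cost": 120.0},
    {"id": "i-def", "avg_cpu_pct": 55.0, "monthly_cost": 240.0},
    {"id": "i-ghi", "avg_cpu_pct": 96.5, "monthly_cost": 480.0},
]
idle, hot = classify_resources(fleet)
print([r["id"] for r in idle], [r["id"] for r in hot])  # ['i-abc'] ['i-ghi']
```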

What does DevOps Insights have next to offer?

We want to stay true to our core value proposition: saving DevOps engineers as much time and effort as we can.

To do that, we are working on ways to provide built-in automation that remediates the identified leakages, which are currently published as recommended solutions only. We also want to make sure that this automation never compromises your infrastructure security. We believe that once we show the impact associated with each automation as it executes, you will be ready to automate your entire DevOps lifecycle.

Take a closer look at DevOps Insights.

Conditional Cost Anomaly Detection

We built Conditional Anomaly Detection to detect the root cause of cost and usage spikes. It provides in-depth root cause analysis through which you can gather insights, take action proactively, and minimize unintentional spending.

Conditional Anomaly Detection uses custom-built machine learning (ML) models to continuously monitor for unusual spending patterns across different AWS and GCP services. Whenever there is a significant increase in cost, an alert is triggered.

This feature evaluates each service you use individually, on an hourly and daily basis, to analyze the root cause of an anomaly: the project, service, or usage type driving the cost increase.
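To illustrate the general idea (not OpsLyft's actual ML model), here is a minimal statistical baseline for flagging daily cost spikes in a single service: a day is flagged when its cost exceeds the trailing window's mean by more than three standard deviations.

```python
# Illustrative sketch, not OpsLyft's production model: flag a day's cost
# as anomalous when it sits far above the trailing window's distribution.
from statistics import mean, stdev

def detect_cost_anomaly(daily_costs, window=7, threshold=3.0):
    """Return indices of days whose cost is anomalously high versus
    the trailing `window` days."""
    anomalies = []
    for i in range(window, len(daily_costs)):
        history = daily_costs[i - window:i]
        mu, sigma = mean(history), stdev(history)
        if sigma > 0 and (daily_costs[i] - mu) / sigma > threshold:
            anomalies.append(i)
    return anomalies

# Steady ~$100/day spend with a spike on the last day.
costs = [100, 102, 98, 101, 99, 103, 100, 97, 101, 250]
print(detect_cost_anomaly(costs))  # [9] -- the index of the spike day
```

A production system would replace this z-score rule with learned seasonal baselines, but the input (a per-service cost time series) and the output (flagged days with their drivers) are the same shape.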

What does Conditional Cost Anomaly Detection have next to offer?

We are working on building a solid solution that detects conditional anomalies along with point anomalies, i.e., moving from the infrastructure level to the application level.

By integrating product and business analytics data sources, i.e., performance and operational data in the form of logs and metrics, we correlate them with cost data. This allows users to examine anomalous costs in their business context, clarifying whether an anomaly is an actual mistake or just a false positive.
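The intuition can be sketched in a few lines: normalizing cost by a business metric (here, requests served — an assumed example metric) separates a genuine cost mistake from spend that simply scaled with traffic.

```python
# Illustrative sketch: a cost spike that tracks a business metric is
# likely a false positive; one that doesn't is worth investigating.
def cost_per_unit(daily_cost, daily_requests):
    """Cost normalized by a business metric, day by day."""
    return [c / r for c, r in zip(daily_cost, daily_requests)]

cost = [100, 105, 210]          # day 3's cost doubled...
requests = [1000, 1050, 2100]   # ...but so did traffic
print(cost_per_unit(cost, requests))  # [0.1, 0.1, 0.1] -> not a mistake
```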

After moving from point anomalies to conditional anomalies come collective anomalies. To detect these, we will build a model with algorithms that understand the relationships between different time series while detecting and investigating anomalies.

Right now, Conditional Anomaly Detection operates at the service level, but we aim to operate at the resource level in the future.

Know more about Conditional Cost Anomaly Detection.

Cloud Asset Management

We have built an all-in-one solution that provides a collective view of cloud resources and their meta-information, including lifecycle policies, creation dates, VPC information, costs, owner information, subnet information, and more. With so many resources, this information is spread across different pages of AWS Cost Explorer, and to collect clean and accurate information for compliance requirements, engineers have to collate the resources manually. Cloud Asset Management (CAM) simplifies this process by providing a consolidated view of cloud resources spread across the entire cloud ecosystem.

You can view and analyze all your resource-level costs with an intuitive UI. You can also see cost metrics of various resources/services at different levels of the infrastructure and share the information (CSV + UI) with team members in a single click.

Where are we going with it?

  1. We have built an effective software solution that reduces the time and effort required to manage assets on AWS. We are now moving toward a single cloud asset management platform covering AWS, GCP, and Microsoft Azure at once, i.e., a Unified Inventory Management System for Multi-Cloud.
  2. On top of that, we are working on an Architectural Mapping that shows the links between all resources. Through this mapping, users can easily identify and track connections between resource components, their relationships, and inter-dependencies.
  3. We are also working on an additional change management capability, where users can manage, track, and view the changes happening to their resources, ensuring the inventory is always up to date.

Learn more about Cloud Asset Management.

Looking Ahead

In the next few weeks and months, here's what's coming your way:

Business KPI:

Tech executives responsible for IT budgets have a hard time improving margins on the products their company sells, because they lack the visibility and tools to map cloud costs to the products/services sold.

To avoid the repercussions of cloud costs' reactive nature, preventive measures are needed up front. We are building a Business KPI solution that gives users a holistic view of cloud cost, including cost trends by business units such as teams, products, customers, users, initiatives, and features. You also get a hierarchical budgeting control and approval mechanism on top of these business lines, ensuring that infrastructure capacity is set according to application needs.

Savings Plan and Reserved Instances Recommendation:

Engineering managers find it time-consuming and difficult to determine which Reserved Instances (RIs) or Savings Plans (SPs) they need for their AWS account, and to compare the savings from any two specific plans, so that they can cut cloud costs. Today, they have to put in a lot of manual effort to figure out which SPs/RIs are being utilized in their account and which are being wasted, so they can identify unused plans and use them to the full extent. To address this problem, we are building a Savings Plan and RI Recommendation solution that provides:

  • Visibility over active RIs/SPs,
  • Recommendations for purchasing new RIs,
  • Notifications for expiring, underused, or fully used RIs.
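The bookkeeping the feature automates can be sketched as follows: given reserved hours and actual usage hours for an RI, compute its utilization and classify it. The cutoff and field names are illustrative assumptions, not the product's actual rules.

```python
# Hypothetical sketch of RI utilization bookkeeping. The 80% cutoff
# for "underused" is an illustrative assumption.
def classify_ri(reserved_hours, used_hours, underused_cutoff=0.8):
    """Return (utilization, status) for a Reserved Instance."""
    utilization = used_hours / reserved_hours if reserved_hours else 0.0
    if utilization >= 1.0:
        status = "fully used"
    elif utilization >= underused_cutoff:
        status = "healthy"
    else:
        status = "underused"  # candidate for an alert or re-allocation
    return utilization, status

print(classify_ri(720, 720))  # (1.0, 'fully used')
print(classify_ri(720, 360))  # (0.5, 'underused')
```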

Data Transfer Cost:

While optimizing cloud spend on AWS, DevOps engineers find it time-consuming and hard to reduce data transfer costs. This is due to inadequate, scattered visibility into the sources and destinations of the resources or services exchanging data as traffic within or outside the cloud environment.

We are building a solution that identifies the source and destination regions, resources, and Availability Zones for any kind of data transfer within or outside cloud environments. With this, you gain visibility and insights into how to reduce data transfer costs. You can also easily view and apply recommendations and actions to save on data transfer costs and avoid cost leakages.

Want to know more?

If you are facing any of the cloud problems we've discussed above, schedule a demo and take a look at the solutions we have built to handle them for you.
We will give you a complete walkthrough of it and tell you more about how it can benefit your organization.
Feel free to contact us for other DevOps-related queries at contact@opslyft.com

We hope to maintain our rate of innovation, keep brainstorming with developers and engineers, and keep delivering the latest features of our product to you. See you next month!
