
Optimizing AWS ECS Costs: Sedai Demo & Walk-through

By Benjamin Thomas

Last updated on March 28, 2024

3 min read

Introduction

In this article, we'll provide you with a comprehensive overview of Sedai, an autonomous cloud-management platform designed to optimize performance, cost, and availability for your AWS ECS (Elastic Container Service) clusters. If you'd prefer to watch a video and get more context about other strategies to optimize ECS, be sure to check out the accompanying webinar video here.

ECS Cost Reduction Demo Using Sedai

Sedai is an autonomous cloud-management platform that helps you optimize performance, cost, and availability autonomously. Once an account is onboarded by connecting your AWS account, you can see all of its ECS clusters. The actual cost of each cluster is reported:
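
If you want to sanity-check the reported cluster costs yourself outside of Sedai, a minimal sketch using the AWS Cost Explorer API via boto3 is shown below. It assumes Cost Explorer is enabled and that your ECS resources carry a hypothetical ClusterName cost-allocation tag; the dates are placeholders.

```python
import boto3

# Assumes Cost Explorer is enabled and a "ClusterName" cost-allocation tag
# (hypothetical name) is applied to your ECS resources.
ce = boto3.client("ce")

response = ce.get_cost_and_usage(
    TimePeriod={"Start": "2024-02-01", "End": "2024-03-01"},
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    Filter={
        "Dimensions": {
            "Key": "SERVICE",
            "Values": ["Amazon Elastic Container Service"],
        }
    },
    GroupBy=[{"Type": "TAG", "Key": "ClusterName"}],
)

for group in response["ResultsByTime"][0]["Groups"]:
    tag_value = group["Keys"][0]  # e.g. "ClusterName$prod-cluster"
    cost = group["Metrics"]["UnblendedCost"]["Amount"]
    print(f"{tag_value}: ${float(cost):.2f}")
```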

Once you drill down into a specific cluster, you can also see the cost opportunities at three levels:

The three categories of savings are (1) service optimization, (2) container instance optimization, and (3) purchasing levers, all of which are shown. Sedai can operate in Recommendation Mode, but once it is in Autonomous Mode, Sedai takes care of all the actions needed to implement the service and instance optimizations. Below is an example of the potential service optimization gains from CPU and memory optimization.

For this particular cluster, all three levels of optimization would achieve 37% cost savings:

ECS Service Optimization

Each service, in this case, has a different recommended configuration.
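
To make the mechanics concrete, here is a rough boto3 sketch of what applying a recommended CPU/memory configuration to one ECS service involves: register a new task definition revision with the new sizes, then point the service at it. The cluster, service, and values are placeholders, not Sedai's actual implementation.

```python
import boto3

ecs = boto3.client("ecs")

CLUSTER = "prod-cluster"        # placeholder names
SERVICE = "checkout-service"
RECOMMENDED_CPU = "512"         # CPU units (0.5 vCPU)
RECOMMENDED_MEMORY = "1024"     # MiB

# Fetch the service's current task definition.
svc = ecs.describe_services(cluster=CLUSTER, services=[SERVICE])["services"][0]
current_td = ecs.describe_task_definition(
    taskDefinition=svc["taskDefinition"]
)["taskDefinition"]

# Register a new revision with the recommended sizing, carrying over
# the existing container definitions and roles.
params = {
    "family": current_td["family"],
    "containerDefinitions": current_td["containerDefinitions"],
    "cpu": RECOMMENDED_CPU,
    "memory": RECOMMENDED_MEMORY,
}
for key in ("requiresCompatibilities", "networkMode", "executionRoleArn", "taskRoleArn"):
    if key in current_td:
        params[key] = current_td[key]

new_td = ecs.register_task_definition(**params)["taskDefinition"]

# Point the service at the new revision; ECS rolls tasks over gradually.
ecs.update_service(
    cluster=CLUSTER,
    service=SERVICE,
    taskDefinition=new_td["taskDefinitionArn"],
)
```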

ECS Container Instance Optimization

Once the services are all optimized, your container instance profile changes: the number and type of container instances you need will be different. You can see the current and recommended configurations below.
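
As an illustration (not Sedai's internals), you can inspect the registered versus remaining capacity on each container instance with boto3 to see how much headroom the current instance profile leaves once services have been right-sized:

```python
import boto3

ecs = boto3.client("ecs")
CLUSTER = "prod-cluster"  # placeholder

instance_arns = ecs.list_container_instances(cluster=CLUSTER)["containerInstanceArns"]
if not instance_arns:
    raise SystemExit("No container instances (cluster may be Fargate-only).")

details = ecs.describe_container_instances(
    cluster=CLUSTER, containerInstances=instance_arns
)["containerInstances"]

for ci in details:
    registered = {r["name"]: r for r in ci["registeredResources"]}
    remaining = {r["name"]: r for r in ci["remainingResources"]}
    print(
        ci["ec2InstanceId"],
        f"CPU {remaining['CPU']['integerValue']}/{registered['CPU']['integerValue']}",
        f"MEM {remaining['MEMORY']['integerValue']}/{registered['MEMORY']['integerValue']}",
    )
```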

ECS Purchasing Optimization

You can then pull various purchasing levers, such as Savings Plans, and see costs under different commitment levels.
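
Outside of Sedai, you can get a feel for this purchasing lever with Cost Explorer's Savings Plans recommendation API. The parameters below (one-year, no-upfront Compute Savings Plan, 30-day lookback) are just one illustrative combination:

```python
import boto3

ce = boto3.client("ce")

rec = ce.get_savings_plans_purchase_recommendation(
    SavingsPlansType="COMPUTE_SP",
    TermInYears="ONE_YEAR",
    PaymentOption="NO_UPFRONT",
    LookbackPeriodInDays="THIRTY_DAYS",
)

summary = rec["SavingsPlansPurchaseRecommendation"][
    "SavingsPlansPurchaseRecommendationSummary"
]
print("Estimated monthly savings:", summary["EstimatedMonthlySavingsAmount"])
print("Recommended hourly commitment:", summary["HourlyCommitmentToPurchase"])
```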

By pulling all three levers together (service optimization, container instance optimization, and purchasing options), you can achieve substantial cost savings.

Viewing Optimization Results

Below you can see what the optimization results look like as onboarded accounts are optimized in Sedai.

There will be pages and pages of autonomous optimizations here. This helps make the case for autonomous optimization: the volume of individual services and actions involved would be hard for a team to handle day to day, because the work has to be repeated continuously as new releases occur and traffic patterns change. With an autonomous platform, the platform does that for you every time software is released.

Viewing Optimization Results by Group

If you use tagging, you can see how each app group is being optimized. For example, you would know how much your development environment costs and could optimize it. You can also set specific settings and configurations for each group. Below, the development group is selected:
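
Separate from Sedai's UI, grouping resources by tag might look like the following sketch, which lists ECS services carrying a hypothetical env=development tag via the Resource Groups Tagging API:

```python
import boto3

tagging = boto3.client("resourcegroupstaggingapi")

# Hypothetical tag key/value; adjust to your own tagging scheme.
pages = tagging.get_paginator("get_resources").paginate(
    TagFilters=[{"Key": "env", "Values": ["development"]}],
    ResourceTypeFilters=["ecs:service"],
)

for page in pages:
    for resource in page["ResourceTagMappingList"]:
        print(resource["ResourceARN"])
```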

Example Optimization Impacts

In the example below, the service was optimized: latency came down and performance improved. At the same time, we saved cost by allocating resources the right way. This service needed less memory but more CPU, so you didn't have to run as many tasks.

Since we gave it more CPU, the application ran faster. So in this case it was possible to reduce both cost and latency.
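
If you want to verify this kind of impact yourself, the AWS/ECS CloudWatch metrics for the service are a reasonable starting point. This sketch pulls average CPU and memory utilization for a placeholder cluster and service over the last day:

```python
import boto3
from datetime import datetime, timedelta, timezone

cloudwatch = boto3.client("cloudwatch")
end = datetime.now(timezone.utc)
start = end - timedelta(days=1)

for metric in ("CPUUtilization", "MemoryUtilization"):
    stats = cloudwatch.get_metric_statistics(
        Namespace="AWS/ECS",
        MetricName=metric,
        Dimensions=[
            {"Name": "ClusterName", "Value": "prod-cluster"},     # placeholders
            {"Name": "ServiceName", "Value": "checkout-service"},
        ],
        StartTime=start,
        EndTime=end,
        Period=3600,
        Statistics=["Average"],
    )
    averages = [p["Average"] for p in stats["Datapoints"]]
    if averages:
        print(metric, f"avg over last day: {sum(averages) / len(averages):.1f}%")
```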

Choosing ECS Cost & Performance Optimization Goals

Here's how you can set optimization goals in Sedai so the system focuses on cost, latency, or a mix of both. Once you're in the platform and all your cloud accounts are added, optimization can be enabled or disabled per account, and Sedai can run in Recommend mode or Autonomous mode.

You can also adjust the settings if you want to strike a balance between cost and duration.

If you want to decrease cost but don't want performance to take a hit, you can, for example, set the maximum performance degradation to 2%. Or, if you want to decrease duration but don't want cost to rise beyond a certain amount, that is possible too. So you can balance cost optimization and performance optimization.
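
The idea of such a guardrail can be expressed in a few lines; this is only an illustration of the 2% constraint described above, not Sedai's decision logic:

```python
def within_guardrails(baseline_latency_ms: float,
                      candidate_latency_ms: float,
                      max_degradation_pct: float = 2.0) -> bool:
    """Accept a cheaper configuration only if latency degrades
    by at most max_degradation_pct (e.g. 2%)."""
    allowed = baseline_latency_ms * (1 + max_degradation_pct / 100)
    return candidate_latency_ms <= allowed

# Example: 100 ms baseline; a candidate at 101.5 ms passes, 103 ms does not.
print(within_guardrails(100.0, 101.5))   # True
print(within_guardrails(100.0, 103.0))   # False
```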

Getting Started

Let's dive into how you can easily get yourself signed up.

Signing up for Sedai is very easy. Go to www.Sedai.io

Click the Start Free button. We offer a free tier that you can use to see the value and then decide if you want to use Sedai long-term. Once you click Start Free, you'll get a signup page, followed by a screen where you can select your AWS account.

We support Lambda, ECS, EC2, EKS, and Kubernetes. Once the AWS option is selected, we'll walk you through attaching the right role.
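
As a rough illustration of what "attaching the right role" typically involves, the sketch below creates a cross-account IAM role with an external ID. The account ID, external ID, role name, and attached permissions all come from Sedai's onboarding flow in practice; the values here are placeholders.

```python
import json
import boto3

iam = boto3.client("iam")

# Placeholder values; the onboarding flow provides the real ones.
TRUSTED_ACCOUNT_ID = "111122223333"
EXTERNAL_ID = "example-external-id"

trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": f"arn:aws:iam::{TRUSTED_ACCOUNT_ID}:root"},
        "Action": "sts:AssumeRole",
        "Condition": {"StringEquals": {"sts:ExternalId": EXTERNAL_ID}},
    }],
}

role = iam.create_role(
    RoleName="sedai-integration-role",   # placeholder name
    AssumeRolePolicyDocument=json.dumps(trust_policy),
    Description="Cross-account role for the optimization platform",
)
print(role["Role"]["Arn"])
```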

Sedai will then connect, identify your topology, and start working.

It's very easy to onboard. Once onboarded, set the right settings.

By default, we'll be in Recommend mode, but once switched to Autonomous mode, Sedai can autonomously take actions to optimize cost and performance.

Sedai is like an expert team working 24/7 for you.

Q&A

Q:  With economic volatility in mind, is ECS service scale-down, if necessary, as easily accomplished as scale-up through Sedai? 

A: Yes. When you scale down services, you want to make sure you don't do it abruptly. You should have your autoscalers in place, and the cool-down period should be appropriate for your application startup times. The same applies to scaling up: you want to make sure the warm-up values are set correctly for every service, and that your desired, minimum, and maximum counts look right.
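
For context, configuring an ECS service's target-tracking autoscaler with explicit scale-out and scale-in cool-downs via boto3 might look like this sketch; the resource name, target value, and cool-down periods are placeholders:

```python
import boto3

autoscaling = boto3.client("application-autoscaling")

RESOURCE_ID = "service/prod-cluster/checkout-service"   # placeholder

# Register the service's desired count as a scalable target.
autoscaling.register_scalable_target(
    ServiceNamespace="ecs",
    ResourceId=RESOURCE_ID,
    ScalableDimension="ecs:service:DesiredCount",
    MinCapacity=2,
    MaxCapacity=20,
)

# Target-tracking policy on CPU with distinct scale-out/scale-in cooldowns.
autoscaling.put_scaling_policy(
    PolicyName="cpu-target-tracking",
    ServiceNamespace="ecs",
    ResourceId=RESOURCE_ID,
    ScalableDimension="ecs:service:DesiredCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 60.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ECSServiceAverageCPUUtilization"
        },
        "ScaleOutCooldown": 60,    # let new tasks warm up before scaling again
        "ScaleInCooldown": 300,    # scale in slowly to avoid abrupt drops
    },
)
```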

Q: Are there general guidelines on when to use Fargate versus EC2 capacity providers within ECS clusters?

A: Absolutely. It comes down to a few questions. First, does your organization want to get into the business of managing and patching EC2 instances, or do you want to offload that to AWS Fargate and stay out of that business altogether? Second, does your application require elevated privileges on the underlying operating system, and if so, what level of privileges? ECS on Fargate offers an option called capabilities; you can go through that list and identify whether you need any of them. Based on that, you can decide whether to run in Fargate mode or EC2 mode. In my opinion, this should be an ongoing, iterative exercise. If you want to optimize for cost, start by getting visibility into your operations: how much you're spending on the overall operation and maintenance of the underlying EC2 systems, including people cost and any automation you have built, and then factor that in when considering AWS Fargate as an option.
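
To check whether you actually need elevated privileges, it helps to look at the Linux capabilities block in the task definition. This hypothetical container-definition fragment (as passed inside containerDefinitions to register_task_definition) adds SYS_PTRACE, which at the time of writing is the only capability Fargate allows you to add; the EC2 launch type permits a wider list.

```python
# Hypothetical fragment of a container definition showing the
# capabilities option mentioned above. On Fargate, only SYS_PTRACE
# can be added at the time of writing; EC2 launch type allows more.
container_definition = {
    "name": "app",   # placeholder container name and image
    "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/app:latest",
    "linuxParameters": {
        "capabilities": {
            "add": ["SYS_PTRACE"],   # elevated kernel privilege, e.g. for debugging/profiling
        }
    },
}
```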

Q: With EC2, users have the flexibility to choose different performance tiers by instance type: CPU-focused, memory-focused, or network-focused. But with Fargate, are there any general guidelines on how to optimally choose a tier for applications with different performance requirements?

A: The options currently available with AWS Fargate are limited. You don't get a continuous range to choose from; you pick X vCPUs and Y amount of memory from a predefined set of combinations. If a combination fits your application's profile, select that category rather than something larger. This gets back to the right-sizing discussion: understand the profile of your workloads, tie it back to the requirements of the workload on EC2 or in Fargate mode, and select sizes based on that.
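
For reference, a subset of the documented Fargate CPU/memory combinations and a naive picker might look like the sketch below; treat the table as illustrative and check current AWS documentation for the full, up-to-date list.

```python
# Commonly documented Fargate sizes (CPU units -> allowed memory in MiB).
# Illustrative subset; consult AWS documentation for the current full list.
FARGATE_SIZES = {
    256:  [512, 1024, 2048],
    512:  [1024, 2048, 3072, 4096],
    1024: [2048, 3072, 4096, 5120, 6144, 7168, 8192],
    2048: list(range(4096, 16385, 1024)),
    4096: list(range(8192, 30721, 1024)),
}

def smallest_fitting_size(cpu_needed: int, mem_needed_mib: int):
    """Return the smallest CPU/memory bucket that fits the profiled need."""
    for cpu in sorted(FARGATE_SIZES):
        if cpu < cpu_needed:
            continue
        for mem in FARGATE_SIZES[cpu]:
            if mem >= mem_needed_mib:
                return cpu, mem
    return None

# Example: a task profiled at ~0.3 vCPU and 1.5 GiB fits in 512 CPU units / 2048 MiB.
print(smallest_fitting_size(300, 1536))
```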

Q:  How does ECS autonomous cost optimization work in large multi-AWS account tenants?

A: Today we run customers who have more than 100 AWS accounts. Each account is accessed and added into Sedai on a role basis, so it's safe and secure. Once an account is added, the optimization settings I was showing can be controlled at the account level. If you have tags defined for business units and want chargeback, and your goals span accounts for a specific business unit, say one tagged as "merchant", you can change the settings just for that unit even if it spans accounts. So account-level settings, group-level settings, and even resource-level settings are all available in Sedai.
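
Mechanically, working across many accounts on a role basis means assuming a role in each member account. A generic STS sketch is shown below; the account IDs and role name are placeholders, and in practice the account list would come from your AWS Organization or the onboarding flow.

```python
import boto3

# Placeholder member accounts and role name.
MEMBER_ACCOUNTS = ["111111111111", "222222222222"]
ROLE_NAME = "optimization-readonly-role"

sts = boto3.client("sts")

for account_id in MEMBER_ACCOUNTS:
    creds = sts.assume_role(
        RoleArn=f"arn:aws:iam::{account_id}:role/{ROLE_NAME}",
        RoleSessionName="ecs-cost-review",
    )["Credentials"]

    # Use the temporary credentials to inspect that account's ECS clusters.
    ecs = boto3.client(
        "ecs",
        aws_access_key_id=creds["AccessKeyId"],
        aws_secret_access_key=creds["SecretAccessKey"],
        aws_session_token=creds["SessionToken"],
    )
    print(account_id, ecs.list_clusters()["clusterArns"])
```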

Q: How does Sedai work with Fargate to optimize ECS clusters?

A: In Fargate, you cannot change from 4 vCPU to 3.6 vCPU; you can only change from 4 vCPU to 2 vCPU, because those are the predefined buckets available for CPU. Memory also works in steps: you can change from 8 GB to 7 GB, but you cannot go from 8 GB to 7.5 GB. Sedai considers all the Fargate buckets and optimizes to them. We consider the same steps that are available in Fargate and either step the profile down or step it up based on that.

Q: Can Sedai perform storage optimization?

A: Yes, storage optimizations can be undertaken for several services, including Amazon EBS and Amazon S3; check with our team for the latest supported services. Storage has the same over-provisioning and under-provisioning problems as compute. Most of the time both provisioned and utilized storage grow, though there are cases where usage grows and then comes down, and typically companies over-provision.
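
As one small example of the storage side of this problem, here is a sketch that flags unattached EBS volumes and gp2 volumes that could be reviewed for a gp3 migration; these are common, but by no means the only, storage savings.

```python
import boto3

ec2 = boto3.client("ec2")

# Unattached volumes are billed but doing no work.
unattached = ec2.describe_volumes(
    Filters=[{"Name": "status", "Values": ["available"]}]
)["Volumes"]
for vol in unattached:
    print("Unattached:", vol["VolumeId"], vol["Size"], "GiB")

# gp2 volumes are frequently cheaper as gp3 for comparable baseline performance.
gp2_volumes = ec2.describe_volumes(
    Filters=[{"Name": "volume-type", "Values": ["gp2"]}]
)["Volumes"]
for vol in gp2_volumes:
    print("Review for gp3 migration:", vol["VolumeId"], vol["Size"], "GiB")
```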

Q: When optimizing costs with Sedai in, say, ECS or Fargate, are there any risks that will limit the capability to handle abrupt peaks that could otherwise be handled if you had more CPU and memory margin left?

A: Yes. When Sedai optimizes services, we consider traffic seasonality: how the application starts up, how it performs at peak, at low load, and on average, and then we identify the right profile for the application. It doesn't stop there. We also put the right autoscalers in place so that when traffic grows, your autoscalers kick in. And Sedai does this constantly; it doesn't optimize once and stop. For every software release, Sedai re-analyzes the services. If a recent release has a higher memory profile, Sedai will optimize for that and then put the appropriate autoscalers in place. Safety is one of our primary tenets, and that is handled in Sedai.

Q: How does Sedai manage Amazon accounts outside of the US or Canada, specifically internationally?

A: Today, Sedai is SOC 2 Type 2 and CCPA compliant. We do have international customers, with separate SaaS deployments for our European and North American customers. One of our customers is partly owned by the French government. We run multiple instances of our SaaS platform where the relevant international compliance requirements apply.
