
October 21, 2025

Modern Kubernetes teams do a lot of “right” things, including auto‑scaling, rightsizing, golden images, and sensible defaults. And yet the cloud bill doesn’t budge. Why? Because reclaiming CPU and memory within pods doesn’t always collapse nodes. Savings are “soft” unless the cluster actually sheds capacity.
Sedai’s newest Kubernetes capabilities close that gap. We surface granular details about which cluster components are the biggest offenders on your bill, and we compact clusters to free up and eliminate wasted nodes. With Kubernetes v1.33’s in‑place pod resizing, we apply resource changes instantly — no restarts, no reschedules. And, with more granular cost attribution, you can tie more of your bill back to the workloads that drive it.
This blog will take you through some of the latest Kubernetes optimization features in Sedai.

After rightsizing pods, clusters often look like Swiss cheese: pockets of free capacity scattered across nodes. Traditional autoscalers focus on unused nodes; they rarely repack partially used ones. Sedai’s Cluster Compaction actively repacks workloads to free whole nodes, then (with optional cloud credentials) removes those idle nodes.
The result? “Soft savings” become hard savings: whole nodes are eliminated once workloads are consolidated.
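To make the idea concrete, here is a toy first-fit-decreasing packing check in Python. It only illustrates why repacking frees nodes; it is not Sedai’s actual compaction algorithm, and the node sizes, pod names, and requests are made-up assumptions.

```python
# Illustrative only: a toy first-fit-decreasing packing check, not Sedai's
# compaction algorithm. Node capacity and pod requests are made up.
from dataclasses import dataclass

@dataclass
class Pod:
    name: str
    cpu_m: int    # CPU request in millicores
    mem_mi: int   # memory request in MiB

NODE_CPU_M = 4000    # assumed allocatable CPU per node
NODE_MEM_MI = 16384  # assumed allocatable memory per node

def nodes_needed(pods: list[Pod]) -> int:
    """Pack pods onto as few nodes as possible (first-fit decreasing)."""
    bins: list[list[int]] = []  # [remaining_cpu, remaining_mem] per node
    for pod in sorted(pods, key=lambda p: (p.cpu_m, p.mem_mi), reverse=True):
        for b in bins:
            if b[0] >= pod.cpu_m and b[1] >= pod.mem_mi:
                b[0] -= pod.cpu_m
                b[1] -= pod.mem_mi
                break
        else:
            bins.append([NODE_CPU_M - pod.cpu_m, NODE_MEM_MI - pod.mem_mi])
    return len(bins)

pods = [Pod("api", 500, 1024), Pod("worker", 1500, 4096),
        Pod("cache", 250, 2048), Pod("batch", 2000, 3072)]
current_nodes = 4  # e.g., workloads spread thinly after rightsizing
print(f"current nodes: {current_nodes}, needed after repacking: {nodes_needed(pods)}")
```

In this example the same requests fit on fewer nodes, and the freed nodes are the “hard savings” compaction unlocks.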
With Kubernetes v1.33, in-place pod resizing is enabled by default, so a pod’s CPU and memory can be changed without a restart. Sedai now uses this capability to apply rightsizing in place.
Requirements: Cluster running Kubernetes v1.33+; Sedai detects and uses the capability automatically.
The result? Faster optimization cycles, no downtime, and higher confidence to apply frequent, incremental tuning in production.
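For reference, this is roughly what an in-place resize looks like at the Kubernetes level, the operation Sedai automates. It is a minimal sketch assuming kubectl 1.32+ (which supports patching the resize subresource) against a v1.33+ cluster; the pod, namespace, container name, and resource values are hypothetical.

```python
# A minimal sketch of an in-place resize via the pod's "resize" subresource.
# Assumes kubectl 1.32+ and a Kubernetes v1.33+ cluster; names/values are made up.
import json
import subprocess

def resize_in_place(pod: str, namespace: str, container: str,
                    cpu: str, memory: str) -> None:
    """Patch the pod's resize subresource so no restart or reschedule occurs."""
    patch = {
        "spec": {
            "containers": [{
                "name": container,
                "resources": {
                    "requests": {"cpu": cpu, "memory": memory},
                    "limits": {"cpu": cpu, "memory": memory},
                },
            }]
        }
    }
    subprocess.run(
        ["kubectl", "patch", "pod", pod, "-n", namespace,
         "--subresource", "resize", "--type", "strategic",
         "--patch", json.dumps(patch)],
        check=True,
    )

# Example: shrink an over-provisioned container without restarting it.
resize_in_place("checkout-5d9f", "prod", "app", cpu="250m", memory="512Mi")
```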
Kubernetes cost isn’t just CPU and RAM. With Sedai, teams can now tie more of the bill back to the workloads that drive it.
The result? Better, more granular visibility into your Kubernetes spend.
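As a simplified illustration of how attribution can work, the sketch below splits a node’s hourly price across pods in proportion to their requests. This is a toy model, not Sedai’s cost engine; the price, capacities, and requests are made up.

```python
# Toy illustration of request-based cost attribution, not Sedai's actual model.
# Node price, capacities, and pod requests below are made-up examples.
NODE_HOURLY_COST = 0.384           # assumed on-demand price of the node
NODE_CPU_M, NODE_MEM_MI = 4000, 16384

pods = {                            # pod -> (CPU request in millicores, memory in MiB)
    "api":    (500, 1024),
    "worker": (1500, 4096),
    "cache":  (250, 2048),
}

for name, (cpu_m, mem_mi) in pods.items():
    # Blend CPU and memory shares equally; real models weight by instance pricing.
    share = 0.5 * (cpu_m / NODE_CPU_M) + 0.5 * (mem_mi / NODE_MEM_MI)
    print(f"{name}: ${share * NODE_HOURLY_COST:.4f}/hr")

idle = 1 - sum(0.5 * (c / NODE_CPU_M) + 0.5 * (m / NODE_MEM_MI)
               for c, m in pods.values())
print(f"unattributed (idle) share: {idle:.0%}")
```

The unattributed share is exactly the idle capacity that cluster compaction aims to squeeze out.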
1. Does Sedai replace my autoscaler?
No. Autoscalers add/remove capacity based on load. Sedai tunes autoscaler settings to make them work better, and complements autoscalers by defragmenting partially used nodes after rightsizing so entire nodes can be removed.
2. Do I need to provide extra permissions to use any of these features?
Only for automated node deletion during Cluster Compaction. Everything else runs with standard Kubernetes access.
3. Does Sedai optimize GPUs?
We attribute GPU costs now (including fractional/time‑sliced scenarios). Full GPU optimization is on our roadmap.
4. What’s required for in‑place resizing?
A cluster on Kubernetes v1.33+. Sedai automatically uses the capability; no special setup beyond upgrading.