Exploring Kubecost: A Comprehensive POC on Cost Optimization and Resource Recommendations


In this blog, we share our detailed Proof of Concept (POC) implementation of Kubecost, highlighting its cost optimization capabilities, resource recommendations, and comparisons with tools like Vertical Pod Autoscaler (VPA), Kubernetes Resource Recommender (KRR), and Goldilocks. Using Kind and kubeadm-managed clusters, we evaluated Kubecost's effectiveness across diverse environments.
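For context, a minimal local test setup of this kind can be stood up with Kind and the Kubecost Helm chart. The commands below are a sketch, not our exact POC scripts; the cluster and release names are illustrative:

    # Create a throwaway local cluster with Kind (cluster name is illustrative)
    kind create cluster --name kubecost-poc

    # Install Kubecost's cost-analyzer chart into its own namespace
    helm repo add kubecost https://kubecost.github.io/cost-analyzer/
    helm repo update
    helm install kubecost kubecost/cost-analyzer \
      --namespace kubecost --create-namespace

    # Expose the Kubecost UI locally on http://localhost:9090
    kubectl port-forward -n kubecost deployment/kubecost-cost-analyzer 9090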


Understanding Resource Requests and Limits in Kubernetes

1. Resource Requests

Resource requests define the guaranteed minimum resources (CPU and memory) allocated to a container. Pods cannot be scheduled if sufficient resources aren’t available on a node.

2. Resource Limits

Resource limits cap the maximum resources a container can consume. Hitting the CPU limit causes throttling, while exceeding the memory limit results in the container being terminated with an out-of-memory (OOM) error.
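As a quick illustration (the names and values below are arbitrary, not recommendations), a pod declares both in its container spec:

    apiVersion: v1
    kind: Pod
    metadata:
      name: example-app            # hypothetical pod name
    spec:
      containers:
        - name: app
          image: nginx:1.25        # any image; nginx is only an example
          resources:
            requests:
              cpu: 250m            # scheduler reserves 0.25 CPU cores on the node
              memory: 256Mi        # guaranteed memory for the container
            limits:
              cpu: 500m            # CPU usage above this is throttled
              memory: 512Mi        # exceeding this gets the container OOM-killed

Kubecost's recommendations ultimately land in exactly these fields.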

How Kubecost Works

Kubecost analyzes historical resource usage and trends to optimize resource requests and limits, balancing performance and cost-efficiency. Below is an overview of its recommendation mechanism:

Raw Metrics → Prometheus Storage → OpenCost ETL → Kubecost Analyzer → Recommendations

1. Data Collection Using Prometheus

Kubecost relies on Prometheus to gather real-time and historical metrics, including:

  • CPU Usage: container_cpu_usage_seconds_total

  • Memory Usage: container_memory_working_set_bytes

  • Network Traffic: container_network_receive_bytes_total & container_network_transmit_bytes_total

  • Disk I/O: node_disk_io_time_seconds_total

Prometheus serves as a time-series database, enabling Kubecost to store, analyze, and visualize data patterns.
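As an example, a query along these lines (a sketch, not Kubecost's internal query) returns the current memory working set per pod, excluding pause containers:

    sum by (namespace, pod) (
      container_memory_working_set_bytes{container!="", container!="POD"}
    )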

2. Data Processing and Recommendations

Memory Recommendations

Memory Request = 1.5 * avg(quantile_over_time(0.99, container_memory_working_set_bytes[7d]))
  • Uses the 99th percentile of memory usage over 7 days.

  • Multiplies by 1.5 to account for spikes.

CPU Recommendations

CPU Request = 1.5 * quantile_over_time(0.99, irate(container_cpu_usage_seconds_total[7d]))
  • Based on the 99th percentile of CPU usage spikes captured with irate.

  • Adds a 50% buffer for unexpected load.
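These formulas can be approximated directly against Prometheus if you want to sanity-check the numbers. The expressions below are our rough equivalents, not Kubecost's exact queries, and the CPU variant needs a subquery because irate returns an instant vector:

    # ~99th-percentile memory working set over 7 days, with a 1.5x buffer
    1.5 * quantile_over_time(0.99, container_memory_working_set_bytes{container!=""}[7d])

    # ~99th-percentile CPU rate over 7 days, with a 1.5x buffer
    # (the [7d:5m] subquery re-evaluates irate every 5 minutes across the window)
    1.5 * quantile_over_time(0.99, irate(container_cpu_usage_seconds_total{container!=""}[5m])[7d:5m])

Run each expression as a separate query in the Prometheus UI.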

3. Cost Allocation and Custom Pricing

Kubecost correlates resource usage with pricing models, providing granular cost breakdowns. For custom environments, users can upload a CSV with asset-specific pricing details.

Custom Pricing Workflow

  1. Prepare CSV:

     EndTimeStamp,InstanceID,Region,AssetClass,InstanceIDField,InstanceType,MarketPriceHourly,Version
     ,node-123,us-east-1,node,spec.providerID,m5.large,0.1,

  2. Create ConfigMap:

     kubectl create configmap csv-pricing --from-file=custom-pricing.csv

  3. Update Helm Values:

     pricingCsv:
       enabled: true
       location:
         URI: /var/kubecost-csv/custom-pricing.csv

  4. Deploy or Update Kubecost:

     helm upgrade kubecost kubecost/cost-analyzer -f values.yaml -n kubecost
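After the upgrade, it's worth confirming that the ConfigMap is present and that the pricingCsv settings took effect. Note that a pod can only mount a ConfigMap from its own namespace, so the ConfigMap should live alongside the Kubecost release (assumed to be the kubecost namespace below):

     kubectl get configmap csv-pricing -n kubecost -o yaml   # pricing data is present
     helm get values kubecost -n kubecost                    # pricingCsv settings applied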
    

Comparing Kubecost with Other Tools

| Tool | Key Features |
| --- | --- |
| Vertical Pod Autoscaler (VPA) | Dynamically adjusts resource requests but lacks cost insights. |
| Kubernetes Resource Recommender (KRR) | Recommends CPU requests (95th percentile) and memory requests (+15% buffer). |
| Goldilocks | Visualizes resource recommendations using VPA but lacks cost allocation. |

Kubecost goes beyond these tools by integrating cost optimization with resource recommendations, enabling informed trade-offs between cost and performance.

How Kubecost Compares to OpenCost

While Kubecost utilizes OpenCost under the hood, they differ in scope:

| Feature | OpenCost | Kubecost |
| --- | --- | --- |
| Cost Monitoring | Yes | Yes |
| Resource Recommendations | No | Yes |
| Custom Pricing | Yes | Yes |
| Network Monitoring | Limited | Detailed |


Cluster Considerations: Self-Managed vs. Cloud-Managed

| Feature | Self-Managed Clusters | Cloud-Managed Clusters |
| --- | --- | --- |
| Setup Complexity | Manual configuration required | Seamless integration with providers |
| Dynamic Pricing | Requires custom configuration | Supported out-of-the-box |
| Control | Full control over resources | Limited to provider offerings |

Benefits of Kubecost Cloud vs. Self-Hosted

| Feature | Kubecost Cloud | Self-Hosted Kubecost |
| --- | --- | --- |
| Setup | Managed, quick deployment | Manual, customizable |
| Maintenance | Automatic updates and scaling | User responsibility |
| Cost | Subscription-based | Infrastructure-dependent |

Frequently Asked Questions

What Does Kubecost Provide?

  1. Cost Allocation: Maps Kubernetes resource usage to costs.

  2. Resource Recommendations: Optimal requests/limits based on historical data.

  3. Cost Optimization: Insights for reducing wastage and improving efficiency.

  4. Custom Pricing: Supports tailored cost models for hybrid/on-prem environments.

How Does It Work?

Kubecost integrates Prometheus for monitoring, OpenCost for ETL, and its analyzer for cost and resource insights. It leverages quantile calculations for accurate recommendations and cost breakdowns.


Kubecost empowers Kubernetes users to achieve cost-efficiency while maintaining application performance. With its versatile features and deep integration into Kubernetes environments, it is an invaluable tool for optimizing resource utilization and reducing costs.