
Is there a way to allocate cost to every pod on a node when the node cost is given without a breakdown by resource type, and pod resources are not in the same ratio as the node's resources?

Let's say the node has 8 CPUs and 32 GB RAM (a 1:4 ratio). If every pod uses the same ratio for its CPU:MEM, then the math is simple: the node cost is split across all pods in proportion to their resource allocation.
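
A minimal sketch of that simple case (node price and pod sizes are made up, purely illustrative):

    # Every pod has the node's 1:4 CPU:MEM ratio, so the cost share is just
    # the pod's fraction of node capacity (same on either dimension).
    NODE_COST_PER_HOUR = 1.00          # hypothetical node price
    NODE_CPU, NODE_MEM_GB = 8, 32

    pods = {"a": (2, 8), "b": (4, 16)}  # name -> (cpu, mem_gb), both 1:4

    for name, (cpu, mem) in pods.items():
        share = cpu / NODE_CPU          # equals mem / NODE_MEM_GB here
        print(name, round(share * NODE_COST_PER_HOUR, 4))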

How do you make a fair calculation if a pod's resource ratio is different? In the extreme it is still simple: say there is a pod with 8 CPU and 2 GB RAM. Because no other pods can fit on the node, the whole node cost is allocated to that pod.

What if the running pod is 6 CPU and 16 GB RAM, and another pod with 2 CPU and 16 GB RAM is squeezed in? How do you allocate the node cost to each? It can't be just node cost / # of pods, because intuitively beefier pods should receive a larger share of the node cost, as they prevent more smaller pods from fitting in. But how exactly do you calculate it? The "weight" of a pod on the CPU dimension is different from its weight on the MEM dimension.




Red Hat Insights Cost Management does this: it calculates exactly how much each pod is costing, no matter what ratios, node sizes, or discounts you may have.

It looks at which nodes are running in each cluster and how much each node is costing (it reads the actual cost from your cloud bill, including any discounts you may have), then it looks at which node(s) each pod is running on, and then it calculates how much each pod on each node is costing.

https://docs.redhat.com/en/documentation/cost_management_ser...

It's free for Red Hat customers, both for cloud costing (AWS, Azure, GCP, OCI) and OpenShift costing. No support for EKS, AKS or other third-party Kubernetes, though.

https://access.redhat.com/products/red-hat-cost-management

https://console.redhat.com/openshift/cost-management/


> then it calculates how much each pod on each node is costing.

How _exactly_ do they do it? What's the math?


I believe for AWS, they use these ratios: https://github.com/opencost/opencost/blob/c2de805f66d0ba0e53...

So in your example, 6 CPU + 16 GiB works out to roughly twice the weight of 2 CPU + 16 GiB, so if that node costs, say, $6/hr, you'd expect roughly $4 to be allocated to the first pod and $2 to the second.
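
Not the real numbers from that config, but a sketch of the shape of that weighted split (CPU_WEIGHT and RAM_WEIGHT below are placeholders I made up, chosen only so the big pod comes out at about 2x the small one):

    # Weighted allocation sketch: each pod gets a weight of
    # cpu * CPU_WEIGHT + mem_gb * RAM_WEIGHT, and the node cost is split
    # in proportion to those weights.
    CPU_WEIGHT = 0.032   # $ per vCPU-hour (made up)
    RAM_WEIGHT = 0.004   # $ per GB-hour   (made up)

    NODE_COST_PER_HOUR = 6.00
    pods = {"big": (6, 16), "small": (2, 16)}   # name -> (cpu, mem_gb)

    weights = {n: cpu * CPU_WEIGHT + mem * RAM_WEIGHT
               for n, (cpu, mem) in pods.items()}
    total = sum(weights.values())

    for name, w in weights.items():
        print(name, round(NODE_COST_PER_HOUR * w / total, 2))
    # big 4.0, small 2.0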

They have these weights for various clouds here: https://github.com/opencost/opencost/tree/c2de805f66d0ba0e53...

I'm sure someone will correct me if I'm wrong here; I'm not actually familiar with OpenCost, so don't take what I'm saying on trust.


Would like to know if someone's got a more objective approach.

What we currently do is just a maxOf:

take CostPerGB (memory) and CostPerCore (CPU), and compute costPerPod = max(pod.requests.cpu * CostPerCore, pod.requests.memory * CostPerGB).

On an overall basis this was ~20% off from actuals when we last checked, so we clearly call out that the costs are "indicative" and not exact.
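
Roughly like this, as a sketch; how CostPerCore and CostPerGB get derived isn't stated above, so here I just divide the full node price by each dimension's capacity, which is an assumption:

    # Max-of heuristic: charge each pod for whichever dimension is "heavier".
    NODE_COST_PER_HOUR = 1.00                  # hypothetical node price
    COST_PER_CORE = NODE_COST_PER_HOUR / 8     # 8-core node (assumed derivation)
    COST_PER_GB   = NODE_COST_PER_HOUR / 32    # 32 GB node  (assumed derivation)

    def cost_per_pod(cpu_request, mem_request_gb):
        return max(cpu_request * COST_PER_CORE, mem_request_gb * COST_PER_GB)

    print(cost_per_pod(6, 16))   # 0.75
    print(cost_per_pod(2, 16))   # 0.5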


I thought about it, but then two pods each almost maxing out one dimension, for instance 7.5 CPU / 0.5 GB and 0.5 CPU / 31.5 GB, will together account for more than the node cost.
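
With the same made-up per-unit costs as the sketch above (full node price divided by each dimension's capacity), those two extreme pods do overshoot:

    # 8 CPU / 32 GB node at a hypothetical $1.00/hr
    COST_PER_CORE = 1.00 / 8
    COST_PER_GB   = 1.00 / 32

    def cost_per_pod(cpu, mem_gb):
        return max(cpu * COST_PER_CORE, mem_gb * COST_PER_GB)

    total = cost_per_pod(7.5, 0.5) + cost_per_pod(0.5, 31.5)
    print(total)   # ~1.92, almost double the $1.00 node cost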



