Kubernetes Cost Management with the New OpenCost Plugin for Headlamp (headlamp.dev)
57 points by yolossn 4 months ago | 19 comments



Is there a way to allocate cost to every pod on a node when the node cost is given without a breakdown by resource type, and pod resources are not in the same ratio as the node's resources?

Let's say a node has 8 CPUs and 32 GB RAM (a 1:4 ratio). If every pod uses the same CPU:MEM ratio, then the math is simple: the node cost is split across all pods in proportion to their resource allocation (e.g. a pod requesting 2 CPUs and 8 GB would get 1/4 of the node cost).

How do you make a fair calculation if a pod's resource ratio is different? In the extreme it is still simple: say there is a pod with 8 CPUs and 2 GB RAM; since no other pod can fit onto the node, the whole node cost is allocated to that pod.

What if the running pod uses 6 CPUs and 16 GB RAM, and another pod with 2 CPUs and 16 GB RAM is squeezed in? How do you allocate the node cost to each? It can't just be node cost / # of pods, because intuitively beefier pods should receive a larger share of the node cost, as they prevent more small pods from fitting in. But how exactly do you calculate it? A pod's "weight" on the CPU dimension is different than on the MEM dimension.


Red Hat Insights Cost Management does this cost calculation: it works out exactly how much each pod is costing, no matter what ratios, node sizes, or discounts you may have.

It looks at which nodes are running in each cluster and how much each node is costing (it reads the actual cost from your cloud bill, including any discounts you may have), then it looks at which node(s) each pod is running on, and then it calculates how much each pod on each node is costing.

https://docs.redhat.com/en/documentation/cost_management_ser...

It's free for Red Hat customers, both for cloud costing (AWS, Azure, GCP, OCI) and OpenShift costing. No support for EKS, AKS or other third-party Kubernetes, though.

https://access.redhat.com/products/red-hat-cost-management

https://console.redhat.com/openshift/cost-management/


> then it calculates how much each pod on each node is costing.

How _exactly_ do they do it? What's the math?


I believe for AWS, they use these ratios: https://github.com/opencost/opencost/blob/c2de805f66d0ba0e53...

So in your example, 6 CPU + 16 GiB is roughly 2x more than 2 CPU + 16 GiB, so if that node costs, say, $6/hr, you'd expect roughly $4 to be allocated to the first pod and $2 to the second.

They have these weights for various clouds here: https://github.com/opencost/opencost/tree/c2de805f66d0ba0e53...
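
For illustration, here's a minimal Python sketch of that weighted split, using CPU and RAM values matching what I believe those defaults are; treat the exact numbers as an assumption and check the linked files:

    # Sketch of proportional allocation by weighted resources.
    # Weights below are what I believe opencost's AWS defaults are
    # (see the config files linked above); treat them as an assumption.
    CPU_WEIGHT = 0.031611   # $ per vCPU-hour
    RAM_WEIGHT = 0.004237   # $ per GiB-hour

    def allocate(node_cost, pods):
        # Split node_cost across pods in proportion to each pod's
        # weighted CPU + RAM footprint.
        weights = [p["cpu"] * CPU_WEIGHT + p["gib"] * RAM_WEIGHT for p in pods]
        total = sum(weights)
        return [node_cost * w / total for w in weights]

    pods = [{"cpu": 6, "gib": 16}, {"cpu": 2, "gib": 16}]
    print(allocate(6.0, pods))  # ~[3.98, 2.02], i.e. ~$4 and ~$2 of a $6/hr node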

I'm sure someone will correct me if I'm wrong here; I'm not actually familiar with opencost, so don't take my word for it.


Would like to know if someone's got a more objective approach.

What we currently do is just a maxOf:

Take CostPerGB (memory) and CostPerCore (CPU), and compute costPerPod = max(pod.requests.cpu * CostPerCore, pod.requests.memory * CostPerGB).

On an overall basis, this was ~20% off from actuals when we last checked, so we clearly call out that the costs are "indicative" and not exact.


I thought about it, but then two pods each almost maxing out one dimension, for instance 7.5 CPU / 0.5 GB and 0.5 CPU / 31.5 GB, will together account for more than the node cost.
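
A minimal sketch of that failure mode, with the per-core and per-GB rates assumed to be derived so that either dimension alone would recover the full node cost:

    # Max-of allocation from the parent comment, applied to this
    # counterexample. Rate derivation is an assumption: each dimension
    # alone is priced to recover the full cost of a hypothetical
    # $6/hr node with 8 CPUs / 32 GB.
    node_cost, node_cpus, node_gb = 6.0, 8, 32
    cost_per_core = node_cost / node_cpus   # $0.75 per core-hour
    cost_per_gb = node_cost / node_gb       # $0.1875 per GB-hour

    def max_of_cost(cpu, mem_gb):
        return max(cpu * cost_per_core, mem_gb * cost_per_gb)

    a = max_of_cost(7.5, 0.5)    # 5.625  (CPU-heavy pod)
    b = max_of_cost(0.5, 31.5)   # 5.906  (memory-heavy pod)
    print(a + b)                 # 11.53 -- nearly double the node's $6 cost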


I was going through the OpenCost documentation (which this project uses), and it looks like you need to set up AWS Athena if you want cloud costs to be displayed for AWS: https://www.opencost.io/docs/configuration/aws#aws-cloud-cos...

Does Athena do the actual processing/computation of costs? What is the usual cost of running Athena?

It also seems strange that I have to put IAM keys into secrets instead of configuring it with an IAM role for the service account (IRSA).


The Cost and Usage Report (CUR) from AWS is just a fine-grained listing of all the resources in your account and their cost. It can be dumped out on different schedules (hourly, daily, monthly) and in different formats (CSV, Parquet).

It is pretty common to configure the CUR files to be dumped into your S3 account and query them via Athena. Athena is billed per TB scanned ($5/TB last time I looked), so the cost will depend on how often the data is being queried. The downside is that each query can take quite a while to execute, depending on data size.
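
For illustration, a minimal sketch of running such a query from Python via boto3; the database, table, and bucket names are placeholders, and the line_item_* columns follow the Athena CUR schema:

    # Sketch: sum unblended cost per resource for one day from a CUR
    # table via Athena. "cur_database.cur_table" and the S3 output
    # location are placeholders.
    import boto3

    athena = boto3.client("athena", region_name="us-east-1")

    query = """
    SELECT line_item_resource_id,
           SUM(line_item_unblended_cost) AS cost
    FROM cur_database.cur_table
    WHERE line_item_usage_start_date >= timestamp '2024-01-01 00:00:00'
      AND line_item_usage_start_date <  timestamp '2024-01-02 00:00:00'
    GROUP BY line_item_resource_id
    """

    resp = athena.start_query_execution(
        QueryString=query,
        ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},
    )
    print(resp["QueryExecutionId"])  # then poll get_query_execution()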

The other common option is to ingest the CUR data into Redshift which gives you better control / options for performance, manipulation, etc. but requires that you set up and manage Redshift.

Hard to tell exactly what the Athena cost here would be, as it depends on the number of assets in the account and the frequency with which you are querying the CUR. However, you can issue quite a few Athena queries on CUR data for most AWS use cases without incurring too much cost. Unless you have a rapidly changing environment (e.g. hundreds of thousands of assets turning over daily) or just tons of standing assets, you should be safe to assume hundreds of queries a day at most? Probably much less for most use cases. This is assuming they are querying once and storing the results rather than real-time querying all the time, normal usage patterns, etc.


Is the cost shown only for spend incurred after the plugin's integration, or is there a way to show retroactive costs, for example by comparing k8s object creation dates?


I went sniffing around with <https://github.com/opencost/opencost/issues?q=is%3Aissue+his...> and <https://github.com/opencost/opencost/issues?q=is%3Aissue+ret...> and didn't see anything, so it may be worth creating an issue to put it on their radar. I would also presume you already have the CUR exports from those older periods and just want the analysis done using the annotated k8s data?


So is Headlamp the state of the art in Kubernetes cluster management ever since Mirantis first enshittified Lens and then tucked away its sources?


We, the Headlamp project, don't make any claims about being state-of-the-art, as that's hard to define. But we do think Headlamp ranks among the best for user experience, and we believe that being a 100% open-source project is a huge plus compared to some other projects in the space.

I think one area where we are rather different from other projects is that Headlamp is focused not only on end users but also on teams looking to build their own Kubernetes UX by leveraging the Headlamp plugin system. Our thinking is that this will foster broader community participation and make Headlamp the most viable project in the space.

If you find that anything is missing, please file an issue and we'll consider it: https://github.com/headlamp-k8s/headlamp/issues/new


Thanks - I'll seriously have to give Headlamp a go. I'm still using an old build of OpenLens but that's not gonna keep working forever.


I really like k9s. Its plugin model is super easy to work with as well, if a bit constrained.


I'm usually a fan of TUIs and think they can be incredibly powerful, but with k9s I couldn't get comfortable in the day I spent trying it out. I think the problem is that I'm not intimately familiar with Kubernetes, being more on the dev rather than the ops side, and all that power of the TUI comes at the cost of some discoverability, which I desperately need as I fuck around and find out.


Yeah that’s a fair point.

I am a dev as well, but I have been working with Kubernetes for a long time, so I generally know what I need to be looking at.


I haven't tried Headlamp, but I moved from Lens to AptKube and have been happy since. It might not be best in class, but it is snappy and doesn't require any cloud accounts.


Is AptKube free? Their website makes it appear to be a paid subscription regardless of whether it's for personal use or not.


There is no free version, and compared to something like JetBrains IDEs the price is a bit high for such a small tool. It is made by a single dev in a market without many paying companies, though, so the higher price is understandable.



