Show HN: See the impact on your cloud costs as you code
179 points by rumno0 80 days ago | 98 comments
Hey folks, my name is Owen and I recently started working at a startup (https://infracost.io/) that shows engineers how much their code changes are going to cost on the cloud before they are deployed (in CI/CD like GitHub or GitLab).

Previously, I was one of the founders of tfsec (it scanned code for security issues). One of the things I learnt there was that if we catch issues early, i.e. while the engineer is typing their code, we save a bunch of time.

I was thinking … okay, why not build cloud costs into the code editor. Show the cloud cost impact of the code as the engineers are writing it.

So I spent some weekends and built one right into JetBrains - fully free. Keep in mind it is new and might be buggy, so please let me know if you find issues. Check it out: https://plugins.jetbrains.com/plugin/24761-infracost

I recorded a video too, if you just want to see what it does:

https://www.youtube.com/watch?v=kgfkdmUNzEo

I'd love to get your feedback on this. I want to know if it is helpful, what other cool features we can add, and how we can make it better.

Final note - the extension calls our Cloud Pricing API, which holds 4 million prices from AWS, Azure and GCP, so no secrets, credentials etc are touched at all.

If you want to get the same Infracost goodness in your CI/CD, check out https://www.infracost.io/cicd




I have more macro questions about this. Sometimes I find engineers aren’t best placed to evaluate cost at all. What might be perceived as expensive (say relative to a salary) is not expensive at all in the context of the business problem being solved.

I think there's a book called "Measure What Matters", and the idea is that what we measure shapes companies and behaviour. So I'd be very careful about implementing anything like this in my org.


You're right, there are multiple roles for FinOps in an organization from the engineer up to finance and engineering management.

This is tackling one aspect of this - highlighting the cost to the engineer.


Why not provide it to them all then? Have you considered piping your results to a (possibly cloud hosted) dashboard that might offer (possibly aggregated) real-time cost estimates to people higher up the corporate chain? A high-level view of the cost of what is being created, if you will.


This is effectively what Infracost Cloud is (https://www.infracost.io/)

It adds things like management of guardrails and policies that can be maintained by a finops role, or engineering manager. PRs can block or notify if thresholds are going to be breached by a specific PR.


What about money doesn't matter to a company? Why should you isolate an engineer from that as though it doesn't matter?

As an electrical engineer, we scrounged for every cent on devices when they were being released. A TVS diode can be replaced with a capacitor? That's a cent. A cheaper processor? Several tens of cents. Over ten million devices plus the legacy of your design into the next generations, that's a lot of money. Someone else negotiates the price of the TVS diode and the processor, but that doesn't mean you should be isolated from the cost. A thing that does X and is more expensive is worse than a thing that does X cheaply.

Software engineers spend fractional cents on requests executed thousands of times per second. That's the same scale as the electrical engineering example when you do this.
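
A back-of-the-envelope comparison of the two scales (every number below is an illustrative assumption, not a figure from this thread):

```python
# Rough scale comparison: per-request software cost vs per-unit hardware savings.
# All numbers are illustrative assumptions.
per_request_cost = 0.00001          # $ per request: a thousandth of a cent
requests_per_second = 2000
seconds_per_year = 60 * 60 * 24 * 365

software_annual = per_request_cost * requests_per_second * seconds_per_year

# Hardware analogue: saving one cent per unit across ten million devices
hardware_saving = 0.01 * 10_000_000

print(f"software: ${software_annual:,.0f}/yr  hardware: ${hardware_saving:,.0f}")
```

Both come out as serious money, which is the point: fractional cents at thousands of requests per second compound just like cents on a ten-million-unit production run.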

As far as I can tell, most good software engineers are very cost aware. That doesn't mean they don't do stuff. They just understand the cost of that stuff. This does seem to be a big divide between senior and junior engineers, as well.


The "problem" with software is that the margins are "too high", so companies _usually_ don't care. The general attitude I've encountered is that the engineering salary dominates the cost, so it's usually not worth spending time optimizing runtime costs.

This is eventually taken to the extreme and you occasionally see posts here titled something like "how we saved millions by doing X", where the thing they were originally doing was extremely wasteful.

For the last decade, we've also been in a "free money" mode. Where companies were happy to spend money as long as it led to growth. Optimizing for cost wasn't a priority.

That has leaked into electrical engineering as well. There were a few products that shipped full Raspberry Pis in an enclosure in the name of velocity. The savings possible there were probably in the dollars, not cents, not to mention the supply chain issues. And yet companies did it in the name of velocity.


>There were a few products that shipped full Raspberry Pis in an enclosure in the name of velocity. The savings possible there were probably in the dollars, not cents, not to mention the supply chain issues. And yet companies did it in the name of velocity.

Sometimes it's better to have a product that you can sell than to wait to perfect the product before selling it.

Leaving aside the RPi availability issues, you could do this with any other discovery board.

Say you want to sell your first 1000 units of products x, y, and z to test product-market fit. You might find that x and y don't sell at all, but z sold out in the blink of an eye.

Now you can design a custom PCB for z, knowing that it will probably sell well. And until that is finished you continue selling the off the shelf version. Some profit is better than no profit, and consistency is good for your brand.


> Why should you isolate an engineer from that as though it doesn't matter?

Because it's not their job and therefore they don't have the information necessary to know what is or isn't expensive.

Their job is to come up with the best design using common sense when it comes to cost.

Cost of a part isn't the only thing that matters either. Ok, we can save 1 cent if we swap this part out. But now we have to purchase lot sizes that are 10x bigger. How does that impact production? How does that affect operational budgets? Etc.

The engineers engineer and the bean counters count beans. They meet and find a happy place.

Why would you even want that extra workload? The fact that you think it's so easy just goes to show why actual accountants do the accounting, managers do the managing, procurement does the procuring, and operations does the operating.

"I know calculus, accounting is easy!" Sure, if you throw out all of the variables other than unit cost.


> Because it's not their job and therefore they don't have the information necessary to know what is or isn't expensive.

It is your job, when you make something, to understand how it works. How much it costs to operate is an integral part of how it works.

> Ok, we can save 1 cent if we swap this part out. But now we have to purchase lot sizes that are 10x bigger. How does that impact production? How does that affect operational budgets? Etc.

A 10x larger lot size makes each part cheaper per unit, and eliminating items from your BoM simplifies production. Understanding that sort of thing in an order-of-magnitude sense is also part of your job.

By the way, you don't need to know this stuff precisely to make good decisions. You need to know it roughly. You still need to know it, though.


But business process cost is often not part of a software engineer's job, or at least, many do not see it as one. I have often seen software engineers spend days or weeks building something that could be purchased off the shelf for less than a couple hundred dollars.


This seems insane to me. It's not that hard to have a ballpark awareness of how much things cost. You need accountants etc because you need to ensure that all of the little things are done, not so that everyone else in the business can be unaware of the business case for their product.

Yes, you shouldn't care about lot sizes and cost breaks (unless they are significant deciding factors), but knowing that one operation is going to cost 2x another, largely equivalent operation is relevant and important.


I'm not saying you should be completely ignorant of price. I'm saying use the standard parts and practices first, then worry about costs in revisions after you've built something. Premature optimization and so on. Maybe you can save money on a different powder coat color so you don't have to worry about skimping on parts or power conditioning or something like that. The project is usually bigger than the engineer's little corner.


That's fair. Get something working > Doing it right/perfectly the first time, most of the time.

That being said, you should be able to have enough awareness to avoid anything particularly egregious - eg. avoid things that you would never want in a final product anyway in the first place. I guess we probably agree on that when you say standard parts etc.


Exactly, there is no "engineering" separate from the beans.

Anyone can make a bridge that stands. It takes an engineer to make one that barely stands. The same largely goes for computer systems.


When the economy is booming and there's a lot of money sloshing around, developers can get away with shrugging and saying "not my problem" to things like cloud costs and energy efficiency.

Those same developers don't get to complain on the rainy day, when someone who knows how to save the company millions of dollars a year gets a better deal.


Engineers don't build bridges that barely stand up; they have a huge margin at completion. Instead they build bridges based on some agreed-upon lifespan.


They do make the cheapest thing counting that margin. Immediately past that margin of safety, your bridge will fall down. In other words, given an expected load of 10 tons and a 50 year lifetime (as you mentioned), an engineer will try to design the cheapest possible bridge that holds 50 tons (5x load) and will last 70 years.


Being able to handle 50 tons at year 70 may be well over 200 tons at year 1, which is a hell of a lot more than a simple 5x margin.


Sure. Pick a specified load at a specified time and the engineer's job is to build the cheapest possible bridge at that load and time. Raising the safety factor doesn't change the objective function. The objective function is still "cheapest within spec."

This is why bridges are not just huge chunks of titanium. Those would work very well at handling 50 tons at year 70, but would be shitty bridges because they are too expensive to build.


I agree that cost is a factor, but cheap is really only one of many factors being optimized. The actual cheapest design is often some organic looking design that comes out of software optimizer, but you lower risk by using proven designs.


Why do you lower risk by using proven designs?

Could it possibly be because it is cheaper to insure, repair, and work with those designs?

Risk is expensive. That's why you reduce it. If it weren't expensive, you would take it on.


> Why do you lower risk by using proven designs?

That’s not necessarily the risk being avoided. The customer may be better off, but project management’s risk vs reward looks very different than the customer’s risks and rewards. Get enough stakeholders together and …

So no, people really do regularly reject the cheapest design that fulfills the spec.


  > The engineers engineer and the bean counters count beans. They meet and find a happy place.
I'd say that the engineers never initiate contact with the bean counters, and the bean counters very very rarely initiate contact with the engineers.


You are very right. Good hardware/software engineers are cost aware at some appropriate level, which may not be the detailed bean-counter level. For example, I used to interact with marketing/sales folks to get data on unit cost, sales targets etc., which I then used as motivation to better prioritize features and time frames. Nothing clears your mind of irrelevant data like knowing what exactly matters as the bottom line for a company.


The time you spent scrounging for cheaper parts needs to be compared to the opportunity cost and the actual production scale.

Software engineers, especially at startups, are building proofs of concept that they need to iterate on very quickly. Spending significant thought on reducing opex is an absolutely terrible call in the early stages of dev.

If you have a profitable product, or one that would be if you can squeeze costs, then you play the optimization game.

The difference between electrical engineering and software engineering is that software can be updated after it’s deployed. The ability to get all of those gains later changes the strategy completely


> Software engineers at startups, are building proofs...

I fixed that one for you. Software engineers not at startups are usually not building proofs-of-concept. They are usually building production systems. And at the early stages of development of production systems, it can save a lot of headache down the road to think ahead. Yes, you will be slower in the moment, but the number of times I have heard a startup say "we did great with our first version, but then realized we needed to completely rewrite the thing to scale" (when the rewrite takes literally a week longer than the first version) has been too high.


>They are usually building production systems.

This is completely false. Most startups go through massive pivots. I’ve been through 6 of them (2 successful exits) and literally everyone made drastic changes to every code base to target completely different markets and features.

I’m telling you from experience, if you can’t recognize providing business value as an engineer recognizing opportunity cost, you will stagnate as a junior engineer or even destroy the company as a senior one.


Most software engineers don't work at startups.


Most software engineers don’t work on any code that scales to even thousands of users so optimization is a massive waste there too.


> The difference between electrical engineering and software engineering is that software can be updated after it’s deployed. The ability to get all of those gains later changes the strategy completely

This attitude is one vendors have leaned into way too hard, and it's why there are so many consumer products out there which shipped with half-baked, buggy firmware.


It’s not an attitude, it’s how the industry works. You’re an uninformed fool if you think otherwise and fast moving businesses will eat you alive.

>which shipped with half-baked, buggy firmware.

Because consumers pay for this shit. It’s the world we live in. The people working on the bug free version ship a year later after the buggy competitor ate the whole market. They’ve also taken the recognized revenue to take a higher valuation and hire more engineers to rewrite their buggy system than won the market.

Bugs don’t matter unless it’s a competitive market where they can change a sale. A startup operating in that kind of market is already fucked.


In turn and with respect, I argue your perspective is myopic and completely misses the huge desire/opportunity in today's market for reliable products. (This is why Apple got so far ahead of their competition).


Many software companies spend money like they have no care in the world. Inflated salaries and all kinds of silly perks. Feels pointless to care about saving them money when they're going to hire Beyonce to be at the holiday party.


It's like performance optimization: you're well positioned to make decisions about execution times and app responsiveness, but too often app devs are gatekept away from the cost data. Tools like this try to fix that. This is a net good.


In this context I'd expect cost to relate to hardware and bandwidth. Simplest example might be bucketing memory or storage requirements against the different tiers of offerings.

Personally, baking this kind of spreadsheet work into a pipeline was a moderate pain in the butt and hazardous to fully automate.


I'd be careful about implementing it without the appropriate support at least. The cheapest option is to do no work, that's not going to get you very far. The cost of the infra always has to be held up against the value of the product.


I like the idea that a PR could surface a cost of, say, $1000 a month, and then FinOps reviews that PR.

The dev doesn't worry about it unless it is obviously crazy, i.e. above a team budget. If so, they sanity-check the code.


A finops person will need to understand the cost and the alternatives. This could add a lot of unnecessary friction to a necessary and reasonable activity.


This is why the engineer should be the one understanding the cost.


Yes. Everyone involved should. It is a non-functional requirement.


Should they? The business needs to understand the cost trajectory of the service. But it's very easy for large organizations to be penny-wise and pound-foolish.

A company employs engineers in the hopes of reaping value significantly higher than their pay. Analysis takes time, and time costs money - reviews require multiple people's time. If you are looking at $1k per month in expenses but spend one month debating ways to save ~$500 per month at the cost of ~1 engineer, you can easily be looking at a 20+ month time to return on investment. Computers generally halve in cost every 2 years, so your time to reach a 2x ROI can easily be 4-5 years. Most businesses need a return on capital of ~10% to justify investors not simply placing their funds in the stock market.
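
A quick sketch of that arithmetic (all figures are the comment's rough assumptions):

```python
# Payback math for a cost-optimization effort. All figures are assumed,
# following the comment's rough numbers.
engineer_month_cost = 10_000    # $ for ~1 engineer-month spent optimizing
monthly_saving = 500            # $ saved per month by the optimization

payback_months = engineer_month_cost / monthly_saving      # months to break even
roi_2x_months = 2 * engineer_month_cost / monthly_saving   # months to double the investment

print(payback_months, roi_2x_months)  # 20.0 40.0
```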


Bikeshedding is bad in all forms. Bikeshedding over cost is just one more thing to avoid. That is not an excuse to be completely ignorant of cost, though.


Yeah I’m all for engineers knowing about the economics of the business but the people saying “this is ridiculous of course engineers should know what things cost” are substantially undercounting the number of engineers who value their own time at $0


The main page doesn't mention Terraform explicitly (except for the user feedback section). Are there plans to make this compatible with e.g. AWS CDK or Pulumi?


Yes, we have got quite a lot of requests for those. On the roadmap for sure!


Good idea mate. This is the kind of thing that should have heaps of users.

I wanted to watch the YouTube video but I didn't have access to audio, and gave up. The video took too long and had too few visual cues - I think many people will do the same.

So you should make a super short gif (10-20 seconds max) that gets the point across extremely fast and visually - no audio.

Post it on Reddit in webdev, you'll get a shit tonne of people installing this.

I’ve had lots of success with this simple method (sold a personal project with 700,000 users)


Thanks for the feedback - appreciate the comments on the video. I can see that without audio the first minute or so is a bit dull.

Hope this Gif is clearer - https://github.com/infracost/jetbrains-infracost/blob/244ee8...


It’s much improved! My last feedback would be to zoom your editor a lot more so folks on mobile can see things better :)


This seems to be solving a problem that most enterprise software engineers don't have. If you're at a large company that has already chosen its preferred cloud, you're probably already spending $xxM+ with them annually and you probably get CUDs & other incentives to keep your business growing with them.

That said, I've never personally seen a cloud project where the business case review didn't include a review of architectural plans/assumptions and expected cloud service consumption profiling. I think what's different here is that you can help engineers close a feedback loop that is often left open between when a project is planned & scoped and when 1) the landing zone (or data pipelines) is created, and 2) when the app/system is moved from test/QA into production. This could be especially helpful for the project/product manager, because anything that looks out of whack with the forecast should be escalated promptly.


To a certain extent I agree - I've worked on projects where a resource amounts to a rounding error in the overall cloud bill. Still, for many organizations, cost is an important NFR for a project.

Going off the other threads of this post, how much engineers should know or care about billing seems to be open for debate; I think an engineer needs to appreciate that cost is often one of the many trade-offs that need to be accounted for.


This is amazing. I think it's best applied in CI/CD first, but having the plugin is good too, for ICs to get it into a company.

How "smart" is it? Say I make a copy of a large database in TF and redeploy it into a staging env, will it know that the storage is $200 a month?

Also maybe put the supported clouds on the front page? I work multi-cloud so was worried it was only AWS.


Hey - great feedback!!

If you're using tfvars for your envs, it's "smart". With easily inferable env names for tfvars, we will generate estimations for each env.

In the video, I give an example where, for the prod env, I turn on multi-AZ and use larger storage, and the price goes up.
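
As an illustration of that workflow (the file names and variables below are hypothetical, not something Infracost requires), per-env tfvars might differ like this:

```hcl
# dev.tfvars — smaller instance, single-AZ, modest storage
instance_class    = "db.t3.medium"
multi_az          = false
allocated_storage = 100

# prod.tfvars — multi-AZ with larger storage, so the estimate goes up
instance_class    = "db.m5.xlarge"
multi_az          = true
allocated_storage = 500
```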

Good suggestion about supported clouds, I'll update!


Looks great based on the video!

As a data engineer in a consulting company: this would be a good way to get an idea of cost once we've written infra for projects.

Couple of questions:

1. Is Bicep supported?

2. Any plans for a VS Code plugin? (I just saw in your docs you have a plugin for this, great!)

3. How are you handling pricing of resources that are dependent on consumption?


1. Not at the moment, but we're looking at other IaC tools to support on the road map.

2. There is a vscode extension - https://marketplace.visualstudio.com/items?itemName=Infracos...

3. That is a challenge. We use usage files, but in the future we would like to pull usage from the cloud account to be more accurate and give the best suggestions. https://www.infracost.io/docs/features/usage_based_resources...
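
For context, an Infracost usage file is YAML along these lines - the resource name and figures here are made up, and the exact keys per resource type are documented at the link above:

```yaml
# infracost-usage.yml (illustrative values only)
version: 0.1
resource_usage:
  aws_lambda_function.hello_world:
    monthly_requests: 10000000   # assumed traffic
    request_duration_ms: 250     # assumed average duration
```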


Looks great! The markdown link to your site is broken.


All fixed, appreciated!


I just installed it to test this out and my cost isn't being calculated at all. I took an eksctl YAML file and had ChatGPT convert it to a Terraform file; the Terraform file validates, but nothing is discovered by the Infracost plugin. I have it under <project root>/deploy/terraform/. I wish there was some more feedback from the plugin about why it cannot locate or visualize my cost.


I just gave this a try with https://github.com/eksctl-io/eksctl/blob/main/examples/01-si...

ChatGPT gave me a decent chunk of Terraform, and running Infracost against it gave me a $515 monthly cost.

If you go to the settings of the plugin and get the absolute path of the infracost binary, then run

infracost breakdown --path .

in the dir with the terraform, does it give you a breakdown?


This is nice, but tackling it this way might be unnecessarily complicated. I don't always need to look at the cost while writing code, and it could fail to track, say, resources dynamically created with something like CDK.

More useful for me would be to plug directly into the account OR to analyse the CloudFormation template, which can be easily obtained regardless of the IaC you're using.


The next step is certainly to look at the cloud account to get information on usage - this completes the picture of both the intended resources to create and what is already there.

Thanks for the feedback


Do you have some testimonials from customers to show us how they use it and what value they get out of it?

In my org (50k-100k cloud bill) I don't see that much use for it (mainly we are not increasing our costs significantly, and we roughly know what we are paying for).

I can see a lot of value for indie/small/medium-sized businesses, but the commercial price of that is too high in my opinion.


Hey, thanks for the question. Infracost does three things at the moment: cost estimation before code is merged (this is what is in JetBrains now with this plugin); checking the code for best practices (like whether the code is using old instance types or old volume types, or there are no retention policies in place etc) - it tells the engineers exactly how to fix those; and finally, checking that all resources are tagged properly, with both the key and value being validated. Again, it tells the engineer how to fix it if there is an issue.

So overall, the two main benefits are cost avoidance and engineering time saving (toil reduction), since all the issues get fixed before the code is merged. The product is being used by over 3,000 companies now in CI/CD; we have a few case studies on the website: https://www.infracost.io/safe-fleet/


I've been an early user of Infracost and I cannot recommend it enough. It being in the CI/CD system is good but it being in the IDE is excellent. Cost is an important architecture concern and yet it's one of the most opaque things to deal with. This IDE plugin for Infracost fixes cost being opaque.

Excellent work, thanks for this.


Great feedback - thanks


This is nice but not very useful to me. What would be more useful in my use case is tooling to audit my cloud accounts periodically and reclaim garbage which is not used, or re-optimize my usage (smaller instances or databases depending on usage patterns), which directly saves me money.


I think this is the next natural evolution - bring in usage information directly from the cloud accounts then offer right-sizing suggestions and, like you say, reclaim garbage on demand.

Definitely one for the road map!


For runtime cost analysis, you could try Steampipe [1] with its Powerpipe "thrifty" [2] mods. They run dozens of automatic checks across cloud providers for waste and cost-saving opportunities.

If you want to automatically make these changes (with optional approval in Slack) you can use the Flowpipe thrifty mods, e.g. AWS [3].

It's all open source and easy to update / extend (SQL, HCL).

1 - https://github.com/turbot/steampipe

2 - https://hub.powerpipe.io/?objectives=cost

3 - https://hub.flowpipe.io/mods/turbot/aws_thrifty


Steampipe is amazing. I have been using it daily for about 4 months now.


Determining accurately whether we can safely scale down an instance is one of the hardest things we do; I cannot think of a way to determine this in an automated fashion.


I agree - there will always be an element of engineering knowledge required.

It's not dissimilar to AWS urging people to use Flex or Graviton instances, only we can decide if our workload will run appropriately!


performance testing? metrics analysis?


Datadog Cloud Cost Management does that: https://www.datadoghq.com/product/cloud-cost-management/

(disclaimer: I know about it because I work for Datadog; I'm sure there are other competing products that do the same)


Does it have Terragrunt (*.hcl) support? I see it mentioned nowhere, but the infracost CLI has it (https://www.infracost.io/docs/features/terragrunt/).


The plugin is using the CLI under the hood - so it should work as if you're using the CLI


Mhm doesn't seem to work. It's not recognizing the terragrunt.hcl


oh okay, that's a shame. I'll put together a simple Terragrunt project and see what's going on


Looks interesting! I think the cost of running resources would be more helpful


Thanks, that's generally more readily available in the billing explorer for most cloud providers.

There is a definite case for pulling usage data from the cloud account to make suggestions about right sizing though, that's a definite roadmap item


Would love to see a plugin/adapter for Snowflake.

BigQuery has a neat feature where it estimates the cost __before__ you run a query, which has often come in handy.
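
The dry-run figure can be turned into dollars with simple arithmetic. In the Python client, the bytes-scanned number comes from a dry-run query job (`QueryJobConfig(dry_run=True)`, then `job.total_bytes_processed`); the per-TiB price below is an assumption - check your region's current on-demand rate:

```python
# Convert a dry run's bytes-processed figure into an estimated on-demand cost.
# PRICE_PER_TIB is an assumption; BigQuery pricing varies by region and changes.
PRICE_PER_TIB = 6.25  # $ per TiB scanned (assumed)

def estimated_query_cost(bytes_processed: int) -> float:
    tib = bytes_processed / (1024 ** 4)
    return tib * PRICE_PER_TIB

# e.g. a dry run reporting 2 TiB scanned:
print(f"${estimated_query_cost(2 * 1024 ** 4):.2f}")  # $12.50
```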


Hey, we do cool stuff with GBQ billing, which helps reduce computing costs. Basically, we show how much money every pipeline costs coming into BQ, executed in BQ, or leaving it. We do not query your data (we do not need permission to read or edit). Everything is done using GBQ logs. I mean, we do not use your information_schema either. What we saw from our clients is at least a 10% compute cost reduction within 10-30 days after deployment. Happy to give a free trial to test it out.

Here is one of the examples https://mastheadata.com/yalos-cost-management-blueprint-elev...


How crisp are the estimates of cost? I can see how they might depend heavily upon the dataset, especially when the algorithms are supralinear like O(n^2) or O(n^3).


Our cost API is very regularly updated from the pricing data made available by the cloud providers.

Metadata about the resources that require cost estimation is rolled up and sent to the pricing API; it's generally a pretty quick process even with large projects.


Could something similar be done for CO2 emissions?


Something similar (but not quite the same) exists through sources like CCF (https://www.cloudcarbonfootprint.org/) or Climatiq (https://www.climatiq.io/docs/api-reference/computing), though I don't think anyone has anything built into Terraform or your editor, which would be neat.

Disclaimer: I work at Climatiq


There was an interesting page on the FinOps Foundation site about this: https://www.finops.org/wg/sustainability/

There is a potential intersection, but that might have to go further down the roadmap


Usually CO2 emissions correspond to cloud costs. Optimize your cloud costs and you optimize your emissions.


This is spiritual kin to the efficient-market-hypothesis mindset. But it's not like that if you compare pricing across cloud provider regions with the local electricity mix from Electricity Maps (e.g. Ireland vs the Nordic countries).


If I may dare - that would be a novelty while this is something many would pay for.

If it's just a straightforward conversion, then sure...


Do you plan to take discounts into account?

Can you base usage off previous billing data for existing resources?


We support custom price books - https://www.infracost.io/docs/infracost_cloud/custom_price_b...

As for usage, at the moment it's driven from a usage file - https://www.infracost.io/docs/features/usage_based_resources...

In the future, we'd like to infer usage directly from the cloud account to give the most accurate view and make right-sizing suggestions.


Congrats!

It's almost a REPL for cloud costs! Amazing!


Thanks!!


Hi Owen! This is Itay from Aqua, what a lovely surprise stumbling across this post :) This looks awesome, congrats and good luck


Hey Itay, thanks, and thanks for commenting, great to hear from you!!


I was half hoping this would tell me how many dollars worth of CPU cycles my shitty code was wasting for no reason.


That sounds like an idea for my next plugin :thinking:


That's what I was thinking. After some job interviews I realized nobody gives a shit how many CPU cycles you use; they only care about Jira points.


So true.



