Which Programming Languages Use the Least Electricity? (2018) (thenewstack.io)
278 points by Sindisil on March 29, 2019 | 253 comments



Interesting results, but I think the farther you go down the list, the more the fact that you're using the Computer Language Benchmarks Game affects what you're seeing, as not all languages get the same attention.

For example, Javascript and Typescript. Theoretically, I would assume those to be very close, since one compiles to the other. And for memory usage, they are very close. But for running time, Typescript is an order of magnitude slower. That looks suspiciously like entirely different algorithms were used in each implementation (with one being obviously superior in terms of performance), and if that's happening in this specific case, where else is it also causing problems in the analysis?


That’s an interesting discrepancy, I might have had the same gut reaction. But you’re using that assumption to cast slippery-slope doubt on the whole project without knowing anything specific. It’s possible you’re right, but maybe find out which algorithms are being used first? Perhaps there is a reason that TypeScript actually is an order of magnitude slower than JavaScript. The Benchmark Game project was specifically set up to allow people to scrutinize the algorithms used and make fair comparisons, so it might be unwise to start with the assumption that the most basic aspect of the project failed spectacularly and that something really obviously stupid and easy to fix is happening. I am at least giving benefit of the doubt and wondering what might be wrong with TypeScript. It’s not implausible since other languages take just as long.


You can write literally any JavaScript program with TypeScript (falling back to the `any` type if you really need to), so this doesn't really work in this case.

Also, typical TypeScript programs are faster than typical JavaScript programs, because JavaScript JITs like predictable object shapes and monomorphic functions for the same reasons that other language implementations require them. Libraries like lodash and bluebird take advantage of this fact without using TypeScript, but TypeScript steers you towards these patterns.
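As a rough illustration of the "predictable shapes" point (the names here are made up, not taken from any benchmark): once every object passing through a function has the same layout, the call sites stay monomorphic.

    // A fixed shape: every Point has exactly x and y, both numbers.
    interface Point { x: number; y: number }

    // This function only ever sees one object shape, so a JIT can cache
    // the property lookups and specialize the arithmetic.
    function lengthSq(p: Point): number {
      return p.x * p.x + p.y * p.y;
    }

    // In untyped code it's easy to drift into passing {x, y}, {x, y, label},
    // or string-valued coordinates through the same function, which makes
    // the call site polymorphic and harder to optimize.

Of course, whether that translates into a measurable difference in any particular benchmark is another matter.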


>typical TypeScript programs are faster than typical JavaScript programs

Person who works on js engines here :)

You would think this is the case, but in actuality, js and ts are about on par in performance (assuming similarly written code). This is because engines optimize for idiomatic js patterns, not idiomatic typescript patterns. Often these will align, but in some cases (usually revolving around generics and inheritance) well-written js will actually fall through the optimization pipeline faster, due to following more patterns that have specific optimization checks.


Any articles/resources on writing high-performance JS that you would recommend? I don't use JS for work, but just for random fun stuff. So just curious to learn more about it.


Check out gl-matrix.js, that’s one designed for being fast. It’s a simple math library, but the main way it achieves that speed is by not allocating memory.

After having worked on JS for some large web apps that need good graphics performance, the two rules of thumb in my head for making JS fast are: 1- avoid dynamic memory allocation, and 2- avoid using the functional primitives like map.

The first one is more or less true in all languages: memory allocation always costs a lot, and if you can pre-allocate memory and/or re-use memory along the way, the code will run faster. This means paying attention to what things in JavaScript will allocate memory under the hood: use of dicts, use of 3rd-party library and framework functions, etc.
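To make rule 1 concrete, here's a minimal sketch of the reuse pattern (the function names are invented, though it's similar in spirit to gl-matrix's out-parameter style):

    // Allocates a fresh array on every call: fine occasionally,
    // costly inside a hot loop or per-frame code.
    function addAlloc(a: number[], b: number[]): number[] {
      return [a[0] + b[0], a[1] + b[1], a[2] + b[2]];
    }

    // Writes into a caller-provided buffer instead, so a hot loop
    // can reuse one scratch array and allocate nothing per iteration.
    function addInto(out: number[], a: number[], b: number[]): number[] {
      out[0] = a[0] + b[0];
      out[1] = a[1] + b[1];
      out[2] = a[2] + b[2];
      return out;
    }

    const scratch = [0, 0, 0];
    // ...inside the loop: addInto(scratch, a, b);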

The second one is a bummer; I love using the functional primitives. But map is slower than a for loop, all else being equal. It has gotten relatively faster over time. I mostly use functional everywhere, and only resort to for loops in performance critical code.


> 2- avoid using the functional primitives like map

I understand that map requires creating a new array, and that is already included in point 1. What overhead are functional primitives subject to apart from memory allocation? e.g. forEach


Using continuations. The function call itself, and the local scope wrapped up into it, can be varying degrees of expensive. When there's little state, because it's inline or nearby for example, it might be optimized down to near what a for loop does. But if an actual function call remains once it executes, that alone is slower than the for loop. Remember, it's a function call per element. If the function is further away, with more scope, it can be more expensive both memory-wise and time-wise.

BTW, it's easy to test the basic primitives. I use Chrome snippets.

    test = (name, fn) => {
      const timeLimitMs = 1000
      let start = Date.now(), count = 0
      while (Date.now() - start < timeLimitMs) { fn(); count++ }
      console.log(name, count)  
    }
    var N = 1000000
    let a = new Array(N)
    test('for loop', _=> { for (var i = 0; i < N; i++) a[i] = i })
    test('map',      _=> { a = a.map((x,i) => i) })
    test('forEach',  _=> { a.forEach((v,i,a) => a[i] = i) })


    for loop 941
    map 37
    forEach 69
This is on my Mac in Chrome. So forEach is faster than map, but for loop is more than 10x faster than forEach. That's for loops with trivial work, of course. If the inside of the loop is expensive, the loop/map ratio will be lower.


If I modify that code to pre-initialise an array with the values 0 to N - 1, and then copy the value from that array to a new array (rather than using the loop index), then both map and forEach are faster than the for loop for me.


That sounds fairly surprising, considering map has to allocate, and allocate is very expensive, but please share your code & I’ll try it.

Like so?

    const N = 1000000
    let a = new Array(N), b = new Array(N)
    for (let i = 0; i < N; i++) a[i] = i
    test('copy loop',    _=> { for (var i = 0; i < N; i++) b[i] = a[i] })
    test('copy map',     _=> { b = a.map((x,i) => a[i]) })
    test('copy forEach', _=> { a.forEach((v,i,a) => b[i] = a[i]) })
I get: copy loop 973, copy map 38, copy forEach 49. Same as before, but this time I tried Chrome in Windows.

Are you using a different browser? I know that sometimes other browsers have very different results.

In any case, it's somewhat irrelevant if there are cases that optimize and cases that don't. When idiomatic functional code is sometimes up to 30x slower than a for loop, it can't be used in perf critical sections. Even if it's only Chrome and only certain cases. The forEach perf needs to be always reliably performant before I can use it without worry.


Thank you for the explanation


Just write idiomatic code, it's what all the major engines try to optimize for.


Is that changing as ts is becoming more popular? I work at Google and I think typescript is picking up quite a lot of momentum inside of Google.


So why is the TypeScript bench slower?


My best guess: probably written by someone that doesn't know how to write performant TS. I say this with no skin in the game; I write neither JS nor TS. What I have observed in language benchmarks over the years is that the benchmarks are rarely written by an expert, but usually by someone with cursory knowledge of the language. E.g. just enough to be dangerous.

Oftentimes, these sorts of benchmarks are done with prejudice (not necessarily malice). The benchmarks are written by someone with something to prove: my chosen tech stack performs better, and let me show you why. A favorite of mine is Perl vs Python comparisons, where you see an idiomatic Perl implementation vs a non-idiomatic Python implementation (or the other way around). Typically, in a head-to-head comparison, the benchmarks are developed by the same individual, who likely has above-average knowledge in their favorite and below-average knowledge in the target they're trying to show as inferior.

You'll see this time and time again in internet benchmarks comparing performance. Unless you can see the code from all benchmarks involved, my suggestion is to avoid them. I mean, for all I know, the author of the benchmark was unaware of the built in sort and instead bubble sorted.


This is unfortunately pure speculation on top of pure speculation, which is the problem I have with the top comment. You’re assuming incompetence when you could just go look it up. Why assume it’s someone who doesn’t know? Why use that to wander off into rant land about prejudices and make broad claims that internet benchmarks are bad, when you admit to having zero idea what the actual specific problem here is?

The test that lowered TypeScript’s score in the paper is called fannkuch-redux, and here are the sources in question:

https://github.com/greensoftwarelab/Energy-Languages/blob/ma...

https://github.com/greensoftwarelab/Energy-Languages/blob/ma...

They are both contributed by the same person, and there is no bubble sort involved. So now you know.

I don’t see an obvious reason one would be slower, but they’re also quite different. Maybe the algorithmic complexity is different. Maybe the cross-compilation is doing something bad with memory allocation. Note the input sizes for this test are very small, it would be easy for a difference in temporary variables the compiler injects to cause a serious problem.

What is not obvious is any prejudice, malice, or incompetence.


> Why use that to wander off into rant land about prejudices and make broad claims that internet benchmarks are bad

What are you talking about? Where did that happen?

> when you admit to having zero idea what the actual specific problem here is?

This is the comment section for a submission about an article referencing the paper. I brought it up for discussion. It is perfectly valid to bring up a question that you don't know the answer to.

> What is not obvious is any prejudice, malice, or incompetence.

Please stop.

Edit: From another comment, and some deeper digging of my own from that, you might find the archived results of fannkuch-redux interesting. From 2017-08-01[1] to 2017-09-18[2], the benchmark changed from a running time of 1,204.93 seconds to a running time of 131.39 seconds. The paper was released in October 2017.

1: https://web.archive.org/web/20170901020804/http://benchmarks...

2: https://web.archive.org/web/20170918163900/http://benchmarks...


> What are you talking about? Where did that happen?

I was responding directly to @hermitdev. Did you get your threads crossed? What I'm talking about happened immediately above in the parent comment, beginning with "Often times, these sorts of benchmarks are done with prejudice" https://news.ycombinator.com/item?id=19527057

"You'll see this time and time again in internet benchmarks comparing performance."

> It is perfectly valid to bring up a question that you don't know the answer to.

I agree. It's a bummer that's not really what happened here.

>> What is not obvious is any prejudice, malice, or incompetence.

> Please stop.

The parent comment explicitly stated an assumption of both incompetence and prejudice and I responded directly to that.

From the HN guidelines: "Please respond to the strongest plausible interpretation of what someone says, not a weaker one that's easier to criticize. Assume good faith."

If you'd like me not to call out speculation, then please assume good faith and don't speculate next time.

> From 2017-08-01[1] to 2017-09-18[2], the benchmark changed from a running time of 1,204.93 second to a running time of 131.39 seconds.

Yes! Now we are getting somewhere. It appears that would change the outcome of the paper. Perhaps it was a mistake. That might mean it was nothing more than an oversight that already got fixed. It doesn't mean there is any other coloring of the study at all, nor that there was any intention or agenda to make TypeScript look bad, right?


> The parent comment explicitly stated an assumption of both incompetence and prejudice and I responded directly to that.

Perhaps I misinterpreted what you said. You started the paragraph referring to the top level comment, which is me. I took the "you're" in "You’re assuming incompetence when you could just go look it up." to be a general "you", and commentary on my original comment.

> From the HN guidelines: "Please respond to the strongest plausible interpretation of what someone says, not a weaker one that's easier to criticize. Assume good faith."

I actually looked this up before the GP comment, and almost included it myself. I can see now that you were implicating the comment you replied to. I didn't think that was the case, because I apparently didn't interpret that comment remotely in the same way you did.

> Perhaps it was a mistake. That might mean it was nothing more than an oversight that already got fixed. It doesn't mean there is any other coloring of the study at all, nor that there was any intention or agenda to make TypeScript look bad, right?

I never implied it was. For that matter, I didn't really interpret the comment in question as stating that either. The more charitable interpretation is not that they are trying to make another language look bad, but that they are trying to make their favorite language look good. That doesn't require purposefully tanking one benchmark; it just requires them to be much better versed in optimizing one language than another and a lack of awareness about this. As they say, never attribute to malice what can be explained by incompetence. In fact, if you read the comment carefully, they even call this out with the "not necessarily malice" remark.


> The more charitable interpretation is not that they are trying to make another language look bad, but that they are trying to make their favorite language look good.

The project doesn’t talk about favorites or seem to want to make certain languages look good. Jumping to the conclusion that bias is involved isn’t the good faith interpretation, even if you state it with a positive-sounding framing. The good faith interpretation is to take the stated project goals at face value, and assume that the participants have done a good job.


> The project doesn’t talk about favorites or seem to want to make certain languages look good. Jumping to the conclusion that bias is involved isn’t the good faith interpretation

I didn't see anywhere that the comment in question called any project bias into question, but instead noted that in a situation where work is crowd sourced, people with their own intentions and motivations will put out bad benchmarks, either in the case of the benchmarks game, or a specific benchmark or comparison put forth in an article or blog. I've personally been witness to the latter multiple times just from HN submissions.

I just want to end with: as someone who's brought up viewing comments in an uncharitable light, you seem to have done a lot of that in this discussion. You've repeatedly taken your interpretation of a comment, rephrased it in a harsher way, stated that rephrasing as fact about what the other person was saying, and then responded to that. I would think actually trying to find a charitable interpretation should at least include a question at the beginning to confirm whether what you think is being said is entirely correct. Note that I started with that when I thought you were attributing statements to me that I did not say. My first words were a solicitation, "What are you talking about? Where did that happen?", to confirm what was going on. You've been doing this from your first response to my top level comment, when you stated "But you’re using that assumption to cast slippery-slope doubt on the whole project without knowing anything specific." That's a very uncharitable rephrasing of what you think I was doing, and it certainly wasn't my intention. I've already outlined in specifics exactly what I was trying to do and why, and in doing so I also stated that I felt you were misinterpreting me. There's a clear trend here as I see it, and your repeatedly bringing up good faith assumptions just puts it into clear highlight.

I think we've covered about all there is to say on this (these) topics. I'll let you have to the last word if you wish. I'll read and promise to consider any points you raise, but I don't think me responding would be very fruitful, and this discussion has digressed far enough.


Do you agree that those very different times were measurements of the same TypeScript fannkuch-redux program?

5 July, Node 8.1.3, TypeScript 2.4.1

https://web.archive.org/web/20170715120038/http://benchmarks...

1 Sep, Node 8.4.0, TypeScript 2.5.2

https://web.archive.org/web/20170922144419/http://benchmarks...

----

How should we now assess your "suspiciously like entirely different algorithms were used in each implementation" comment?


> Do you agree that those very different times were measurements of the same TypeScript fannkuch-redux program?

Yes.

> How should we now assess your "suspiciously like entirely different algorithms were used in each implementation" comment?

The suspicion was incorrect. That's why it was presented as a suspicion, not as fact. I have no reason to defend it if it's incorrect, but I still defend that it was valid to raise questions, given the facts on the ground. We've now shown there was something that changed very drastically at that time, and while it's less likely it's the benchmarks themselves (unless one or both of those are fairly out of date Node versions)[1], it still points towards something to be aware of in the results presented. Namely, they rely on a lot of underlying assumptions which should be looked at if you care about the numbers.

1: Also, I imagine the V8 devs probably considered the performance of TypeScript in that case to be a bug, given how horrible the performance regression from JavaScript is and that it's still JavaScript running. It's possible that TypeScript was doing something really odd, but given the exposure and Microsoft's backing and developer time, I think that's a less likely scenario than some optimization that should have been triggered was missing, which happens quite often.


Please add a correction to your original comment, to prevent readers from being misled. (If it's closed to edits, I'm sure HN staff will open it when you ask).

> I still defend that it was valid to raise questions

Of course, it's valid to question a measurement that looks strange but your comment went further than that -- your comment, without evidence, assumed a cause; and, without evidence, implied that assumed cause led to widespread problems with the analysis.

In other words -- innuendo.


> Please add a correction to your original comment, to prevent readers from being misled.

Corrections are for facts. I put forth a theory. People being misled by a theory is something I have limited power to affect, and people treating theories they read on the internet as fact have larger problems than a correction will solve.

This discussion is the correction, and a better one than any edit someone would be willing to read. Were it within the 2 hour edit window, I would throw in an edit; I've done so numerous times in the past. I will ask HN to amend its rules so I can correct a statement I made about something I suspected.

> Of course, it's valid to question a measurement that looks strange but your comment went further than that -- your comment, without evidence, assumed a cause

This is incorrect. I had evidence, I had numbers that did not line up with my understanding of how things should have been given my knowledge of the subject. I presented that as a theory, by using the word "suspect". All I implied is that if that theory was correct, which I made sure to not assert as fact, then it might affect some other languages. I did not assume a cause, I assumed a possible cause, and presented it as such.

I am very particular with my language. I try not to state things as fact when they are not. I try my absolute hardest (and I believe I succeed) to always speak in good faith, where I'm trying to raise a point I think is worthwhile or ask a question where I think there is benefit. I'm actually rather bothered by how some people have interpreted my words and intentions, and that includes you. Since you're not the only one (although I do believe you're in the minority), I'll assume there's something I could have done better to represent my point. I don't think all the blame lies with me though. There should be some way for me to posit a question and advance a theory without people assuming bad faith, so my question to you is, what way is that? How could I have expressed concern over the results without triggering that interpretation from you? Because I don't think doing personal research on a problem is an acceptable prerequisite for raising a question. In this case, I could have spent hours looking into something I was unfamiliar with and come away with more answers, but many people may not have the knowledge to do so yet have enough to think something is wrong. Should they just keep their mouths shut? Are we in a time where raising a concern that turns out to be unfounded (or in this case, just more complicated and slightly misdirected) is unacceptable under any circumstance? I refuse to accept that.


The honest concern is that the reported time measurements for those JavaScript and TypeScript fannkuch-redux programs seem too different.

The honest question is -- Can someone please confirm that those programs implement the same algorithm?


I thought the benchmark game is set up so every language's advocates can tune their language's programs. The only chance at a fair comparison is if every language gets the best implementation it can find for the challenges. There's nothing else that approaches fair comparison of apples and oranges.


> you’re using that assumption to cast slippery-slope doubt on the whole project without knowing anything specific.

No! I think this is a very useful project and analysis. I just think that some languages might have extremely optimized versions (or possibly more likely, some languages don't quite yet have that extremely optimized version that has propagated throughout the others) and that might be affecting specific languages in the analysis.

I think the first 5-10 entries are probably very accurate, as those are generally very performance-centric languages that are often optimized for performance. As the languages and VMs/interpreters do more and are optimized less, it's much easier to miss a performance difference caused by a benchmark submission and attribute it to an inherent aspect of the language.

> I am at least giving benefit of the doubt and wondering what might be wrong with TypeScript.

I did no such thing. Note how I used the phrases "Theoretically, I would assume" and "That looks suspiciously like". I simply raised an issue of concern, in a way where it was obvious that I did not know if my concern was correct, and wondered that if it was, what else it might affect.


> I did not know if my concern was correct, and wondered if it was, what else it might affect.

That’s exactly what I mean by casting doubt. If the concern might not be correct, why lead into speculation about further concerns?

Let’s find out the actual reason that TypeScript is measured slower, rather than guess at what else could be wrong if unverified theoretical assumptions might be correct.


> rather than guess at what else could be wrong if unverified theoretical assumptions might be correct.

You mean, I should not have explored the ramifications of what my suspicions might mean, so people might think it's worth actually looking into? What's wrong with that?

I can't help but feel that you feel compelled to defend your original position, which I believe was based on misinterpreting my point and intention.

I saw what I believed might be a problem. I noted it. I noted why. I noted what it might mean to the analysis because if my suspicion as to the reason was correct, it might not be isolated and other items might need a closer look. I did so in a way where I was sure not to claim something as factual when I wasn't certain. What part of that do you think is an inaccurate assessment of what I did or was unwarranted?

Edit: Changed misconstruing to misinterpreting, as that's what I was trying to express, and misconstruing might be interpreted as a purposeful action, which is not what I was trying to express.


You seem not to have considered the possibility that the authors may have simply made a mistake, unrelated to the origin of the programs.

The authors presented at an Oct 2017 conference. Archived benchmarks game web-pages from 2017 do not show the 10x fannkuch-redux differences that the authors report --

https://web.archive.org/web/20170918163900/http://benchmarks...


You saw what you believed might be a problem. You did not investigate whether or not it was a problem. Instead you speculated about what it might mean to the analysis.

The part that was unwarranted was the negative speculation.


You can look at their detailed data and sources here

https://sites.google.com/view/energy-efficiency-languages/ho...

Many benchmarks have no TS implementations, TS/JS results are about the same except for fannkuch-redux which is about a zillion times slower in the TS implementation. When looking at that kind of massive discrepancy in similar languages with the same runtime, the guess kbenson made was a perfectly sensible one. The authors of the study should have examined that kind of crazy outlier more closely and it's, as pointed out, not that unusual when using the benchmark game as a starting point.


> when using the benchmark game as a starting point.

In fact, on the benchmarks game website, those same programs do not show a "massive discrepancy" --

https://benchmarksgame-team.pages.debian.net/benchmarksgame/...


I’ve noticed similar discrepancies between Perl performance in the real world and in the suite they are using.

The issue is that one of the metrics in the suite is lines of code, so people write fantastically obscure and concise functional programs in Perl when the imperative one would be 2x the LOC, but much, much faster.

(This is from a spot check years ago. Maybe they’ve fixed this somehow).


Maybe you have a vague recollection of something you once saw somewhere.

The benchmarks game does not measure LoC --

https://benchmarksgame-team.pages.debian.net/benchmarksgame/...


> as not all languages get the same attention.

...which also seems to be reflective of how much programmers using language X care about (execution) efficiency, so I'd say these results are not far off from reality.

C and C++ are another interesting pair --- proponents of the latter always love to claim "zero cost abstractions" that allow you to write very abstract code which they say the compiler can then optimise (or clean up the mess, depending on your viewpoint...) to the same output as if you did it manually in C, but the results show a very different picture.


I think it calls into question the code samples. There are very few C features not in C++.


I haven't seen the code, but maybe c&p-ing the C entrant into the C++ box to get performance parity would make the C++ entrant run afoul of typical C++ coding conventions.

(You can then have a debate about whether you want to see idiomatic C++ in benchmarks or purely the most performant C++ possible. Personally if I'm shopping for a language for a project then I'm far more likely to be interested in the former.)


> Theoretically, I would assume those to be very close, since one compiles to the other.

I don't think that's a safe assumption. The Javascript results would be a lower bound (assuming the same algorithm, etc.), but there's no telling what "extra" Javascript (and thus overhead) might be inserted by the Typescript compiler.

As far as individual results go, there's no point jumping to conclusions when the results and code are available online and easy to try for yourself:

https://benchmarksgame-team.pages.debian.net/benchmarksgame/...

Typescript is still 2-3x slower than Node in several of the benchmarks, and the results could have been different in May 2018 when the article was written.


> there's no telling what "extra" Javascript (and thus overhead) might be inserted by the Typescript compiler

TypeScript doesn't do this. With only a few exceptions, the way to turn TypeScript into JavaScript is to remove the type annotations, which leaves you with JavaScript. This is an important design goal of TypeScript: it doesn't change your code, it's just JavaScript with types. Like Babel, TypeScript can be used to compile to earlier JS versions like ES5, but I'd consider that a misconfiguration in an environment like this.
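A minimal example of what that looks like (made-up function, and assuming a modern target such as --target es2017):

    // TypeScript source:
    function scale(v: { x: number; y: number }, k: number): { x: number; y: number } {
      return { x: v.x * k, y: v.y * k };
    }

    // Roughly what tsc emits once the annotations are stripped:
    //
    //   function scale(v, k) {
    //     return { x: v.x * k, y: v.y * k };
    //   }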

Saying that TypeScript is slower than JavaScript is almost as silly as saying "C++ with comments is slower than C++ without comments". If you see a C++ program with comments that's slower than another one without comments, it's almost certainly because they're different programs, not because the comments have any effect. From a glance through your links, it seems likely that all of the running time differences are just because it's different code being run.


> but there's no telling what "extra" Javascript (and thus overhead) might be inserted by the Typescript compiler.

That's true, but the fact we're talking about an order of magnitude running time difference is what led me to think it's not as easily explained as that.

> there's no point jumping to conclusions

Respectfully, I don't think I did. I put out a theory, with my reasoning, fully acknowledging where it might be wrong. As someone that doesn't use TypeScript (and I wouldn't consider myself an expert in JavaScript either), I don't feel qualified to assess the respective algorithms for those languages in the benchmarks game, so I brought the issue to a larger audience so someone else might take a look if so inclined.

> Typescript is still 2-3x slower than Node in several of the benchmarks, and the results could have been different in May 2018 when the article was written.

That doesn't really invalidate my theory, as it's possible those are also instances where it's lagging in a good submission. Given that there are several tests where it actually beats the fastest JavaScript algorithm submitted, that seems at least plausible.


TypeScript is a superset of JavaScript; you can literally submit the faster JS programs as the TS ones.


This may be a dumb question as I know nothing about Typescript, but is it idiomatic to do that, though? You can, for the most part, use C in C++. But if C was beating the pants off of C++ in benchmarks it wouldn't tell me anything useful if someone submitted a C program as the C++ benchmark. If I wanted to use idiomatic C, I'd use C. I expect a C++ vs. C comparison to compare the encouraged features of C++ with what's available in C.


Idiomatic TypeScript and idiomatic JavaScript are nearly the same; it's not like the C vs. C++ situation. A design goal of TypeScript is to make it possible to define types for idiomatic JavaScript programs, and I think they meet that goal pretty well. It's one reason the type system is much more advanced and flexible than, say, Java. You might see little differences here and there due to limitations in typechecking and the enum feature, but fundamentally TypeScript is just about adding structure and safety to your code rather than changing the way you code.


The benchmark is flawed. It should compare the fastest possible C++ implementation and a more idiomatic C++ implementation. The entire purpose of the language is to allow both to coexist in the same code-base.

I would be very surprised if the fastest C++ and C are actually different.


If the benchmark claimed to be perfect then "flawed" would be an important criticism.

You can see 4 or 5 C++ fannkuch-redux programs "compared"

https://benchmarksgame-team.pages.debian.net/benchmarksgame/...


I'd be very interested in seeing a few energy benchmarks not here, for example: energy to route 1B hits to a web endpoint, energy to parse a Json, energy to grab an auth token from a web request and forward to an auth server. Energy to forward 1M 1k files from the filesystem to the network, etc.


Well, I think the reason they don't do that, and focused on the benchmark game (which I think is a good solution to their problem of needing a representative sample, it just comes with its own caveats), is because much of that is often handled by libraries (possibly written in C), code in the interpreter or VM (also possibly written in C), or handled by the OS (in the case of connection handling and file access). Those are useful things to know, but they don't inherently speak to the features of the language itself.


Sure, but even a language-neutral ballpark could be useful to compare relative sizes. Is the energy used for network or disk insignificant compared to CPU or RAM? Is it roughly equivalent?


The language and the runtime are basically one package. I don’t care that node is fast because it is all C under the hood. To the end user, it’s just fast.


Since those operations will typically be I/O bound instead of CPU bound, that will be a system-level test, with the language having less effect.

In that case, the characteristics of the system will dominate.


Submit a better implementation. It is not possible for any one entity to produce optimal implementations for many languages; it is up to us. Or, if you don't like their rules, make your own benchmark system with new rules.


I don't use Typescript, and use Javascript only for what I have to. I don't care about the ranking of either of them, I'm just pointing out a possible flaw in the methodology that should be taken into account when looking at the numbers presented, and what I suspect is a concrete example of that.

This also isn't a criticism of the benchmark game, it's well known that not every implementation is equivalent in the time and effort put into optimizing it. It serves its purpose about as well as can be expected. Unfortunately, using it as the base of further calculations can lead to some of the known quirks of the benchmarks being exaggerated into results that are not always obviously an artifact of the underlying system, as I suspect this is. Making that obvious by pointing it out can be useful.


> Unfortunately, using it as the base of further calculations can lead to some of the known quirks of the benchmarks being exaggerated

You seem not to have considered the possibility that the authors may have simply made a mistake, unrelated to the origin of the programs.

The authors presented at an Oct 2017 conference. Archived benchmarks game web-pages from 2017 do not show the 10x fannkuch-redux differences that the authors report --

https://web.archive.org/web/20170918163900/http://benchmarks...


> Archived benchmarks game web-pages from 2017 do not show the 10x fannkuch-redux differences that the authors report

I think the Sep 1st 2017 benchmark does, though.[1] At that point it's 1,204.93 seconds, compared to the 131.39 seconds on September 18th you referenced. That makes sense, since the paper could have been finished quite a bit prior to the conference.

1: https://web.archive.org/web/20170901020804/http://benchmarks...


That's interesting. Do you agree that those very different times were measurements of the same TypeScript fannkuch-redux program?

5 July, Node 8.1.3, TypeScript 2.4.1

https://web.archive.org/web/20170715120038/http://benchmarks...

1 Sep, Node 8.4.0, TypeScript 2.5.2

https://web.archive.org/web/20170922144419/http://benchmarks...

----

How should we now assess your "suspiciously like entirely different algorithms were used in each implementation" comment?


it is absolutely a flaw in the methodology, but one I think is unavoidable without enormous resources.


I agree. I just wanted to point it out in case people took the analysis as entirely accurate without looking a little closer. I suspect the entries towards the top of the list are fairly accurate, by nature of their competitive standing in the benchmark games and their focus as languages on performance. It's much easier for an implementation difference to hide in the natural drift you see in the languages with VMs and interpreters that don't receive as much attention.

That's actually one of the reasons why the TypeScript/JavaScript divide jumped out at me. They were mentioning that the bottom of the list was dominated by interpreted languages, and mentioned TypeScript by name (which surprised me given the focus JavaScript VMs have gotten), and then when I reviewed JavaScript's standing (which was more in line with what I expected), I noticed the difference between it and TypeScript was very pronounced, which is odd when (to my knowledge) TypeScript compiles to JavaScript, and not because it's doing a lot of convenience stuff that would slow it down. That said, I don't use TypeScript, so maybe I'm overlooking something.


I, by nature of my current job, have to write JavaScript regularly. In those cases I almost always opt for TypeScript for my sanity.

So as a regular writer of TypeScript I had the exact same question as you.


It took me about a minute looking at their published detailed data to notice the problem. They should have noticed a factor-of-15ish outlier and checked why on earth it was there.


If we assume they understood the relationship between JavaScript and TypeScript then maybe it should have been noticed.

However, the original research has been posted multiple times to proggit and HN since 2017; and I don't recall whether or not anyone noticed this problem until now --

https://news.ycombinator.com/item?id=15249289

https://www.google.com/search?q=energy+efficiency+programmin...


> maybe it should have been noticed

It's a study presented at some conference so while not exactly the Higgs boson, they're showing other people data and the conclusions they derived from it. It's 100% their job to understand what their data measures and to notice that one of the measurements is completely bogus for their purposes. The fact other people hadn't necessarily noticed on messageboards before is mildly curious but it's not really their job.


afaict the evidence is - not - that "one of the measurements is completely bogus".

On the contrary; we can see from archived web pages that other measurements showed the same relatively-poor performance, with those old versions of TypeScript.


What do the archived pages have to do with this study? The study starts with a snapshot of benchmark game sources. That's a perfectly sensible way to get a bunch of implementations to bootstrap the study. But some of those implementations might be unsuitable for the study, just as (at least) one was, in their case. They don't seem to have noticed that. What's a good, benign explanation that they didn't?


> But some of those implementations might be unsuitable for the study, just as (at least) one was, in their case.

Unsuitable because?


Because it's a different implementation that happens to be 15 times slower than the implementation used in the straight JS version. That's fine for the benchmark game, it's a garbage input to a 'how energy efficient are these languages' study.

Let's say I want to measure the 'energy efficiency' of x86 assembly and JS. I'll use sorting an array of 1000 integers. In my JS implementation, I call Array.sort. In my x86 implementation, I randomly shuffle the array and check if it's sorted, if not repeat until it is. Does measuring the execution times of these tell me anything about the 'energy efficiency' of Javascript vs x86 assembly?
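A toy sketch of that analogy, with both halves written in TypeScript here since the point is about the algorithms rather than the languages (and bogosort only terminates in reasonable time for tiny arrays):

    // "Good" implementation: the built-in sort.
    function sortBuiltin(xs: number[]): number[] {
      return xs.slice().sort((a, b) => a - b);
    }

    // "Bogus" implementation standing in for the strawman above:
    // shuffle until the array happens to be sorted.
    function sortBogo(xs: number[]): number[] {
      const a = xs.slice();
      const isSorted = () => a.every((v, i) => i === 0 || a[i - 1] <= v);
      while (!isSorted()) {
        // Fisher-Yates shuffle
        for (let i = a.length - 1; i > 0; i--) {
          const j = Math.floor(Math.random() * (i + 1));
          [a[i], a[j]] = [a[j], a[i]];
        }
      }
      return a;
    }

    // Timing these against each other measures the algorithms,
    // not the languages (or their energy efficiency).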


> a different implementation that happens to be 15 times slower

A few TypeScript versions later, that happens to be only 1.6 times slower.

There are good questions to ask about how to handle possible outliers in a study that takes a snapshot of a changing situation and then seeks to make more general claims.


It doesn't matter what happened in the benchmark game later, it's not a study about the benchmark game. It's not 'good questions', it's a huge fuckup by the authors of the study. You're simply wrong to keep saying otherwise.


Supposedly you are right and others are wrong, which doesn't leave much to be discussed.


That's not how the game works. The community "plays the game" by submitting better benchmarks when they come up with better solutions.


It's not really about how the game works, it's whether the study is using sane data. In this case, it very much isn't and it should have been obvious to them the data is bad.


The site has been through several iterations. Here is the description from 12 years ago:

https://web.archive.org/web/20070503205039/http://shootout.a...


I don't understand what that's supposed to show.


JavaScript in many cases runs on V8, which is a just-in-time compiler. Just-in-time compilers provide substantial performance benefits for lines of code executed multiple times. Perhaps the V8 team (Google) doesn't support TypeScript (Microsoft). https://softwareengineering.stackexchange.com/questions/2754...


No one executes TypeScript directly... It's a language that compiles to JS, which can then be run using V8 among others.


Can't run Rust without powering the HN servers, to host the screeds of its acolytes.

I jest (and I'm a Rust admirer myself), but my more serious point is: so many different kinds of electricity go into a plush tech company with its well paid developers and our copious brain food that powers us through all our developing and debugging. If you want to talk about sustainability, ask about the lifecycle maintenance of a software base. These benchmarks are cute, but academic, and only tenuously related to any green solutions. Especially if people in this thread are taking this seriously in terms of "This is exactly how we should all be thinking about server engineering moving forward, as we aim to drastically reduce carbon footprint within 11 years", then this feels like an awful, awful way to measure it.

What language is most conducive to writing algorithms that are smart in terms of Big O, or designing systems that can be refactored intelligently instead of throwing boxes at the problem?


Scala is a nice sweet spot. Good type system, solid for refactoring, good for expressing algorithms. You burn tons of CPU compiling but that’s one dev footprint.

Anyone from Twitter care to comment on this?


Couple points (I've worked on app servers):

1- This is exactly how we should all be thinking about server engineering moving forward, as we aim to drastically reduce carbon footprint within 11 years. Efficiencies at the language level are one of biggest bangs for buck here. Just by redeploying, you can reduce energy consumption by perhaps double digits. Imagine how hard that is to do at the hardware or energy farm level.

Think about it: You, brave software engineer, can literally make a significant contribution to saving the world by adopting a more energy-efficient language, if you are fortunate enough to deploy something at scale.

2- Rust is killing it in these metrics but developer productivity / friendliness is important to overall success. Looking at these results I have a conjecture that is maybe provocative: The top two candidates for long-term success at supplanting Java on the server in my eyes are

a) Go

b) brace yourself.. Swift

Swift is extremely young on the server, to the point where I'd expect your natural reaction to be "WTFLOL?! never heard of it". But here's some food for thought: the Netty team, one of the top performing Java server stacks, has been recruited by Apple and is chewing through all that stuff and just launched their NIO2 release[1] which I've heard is already very close to Netty.

Go has an amazing concurrent garbage collector and has really pushed the envelope with that.[2] Swift is unique in the server world in that it uses reference counting which sidesteps the whole GC collection problem, which could translate into very low, very consistent latencies as well as memory usage. It's still quite early days for Swift, but these are the two languages I'm watching the closest.

[1] https://forums.swift.org/c/server

[2] https://blog.golang.org/ismmkeynote


Not sure why you picked on Java though. As per this study, Java is doing better than both of them. So at least if the goal is to improve on those, you need to look at Rust/C++/C/Ada.


I'm not trying to pick on it. But those languages are significantly harder to develop for. Go/Swift are trying to be general improvements that maintain and improve upon developer productivity while achieving great performance and efficient memory utilization (which if you look at the paper is one area Java is not necessarily great at).


Makes sense. It didn't come across that way since you mentioned the energy savings and then immediately mentioned those alternatives. Interestingly though, while Java used a lot of memory, the energy expenditure for DRAM is still one of the lowest for Java. And they found little correlation between memory usage and DRAM energy in general. I wonder why that is.


Yeah they only looked at peak memory. They noted it would be more useful to look at continuous memory usage in future research.

Also take note that the scale of the graphs is different. E.g. Figures 5 & 6 make it look like Java energy is super low when it's really just similar to the compiled languages.[1]

[1] http://greenlab.di.uminho.pt/wp-content/uploads/2017/10/sleF...


Ada is not particularly difficult. It does suffer from a lack of tooling compared to some languages. It's a large language with many ways to do things and some dark corners. The same never stopped C++, Ruby, Perl, nor in its day PL/1. If you can learn C++ or Java then Ada is largely syntax changes, not wildly different semantics. If you know Pascal, it's basically a huge extension, with support for features like concurrency and pointers that were never standardized into Modula 2 or Pascal.


I have written servers in Kotlin that run on the JVM and thus are practically the same. IMO I am far more productive than when using Swift, plus you have access to a huge legacy of stable, well thought out, and documented libraries.


Go's GC is not really pushing the envelope. They have merely improved from a STW, non-compacting, non-generational collector with a 25% CPU overhead to a concurrent, non-compacting collector with reasonable overhead.

JVMs and CLR had those a decade ago. The state of the art are concurrent, compacting, region-based pauseless or millisecond-pause collectors.


Comparing Go's GC to one of the many specialized, fine-tuned Java GCs is pointless, I find.

I'm sure you're also aware that recent Go's GC pauses are sub millisecond for most use cases:

"We now have an objective of 500 microseconds stop the world pause per GC cycle." - 2018 Go team

https://blog.golang.org/ismmkeynote

My personal experience with microservices is to expect STW pauses in the 350 microsecond range. The best part is that it requires zero tuning or developer attention while still being light on memory usage. Can't say the same for Java's default GC.


I was talking about technology state of the art, not out-of-the-box experience.

OpenJDK's default collector - parallel or G1GC, depending on version - is not the best available among JVMs and if your goal is pause times then yes, it will be worse than Go's. But if you switch to say C4 or ZGC you'll get comparable pause times and compacting on top and being able to scale to terabyte heaps.

10 years ago we had Metronome and CMS, which are more comparable to Go's collector.

And pause-times are not everything. Throughput and fragmentation resistance matter too. Compacting collectors fare much better on the latter metric. I don't know how the former is now, but those slides talked about 25% GC overhead in older versions of Go, that's utterly terrible.

Go is facing one challenge that Java doesn't: internal pointers. But the CLR's collectors have to deal with those too, so that's not terra incognita either.


Of course gains are to be expected when switching from Java's default GC to something specialized.

> 25% GC overhead in older versions of Go, that's utterly terrible.

25% overhead of what? And compared to what? Just throwing numbers in the air and saying it's terrible makes no sense.

The only 25%'s I could find in the slides were these: https://blog.golang.org/ismmkeynote/image6.png

2014 Go: 25% of CPU used by GC

2018 Go: 25% of CPU used during 2x STW GC of < 500 microseconds

So even if an STW GC occurred as frequently as every second (which it doesn't in my use cases), this would amount to 0.025% of the CPU being used for GC, not 25%.
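Spelling out that arithmetic (a rough sketch, using the figures above and assuming one GC cycle per second):

    // Two stop-the-world pauses of < 500 microseconds per GC cycle,
    // with roughly 25% of the CPU in use during those pauses.
    const pauseSecondsPerCycle = 2 * 0.0005;   // 0.001 s paused per cycle
    const cyclesPerSecond = 1;                 // assumed GC frequency
    const cpuShareDuringPause = 0.25;          // 25% of CPU during the pause
    const effectiveGcCpuShare =
      cpuShareDuringPause * pauseSecondsPerCycle * cyclesPerSecond;
    console.log(effectiveGcCpuShare);          // 0.00025, i.e. 0.025%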


The 2014 numbers read to me as saying that 25% of all CPU cycles are spent on GC. If you re-read my previous post, I was talking about that old version. The point was that Go started its improvements from a fairly bad place, where gains are still relatively easy to obtain.

> Of course gains are to be expected when switching from Java's default GC to something specialized.

So? That's irrelevant for what's state of the art.


I re-read our conversation thread to understand our communication mismatch, and indeed you were talking about Go's GC not being novel, which it isn't. I apologize for the unproductive conversation; it's all on me.


Also, using a Swift-like ARC implementation server side doesn't seem like a good idea. Memory leaks are so common in iOS development. It's just that they are usually small enough, and an iOS application's lifetime is so short, that it doesn't matter. An ARC implementation that handles cyclical references seems like a more stable solution for a server.

-Note- I'm not saying one is better than the other. ARC works great on iOS for creating applications.


Two completely serious questions here,

1 - Why would you want to replace java on the server? (Seems obvious that it provides the best balance of productivity and performance per watt.)

2 - Assuming a company would want to replace Java, why, on earth, would we not use C++? (Or, perhaps, Rust if we want to use a newcomer?) Why are we better off going all the way down this list to Go and Swift?


> 1 - Why would you want to replace java on the server?

Well one reason, if you look at the paper, is memory consumption. One of the ways Java gets good performance despite being a GC'd language is less efficient memory usage. (Although there are many advancements in this area akin to what Go has accomplished but as the paper's metrics show, often Java uses a lot of mem.)

Also probably lots of people will agree that it has accumulated some cruft over the years and predates lots of modern trends and things like concurrency and functional programming and whatnot require more complicated code patterns.. a classic example is the getter/setter verbosity of a simple value type (not trying to start a flame war here).

However it's fair to say Java is the thing to beat in terms of "goldilocks" languages that blend productivity/safety with performance.

> 2 - ...would we not use C++?

Because it's a huge regression in productivity and safety. The goal is to reduce memory consumption while achieving great performance and, crucially, consistently low latency which is a greatly underappreciated server metric and really should be the number one benchmark IMHO.


>* is memory consumption...*

???

You're recommending a slower language, that uses more watts, on a presumably 24/7 backend...

to save a few bucks on memory?

As I said, if you want a company to switch to c++ or rust because you want to save money on memory IN ADDITION to all the money you're saving on watts, while getting the same or better performance? That might make some sense.

Switching to go makes, No, sense. You end up paying more money per user session to do the equivalent work. (Even worse, each user waits longer because the work is done slower.)

And then, like typical engineers, we'd proceed to explain to our bosses that all of this is actually better...

because our new program uses less memory.


You’re misrepresenting my statements. I believe I’ve been clear that it’s early days for these languages and that they are good candidates to supplant Java in the future based on their trajectories. Memory utilization is just one efficiency advantage of a reference counted language like Swift. There are other reasons to look beyond Java as well.

Also, a request. Try to take it easy on the ???, all caps and excessive italics.


> and predates lots of modern trends and things like concurrency and functional programming and whatnot require more complicated code patterns..

And golang does what exactly for this? As a matter of fact, Java is superior here to golang on these fronts, and is only going to get better. There is a lot of unsubstantiated golang hype and people should know better.


> Swift

That's a pretty cool idea. I read something on HN the other day about using swift in the back end, but haven't used it myself. Hopefully Apple gets support for non-ubuntu linuxes as well as windows out the door (especially since they're no longer making their own hardware). I've heard anecdotally that it has very slow compile times, so that might be another issue.

> Swift is unique in the server world in that it uses reference counting

Out of curiosity, why has no-one done automatic reference counting on the server side before? C++11 has smart pointers, and I believe python does as well (can't speak to this one personally).

I also feel like most languages using the llvm backend could achieve performance at least close to c++; wasn't that part of the purpose? It seems like Swift might be especially ripe for this because Apple is the big developer of both LLVM and Swift.


While I'm also bullish WRT Swift-on-the-server, reference counting has absolutely been done in server-side languages before. At least Python and PHP use reference counting, though they both also have runtime cycle breakers.

Those examples are both interpreted languages, and they have a higher runtime cost for memory management because they're actively breaking reference cycles, so it's not apples-to-apples, but it's still reference counting.


> Out of curiosity, why has no-one done automatic reference counting on the server side before?

My guess is you fall into one of two categories:

1- You don't care a ton about efficiency because you can scale out to infinite servers that are relatively cheap and affordable. So you can run on django or rails or whatever and it's fine at "pre-IPO" scale.

2- You do care, but Java is good enough.

So you kind of have to be at Google's scale to care about creating a new server language. In Swift's case, the motivation was super constrained mobile devices, so bringing it to the server side is more of an incremental engineering cost against that massive investment.


I don't think carbon footprint is the be-all end-all you suggest. In a startup that has yet to scale, trading off efficiency for, say, flexibility, safety, or expressiveness may have minimal carbon impact but major benefit for your business. After all, few here will defend the choice of C for web service development these days.


> In a startup... few here will defend the choice of C...

Not sure what you're responding to but I specifically said "at scale" and "developer productivity / friendliness is important to overall success."


I think 'safety' is the key term here. In the future we will see more self-driving cars and other self-... machines. I would like them to provably not crash (into me).

If it were up to me, no matter the energy consumption, a piece of software in that realm should be proven correct (both crash-free and doing what the specification says).


>a piece of software in that realm should be proven correct (both crash-free and doing what the specification says)...

You do realize that a lot of autonomous driving software uses ML techniques that make providing guarantees like that difficult, right? (To be honest, oftentimes we can't even provide an explanation of why an ML decision was made, let alone "prove" why an ML decision is made. We only show that most of the time, 98.3995% or whatever, the system should do "something like this".)

But in any case, none of that would affect the languages at issue here, because no one would write self driving software in any of these languages. There's about a 10,000% chance that any such startup would use GPU languages to do the meat of that work. Almost 0% chance any reputable company relies on something like go or swift to drive an automobile out on public roads.


> Almost 0% chance any reputable company relies on something like go or swift to drive an automobile out on public roads.

You couldn’t be more wrong. Tesla hired the creator of Swift to run Autopilot software.

He’s now at Google where he’s working on making Swift a primary language for TensorFlow.[1]

[1] https://youtu.be/s65BigoMV_I


I've spent the past several years programming w/ Swift, but the performance is a little concerning. Maybe the switch in Swift 5 to utf-8 backed Strings improves the performance?

I think Rust w/ macros like you see in Rocket (https://rocket.rs/) looks promising. I haven't used it, but the guarantees, simplicity, and performance (possibilities - it doesn't use async yet) are really interesting.


I work in Swift on iOS. I think the "Swift on a server" idea is way over-hyped. There are so many issues with the whole Swift stack that adopting it server-side would be a mistake IMO. One of the issues with Swift on a server is ARC vs a Go-like GC. If a company is legitimately concerned about runtime efficiency, there are already great solutions, Go being one of them.


I would love to hear your perspective but this is not a well supported argument. ARC is one of the most interesting things about Swift on the server. Historically one of the greatest challenges with servers is the impact of GC on the long tail of performance and latency. So, so much work has gone and continues to go into addressing the problem of low-overhead GC. You can appear to get great performance but then when you look at your stats you see that some significant percentage of your users are exceeding your maximum latency goals due to GC kicking in. Go and Java have made great strides but sometimes at the cost of memory inefficiency and/or optimizing for specific cases.

Reference counting, by contrast, is entirely predictable. It doesn't defer any work. I would argue that CTOs are a lot more interested in consistently low latency than in requests per second. So it's very interesting to see an approachable, performant language take the ARC route on the server. It is early days though.


My main issue with Swift's implementation of ARC is that it commonly leads to memory leaks. Even experienced developers can cause unintentional memory leaks. It's extremely common in iOS development. It's also hard to even detect and eliminate them; you just see memory usage climb as your app ages. It's not as much of an issue on iOS because apps are commonly very small and have such a short lifecycle.

ARC would be just one of my issues with using Swift outside of iOS.

I think you are overstating the pause duration of a modern GC. I have used both Go and the JVM server side, not at a huge scale but enough to see GC affecting response times. They add some fluctuation, but nothing compared to network latency or the multitude of other factors that fluctuate heavily. It was never significantly relevant for response times. I'm looking at my server logs right now and there's not even a real correlation between GC and response time, unless you are counting a 0-10ms fluctuation.

If you are interested in using ARC server side, I know Kotlin Native uses ARC; however, their implementation eliminates the cyclical reference issue.


Nah, I’d say Swift’s good support for value type semantics helps cut down on leak problems. Depends on the patterns you use of course but I can see that being a good approach for server apps.

Funny you say that because just this week there has been an investigation into what seemed to be “leaks” but turns out it’s memory fragmentation. It’s a fascinating read into how to debug a server problem if you’re into that sort of thing.[1]

Agree that modern tracing GC can be very good in a wide range of cases but there are some where it’s not. Very dependent on the case. Ultimately you are deferring work and hoping to find some time in the future to squeeze it in unnoticed. ARC is a cool paradigm to explore on the server as it doesn’t have this problem to begin with.

[1] https://forums.swift.org/t/memory-leaking-in-vapor-app/22209...


Value types are great, but they don't help cut down on cyclical references, since the kind of coding that causes memory leaks is done with objects, typically objects with complex dependencies and inheritance hierarchies.

Memory fragmentation is another legit concern I guess; as far as I can remember, iOS has no memory compaction. Again, not necessarily an issue for a short-lived user-space application, but it is a larger one than memory leaks, at least at my company. In some hot spots of our app we specifically slow down the reading of some queries to reduce memory fragmentation.

Frankly, Chris Lattner's claim that a GC leads to 2x-3x memory consumption over ARC is unfounded and sorta shocking coming from someone held in such high esteem. It's something that's continually been shown to be untrue.

It always seems Swift's biggest selling point is it uses ARC instead of a GC, which is either not a large issue or a GC is actually more beneficial. Other than that you still haven't dealt with the toxic "Swifty" community, the terrible tooling situation, the immature libraries and frameworks, etc, etc.

There seems to be so many better solutions to writing server side code. This is all coming from someone who uses the language on a daily basis.


Glad to see Vapor/Perfect have toned down the "we are going to change the world by releasing a new web framework" rhetoric on their sites. When they both first announced them it was pretty cringe.


Refcounting has long-tail latency too, arising from cascading deallocations.

(And then you have ref cycles which is another kind of headache. And poor cache behavior from all those refcount updates.)


How do you get the 11 years figure? What is your idea of a drastic reduction? How will you measure it? And should all countries observe it equally?


Fortunately there are really extensive online resources you can consult for details! For example the United Nations Intergovernmental Panel on Climate Change.[1]

[1] https://en.wikipedia.org/wiki/Intergovernmental_Panel_on_Cli...


The United Nations has something like 30 different climate models. Also, no one can forecast next month's weather accurately, much less the weather 12 years from now. Or do you know of a more accurate forecasting model? Such a model would be extraordinarily useful in agriculture and emergency management, for example.


Climate and weather are different. When people are forecasting 12 years into the future, they’re not predicting the weather, they’re predicting the climate.

Just pointing out this distinction because the "can't predict next week's weather" line is a disingenuous soundbite that has been used by climate change deniers to discredit climatologists in the eyes of the public who don't understand the difference. I'll give you the benefit of the doubt that you weren't trying to be disingenuous, but the distinction still makes your question somewhat irrelevant.


This is like arguing that the science behind vaccines is invalid because doctors can’t predict when you’re going to catch your next cold. It’s willfully obtuse and has no place on HN.


[flagged]


More anti-science nonsense. Here, let's let NASA tell you the definition of the scientific method.[1]

[1] https://climate.nasa.gov/news/2743/the-scientific-method-and...


You can play around with different settings here: http://trillionthtonne.org/ The model it's based on is a few years out of date though. The trajectory our politicians have put us on right now seems to suggest that they subscribe to extremely optimistic settings or are okay with absurd warming.


> A faster language is not always the most energy efficient

My rule of thumb now is that memory access, not compute, is the primary consumer of energy. That would tend to confirm the above statement while also supporting the data that scripting languages aren’t super energy efficient, since they tend to do a lot more dynamic allocation than compiled languages, broadly speaking about common programming practices in each language.

This is mainly colored by GPU usage and a paper/presentation some friends made: https://www.researchgate.net/publication/324217073_A_Detaile...

In the GPU case, memory access costs sometimes 10x more than compute, meaning that minimizing average (not peak) memory traffic is more or less the only path to significantly reduced energy. (See figs 7 & 10 in the linked paper)
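To make that concrete, here is a back-of-envelope sketch in Python. The per-op energies are made up; only the roughly 10x memory-vs-compute ratio reflects the figures above, but they show why halving memory traffic buys far more than halving arithmetic once memory dominates the budget:

  # Illustrative numbers only; the ~10x ratio is the point, not the absolute values
  E_COMPUTE_PJ = 1.0     # energy per arithmetic op, picojoules (hypothetical)
  E_MEMORY_PJ  = 10.0    # energy per off-chip memory access, picojoules (hypothetical)

  def kernel_energy(compute_ops, memory_ops):
      return compute_ops * E_COMPUTE_PJ + memory_ops * E_MEMORY_PJ

  baseline     = kernel_energy(1e9, 3e8)    # 4.0e9 pJ
  half_compute = kernel_energy(5e8, 3e8)    # 3.5e9 pJ, only a 12.5% saving
  half_memory  = kernel_energy(1e9, 1.5e8)  # 2.5e9 pJ, a 37.5% saving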


"Costs more" in terms of energy per... what quantity exactly? Instruction executed? Clock cycle? Datum processed?


I think practically, for useful analysis in business it would be energy per unit of useful work (or per datum processed, as you suggest). For instance, I work in finance, so what is the energy cost per option price calculation?


The reason I asked was that it wasn't clear to me if the parent comment was measuring the same way or not re: memory accesses. It seemed pretty expected that a memory access would be more expensive per byte (it takes like hundreds of clock cycles...) but it wasn't obvious to me if it would be so per clock so I wanted to clarify which was meant.


I absolutely agree that it wasn't clear. I don't think energy per clock cycle is a meaningful measure here - after all, all languages will have the same energy per clock cycle if they're executing the same instructions.

What's important is energy per unit of useful end product, in my example pricing an option. For others, it might be energy per web page served or anything else.


Energy per operation and delay per operation, on the order of ~40-100X more than compute...


In the paper it’s energy per rendered pixel. You could translate that abstractly to joules per flop.


This is probably true, but it would be incredibly cool to see a research project that demonstrates this.


Not sure what to make of this research. It's not like I can switch out JS in favor of these other languages (many of which compile to binaries) in my web apps. Comparing the performance of, for example, web servers written entirely in the different languages would seem more interesting.

The JS/TS comparison immediately seemed fishy. I found no mention in the paper of how they managed to "run" the tasks in TypeScript, which AFAIK is not possible, or at least not how one would do it in real life.


Maybe you can switch it out :) https://webassembly.org/


It wouldn't be switching out. You'd just be running WASM.


Instead of a binary. You're running whatever the language compiles to. So then it depends on how efficient the WASM compiler is compared to the binary on whatever platform, and compared to JS.


Just use it as another ballpark reference of what trade-offs are being made when going with one language or another. Not really any different than any other language benchmark/shootout comparison.

Unless you have very large numbers of [users|servers|whatever] to amortize the cost of going one way vs another over, it's probably not worth even considering trying to change anything on your end. More likely, your personal or team productivity is the much more important metric to optimize for anyways. If you're doing things at FAANG scale, it's an entirely different story.


>Not sure what to make of this research. It's not like I can switch out JS in favor of these other languages

Nope, but for IoT devices or anything in a scenario with scarce energy resources or difficulty removing heat, it's an interesting datapoint. Although I think the results mostly conform to intuition; I doubt anybody would voluntarily compile a lot of Haskell on their space satellite.


Reversible computing should be mentioned, as it could be the holy grail of low energy use.

https://spectrum.ieee.org/computing/hardware/the-future-of-c...


Almost 20 years ago I took courses under a professor who was very into reversible computation. As I recall it was mainly on the language side, though.

Glad that it's still moving, sad that it's at an even slower pace than the "functional/immutable will save the parallelization worries" stuff I encountered in the same period.


It almost sounds like reversible computing is immutability at the hardware level.


Historically, virtually every move up the abstraction level sacrificed performance to gain programming efficiency. Assembly to C, C to C++, C++ to Java, Java to Python/Javascript. And now there are wars over Javascript vs. Typescript, and jQuery vs. React/Angular. Every new step up the stack claims that the higher level of abstraction can either be implemented with only a minimal penalty, or could actually improve performance due to easier automated optimization. However, every time, it seems that in practical applications, performance is sacrificed to make software development faster.

I don't know enough about the performance of Javascript vs. Typescript to form a strong opinion about it, but if history is any guide, it's likely that it improves software development efficiency at the expense of performance.


TypeScript usually transpiles to JavaScript, just without the type information, and this is not done in the browser. If you target old browsers it can maybe add some overhead, like Babel does.

I don't know about javascript's usual JIT compilers, but LuaJIT's JIT compiler (trace compiler) does a pretty good job of figuring out abstractions.


I mean, JavaScript comes out 4.5x slower and consuming 6.5x more memory. For TypeScript the numbers are 22x and 46x respectively. That honestly makes me very suspicious about the benchmark set-up.


I always feel like there is a big misunderstanding of what it means to “write a program in some language” in these types of comparisons.

For example, saying that Python uses much more energy is a foolish thing to say. The “same” algorithm written in pure Python is a very semantically different thing. It involves allocation of flexible objects that obey certain attribute lookup protocols, operator protocols, dynamic attribute mutation/creation, iteration protocols, etc. It is presumed that if you wrote it in pure Python, you need this dynamism and the ability to introspect at runtime, modify data structure layout arbitrarily, utilize an automatic garbage collector, and so on. So you’d need a benchmark test that requires all that functionality before it could possibly make sense to test Python. Otherwise you’re penalizing Python for a bunch of expensive overhead, which is unfair, because the whole point is that such overhead exists for use cases where either the costs are negligible or the flexibility that necessitates that overhead is a desired and important part of the problem. To write the same functionality in another language, you would first have to build all the protocols, garbage collection, etc. machinery that essentially defines Python.

If instead you have some benchmark problem that can be solved in e.g. C or Rust without needing any runtime dynamism or heavy machinery of certain protocols or garbage collection, then to write it in Python you would just write it in Cython or as a hand-made C extension module to specifically bypass the overhead of the interpreter or Python protocols / dynamic lookups / etc.

Basically, it never makes sense to compare Python and C on a benchmark that doesn’t require reimplementing most of Python in C first. Because to solve that problem in Python, the ubiquitous, basic, Python 101 way to do it would be to use Cython or numba or write your own extension module.

Python is basically a special C language DSL for dynamic types, garbage collection, and a set of conventions and protocols.

“Pure Python” is a linguistic trick for saying “a huge ton of tools that make C-level polymorphic structs easy to use.”
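As a small sketch of what that looks like in practice, using numba (one of the options named above; the function and numbers here are just an illustration): the same source either runs through the interpreter with all of its object protocols, or gets compiled by LLVM into a tight machine-code loop.

  import numpy as np
  from numba import njit

  def dot_py(xs, ys):
      # interpreted: every index, multiply and add goes through Python's object protocols
      total = 0.0
      for i in range(len(xs)):
          total += xs[i] * ys[i]
      return total

  # same algorithm, compiled to machine code, bypassing the interpreter entirely
  dot_fast = njit(dot_py)

  a = np.random.rand(1_000_000)
  b = np.random.rand(1_000_000)
  dot_fast(a, b)   # first call pays the JIT compilation cost
  dot_fast(a, b)   # later calls run the compiled loop

The point being that the Python-level overhead is opt-out, not mandatory, for exactly the kind of numeric kernels these benchmarks measure.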


Yes, except some people are using Python where they should be using C++, and power consumption is a good reason not to use Python in these cases.


No, it’s a good reason to use Cython or one of the many other techniques for generating pre-compiled C extensions in the Python ecosystem. You may also use C++ directly or something if you prefer, but there would not be any performance based reason to do so.


Really silly to state how much electricity "languages" use without specifying implementations. You can get away with it for unstandardized languages with a single implementation, but languages don't have performance characteristics in the typical sense, implementations do, and these can vary greatly between implementations.



The code size metric was pretty interesting to me. I would have imagined that functional programming languages should lead to the most succinct code. But I was surprised to see Haskell in the middle, and Go at the top there! Functional languages are right in the middle for all 3 metrics.


Unfortunately there are no APL/array languages there, they are notorious for being succinct! It's not a stretch to say "lines of code" in most other languages would be close to "characters of code" in APL. It'd be interesting to compare it using the other metrics too.


PHP's performance is quite impressive nowadays. It's one of the top fastest interpreted languages.


For how much shit PHP gets, it is a pretty good lang for building things


I'm curious: I've used PHP (not by choice) for the past two years and I strongly dislike it. (I could wax eloquent on this subject, but I don't want a flame war. :) What do you see of value in PHP? Is it the language itself? Its ecosystem or community?


I'd be interested to know the added energy cost (in terms of programmers) of writing the code in the first place and subsequently maintaining it

* edit: thinking about it, it's probably negligible


Cost is a good first order proxy for energy used, so this is not negligible if you want to take into account all the energy used by the developer (including home activities).

As for any other product, this is a trade-off between fixed and variable cost: want to do some quick processing on an hourly basis? Just write a quick Python script with a cron job. Want to compute something billions of times? Use C or something more modern that compiles. Need even more savings? Build an ASIC to do the job.


It would be interesting to match these results to what you get using EO (efficiency-oriented) languages (MOQA, for example). There's a whole branch of computer science involved in static average-case analysis for algorithms and data structures. The results are 'interesting'.


The big surprise for me is where Perl stands in the rankings, below Javascript in all three tests.


JavaScript usually runs quite a bit faster than other dynamic languages (after JIT warmup), not because it's fundamentally any faster, just because modern JS engines are so good. Browser competition has meant that the amount of engineering work put into JS performance is much, much more than just about any other language.


Amen: I'd have to take a look at the actual code in use.


You can, the authors of the paper provide the actual code they used on their website.


“On average, compiled languages consumed 120J [joules] to execute the solutions, while for a virtual machine and interpreted languages this value was 576J and 2365J, respectively.”

IMHO it's not quite right to evaluate energy efficiency just by looking at the runtime performance. Compiled (incl. JIT) programs have to be compiled. So for compiled languages you have upfront costs (e.g. much higher for Rust and Haskell than for C), which you would have to include in the calculation. Then you have programs/scripts that run only once (a day, maybe). I wouldn't be surprised if taking this into account would change the calculation.
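A back-of-envelope sketch of that amortization (the per-run numbers echo the averages quoted above; the one-off compile energy is purely hypothetical):

  COMPILE_J         = 5000.0   # one-off energy to compile (hypothetical)
  RUN_COMPILED_J    = 120.0    # per-run energy, compiled (average quoted above)
  RUN_INTERPRETED_J = 2365.0   # per-run energy, interpreted (average quoted above)

  def totals(runs):
      return COMPILE_J + runs * RUN_COMPILED_J, runs * RUN_INTERPRETED_J

  for runs in (1, 3, 10, 1000):
      compiled, interpreted = totals(runs)
      print(f"{runs} runs: compiled {compiled:.0f} J, interpreted {interpreted:.0f} J")

  # With these numbers the interpreter wins for a single run, and the compiled
  # program wins from roughly the third run onward.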


They should include Julia in the results.


No mention of assembly language, which should be even lower on electricity and RAM usage than C and Pascal.


Yes, I was looking for "carefully optimised Asm" on that list too. Even if it's not faster it will pretty much always be smaller. (Compilers are surprisingly easy to beat at size optimisation.)


Well, -Os can be a bit better. However, I doubt program size is very significant compared to data size in compiled languages, for example.


I would like to see that benchmarked.


Very interesting!

We have wanted to do a similar conversion from the results data in our TechEmpower Framework Benchmarks [1] to energy efficiency. I've wanted to go as far as converting watt-hours to some average carbon emission rate for electricity generation in a given region.

I imagine the carbon emission per request would be amusing, if nothing else.

[1] https://www.techempower.com/benchmarks/


Aren't the savings more in the type of application? For example, driving users to addictive content makes them use their phone more. Also, Bitcoin comes to mind.


To me, the amount of electricity used is a similar issue to RAM. Just because we have computers with more RAM does NOT mean programs NEED to use that RAM. So many applications right now don’t need as much memory as they are using, but, whether it be lazy or poorly planned code or any other excuse, they guzzle it up like they are the only application running on your device.

I’m all for figuring out what’s the most efficient option while also ensuring it’s an effective one as well. If you kept a knife in its sheath to keep it nice, but never take it out to use it, then it’s not serving its purpose as a tool (ignoring ornamental ones). However, no matter how cool it might seem, you really don’t need a samurai sword to pare apples.


But at the same time, when manipulating strings with regular expression, three of the five most energy-efficient languages turn out to be interpreted languages (TypeScript, JavaScript, and PHP), “although they tend to be not very energy efficient in other scenarios.”

I believe that's because they're all using a regular expression library which is probably written in C/C++, maybe even with some pieces in optimised Asm, so none of the actual RE-work is being done in the interpreted language itself. If you wrote actual RE processing in the interpreted language and let the interpreter interpret/JIT it, I bet they would be as (in)efficient as the "other scenarios" where the "heavy lifting" is running through the interpreter.
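Python shows the same effect, for what it's worth: the re module's engine is written in C, so even a trivial scan is much faster through it than the equivalent loop in interpreted Python. A rough sketch (exact timings will of course vary):

  import re
  import timeit

  text = "abc123 " * 100_000

  def count_digits_re(s):
      # the scanning happens inside CPython's C regex engine
      return len(re.findall(r"\d", s))

  def count_digits_pure(s):
      # the same scan expressed as interpreted Python bytecode
      return sum(1 for ch in s if "0" <= ch <= "9")

  assert count_digits_re(text) == count_digits_pure(text)
  print(timeit.timeit(lambda: count_digits_re(text), number=10))
  print(timeit.timeit(lambda: count_digits_pure(text), number=10))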


Hope that the cloud providers don't bill according to energy consumption. I like Python a lot...


They already do, in the sense that you'll need beefier/more machines to do the same job.

Of course, using Python means you have a lot of low-hanging fruit to pick. We have a Python service where we moved ONE ~60 line recursive function to Go and overall CPU consumption dropped to 15-20% of what it used to be.


That sounds pretty damn nice - would you happen to have a good link in mind for reading more on the subject?


This was a financial function which calculated an internal rate of return for a long series of uneven cash flows, essentially a Python implementation of the XIRR function that spreadsheets have.

To get to the required two decimal places of precision, each call would recurse 50-60 or more times so this was more or less all that the server was doing.

A VERY low hanging fruit.
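For anyone curious what that kind of hot spot looks like, here's a hedged sketch (not the actual code): a recursive bisection search for the rate at which the net present value of dated cash flows crosses zero, recursing until the rate is pinned down to a couple of decimal places.

  from datetime import date

  def npv(rate, cash_flows):
      # cash_flows: list of (date, amount), discounted back to the first date
      t0 = cash_flows[0][0]
      return sum(cf / (1.0 + rate) ** ((d - t0).days / 365.0) for d, cf in cash_flows)

  def xirr(cash_flows, lo=-0.99, hi=10.0, tol=0.005):
      # recursive bisection: each call halves the bracket around the root
      mid = (lo + hi) / 2.0
      if hi - lo < tol:
          return mid
      if npv(lo, cash_flows) * npv(mid, cash_flows) <= 0:
          return xirr(cash_flows, lo, mid, tol)
      return xirr(cash_flows, mid, hi, tol)

  flows = [(date(2018, 1, 1), -1000.0),
           (date(2018, 7, 1),   300.0),
           (date(2019, 1, 1),   800.0)]
  print(round(xirr(flows), 2))

Each level of recursion re-evaluates the NPV over the whole cash-flow series, which is exactly the sort of tight numeric loop where interpreter overhead piles up.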


What's also interesting is how Go and Python interact, since you seem to be able to call Go from Python and the reverse very efficiently.


Sorry but we actually cheated on that part. Redid the page so that the function that is now in Go is a separate little service that is called via ajax directly by the users' browsers.

So there's no actual Python-Go interaction in our code.

This was quicker, cleaner and has made the page more responsive for users.


Thanks, this comment is useful. Client-side integration is integration after all, and I think if it were simple, you would have done it differently (if I understand your use of "cheating" correctly).


I really wonder why Fortran does so badly - I think they didn't try a good compiler, like Intel's.



Can anyone extract what the impact of a JIT compiler is on these figures? The initial performance hit is eventually eroded by the runtime optimizations, so will a long-running Java program eventually close the energy performance gap on a C program?


Which lisp? Those numbers don't seem right from my experience with SBCL.


The data for the paper the article talks about is online [1], and it says [2] that the benchmarks are "implemented in 28 different languages (exactly as taken from the Computer Language Benchmark Game)".

The LISP evaluated by the Benchmark Game - and, apparently, this paper - is indeed SBCL [3], which seems about on par with Java.

“Lisp, on average, consumes 2.27x more energy (131.34J) than C, while taking 2.44x more time to execute (4926.99ms), and 1.92x more memory (126.64Mb) needed when compared to Pascal.”

[1] https://sites.google.com/view/energy-efficiency-languages/ho...

[2] https://github.com/greensoftwarelab/Energy-Languages

[3] https://benchmarksgame-team.pages.debian.net/benchmarksgame/...


I think this is what is so cool about Common Lisp. You can literally get pretty darn close to C in performance and still be at the highest levels of development efficiency/prototyping speed. I don't think too many other languages can say so. Take Python. It has really fast development time, but very slow performance.


C and Haskell were about what I expected, but those Pascal results have me scratching my head, given that it performed about as well as C and used even less RAM.


Pascal and C are roughly equivalent in my eyes, at least as far as what they spit out in the end. Both have pretty similar feature sets and development methodologies.

What surprised me is how much of a hit the OO programs took. I had thought they would compile down to something reasonably similar to the imperative languages, but C++ ended up being 50% slower.


So Rust is a little bit more efficient than C++. That's kind of surprising. And I knew Lisp was fast, but I am still surprised it scored so high.


How does "TypeScript" come out to an energy score of 21.50 when "JavaScript" has a score of 4.45 - that makes 0 sense.


Their TypeScript compiler may not be targeting the correct ECMAScript version, so the generated JavaScript could be using polyfills instead of native features.


Maybe they included compilation? It would make sense, as that's a real-world use of CPU/memory/energy.


But it certainly looks like they didn't include compilation time for all of the compiled-to-machine-code languages. C has the reference value of '1'.


I think they included compile time if the code is JITed at startup.


Still doesn't make sense. Build one time, run many times.


What about the energy spent fixing bugs in JS that TypeScript would have found at compile time?


If I'd hazard a guess before reading the article, it's because of the transpilation phase to regular Javascript. That doesn't come for free.


> We then gathered the most efficient (i.e. fastest) version of the source code in each of the remaining 10 benchmark problems, for all the 27 considered programming languages.

I wonder if anyone has attempted to rate how idiomatic or typical CLBG entries are, and whether choosing the most idiomatic implementation for each language would have obtained different results?


I improved Rust's version of binary-trees by about 0.05s (on my CPU): https://salsa.debian.org/benchmarksgame-team/benchmarksgame/... , so Rust score may rise.


I'm a little surprised at where Erlang features in the list. I've never used it personally, but have heard so much about it being an amazing, performant language. I wonder why it's so time/energy/memory inefficient for the algorithms used in this exercise.


BEAM has a 'busy wait' feature. Schedulers with no jobs to do will remain active so they can respond to new jobs faster. You can turn this down so that the schedulers go to sleep quicker, reducing CPU usage.

From http://erlang.org/doc/man/erl.html:

  +sbwt none|very_short|short|medium|long|very_long

  Sets scheduler busy wait threshold. Defaults to medium. The threshold
  determines how long schedulers are to busy wait when running out of work
  before going to sleep.


Interesting. Have you ever used Erlang in production? How did you find it?


It's pretty obvious if you're running RabbitMQ which ranks highly in the output of `top` even if no messages are being processed.


Not really no. The closest thing to production I've used it was as a one-liner web-server to serve some images.


Erlang's use case is handling lots of connections simultaneously, and messaging between processes.

These benchmarks are all single-threaded, pure computation. Erlang is pretty slow at that (alas).


I’d certainly put Erlang (and Elixir) in the class of scripting languages. In that category it beats out most of the other classic scripting languages like Python, Ruby, Perl, and Lua at execution speed and energy usage, aside from JavaScript and oddly PHP.

For being a mostly pure functional language it performs pretty well in that category. To the parent comment’s question, the reputation for solid performance likely comes from Ruby/Python developers. When dealing with web server applications it really shines due to how it handles concurrency. Also NIFs are really easy to write, especially in Rust.


It's built for massively distributed systems and is efficient at that I believe. It's not really designed to be a replacement for numeric languages like C/C++/Fortran/Ada...etc. It's a high level language, so what you gain in user development efficiency, you lose in performance. Someone knowledgeable in Erlang could confirm though as I'm being general.


Is this the cost of restarting processes due to "let it crash" error handling?


No idea, but I doubt it if they're testing with well defined algorithms. I can't imagine the tests throwing lots of errors.


I did not expect Lua to be that far down the list. I thought Lua was a go to choice for scripting embedded software. Also, I'm not surprised that JavaScript outperforms most of the other scripting languages (according to this potentially erroneous benchmark).


I also was disappointed that my favourite language did so poorly in these tests - I wonder, though, what the stats would have been like had LuaJIT been factored in. Perhaps something for future evaluation ..


"Of the top five languages in both categories, four of them were compiled. (The exception? Java.)" and "While Erlang is not an interpreted language." Had to check Wikipedia to make sure I haven't become crazy.


Is JIT, interpreter and VM startup time included? Some benchmarks finish in just a few seconds. With Python for example just launching the process takes in the order of 100ms, which would be significant time in such short benchmarks.


For example, on a server the CPU will likely be set at full thrust even if it has nothing to do. So I don't think this matters until you need to scale, where you would need fewer servers and less power with a more optimized program.


Are they counting the electricity used to heat the pizzas that feed the programmers?


This was said half in jest I suspect, but it's not a completely off the wall concern. For an infrequently used program the energy burned in its development could easily be a nontrivial amount of the total energy used over its lifetime. If you can write a 5 line Perl script that gets the job done (and is only run a couple dozen times ever), then you're probably saving energy over the 100 line C program despite the massive differences in runtime energy consumption.


Phew, somebody gets it.


The language which uses the least energy is the language no one writes programs in.


Question: if these were run on underclocked hardware, would they use less energy? It seems like you'd need an increasingly large amount of energy to run something in a smaller amount of time.


No Julia or Crystal, that's disappointing.


YMMV. Our experience with Go vs Java showed we could run the same workload on far fewer machines using Go than Java.


So do we think Java is benefiting from J2ME and Android research? Is it just a coincidence?


How is JavaScript so far up the list? Slack commits murder on my MBP battery every day.


Using https://github.com/xtermjs/xterm.js as an example (the terminal used in VS Code), they draw on a canvas to achieve better rendering in the browser. So I guess JavaScript isn't the CPU-hogging bottleneck, but rather the complexity of HTML/CSS. There are just too many footguns that slow down rendering. If only there was something like "use strict" for HTML/CSS.


The heavily parallelized C in the language shootout is not how most code is written.

Rust was specifically created to use parallel hardware resources well, and it excels at it.


No COBOL?


Seeing the amount of transactions running on COBOL, it might deserve a place in a list like this.


If you're still using COBOL, power usage is likely very low on your list of concerns


No? Just about any financial transaction, insurance calculation and a lot of telco operations run mainly on COBOL.

Something like 90% of all credit card transactions.


You can throw money at electricity use, and COBOL is compiled to machine code so the power usage is much less of a concern than it would be in an interpreted language.

Much higher on the list of concerns is finding capable COBOL developers, ongoing maintenance, and security. Those are all things that you can't just throw money at.


> Of the top five languages in both categories, four of them were compiled. (The exception? Java.)

Java is also compiled.


Assembly language is the lower bound here. This is a 100% correct fact that nothing can use less energy than assembly language. No such statement can be made about other languages.


Not necessarily. Sometimes compilers give better programs than a programmer can with optimization and stuff.


Right but if I write a program that converts code into assembly and does the optimizations automatically then I can get something better or equal to a compiler.


Isn't that exactly a compiler?


Yes. And all of my statements are 100% correct except for this one.


Compiled Forth would give Assembly a run for its money and produce smaller code as well.


But could forth beat compiled Assembly language? I think not.


This study is a bit silly, I had to check whether it was an April Fool's joke. The chipset running the instructions is going to have a far higher impact on energy usage than the language compiling those instructions.

For instance, switching to ASICs or even FPGAs that are optimized to handle certain types of computations.

Just look at datacenters or mobile devices. Sure, optimizations are made to runtime environments to improve performance, and increase battery life. But you are going to see bigger gains through chip architecture. And that is what vendors focus on.


To the naysayers: these languages are either compiled, or will be JIT'ed at runtime. What they will be compiled to will be chip specific binaries-- x86, ARM, whatever. The energy usage will be markedly different depending on which chip architecture is chosen, and will no doubt be inconsistent between different architectures and languages.

Or to put it another way. Why do you think mobile devices almost exclusively use ARM? Why do datacenter operators invest heavily in R&D for ASICs instead of engineering new languages or runtimes? Because languages play a secondary role in energy consumption.

This study is just like countless other meaningless benchmarks seen in language-war flame posts. Pointless, and it misses the big picture.


With regard to datacenter operators, I would argue that a simpler answer to that observation is that they are investing heavily in the area in which they can actually effect change. That is, they can swap their hardware in and out, but the language of choice is largely up to their customers - the devs. They can advocate and champion a particular language, but really enforcing it in any meaningful way would just narrow their own user base.


They could offer their own compilers or runtimes. Some big companies do. But most dollars and time is spent on hardware R&D and upgrades.

Facebook and Google both have in house chip designers these days, specifically for their data centers.


What do you think: would the order of languages (as listed in the different tables in the article) change if we replicated this study using a different CPU?


It is not about stack rank. Your measurements in joules are going to be affected by another variable that was omitted from the experiments! This type of omission would be pretty obvious to spot in a scientific paper describing lab experiments.

You will see different energy outputs per language based on their compiled results depending on chipset architecture. And you will see different deltas between them; e.g. language x may use 10% more energy than language y on chipset A, but use 20% more energy on chipset B. And usage will vary depending on which kind of task is performed on which chip architecture, and with what language.

So you did a poor job setting your controls. The effect of the programming language was not well isolated in your benchmark trials, so setting chosen language as an independent variable is flawed. Who knows whether the overall ranking of languages would be affected had the benchmarks been done differently.


very stupid but clickbaiting question... everything wastes everything...


What does “everything wastes everything” mean?


Now, seriously: C and the boogieman of managing your own memory is a myth. Anything you want from another language can be built up from C, and be under your direct control in your own C source code. Far too much propaganda is written telling programmers, perfectly capable of reaping the gains of working in C, that they should be afraid of working with their own memory allocations. As if being organized is impossible.


Language runtime analysis for server-like programs seems obsessed with putting everything into a single process and address space. With C/Unix, traditionally or at least since inetd, the way to go is to start a fresh process per request, which mostly just solves the problem of manual memory management (e.g. because the OS cleans up the mess you left behind), at the price of latency. I'd like to see a benchmark comparing these approaches, since GC is far from free of overhead either.
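For illustration, here's a minimal sketch of that process-per-request pattern in Python rather than C (just to show the shape of it): each connection is handled in a freshly forked child, and whatever the handler allocates is reclaimed wholesale by the OS when the child exits.

  import socketserver

  class EchoHandler(socketserver.StreamRequestHandler):
      def handle(self):
          # allocate as sloppily as you like; it all lives in this child process
          scratch = [bytes(1024) for _ in range(1000)]
          line = self.rfile.readline()
          self.wfile.write(b"echo: " + line)
          # no explicit cleanup: the child exits and the OS reclaims everything

  if __name__ == "__main__":
      # one forked child per connection (Unix only), trading latency per request
      # for trivially simple memory management
      with socketserver.ForkingTCPServer(("127.0.0.1", 8081), EchoHandler) as server:
          server.serve_forever()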


How would that prevent any of these CWE-415 or CWE-416 vulnerabilities?

https://www.cvedetails.com/vulnerability-search.php?f=1&vend...

https://www.cvedetails.com/vulnerability-search.php?f=1&vend...

Pedantic note: quite a few of these vulnerabilities don't technically contradict the GP's claim because they're in C++, not C, but I hope there are enough C ones to disprove the point.


Not sure how serious they think this is, and how much of it is rather a joke. If it's not a joke, they clearly don't see the big picture. For example, if you can write a piece of software in a higher-level language in a fraction of the time, sparing energy somewhere else in the world, it will use less electricity overall.


How does less dev time save energy “somewhere else in the world”?

Google & AWS pay money to develop & acquire technologies that save energy because of their scale. The energy used in dev is scratch compared to the energy used to run programs at scale. Google, for example, has a PHP compiler that makes all PHP web pages execute in a fraction of the energy usage of running the PHP interpreter.


...Google has a PHP compiler? Don't you mean Hack/HHVM by FB?


Nope, I don’t mean HHVM. I was thinking of Talaria. I don’t know if they still use it. http://enswmu.blogspot.com/2013/03/why-google-acquired-talar...

But HipHop is certainly another good example that demonstrates that this matters in practice.


Not everyone works for FAANG. Energy usage doesn’t even come up at most small shops. Cloud hosting costs might, but only if they are really out of whack.


You might be misunderstanding. AWS and Google and a few others are hosting everyone’s code, all the big shops and all the small shops and everyone in between. They do things to optimize everybody’s code, because it saves literally millions on their power bills.


Yes I can see that. As a small shop developer, I will make my choice of language based on programmer convenience rather than energy usage. I have never heard any developer talk about "kilowatts" when choosing tech (outside of bitcoin mining perhaps).


So I hear your point but don’t understand why you think it’s arguing against the need for this article? What’s wrong with drawing attention to something you and your peers weren’t thinking about, or doing something to help inform people’s choices?

Perhaps awareness and concrete data of energy usage has to come first? Yes, the article is pointing out something that all the developers you talk to might not know. Doesn’t that make it a good thing? Maybe in the near future you will hear them start to talk about energy. Maybe now that you’re becoming aware, you can be the first among the people you know to talk about the energy usage of your software choices.

Anecdotally, my own experience is that I didn’t hear a lot or think a lot about energy usage until I switched from games & web development to working at a hardware company. My peers now do talk about energy, even though most only write software.

You’re right that most people do choose based on convenience, especially when they lack any other reasons to make the choice. Historically, choosing solely on convenience has contributed to global environmental problems, and people globally are only just becoming aware of the environmental costs of their choices. Separately, Moore’s law only just recently stopped working, so it’s not surprising to me that energy usage has suddenly become more important. Reducing energy use is now one of the primary ways we can increase speed & efficiency, unlike the recent past.


It depends. If you're writing a math kernel that will be deployed to a billion IoT devices each with a lifetime of 10 years, you may want to spend some more developer time to shave a few cycles in assembly language. If you're writing some single-use script to automate a single job, do it in the easiest high level language with the library support that you need.


i suppose every attempt to figure out which is the best text editor, unix-like os, programming language etc is even more futile than most human endeavor :)



