
Very surprising result. I wouldn’t have bet that this is what would have happened.

But anyone working on language perf should take note, even though it's just one result from one team and one application. Of course, they probably used C++ in a not-great way and used Go in a better way. But maybe that is itself caused by something in Go that encourages good behavior, or at least the kind of behavior that Go optimizes for.

So, even if this result doesn’t mean that C++ devs should switch to Go to get more speed, it’s a result that is worth pondering at least a bit, particularly if you like thinking about what it is that makes languages fast or slow.




> I wouldn’t have bet that this is what would have happened.

Which part of the results are you referring to? It's well-known that reference counting has significantly lower throughput than tracing garbage collectors, so the fact that C++ is outperformed here isn't surprising at all.


As a Lisp programmer, I confess that when I see a performance shootout between Go, Java, and C++, I don't expect to hear the result that Go won -- and that its performance is on par with their old Common Lisp program.


Great point. Here’s the issue: there are tons of ways of doing reference counting in C++. Some go all-in with implied borrowing. Some make great use of C++’s various references. Some use the reference counting only for a subset of objects and rely on unique_ptr for hot things.

So, there is no universal answer to how C++ reference counting compares to GC.

There is a well-known answer, and it's probably the one you're referring to: reference count every object, and do it soundly by conservatively making every pointer a smart pointer. But that just isn't how everyone who does reference counting in C++ does it, and I was surprised because I'm used to folks going in the other direction: being unsound as fuck but hella fast.
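To make that concrete, here's a minimal sketch of the two ends of that spectrum (the types and functions are made up for illustration, not from the article): refcount every handle, or refcount only at the ownership boundary and borrow raw pointers inside.

    #include <cstddef>
    #include <memory>
    #include <vector>

    struct Node { int value; };

    // Style A: every handle is a shared_ptr. Sound, but every copy of a
    // handle touches an atomic refcount.
    int sum_counted(const std::vector<std::shared_ptr<Node>>& nodes) {
        int total = 0;
        for (const auto& n : nodes)   // reference here avoids a refcount bump;
            total += n->value;        // copying the shared_ptr would not
        return total;
    }

    // Style B: own (shared or unique) only at the boundary and borrow raw
    // pointers/references inside. Fast, but unsound if a borrow outlives
    // its owner.
    int sum_borrowed(const Node* nodes, std::size_t count) {
        int total = 0;
        for (std::size_t i = 0; i < count; ++i)
            total += nodes[i].value;
        return total;
    }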


> there are tons of ways of doing reference counting in C++

There are, but critically there are also a lot of ways to not do refcounting at all. C++ isn't a refcounted language; it's a language where you can use refcounting (shared_ptr), but you don't have to (unique_ptr, value types). It isn't even recommended as the primary ownership strategy.

They chose a really odd subset of C++ to use here (shared_ptr exclusively); it's very unorthodox and not something I've ever seen elsewhere or seen recommended.
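For comparison, this is roughly what the three ownership styles look like side by side (the Order type is hypothetical, purely for illustration):

    #include <memory>
    #include <string>
    #include <vector>

    struct Order { std::string id; double amount = 0.0; };

    int main() {
        // "shared_ptr exclusively": every Order is heap-allocated and
        // atomically refcounted, even when it only ever has one owner.
        std::vector<std::shared_ptr<Order>> shared;
        shared.push_back(std::make_shared<Order>(Order{"a", 1.0}));

        // Unique ownership: still a heap allocation per Order, but no
        // refcount traffic at all.
        std::vector<std::unique_ptr<Order>> unique;
        unique.push_back(std::make_unique<Order>(Order{"b", 2.0}));

        // Value types: no per-object allocation; the Orders live inline
        // in the vector's storage.
        std::vector<Order> values;
        values.push_back(Order{"c", 3.0});
    }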


If either of those affects performance, there are WAY too many memory allocations happening.


Sure, but I guess that's the point -- the GCs in Java and Go can handle any allocation pattern you throw at them reasonably well, but as far as I know there's no such "one-size-fits-all" solution in C++ (not that it needs one).


If you don't want to plan out proper (as in performant) data structures and memory management, C++ is indeed the wrong language. And judging by the stuff discussed in the comments above, that is exactly what happened here.
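As one sketch of what "planning out memory management" can look like in C++ (assuming C++17's <memory_resource> is available), an arena turns thousands of tiny heap allocations into one up-front buffer:

    #include <cstddef>
    #include <memory_resource>
    #include <vector>

    int main() {
        // One pre-sized buffer; the small allocations below are carved out
        // of it, and everything is released at once at scope exit.
        std::byte buffer[64 * 1024];
        std::pmr::monotonic_buffer_resource arena(buffer, sizeof buffer);

        std::pmr::vector<int> values(&arena);
        for (int i = 0; i < 1000; ++i)
            values.push_back(i);  // stays inside the arena while the buffer lasts
    }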


I'm not sure I would call 'using your entire CPU to do trivially small heap allocations' a pattern, unless you mean it's the pattern you see when people are wondering why their software is so slow.


Their software seems to mostly just be allocating lots of memory. It seems like they are comparing language speed for the case where you don't know the first thing about optimizing (which is to not allocate memory nonstop).
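The usual first fix for that kind of profile is hoisting the allocation out of the hot loop. A toy illustration (process() here is a made-up stand-in, not anything from their code):

    #include <vector>

    // Made-up per-record work, just to illustrate the point.
    static long process(const std::vector<char>& buf) {
        return static_cast<long>(buf.size());
    }

    int main() {
        long total = 0;

        // Anti-pattern: a fresh heap allocation on every iteration.
        //   for (int i = 0; i < 100000; ++i) {
        //       std::vector<char> buf(4096);
        //       total += process(buf);
        //   }

        // Reuse one buffer: a single allocation up front, and the hot
        // loop itself allocates nothing.
        std::vector<char> buf(4096);
        for (int i = 0; i < 100000; ++i) {
            // ... refill buf with the next record ...
            total += process(buf);
        }
        return total == 0;  // keep the loop from being optimized away entirely
    }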



