Why would you learn C++ in 2016? (itscompiling.eu)
149 points by ingve on April 12, 2016 | 154 comments



A big advantage of learning C++ is so that you understand the concepts of ownership and lifetime, along with type abstractions and using types to enforce invariants in how your API is used.

I try to avoid using C++ for any pet projects if I can help it, but I've found that being forced to use it for about 4 years really changed how I thought about relationships between objects. My Java code c. 2005 was basically like spaghetti; if I needed to operate on an object, I'd pass in a reference somewhere and call whatever I needed, and then let the garbage collector (maybe or maybe not) clean it up. In C++ this doesn't work - you will have an unmanageable mess of memory leaks and use-after-frees (or reference cycles if you resort to shared_ptr). So I had to learn how common program architectures (servers, compilers, distributed parallel programs) manage memory and assign ownership to certain overall scopes. This in turn has informed how I design my production Swift, Javascript, and Python code, or use Java dependency-injection frameworks like Dagger.

I still appreciate being able to write spaghetti while prototyping, but I've found that maintainable production programs seriously benefit from having clear notions of object ownership and lifetime.


I think Rust will teach you a lot more about the concepts of ownership and lifetime out of the box; I know I learned a lot from Rust.


Sadly, it won't teach you what happens if you fail to consider something.

This means, long hours of tracing, debugging, leak checks etc.

The lesson is: never ever rely on compiler, framework, language or other potentially buggy tools to think for you. Even with Rust's "hard" approach, you can get complacent and hemorrhage performance and memory until there's no turning back.

Those things were written by fallible people without hard mathematical proof everywhere, after all. Rust and especially its libraries too.

Understand libraries, their implementation and design trade-offs if you're using them.


> This means, long hours of tracing, debugging, leak checks etc.

But you don't have undefined behavior to debug, and that makes debugging easier. Trying to reverse engineer from a corrupted memory dump from a production system what the compiler did to your code and relate that back to where it went wrong is something you will have to do in C++, given enough time. You don't have to do that in Rust.


Interesting, could you give some example of such Rust code?


I found learning Rust to be a lot easier having learned C++11 first. Move semantics, RAII, borrowing (const references), mutable borrowing (references), option-types (pointers), and iterators are all C++ concepts. The big advantage Rust gives you is that the compiler handles a number of things that required manual bookkeeping or coding conventions in C++, and doesn't let you screw them up.


But you could equally say that all those C++ concepts are really Rust concepts. So I don't see how that's an argument that C++ is easier to learn than Rust. The strict checking seems to me to be a win, because the compiler tells you what you did wrong instead of having to debug undefined behavior (e.g. dereference after move of unique ptr).


I'm not arguing that C++ is easier to learn than Rust (I actually believe the opposite). I'm arguing that learning C++ helps with learning Rust.

If it were a completely green-field environment, I'd much rather write in Rust than in C++. (But then, in a completely green-field environment, I'd much rather write in a language suitable for prototyping like Python or Javascript than Rust, and then port it over to Rust when I have an idea what the shape of the problem is.) We don't exist in a completely green-field environment. The main advantage of C++ is that you can look at any number of open-source databases, GUI frameworks, servers, compilers, browser engines, and libraries and see how they chose to design their object models and ownership systems.


> suitable for prototyping like Python or Javascript than Rust, and then port it over to Rust

In all my years, every time someone prototypes, either the prototype never gets used, or it does get used, it's successful, and it never gets rewritten because there's no time to pay down the technical debt.

Now I prototype in the language I plan to use long term, but what that means is stubbing out sections, or putting in TODOs if there are spots that can be better. The prototype should be able to scale into your real application, otherwise you're just building something that will haunt you.


A prototype is supposed to never get used. The point is to learn - to explore the territory so you have an idea of what's a good idea and what's a bad one when you start implementing the real thing.

I just threw out a prototype last week, and started on implementation for real. I was able to cut about 70% of the features, throw out a couple third-party libraries and bleeding-edge language features that turned out to be more trouble than they're worth, identify 2 areas that really needed to be firmed up and made more robust before they'll support a "real" system (and have about 20 concrete examples of the first and 80 of the second, so I can design the features with actual data), and revise the API 4 times, for 5 weeks of work. It helps that I'm both CEO and only developer and so have the political clout to do this. And I'm under no illusions that this is the "final" revision - I'm doing this incarnation with somewhat more robust development practices, but I still expect I'll have to throw it out in a year or two. But it certainly beats doing those 4 revisions over a year (with a 7-person team) and then going out of business, as happened in a startup I'd previously worked at.


It's a lot cheaper buying a MacBook if you can get a 20% off credit, which you can get by buying a new Porsche. Spend $100k, save several hundred dollars!

Saying that learning C++ helps with learning Rust is almost certainly true in a banal way. Learning any N languages will make the incremental cost of learning language N+1 a little lower, even if those languages are not closely related.

Are you actually claiming that, if you knew someone who knew, say, Python and JS, but no statically compiled unmanaged language, and that person needed to know Rust by 2017, that your advice would be to spend the first several months learning C++? I find this implausible.

C++ is a baroque language with way too many conflicting features, conflicting cultural standards, and general all-around cruft. Isn't it?

Disclaimer: I know so little C++ that we should assume that I don't know it, and I haven't written any Rust yet.


No, of course not. If your goal is to learn Rust by 2017, go learn Rust. I don't know how this became about Rust anyway, the article is about C++ in 2016.

I also won't disagree that C++ is a baroque language with too many conflicting features, conflicting cultural standards, and general all-around cruft. There's certainly plenty I hate about it, and it's rarely the first choice I'd reach for starting a new project these days.

I'm claiming that if you are a brand new programmer that's remotely interested in HPC, games, finance, search, compilers, databases, or system software and want to build solid fundamentals for a career, learn C++. Because:

1.) It will provide access to jobs in those areas, where you can be mentored by experienced programmers.

2.) It will provide access to open-source codebases in those areas, where you can learn the structure and development processes of a large, existing body of software.

3.) It will teach you language concepts that transfer over to other languages.

4.) It will teach you why certain language features didn't make it into other languages, and the pitfalls of using a language that grew by accretion over 25 years, and the many ways that you can shoot your foot off with these language features, and how to debug it once you do.

A very small portion of programming is in the language. Over the course of your career, you'll use dozens of languages - hell, I've only been doing this professionally for 12 years and I've shipped software in a dozen languages. What matters a lot more are the patterns - being able to recognize why you might use a certain technique, and how the same technique manifests differently in different languages. In the fields where it's still widespread, C++ will open a lot of doors to learning these.


> C++ is a baroque language with way too many conflicting features, conflicting cultural standards, and general all-around cruft. Isn't it?

This is a really common sentiment, but when pressed to name a specific thing that should be removed (ignoring backwards compatibility), people really struggle.

The standard library has a lot of cruft and poor legacy decisions, but it's several decades old. Any code that's been around for several decades is going to have cruft. Any language that is as successful as C++ will have the same problem when it reaches the age of C++. It's inevitable. Anything looks elegant and free of cruft when it's only a few years old and has low enough usage that the maintainers can make breaking changes without breaking billions of lines of code.


Not sure what the OP is saying, but I'll say that given programmer A who knows Python and JavaScript, and programmer B who knows C++, B will have an easier time learning Rust than A will.


> I'd much rather write in a language suitable for prototyping like Python or Javascript than Rust

Is this mostly because of the dynamic and interpreted nature of Python and Javascript or because of their library ecosystem?


It's both. It's also because they don't require you to define your data structures up-front, have a clean, concise syntax with lots of sugar, and have no compile times.

When you start a green-field project, you usually have no idea what the overall "shape" of the software will look like. You don't know what your data-structures will look like. You don't know what their lifetimes will be, or where they'll be accessed from. You don't know what functions you will need to write, or which ones will be leaf functions and which will be branch ones. You don't know which functions will end up calling each other. You don't know where code will be duplicated, where it'll be shared, and which subtle variations will be required in different places.

All of these are really important to learn to come up with a "proper" software architecture.

The value of a dynamic language like Python or Javascript is that you can just hack your way around these problems to make further progress. Forget a field in a data structure? Just add it in the code that computes it. Need a function to take multiple kinds of objects? Make sure they have all the requisite fields (adding forwarding properties/methods if necessary), and then just pass it in, Python doesn't care. Make a mistake in your module system? Just call the damn private method, the language won't prevent you. Have slight variations in behavior? Attach a closure to the function or data structure and invoke it.

All of these are bad practices for designing a major system with lots of developers, and yet all of them come up all the time when a system is young. The value of prototyping is that you can make more forward progress, ideally getting something up on the screen or even in front of users, and then you have a concrete laundry list of problems to fix in the next version.


I disagree. I use the compiler's dissatisfaction with my code to guide the refactor I need when my model is wrong. In a language with a good type system, the typechecker is like a built in set of unit tests.


I think the more general point he makes is about design: it's hard to have the design nailed down from the beginning for most projects, so why not hack something up quickly and in the process discover lots of stuff you probably wouldn't have thought of at the start? But if you know a language that lets you iterate very fast, that you're highly productive in, that has great library support so you don't have to reinvent the wheel all the time, and that lets you refactor just as easily, then sure ;)


My point is that I can hack things up quickly and in the process discover lots of stuff without needing to be able to run the hacked together code, because typechecking it is enough to guide me in how to reshape it.

In dynamic languages you can also hack things together and run them (manually or through tests) to see how your hacks aren't fitting together right. I'm not saying untyped languages are bad, I'm saying that they are not inherently better for prototyping in my experience.

Libraries are important, but totally outside the scope of this conversation, and I think it's rude of you to raise an implication about them here. Everyone knows Rust is a young language and has the libraries that come with that.


Sounds like you are talking about the comparison between Java and JavaScript/Ruby except the GC part.


The target audience for the code I write is not the compiler but my coworkers and the gals and guys who will maintain the stuff after I leave a project.

So I prefer to choose a language where I can nicely express the intent of what I code. Java is a good compromise, because it's easy to follow (even if it's quite verbose and mainly stuck in the world of nouns).

C++ is hard for that. It's an extremely complex language that got extended over the years, and it takes quite a long time to master, compared to Java for example.

Yes I know, C++ is more "capable" than Java, but it's not efficient in the hours-spent-per-solved-problem way of things.

Yes I know, there are C++ whiz kids out there who are insanely fast using the language. But I've experienced teams using it, and in a team C++ does not scale well. Adding more and more features to it makes it a cool language, but not one which I like to use in my day job.

And I'm not sure what I'd use it for. I use Java for the day-to-day boring "enterprise" world, twiddle around with dynamic languages for the hobby (yay to the resurgence of Lisp!), and for close-to-the-hardware stuff I'd use plain C - which is not such a bad language at all. And it's hard to beat speed-wise; writing C++ that keeps up with C is possible but not trivial.

And if I'd have to look forward, I'd check out Rust or Go, or even look at some really mind-bending stuff like Erlang. Parallelism can't be evaded in the modern world, and using a language tailored to that is a great thing. (Yea, I'm sure there also is a C++ library implementing the actor model.)


My preference is to use C++ like a C with some extra useful features, and not "go full C++" by trying to "do everything the C++ way". C++ has a lot of features and is growing but, contrary to what a lot of evangelism seems to say, you are not obliged to use all of them. Use a feature when it makes the code simpler and the effort reduced, don't try to use one and complicate things just because it's a new feature that seems interesting.


This is how I feel. Even as a Unix/"Pure C" fanboy, just having the STL is a huge time saver.


When I started with C++, I thought the same way. Now, two years later, I look back at all that code and I'm thinking wtf is all that C code doing there. I guess I just appreciate a lot of the C++11 features much more now.


Precisely! Just because the features are there doesn't mean you must use them.

My use case, for instance, is to use C++ as C plus the STL (yuck for doing everything manually in C as opposed to having neat classes that are easier to use and equivalent after compilation), useful C++11/14 features like auto type inference or lambdas, PoD structs, and little else.


http://learnyousomeerlang.com/

It's probably not as mind-bending as you think. The actor model is fairly intuitive. The difficulty is that it's fairly easy to have unexpected behavior from simple actor rules. And the syntax is kinda funny ('.' and ',' look too similar to me), but there is elixir if you want (although I think elixir is even weirder...)


There are a few things I disagree with, but at least one is factually wrong. C++ is as fast as C, if nothing else because it's (almost) a superset of it.


The things C++ is missing include performance features. It's missing the "restrict" keyword.

https://en.wikipedia.org/wiki/Restrict

Some compilers offer "__restrict" or "__restrict__" in C++, but it's not standard. Standard C++ lacks the feature.

Besides "restrict", there is also the matter of theory not matching practice. The norm for C++ programmers is to be at least somewhat unaware of various extra copies and allocations. You know it's true. In practice, this makes the language slower.


C++ has std::valarray that has aliasing rules similar to the restrict keyword, allowing the same types of optimizations.


Forgive my ignorance, but doesn't std::unique_ptr communicate the same restriction, and therefore enable the same optimisations?


unique_ptr has unique ownership; it's not an error to use other pointers to the same data (just a bad idea).


>>The target audience for the code I write is not the compiler but my coworkers and the gals and guys who will maintain the stuff after I leave a project.

If the compiler cannot understand the code you write, I doubt your "coworkers and the gals and guys" are going to be very happy with you. Isn't code for both the compiler and the programmers?

>>So I prefer to choose a language where I can nicely express the intent of what I code. Java is a good compromise, because it's easy to follow (even if it's quite verbose and mainly stuck in the world of nouns).

I don't call verbose code nice, just like average people don't call verbose prose nice. Whether something is easy to follow is personal. To someone who has put in the time and effort, C++ code can be easy to follow; I see no reason to denigrate one for that.

>>Yes I know, C++ is more "capable" than Java, but it's not efficient in the hours-spent-per-solved-problem way of things.

If I am going to spend 30 years programming, I am going to become more "capable." And I will spend FEWER hours per problem because I have put in the effort to really understand the problems and gained insights.

>>Yes I know, there are C++ whiz kids out there who are insanely fast using the language. But I've experienced teams using it, and in a team C++ does not scale well. Adding more and more features to it makes it a cool language, but not one which I like to use in my day job.

Some teams are built to use Java, others use C++. I do HPC and numeric analysis; my C++ skills are quite an asset to my team.

>>And I'm not sure what I'd use it for. I use Java for the day-to-day boring "enterprise" world, twiddle around with dynamic languages for the hobby (yay to the resurgence of Lisp!), and for close-to-the-hardware stuff I'd use plain C - which is not such a bad language at all. And it's hard to beat speed-wise; writing C++ that keeps up with C is possible but not trivial.

I know what I use C++ for, as do other people. C lacks abstraction mechanisms and is fragile. And it is actually not very good for today's hardware. The Eigen library offers ease of use and good performance using template metaprogramming; I don't see any C library doing both.

>>And if I'd have to look forward, I'd check out Rust or Go, or even look at some really mind-bending stuff like Erlang. Parallelism can't be evaded in the modern world, and using a language tailored to that is a great thing. (Yea, I'm sure there also is a C++ library implementing the actor model.)

Erlang employs one kind of parallelism, and there are others. They are suitable for different use-cases.

Erlang employs one kind of parallelism, and there are others. They are suitable for different use-cases.

Look, it doesn't surprise me a Java programmer doesn't think much of C++. I get it, it is competition. I don't recommend Java to my team lightly either. But, really, you got to find better trash talk.


Mostly spot on.

That said, I can seriously see Rust giving C++ a run for its money in about 2-3 years. All these things you have to learn the hard way (ownership, move semantics, dangling pointers) are dealt with up-front in Rust. They really aren't bolted on as an afterthought but are a core part of the language (std::move, I'm looking at you!).


I agree, but again that's for new code. Many times you'd be going into an existing codebase like he mentioned, especially in the areas he mentioned like finance, HPC, or compilers. You might get away with games, because even though the engines are reused, the code is mostly written to be thrown away to make it faster.


Yup, that's why I say 2-3 years. It's going to take some time.

I also think there's some interesting work to be done w/ language integration so you can drop Rust into Java/C#/Ruby/etc. I think this is actually the best way to drive adoption since you don't need to greenfield it. Tricky part is going to be making sure you preserve the correct lifetime guarantees but I don't think it's insurmountable.


That sounds incredibly naive. Three years isn't a long time, especially when C++ has a 30 year head start.

People put too much emphasis on low level language features when promoting new languages. The pitfalls that Rust solves are known issues in C++, and there are known solutions to them and best practices for avoiding them. A lot of the problems are "solved" as new iterations of the standard come out.

To really give C++ a run for its money Rust will have to compete with the gigantic ecosystem around C++. It has libraries for literally everything. It's backed both directly and indirectly by all of the biggest software companies in the world. It's supported by debuggers, profilers, memory profilers, static analyzers, IDEs, refactoring tools, and editors. There are books, tutorials, training classes, etc.


> The pitfalls that Rust solves are known issues in C++, and there are known solutions to them and best practices for avoiding them. A lot of the problems are "solved" as new iterations of the standard come out.

What is the known solution for memory safety problems, in particular use after free?

(It's not RAII and smart pointers.)


I'm pretty impressed by ASAN (and MSAN, UBSAN, etc.). It catches use-after-free bugs trivially, as long as you have unit tests, which I always do.

I wonder at the cost of these 2 strategies, for the huge legacy of C/C++ infrastructure we have:

1) refactoring it to be testable, adding unit tests, adding test coverage automation, adding ASAN/MSAN builds, etc. (LLVM is a game changer IMO in addition to C++ 11; things have changed so much in the last 5 years)

2) rewriting it in Rust

I'm not claiming anything specific, but #2 seems strictly more expensive, since you still need unit tests for logic bugs anyway (and yes I have written OCaml and get that you can replace control flow with typed data, etc.) #1 also has a straightforward migration path.

More to the point, most open source software has TERRIBLE test hygiene. If they don't even have time to write tests, I wonder how they will have time to rewrite in Rust.

I'm all for new software in Rust, but I'm thinking about all of Linux user space, web servers, web browsers, every programming language implementation, etc.

Also, fuzzing can be combined with ASAN.

https://blog.hboeck.de/archives/868-How-Heartbleed-couldve-b...

Yes it would be better to know these things statically and not dynamically, and I'm glad that Rust has innovated in this space.


ASAN has not been successful at eliminating (sometimes exploitable) use after free bugs in the wild. For example, look at this year's Pwn2Own. All browsers are heavily unit tested, fuzzed, and run through tools like ASAN, in addition to making heavy use of smart pointers, and they all fell to UAF.

There is a reasonable argument to be made that memory safety problems don't happen enough to be worth eliminating in practice (though people are often reluctant to make this argument so explicitly). But I don't think it's possible to successfully argue that they can be eliminated at reasonable cost in C++ with the tools we have today.


For one, compilers usually ship debug stdlibs that fill freed memory and freed pointers with poison patterns, so debug builds will often crash when deallocated memory is accessed.

Improving on that, Valgrind can find issues like use after free pretty easily, and even tell you where something's freed and where it's used again. Coverity does a great job on that, too. Coverity can even tell you when you access a pointer or shared pointer without checking if it's non-null.

On top of that, handling "raw" pointers and memory has fallen out of favor, and the preference is to use references as much as possible, which are more difficult to use incorrectly.

I won't claim any of those solutions are 100%, but they're sufficiently close that few people will jump to a new language over it. In my experience, "real world" C++ programmers don't spend much time fighting memory safety problems.


References are not harder to use incorrectly. They are just as vulnerable to UAF.

Additionally, the security track records of large-scale apps written in modern C++ disagree with you. For proof, go to any browser bug tracker, or look at Pwn2Own.


You're really kind of changing the subject and trying to put words in my mouth here.

The reality is that apps in every language have security vulnerabilities and bugs, and different languages are more or less susceptible to different classes of bugs. Nobody is claiming C++ is perfect, and it's irrelevant to this conversation.

Getting back to my original point, even completely solving the memory safety issue, getting mainstream adoption for Rust will require competing with C++'s other advantages, like its huge ecosystem, and I think it's optimistic to think that will happen in just 3 years.


> The reality is that apps in every language have security vulnerabilities and bugs, and different languages are more or less susceptible to different classes of bugs.

Memory safety problems are on the whole much more problematic than the types of bugs that other languages admit, especially memory-safe statically typed ones like Rust. That's because they regularly result in remote code execution, which is among the most severe kinds of security issues. Nobody's claiming that Rust eliminates all security vulnerabilities: just a large number of the most severe ones.

> Getting back to my original point, even completely solving the memory safety issue, getting mainstream adoption for Rust will require competing with C++'s other advantages, like its huge ecosystem, and I think it's optimistic to think that will happen in just 3 years.

Sure, no argument there in terms of replacing C++. But I think that the idea that "C++ is too entrenched, other languages won't be able to compete" is very '90s thinking. Look at how dominant C++ was in the '90s compared to its status today. In the early-to-mid-'90s, you wouldn't think of building your company on anything but C++; nowadays, building your company on C++ (outside of a few niches like games or embedded software) is uncommon enough to warrant a blog post.

What I think is likely to happen, if Rust succeeds, is that it will continue the trend of chipping away at C++'s dominance in areas where Rust's advantages are strongest (safety, concurrency, friendliness to newcomers). It won't replace C++ wholesale, of course; C++ is immortal.


>>The pitfalls that Rust solves are known issues in C++, and there are known solutions to them

Known issues and solutions that are known to experienced C++ developers. The article is specifically talking about newcomers.


2-3 years? Try 10 or 15 or more. Any serious code sticks around for a long time.


I would love to use Rust more, and have enjoyed working with it on some toy projects, but the last time I looked there wasn't a super solid general purpose math library. Is there anything out there for Rust that is of similar scope to C++'s Eigen or Python's NumPy+SciPy? What about SSE/AVX intrinsics from Rust?


I really wanted to use Rust for numerical/scientific computing tasks, but it's kind of miserable for it. I didn't get hung up on the ownership things that all of the Rust zealots talk about (although I think explicit lifetimes are needlessly complicated). I got hung up trying to implement simple things like complex numbers and matrices in a way that was generic and usable. I'm sure some Rust fanboy will argue that Rust has operator overloading through traits, so I'll challenge anyone to make a workable implementation such that z*A and 2.0*B work in the following generics:

    let z = Complex<f64>::new(1.0, 2.0);
    let A = Matrix<f64>::ident(10, 10);
    let B : Matrix<Complex<f64>> = z*A;
    let C : Matrix<Complex<f64>> = 2.0*B;
If Rust can't do scalar multiplication or real to complex conversions, it's really not usable for what Eigen or Numpy can do. Try defining the Mul trait generically for those multiplication operators, and you'll see what I mean.

(yes, I know there are some syntax errors in the type declarations above - It's my opinion Rust got that wrong too...)

Supposedly this kind of thing will eventually be possible with the recent "specialization" changes, but I haven't seen anything that allows operators to work as above...

ps: Last I looked, there was fledgling support for SIMD on the horizon... LLVM supports that, so it could happen.


This is essentially a complaint that Rust went with strongly-typed generics like every statically-typed, non-C++, non-D language instead of untyped templates like C++ and D. I think that the difficulty of reasoning about template expansion type errors, the complexity of SFINAE, and the consequences for name resolution like ADL make templates not worth the added expressive power for Rust, especially when the features needed to support your use case can be added in the strongly-typed setting eventually.


I don't think that's quite right... ISTM that it's more about lack of multi-parameter type classes/traits[1] (+ specialization, I guess) and the fact that it went with the traditional non-symmetrical dot-based notation for dispatch... which means that the first "trait-parameter" would always be "privileged" (at least syntactically).

[1] At least I don't think it has those yet, right?


Rust has had multi-parameter traits for a very long time - possibly they were never not multi-parameter. All of the binary ops are multi-parameter. The first parameter is just an implicit Self parameter, which does preference it syntactically, but semantically it is not privileged in any way.

The issue that operator overloading math crates come into is that Rust's polymorphism is not only type safe but also makes stronger coherence guarantees than systems like Haskell do. You can't implement all of the things you want because Rust guarantees that adding code to your system is a "monotonic increase in information" - adding an impl is never a breaking change to your public API, and adding a dependency never breaks your build. Haskell does not make these guarantees.

There's no way to "solve this" entirely because there are a bunch of desirable properties for type class systems and they fundamentally conflict, but I think with specialization and mutual exclusion (the latter hasn't been accepted yet) Rust's system will be as expressive as anyone needs it to be while still maintaining coherence.

Of course in this context Wadler's law should be taken into account, and the grandparent poster could maybe revise their strength of opinion about syntactic sugar and recognize that this is about solving a much more complex problem than how to make matrix multiplication look nice. https://wiki.haskell.org/Wadler's_Law


I stand very much corrected. I must admit I wasn't sure and just skimmed a little documentation and didn't see any examples of MPTCs.

> The first parameter is just an implicit Self parameter, which does preference it syntactically, but semantically it is not privileged in any way.

Yes, but I think the syntax was actually what bothered the OP who was complaining about linear algebra. At least C++ has free functions for operators. (I assume, again without knowing, that Rust doesn't, otherwise OPs "demands" should be easy to meet, given specialization.)

I mean any kind of "double * matrix" or "vector * matrix" or ... should be easy to support if operators are free functions and there's MPTC and specialization. EDIT: Actually, come to think of it, for this situation (algebra) you don't really need specialization, you just need overloading. (Since none of the types involved are sub-types of each other. It could be argued that a 1xN matrix ~ N-vector, but that's probably not worth supporting.)

Generally, I just think it's a mistake to even support the magic "dot" notation and thus privileging one operand over any other, but I guess we're getting off topic.

Thanks for the lesson, btw! :)


Again, the issue we run into is with coherence. The Rust team decided that adding a dependency should never in itself break your build, and adding an impl/instance to a library should never be a breaking change. Haskell doesn't make this guarantee. C++ doesn't even typecheck parameters.

Providing this guarantee means establishing what are called orphan rules: you can only `impl Trait for (Type1, Type2, ...)` if a certain portion of those traits and types are defined within the same library as the impl (the exact rules are nuanced, there's an RFC that defines them). The rules that were arrived at as providing the best guarantees unfortunately make it difficult to implement the operator overloading traits in the way a lot of numeric libraries want.

For example, you can't define a trait `Integer` and `impl<T: Integer> Add<T> for T`.

I'm actually not sure what the OP's specific complaint is, but its ultimate cause is something along these lines.


> This is essentially a complaint that Rust went with strongly-typed generics

It could also be a complaint about how operators are implemented, e.g. in Scala they're just methods (no need for a special trait). That's not to say I don't think Rust made the right choice, but Scala went with strongly-typed generics, and can allow implementing the asymmetric multiplication operators.


If operators are just specially named methods, then with strongly-typed generics you can't write generic functions that work on anything that (for example) implements the + operator, because traits (concepts) have to have a specific signature.


Yes, you're right. The only way I can think of to handle it (in Scala, I suppose you can't do it in Rust) would be via implicit parameters. You would basically have a Times trait, and a TimesOp trait.

    trait TimesOp[L, R, O] {
        def times(left: L, right: R): O
    }

    object TimesOps {
        implicit def scalarMatrixTimesOp[S]: TimesOp[S, Matrix[S], Matrix[S]] =
            new TimesOp[S, Matrix[S], Matrix[S]] {
                override def times(left: S, right: Matrix[S]): Matrix[S] = {
                    ...
                }
            }
    }

    trait Times[L] { self: L =>
        def *[R, O](right: R)(implicit timesOp: TimesOp[L, R, O]): O =
            timesOp.times(this, right)
    }
I'm sure I got something wrong here, but I think the basic idea works. In any case, it goes to show that getting this behavior in a language with strongly-typed generics is non-trivial.


No. The complaint is that Rust is not good at expressing numeric algorithms. The poster above asked about a Rust library like Eigen (C++) or Numpy (Python)... It doesn't matter - I suspect you would dismiss any criticism regardless.


I've been looking at Rust with interest and I want to use it in my future projects in college. But how easy would it be for developers to "give up" OOP for a safer, more up-front language like Rust?

I mean this as a serious question, cause I see C++ being used instead of C thanks to its multi-paradigm capability and I don't know how Rust could fight that for now.


You're not really giving up OOP with Rust; traits give you interface abstraction, which is most of what you're getting with C++. As I write more C++ I find myself using fewer classes and more structs + functions that let me decouple my code and data.

If anything I would say Rust is a tad more multi-paradigm due to its really strong functional roots (sum types!) and nicer treatment of lambdas (no potential allocation like with std::function).


Maybe a bit off-topic, but I just want to clarify - lambdas only include dynamic allocation if you convert them to an std::function, right? I was under the impression that 'auto f = []{...}' did no heap allocation, but converting to an std::function could, depending on the size of the closure.


Yup, you're on the mark. It leads to some interesting scenarios where you can't easily refactor code without either introducing template hell or taking the hit for std::function.


Rust is currently lacking plenty of "quality of life" features which even C++ has (eg, default parameters). Not to mention a good IDE. I do think it's the first serious contender to eventually replace C++, but it's really not there yet.

It's definitely worth trying and learning. I would've ditched C++98 in a heartbeat to move to Rust, but C++ is a pretty exciting and productive place to be ever since C++11.


I haven't needed an IDE in Rust. The auto-generated documentation is usually good enough for all my needs. However, C++ is pretty awful without an IDE, because you can easily end up with deep object hierarchies, usually with two files per object.

It's also nice to just tab to your terminal and do `cargo run` and get your code compiled out of the box. Writing a decent makefile (or whatever else you plan to do) for C++ is comparatively quite a hassle.


From a code writing perspective, Rust's support in Atom and other editors through Racer is pretty good. Debugging is my big issue right now. For that I drop into lldb in UI mode, which works fine, but I would love to see that integrated into one of the IDEs out there.

Anyway, the lack of an IDE hasn't hampered my ability to be productive in Rust. Unlike Java, C++, et al.


Almost everything is a CRUD app. Game developers are notoriously abused in compensation and work hours.

I do think C++ provides an interesting area to program in, but the part about doing it to avoid places with business requirements would only speak to programmers who don't yet realize that the business perspective of your code is where your career growth and potential come from (not from learning more and more advanced coding technique).


> programmers who don't yet realize that the business perspective of your code is where your career growth and potential come from (not from learning more and more advanced coding technique).

Yes and no. I think programmers get the biggest lift, career-wise, from being able to put themselves in the shoes of a stakeholder while coding and being able to talk intelligently about what they produce to non-technical people. But that's just the price of admission to work in an environment where you don't get treated like shit usually.

Beyond that, if you want to work on problems that you really enjoy then finding a technical specialization that gets you excited to go to work is a must. Then find a job where your knowledge in that area translates to business value. If that sounds hard, spend some more time socializing with other people in your industry. Consulting or working on OSS projects helped me a ton in this area.

That's the arc my career has taken at least, and I'm still early in it.


"Video game developer is considered to be one of the most rewarding jobs in a software industry."

I would not agree with that. Unless you insert 'mistakenly' into that sentence or replace 'rewarding' with 'underpaid and overworked'.


I've been lucky in my career in games and have worked for some great people. Now I'm my own boss and try to treat my employees fairly. It makes me sad to hear about mistreatment in games.

I think working in games can absolutely be incredibly rewarding if you love the medium and delighting players. On huge teams individual effort can be hard to connect to player enjoyment but I've found it super fun to work on and lead smaller teams where everyone can directly impact players and get concrete immediate feedback on how your work is affecting player experience.

Unlike many other areas of software development most game features immediately impact how players perceive your game. I love the immediacy of that connection.


I would love to do game development / programming. The horror stories about long hours, low pay, lots of crunch all have driven me away from it so far. Gamasutra[0] did a nice article about how crunch makes games less likely to succeed (among other things); I hope this kind of stuff changes the mindset of the industry.

[0]http://www.gamasutra.com/blogs/PaulTozour/20150120/234443/Th...


That's good to hear you're bucking the trend. Do you develop your mobile games in C++?


Only if you're really into games.

I used to work on physics engines, but I was into physics engines, not games or Hollywood. Dealing with Hollywood sucks. Either they're in development and their credit cards bounce, or they're in production and they want a new feature yesterday.


I generally agree. This isn't always the case, but I'd guess on average across the entire industry from large to small across the world it is.


Seconded. I thought that video game developer was a badly-paid, over-worked, permanent crunch situation with poor job security. It has a reputation for paying people in perceived glamour rather than actual money, and crunching through naive youngsters who don't realise how badly they're being treated.


I agree 100% with the second point. I enjoy writing C++11/14 and it's not my primary language. 10 years ago I would have said no to C++, but today it's like writing in a different language.

Qt is another huge argument why I use C++; there is no alternative (or I don't know any other cross-platform framework as good and mature as Qt is).


I tried to learn "modern" C++ style, and watched many, many videos from cppcon, but I just don't feel like I understand how to do it. I started a C++ project, #included <vector> and all the rest, and tried to do things the way the experts recommend, but I had no confidence that I was doing any of it right.

I know all the copying and moving and whatnot is supposed to "just work", but as far as I can tell, there's no easy way to verify that your program is doing what you intend. There also aren't any compact guidelines -- "follow these N simple rules and the STL won't explode!". With most other languages, I can at least think through a program to figure out if it is going to do what I intend, but the way the STL is implemented to support all the modern C++ behavior requires a brain the size of a planet to understand.

I'm sure Stroustrup and the committee can write Modern C++, but I sure can't. Not with any confidence, anyway.


No one said that C++ is easy. You need to teach yourself about pointers even if you won't use them often, because almost everything under the hood is connected to pointers in some way. You need to learn the language.

Watching videos from CppCon is not how you will learn C++. Read Stroustrup's book on C++.


I don't have any trouble with pointers. I can understand C programs just fine. But Modern C++, as the evangelists love to point out, is specifically about avoiding raw pointers in favor of smart pointers, value semantics, etc. That's the stuff that I find difficult to use.

I did buy Stroustrup's post-C++11 book, "A Tour of C++", which is supposed to be the compact "how to do Modern C++". It provides tips, but as I mentioned above, I did not find it comprehensive enough. Terse recommendations like "Prefer returning by value" may be great small-scale recommendations, but they don't explain how the big picture is supposed to come together.


You should watch those cppcon videos again, because Stroustrup and Herb say there many times that smart pointers should NOT be used instead of raw pointers everywhere. Use smart pointers where ownership is involved. You should watch this video:

https://www.youtube.com/watch?v=1OEu9C51K2A


How about Qt bindings for other languages? Last I checked for Python (like 4 years ago), the biggest problem I had was with the licence (GPL). But yeah, compared to Python, using C++ also gives me system threads, which might come in handy every now and then for CPU-intensive stuff.


The problem with the Python Qt bindings, in my experience, is that PyQt is not... Pythonic. I like Pythonic things, like Flask, over less-Pythonic alternatives, like Django. At least in principle. And yes, I know Django has gotten better about it over the years. I need to try a new project in it at some point, especially considering Flask's atrophy.

But PyQt is not so much a binding layer as just Qt in Python, literally. It must be some kind of semantic offense, though, since I have no problem writing Qt C++ to begin with. It is just that the language and style of the API itself favors C++, because that's what it was made for.

QML and Python, on the other hand? Lovely. If it didn't depend on all of PyQt to just start up a QML program and signaling into / out of it, I'd be all over that idea.

Which is probably where I am super interested on the Rust front. qmlrs showed a lot of promise last year, and while the project seems to be suffering from a lack of contributions as of late, someone will eventually pick up the reins - I do not think any other GUI framework is even close to appropriate for Rust to begin with anyway.


I don't know about Python bindings, but I was looking at Go bindings. Anyway, for me it feels natural to just use modern C++ with Qt. It works for me; I do not feel like I am losing productivity.


+1000. There's std::async and std::promise, with futures, lambdas, and smart pointers. I really do feel like C++ is a new language. I will mention that it still lets you shoot yourself in the foot. However, now you get to choose when you'd like to prevent shooting yourself in the foot.


...I don't know any other cross-platform framework as good and mature as Qt is

There isn't. I've been looking for twenty years.

Qt is the only platform I'll use for embedded GUI development.


I tend to think that the C++ standards committee went off into template la-la land a few years back. Everything has to involve templates, "metaprogramming", and excessive gyrations within the type system. If you want to compute at compile time, C++ templates are a terrible way to do it.

The fundamental problem with C++ is that it has hiding ("abstraction") without safety. No other major language does that. In C, there's no memory safety but everything is explicit in source. In memory-safe languages, there's memory safety even if there's lots of abstraction.

Rust has a brilliant idea in the borrow checker, but the type system is kind of weird, the attempt to bolt on functional programming is marginal, and error handling involves hacks such as "try!()". But with enough use, those aren't so bad. Maybe. Personally I would have preferred Python-type "with" clauses and exception handling to RAII. If anything goes wrong in a destructor, Rust cannot help you.


> If you want to compute at compile time

I assume you're not up to speed on all the constexpr work.


I didn't realize that as of C++14, you can now execute loops at compile time. Does that work in many compilers?


Wouldn't any fully standards compliant compiler for c++14 support that feature?


gcc's optimizer will often do it for plain C

(recent versions at least)


You'd prefer explicit "with" clauses even for, say, reference counted types?

I mean, Objective-C tried explicit [foo release] for a long time, but I haven't heard anyone clamoring to go back to the pre-ARC world…


I mean "with" for opens and the like, not structures. Python "with", not Pascal "with".

For Python's "with", someone figured out what to do if something goes wrong in __exit__, which is called as control leaves the with clause. An exception in an __exit__ is handled well, and nested exceptions work reasonably. Failure to get this right results in problems such as unnoticed truncated files because the status on the close after a write wasn't checked.


Right, I'm referring to Python's "with". How do you write reference counted smart pointers without RAII?


When Moore's Law ends (imminently), performance and performance/watt will be harder and harder to come by.

In the long term, there may be a reversal in the trend towards "easy" languages.


I doubt that easy languages will disappear. Instead, compilers will get smarter at compiling/synthesizing low-level/parallel/distributed code from high-level languages/specifications. See a lot of the recent work out of Stanford on domain specific languages with Terra [1] and its derivatives (e.g. Ebb [2]) as well as Delite [3].

[1] http://terralang.org/

[2] http://ebblang.org/

[3] http://stanford-ppl.github.io/Delite/


This is a logical fallacy. More than likely, easy languages will become more performant.


There already is a "reversal", to some extent, in that statically typed languages are making a comeback.

However, the performance of dynamically typed languages has also improved a ton. It's not unrealistic to think that you could write simple JS code and get about 50% of the performance of an equivalent C function, possibly more.


you are already there for some admittedly contrived situations.

Things like simple loops that work on typed arrays in javascript can really give C/C++ a run for their money right now.


For the average programmer it ended a decade ago - most of us suck at concurrent programming.


I don't know why anyone would think C++ is dead. If you need top-notch performance, there's often no reasonable alternative.


stupid question: why do people prefer C++ over C? is C considered dead?


The furniture in C++ seems better, or at least more numerous - boost, std::, Qt... This being said, if you make your own furniture, I still rather like 'C' better for that.

'C' is used in kernels, drivers and such, in tools and in embedded work.


I miss generics and RAII when I have to use C.


RAII is nice—I hope GCC's __attribute__((cleanup)) gets included in C2x.

Templates though... are nice, I guess, but the cost in compile time, ABI, code size, and ease of debugging is a lot to swallow.

Besides, with the preprocessor and M4 you can get many of the benefits of templates, albeit with most of the downsides (and a few more to boot).


well, santaclaus wrote 'generics', not 'templates'. So presumably that means containers like std::vector, or std::unordered_map. Those are pretty easy to deal with.


C gives you very very little in the way of tools to create and manage abstraction. Some programmers love this and for some projects it may be appropriate.

But it is a reason so many large native codebases go with C++. The language provides some pretty handy tools to manage code organization and complexity at that scale.


From a Java background, is it better to jump into C++ instead of C?


Yes. Be sure to jump straight into the most modern C++ practices and avoid thinking of it like C (with all its focus on non-type-safe pointer manipulations).

Other people have other opinions, of course. Care most about the opinion of your boss and team mates. If you show up with a C++ style far beyond what they are used to they will think you are a nut.


I'm going to say no, because C++ seems closer to Java at first but it's really not. You'll probably spend too much time learning about templates and its object system instead of learning to think in the things that make C and C++ unique—pointers, pointer arithmetic, explicit handling of error conditions, low-level control of memory, preprocessor, etc.


I don't think anyone agrees on which C++ features are worthwhile, but I think most people end up missing some feature of C++ when they fall back on C.


1) Resource management: you never have again to risk forgetting a call to "delete/free".

2) Templates: you don't need to resort to a horrible macro mess to implement a type-safe container.

3) Polymorphism: you don't need to manually manage vtables.


I would add namespaces to your list. They're extremely useful for organizing the entities of a program. No more prefixes on function and type names.

Also enum classes are pretty useful. Unlike old enums, the labels do not pollute an outer scope.


I'm in the middle of C++ For Engineers and Scientists, learning about STL and vectors. Though I'm an EE and learning it for software defined radio, specifically GNU Radio.

Seems like all the really high performance stuff is in C++, C, or Fortran.

Recently learning Python, and knowing some basic C for microcontrollers, I can see how OOP is really handy, thus C++.


I'm surprised to find no mention of embedded development. C++ seems to have a foothold there.


You're right, but I just couldn't realistically enumerate all the places where C++ is used. There will always be something missing.


I haven't used C++ in about... 12 years. I'll admit ignorance about the updates that have occurred recently, so I've got a general question: how is the compatibility between different versions? Specifically, would a C++11 compiler be able to compile legacy code written for C++03?


Yes in 99.9% of cases. The main trouble could come from the angle of accidental move (rvalue cast) somewhere invoking a broken/wrong move constructor or assignment.

Of course, crusty old code can be buggy yet originally working - e.g. relying on implementation defined or undefined behavior. (or even bugs in compiler or standard library implementation)


Short answer is yes (long answer is what the other guy said). The standards committee never makes breaking changes (a big reason why the language is so bloated).


The worst thing in modern C++ is metaprogramming. It's a powerful technique, but the syntax is awful, and programs are unreadable & unmaintainable. Moreover, compilers don't help with debugging such code; diagnostic messages rarely tell anything useful. Not to mention frequent ICEs, at least in MSVC. :)

As a programmer who already knew functional programming and had some experience in SML and OCaml, I found C++ functional programming really annoying.


If you're going to write a compiler, please, for the love of Bjarne Stroustrup, don't write it in C++. There are so many better modern languages for writing a compiler: OCaml, Haskell, Rust, and Terra to name a few. You really want features like algebraic data types and a sane package manager in your language of choice. In my mind, the only use case for C++ is interfacing with existing C++ libraries/codebases. All other tasks can be better relegated to superior languages.


What you say is very true, so I can't think why you got downvoted. Compilers are so much easier to write in dialects of ML. Anything that has algebraic data types and pattern matching makes life a lot easier. I think one can save a lot of effort by writing large parts of a compiler in a language that has such support.


Golang's compiler was originally written in C++, so people are actively avoiding your advice.


no, it wasn't. the go compilers initially came from Plan 9, which hasn't seen any C++ code ever :)


"Came from Plan 9". What does that mean?


Plan 9 was a research OS that was sort of intended to be a "next generation" Unix. The Golang project is led by at least one person who worked on the Plan 9 OS, and overall it seems to be heavily inspired by Plan 9's design.

https://en.wikipedia.org/wiki/Plan_9_from_Bell_Labs


Yes, I know that. But I don't understand the implication that Go came magically from Plan 9. I recall reading Kernighan stating that the first Go compiler was written in C++ before bootstrapping.


Sorry, I misunderstood your comment. I think the original poster was referring to the Plan 9 team's strong preference for C over C++. Plan 9 was written with a modified version of C, and the developers seemed to be very interested in evolving the C language. As far as I can tell, they never used C++.

> In the presentation before the awarding of the Japan Prize today, you were quoted on the distinction between research and development. [The former, Thompson stated, was directionless, whereas development had a specific goal in mind.] So in that context, is Go experimental?

> Yes. When the three of us [Thompson, Rob Pike, and Robert Griesemer] got started, it was pure research. The three of us got together and decided that we hated C++. [laughter] [1]

It looks like you were half-right about the early Go compiler being partly written in C++: There were two compilers, and "gccgo" had a C++ front-end, while their homemade compiler "gc" was written entirely in C. [2]

[1] http://www.drdobbs.com/open-source/interview-with-ken-thomps...

[2] https://golang.org/doc/faq#What_compiler_technology_is_used_...



People can and do write compilers in C++, clearly, but that doesn't make the language any better. The mistakes of others need not justify one's own!


I think C++ was dead from like 2005 to 2011, then it started respawning (to deliberately borrow a term from fps-shooters ;-).


Can't read on iOS due to social media buttons overlapping the text.


Even on my laptop I had to open the web inspector up and delete them. Who thinks this stuff is a good idea?


Similar problem on Android.


Get yourself some content blockerz, yo.


Oops… they're showing up on mine too with the blockers.


Haha, you guys really hated these two comments.


The site works for me on mobile (Firefox Beta for Android with uBlock Origin and the Fanboy Ultimate filter list).


You learn pretty much any skill because it's in demand and people will pay you to do it. Of course there are some things you'd rather do or not do independent of money, but it's a good place to start.

A better question would really be "What factors would lead me to believe C++ is right tool for a particular task?" Which, I think requires a bit more depth than you'll find in this article.


As the author of the article, I can only say that I agree with you, but I just wanted to write a slightly longer version of "Hey, look, C++ is alive and there are still interesting jobs using it!". Your question is something different entirely, and yes, it deserves much more in-depth thinking.


I entirely agree with this -- in a lot of ways C++ is truly awful (though it's getting much better); but! It's the only practical solution for a lot of interesting work. I think when people are doing something hard they always reach for the pragmatic solution rather than the principled one, and C++ is nothing if not pragmatic.


I think with C/C++ you learn how the machine really works (you could argue that assembler would be better, but C/C++ hits a sweet spot for me). A lot of people who have grown up with GC languages don't seem to be aware that under the hood there is still some C code that allocates and releases memory. There is no magic.


> The appearance of GPGPUs along with parallel computation frameworks such as CUDA and OpenCL created demand for C++ programmers with this kind of expertise.

This is a bit ironic because using accelerated APIs means the host language could be anything and it would still be 99% as fast.


Maybe many of the problems they work with are also CPU bound? C++ programmers also tend to be somewhat more familiar with how memory is transferred to graphics cards, but anyone could potentially learn in any language. AoS or SoA?


I'd consider learning it to work on critical FOSS projects already done in C++, to make money with it, or to write language analysis/transformation tools that make C++ apps safer automatically. There's little reason to use it in a new project without legacy C++ code.


Any insight into why Genode went with C++? That, plus swearing never to program in it again after doing so for a decade, rather curbs my enthusiasm towards it.


Jesus... No I didn't know. So, add that to the list of reasons I might grudgingly learn C++.


As someone who hasn't programmed much C++ in the last 10 years, what's a good book to get caught up with all the new stuff?


I like both "A Tour of C++" by Stroustrup and "Effective Modern C++" by Scott Meyers. Though, the latter is a bit dense if you are unfamiliar with C++.


What are some good resources for learning C++ with an eye to these modern practices?


Try Kate Gregory's Pluralsight courses.


D is a better language than C++.


Pff, E is better than D any day of the week. Trust me; I'm a programmer.


I will take a look at C++ when they get rid of header files.


I actually kind of like header files. I find that they can nicely separate external interfaces from internal interfaces. They also can summarize an interface nicely.

They are not without problems: complete template definitions, circular includes, no support for partial classes, etc.


Compared to modules I don't think header files have even a single redeeming quality. They are a historical artifact.


That stuff can be generated by the computer itself through automation. It's just a headache to manage two files.


insert citizen kane slow clap gif here



