Hacker News

I have this impression that modern C++ is far more complex, with a far larger surface area (from the perspective of learning the language), than Rust.

Since I am not an expert in either, what do folks who came to Rust from C++, or who regularly work with both, say about that?




I’ve done a pretty fair amount of both, more C++ in total, more recent work in Rust.

I think it really depends on whether you’ve got a mountain of legacy C++ with dated infrastructure and practices, or a modern C++ codebase at a shop that runs a tight ship.

In the former case, the incidental (as opposed to inherent) complexity in C++ is a real PITA compared to Rust (which isn’t exactly shy about gratuitous complexity, especially in the trait system and in going more than a bit overboard with macros, IMHO). C++03 written without modern tooling, a hardass style guide, and the like is usually a nightmare. I would vastly prefer working in almost any Rust codebase to a sprawling nightmare of code that still calls new/delete routinely.

A modern C++ codebase with all the best practices and tools and stuff? Six of one, half a dozen of the other, more or less: Rust is now fast and featureful enough to be an option for almost anything C++ would do. Do you like hardcore affine typing by default, or dislike it?

Another way to think about it is that modulo some big differences: Rust bundles (and mandates) a bunch of stuff you opt into in C++: a uniform set of best practices, hardcore static analysis, a credible strategy for catching memory safety issues and UB and thread safety issues. (The case is overstated about the difference in efficacy of e.g. ASAN and the borrow checker, they have pros and cons and it’s not a 1-bit debate).

C++ tooling has a few important edges (though Rust is catching up): clangd is usually (always?) faster and more stable than rust-analyzer but you can throw hardware at it so it’s not a huge deal.

Cargo is just a slam dunk below a certain project size. Above that size the story is still evolving.

It’s just not as big of a difference as you often hear one way or the other. I’d probably default to Rust unless I had library interop reasons to go C++ (which is often).


I work with both, having started with C++ about 17 years ago, and agree that Rust feels like a relatively simple language compared to C++. Rust might feel harder to learn initially because the borrow checker won't let you compile certain programs, but once you are over this initial hump, the rest is quite straightforward.


I don't really agree with that. I'd say they're complex in different directions. C++ has complexities Rust doesn't because it bends over backwards for source-level compatibility. A lot of it is entirely at the semantic and pragmatic level. Rust's complexities are mostly due to its type system, meaning the complexity is at the syntactic level. I had never seen a computer crash because an IDE was trying to figure out how to parse a program before I worked with Rust, for example.


It is funny that you mention so specific a situation, as a hilarious version of this was going around just yesterday: https://developercommunity.visualstudio.com/t/Too-much-anime...

It doesn’t invalidate your experience, it’s just a funny bug.


"Too much anime" was not a phrase I expected to see in a compiler bug report.

Internal compiler errors do happen from time to time. They're annoying but usually easy to work around. I've had projects where I just cannot use rust-analyzer because it never finishes. It just eats RAM without accomplishing anything.


>"Too much anime" was not a phrase I expected to see in a compiler bug report.

Looking at the code example they provided [1], the comment is literally an anime character. Is it Naruto?

[1] https://godbolt.org/z/1PGrYjq3h


That looks more like Goku or another Dragon Ball Z character. It's a pretty funny bug report all the same.


    namespace StreetFighter { 
    class Ryu
Just a hunch, but I think that might be Scorpion from Mortal Kombat.


Blockchain? Did you file an issue?


Hah. Yeah. No, I didn't file an issue. I don't think it's a bug in rust-analyzer. There's only so much a language server can do.


>I had never seen a computer crash because an IDE was trying to figure out how to parse a program before

This happens daily on my Intel MBP in Xcode. In only a ~15k LoC small app, 99% Swift. I’ve had to break up several functions into smaller ones purely because the compiler chokes so easily. They actually have a dedicated error message to the tune of “couldn’t figure out types here, could you break this function up please?”.

But yeah, outside of that I’ve never seen it happen in major languages using major language tooling. Never even saw it in 5 million+ line C/C++/.NET CLR mixed codebases.


> I had never seen a computer crash because an IDE was trying to figure out how to parse a program before I worked with Rust, for example.

C++ is complex enough that the IDE can't really parse much of the program's code in any useful fashion. You're lucky if it can get the right type hints and jump to definition. And even the latter may not be complete.

Contrast with e.g. Java, which makes it easy for the IDEs to get the full picture.


Sure, but in those cases the parser just gives up. It doesn't grow its working set trying harder and harder seemingly forever.

We're talking about C++ and Rust here, so I don't know why you bring up Java. If parsing Rust was as easy as parsing Java you would not see me complaining about it.


Giving up vs. crashing is a trivial difference, ultimately boiling down to an extra terminating condition connected to real-world resource use. Either way, the parsing is useless.

I brought up Java as an example of what it means for the IDE parsing to work and be useful.


Losing your work versus a minor inconvenience is a trivial difference to you? Well, okay!


What kind of IDE are you working in that will lose your work when it crashes?

I don't know what's going on in the Rust world, but in C++ world, even the good ol' Visual C++ (2017, 2019), when it crashes (which it does surprisingly often on my work's codebase), it doesn't lose anything other than maybe unsaved edits. It's annoying, sure, but 30 seconds later it's back to where it was before the crash.

Also, a non-working parser is not a trivial inconvenience. It just means the tool doesn't work. From the POV of wanting to use advanced functionality that relies on parsing the code, it doesn't matter whether the tool aborts its execution so it doesn't crash, or has to be disabled because it doesn't abort its execution and just crashes. The end result is the same: I can't use the advanced functionality.


What I said was that the computer crashed. The IDE used so much memory that it took the system with it. When it came back up something weird had happened to the environment and it was ignoring the stuff in .bashrc.

>Also, a not working parser is not a trivial inconvenience. [...] The end result is the same: I can't use the advanced functionality.

Yeah. Now compare "the IDE is just working as a glorified text editor" to what I'm describing above.


I'm sorry for misunderstanding your earlier comment, and thank you for clarifying. I can see how this is a much more serious problem.

However.

That sounds to me less like an IDE problem, and more like a Linux problem. Specifically, the problem with the... unique way a typical Linux system handles OOM state, i.e. by suddenly curling into a ball and becoming completely unresponsive until you power-cycle it. I've hit that a couple times in the past, and my solutions were, in order:

- Monitoring the memory usage and killing the offending process (a graph database runtime) before it OOMs the system;

- After becoming tired of the constant vigilance, quadrupling the amount of RAM available in the OS; (yes, this is why people overprovision their machines relative to what naive economists or operations people may think..)

- Changing the job, and switching to working on Windows; WSL may not be 100% like a real Linux, but it inherits sane OOM handling from its host platform.

I'm sure there is a way in Linux to set a memory quota for the offending IDE process. This would hopefully reduce your problem to the more benign (if annoying) case I described earlier.


I actually run Windows mainly. This was inside a Hyper-V VM with 32 GiB of RAM. I'd like to be able to work on this project from Windows, but unfortunately I can't, and don't have the energy or inclination to figure out how to get it building on Windows. I already knew rust-analyzer had this problem, which is partly why I allocated so much memory for the VM. Unfortunately I triggered a rust-analyzer rebuild just as I was already building another codebase in a different directory. That's what pushed it over the edge.

While I agree that Linux sucks at handling this particular situation, my point was about Rust's complexity. Normally, when you're using C/C++ dependencies the IDE doesn't need to inspect the internals of the libraries to figure out how to parse dependent code. And it's also true that rust-analyzer doesn't know when to stop trying. It will parse whatever you give it, even if it kills your machine.


I had emacs crash (well, lock-up) while trying to parse the error messages from some extreme C++ metaprogramming. It was at least a decade ago and both emacs and C++ have improved, but still...

edit: mind, in the same code base (10M+ loc), Visual Studio would crash while attempting to initialize Intellisense.


I keep selling it as "Like vim, there's just this one thing to get over, then everything's clear."

https://jbaber.sdf.org/misc/editors/learning_curves.jpg


I guess it's safe to extend the analogy to "C++ is like Emacs" then


C++ precedes Rust by ~30 years. Just wait and see how large the Rust surface area might be by 2053. There are lessons to be learned from history, so hopefully it will be smaller, but just about every successful language so far has only kept growing and becoming more complex.


Indeed it's already grown `async` and `?` since 1.0. But I love those features, they're a good thing.

In Rust, the compiler still checks everything. I can't mis-use async or the question mark operator and accidentally make my code unsafe.
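
The `?` operator mentioned above can be sketched in a few lines; this is a minimal example with a hypothetical `double` function, showing that the compiler checks the error propagation for you:

```rust
use std::num::ParseIntError;

// `?` propagates the error to the caller early; the compiler verifies
// that the function's return type can actually hold that error.
fn double(s: &str) -> Result<i32, ParseIntError> {
    let n: i32 = s.parse()?; // on Err, returns the ParseIntError immediately
    Ok(n * 2)
}

fn main() {
    assert_eq!(double("21"), Ok(42));
    assert!(double("oops").is_err());
}
```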

In C++, I'm expected to know everything about every feature I use, so I have to be paranoid. Sure, I remember that unique_ptr isn't atomic, but do my juniors remember that? Sure, returning a reference to a local variable is a warning, but it's not an error, right? Many less-healthy teams probably ignore those warnings. And I myself don't even remember the rules for SFINAE or `move` in full. Not to mention that OpenCV's mix of shallow and deep copying for matrices casually breaks `const`.

In Rust, more surface area is just more Lego bricks. If two Legos snap together, you're safe. If they don't, don't force them. C++ expects you to force everything and take responsibility for not being a human encyclopedia of the language.
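
For a concrete contrast with the "returning a reference to a local" warning mentioned above, a minimal Rust sketch (function names are hypothetical): the dangling version is a hard compile error, not an ignorable warning.

```rust
// This version does not compile in Rust:
//
//     fn dangling() -> &String {
//         let s = String::from("local");
//         &s // error[E0515]: cannot return reference to local variable `s`
//     }
//
// The Lego bricks that do snap together: return the owned value instead,
// so ownership moves out to the caller and nothing can dangle.
fn owned() -> String {
    let s = String::from("local");
    s // moved to the caller
}

fn main() {
    assert_eq!(owned(), "local");
}
```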


Almost every feature is a good thing. But the total quantity can become a bad thing, because the interactions between the features become more and more complicated. It's almost like features have values proportional to the number of features, but they have costs proportional to the square of the number of features. You eventually reach the point where new features add more cost than they add value.

But it's not that simple, because each new feature adds value to a subset, but adds costs to everyone. If the subset is vocal, they often get what they want, even if it's a net loss for all users taken as a whole.

So the trick is, first, to stop adding features once the costs outweigh the benefits, and second, given that you have only a finite number of features that you can add before you reach that point, to add the total set of features that are going to make the most valuable language.


Yes, but actually no. In Rust the compiler handles interactions between features for you; C and C++ don't. Per my previous Lego analogy.


Is it not possible to use a linter with C++ that can tell you when you have done X or Y incorrectly?

Surely there is tooling that can help, right?


Often no, for multiple reasons.

Separate compilation means that even if you write an interprocedural static analysis system (quite a bit more complex than what most people would call a linter), you still run into oodles of hard boundaries where you can't look into other functions. Fixing this requires explicit annotations all over the place to even have a chance.

C++ is also a remarkably complex language in terms of aliasing relationships. There are a bazillion ways you can make two names alias each other. This gives you a few options when writing an analysis. You can be sound and do weak updates everywhere, which means your alarms will basically never be confident. You can be unsound and assume no aliasing, which means that you've got a lot of false alarms. You also can't really make a rule "don't ever create aliasing relationships" because they are often idiomatic C++.

And finally, the key properties that people really deeply care about like heap lifetimes fundamentally involve complex reasoning about both temporal properties, dataflow, and heap shape. All of these are hairy static analysis problems. In combination, a nightmare.


It is possible to use C++ with a linter to help detect errors, but linters do not catch all errors nor do linters always understand what you are trying to do.

Most approaches to using C or C++ safely involve throwing out large portions of the language and disallowing features that are easy to misuse or problematic to analyze for safety.


Not all of the issues you can create in C++ are visible at compile time. That’s kind of the point.

A lot of the time you’re going to need to lean on Valgrind. And that’s AFTER you shipped a fatal crash and you’re parsing a tombstone.


Maybe. But my employer doesn't have those tools. Assuming we're a median team, then half of all C++ teams don't do any static analysis or even know how to use Valgrind properly. (The only tools that looked useful to me were the ones that incurred a thousand-x slowdown, and our code is too crappy to run correctly at different speeds.)


"Sure, returning a reference to a local variable is a warning, but it's not an error, right? Many less-healthy teams probably ignore those warnings."

Seriously?


Rust has the advantage of seeing the mistakes of the past and not making them. Many interesting ideas have been tried; only after significant use do we discover which are good and which are bad.

Compromise is sometimes needed. C++ had some ideas they knew at the time were bad, but backward compatibility forced them, and backward compatibility is itself a great idea worth the costs.


Rust will have plenty of time to invent whole new categories of mistake, if it ever catches on. It started out with a raft of old familiar mistakes, and shed them over the years leading up to the 1.0 release, such as non-contiguous stacks and green threads. Maybe the way async is specified will turn out to have been one of the mistakes. It has, anyway, mechanisms to shed old mistakes that are not relied on much.


As it becomes popular, eventually everything, no matter how bad, will be relied upon by many.


One of those history lessons is that change happens. This is why Rust has editions: they allow for new features without breaking old code. There is still code complexity, but that complexity has been shifted/amortized into the compiler suite instead of everyone's project code.

Crucially, editions allow for deprecation, which is a trick C++ always had trouble with no matter how outdated the language construct.


I think the big problem with C++ was the “C” in it, that is, maintaining compatibility with C (or a sort of compatibility). Rust didn’t make this choice, and it’s a completely new and different language.

Bad choices in C++ will never change, since changing them would break compatibility with a ton of stuff. Rust has the concept of “editions”, which allows migrating to new language versions gradually.


Surely the C++ surface area will also have increased significantly in the same span of time which means that Rust will still be ahead?


By then, there may be a newer "new" language that is simpler than Rust.


Probably, but how much is an open question.


>> Just wait and see how large the Rust surface area might be by 2053.

This is a valid concern. One can hope that Rust evolves very slowly and as-needed. IMHO part of the problem with C++ is that a committee exists to advance the language and produce regular updates. Combine that with most of the language already being defined (with a lot of overlap with C, BTW) and you get a lot of bolted-on stuff, and core features shipped as part of the standard library that might otherwise have been part of the language with nice syntax. Rust had the advantage that a lot of lessons had been learned prior to its creation, so things are cleaner. Let's hope keeping it that way is a priority, rather than just adding new things on top of new things - I think they're doing it right, but I don't really follow it.

C is great in this regard. The language is IMHO mostly "done" and rarely changes. I'm happy to use C99 and not much demands newer.

Looking at Wikipedia I'm afraid C is starting to get too many updates, but the 2017 version is said to add no new features! :-) https://en.wikipedia.org/wiki/C_(programming_language)


Well thought-out languages gain complexity at a much slower rate than carelessly put together ones. By orders of magnitude.

I doubt Rust will become as complex as C++ before it's abandoned in a few decades.


Why do you think it will be abandoned? Programming languages past a certain point don't die easily, especially when used for low-level system programming (otherwise C and C++ would be long dead).


All languages stop evolving at some point. C isn't there yet, but there are many dead languages that people only use because a lot of things are already written in them, and nobody wants to change anything.


So why does that mean Rust will be abandoned in a few decades?

Do you think there will be a trend "back" to using C or C++ for systems programming? I would bet against it. I do believe, by the way, that C has stopped evolving (which is good) and that C++ should stop evolving as well.

Or do you think the replacement of old languages by new ones will accelerate? So in 30 years most systems programming will be done in a language that doesn't exist yet? Maybe not done at all in a "programming language" as we have today?

Or do you think Rust is clearly losing out to some other new languages for systems programming, such as Zig, and will never be popular enough in the first place to enter the "slowly dying legacy" regime?


You don't need a "trend back to using C++". C++ usage is still growing by leaps and bounds. The number of people picking up C++ for professional use, in any short interval -- used to be two weeks, now a bit longer -- is more than the total number who are coding Rust in production. That will be true for a long time.

Rust could still catch on.


How are you calculating the usage numbers here?


I've worked a lot with C++, and a small to moderate amount with Rust. I tend to prefer Rust when given the choice.

Comparing the overall complexity levels is something of a category error, though. Most of the complexity of Rust is in the core functionality, idioms, and conventions of the language. You'll need to grapple with most of that complexity very early.

Most of the complexity of C++ is in the various functionality that was either inherited from C or accumulated over the decades after that. Most individual pieces of software don't use all of that. E.g. approximately nothing will use both va_list and variadic templates (ok, maybe indirectly through libraries, but not in a way the direct author needs to think about). The latter is just a better way of accomplishing what the former does. There are lots of variations on this theme.

My sense is that, in practice, Rust has a steeper learning curve than C++. I find it more productive now that I'm pretty familiar with it, so I think it's worth that steeper learning curve. I still think it's a bit steeper.


This might be influenced by the fact that I've used C/C++ for much longer, but my impression is that Rust is an even larger language with more details to learn than C++. The difference is that in Rust if you get the details wrong (in safe code) you get a compiler error or at worst a logic bug, while in C++ sometimes you get a compiler error and sometimes you get memory corruption or integer overflows, or undefined behavior.

Expanding on this, the general concepts you need to understand for both are about the same, but because Rust enforces them in the compiler, you have to learn the detailed rules of how the language enforces ownership and lifetimes and such which is more detailed/complicated/restrictive than the concepts themselves.

Furthermore, some of the Rust language details cause library APIs to become more involved than they would be in C++. An example in the std library: in C++, handling errors and handling function results are orthogonal features. With sum types they are intertwined, and the Rust community is more liberal with adding convenience functions and syntax, so you end up with a combinatorial explosion of all the ways you might want to handle the error combined with all the ways you might want to handle the result, with about 40 methods each on Result and Option, plus methods for handling errors in iterators (functional streams). Lifetimes and async can also complicate crate APIs in ways that don't exist in C++. None of them are difficult on their own, but the sheer volume of things you need to learn and remember makes me appreciate the minimal Go philosophy.
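
A hedged sketch of the combinator surface being described, using a small sample of the many near-equivalent Result/Option methods from the standard library (the `parse` wrapper is hypothetical):

```rust
// Each call below is one of the ~40 ways to turn an error-or-value
// into something else.
fn parse(s: &str) -> Result<i32, std::num::ParseIntError> {
    s.parse::<i32>()
}

fn main() {
    // map: transform the success value, leave the error alone.
    assert_eq!(parse("2").map(|n| n * 10), Ok(20));
    // unwrap_or: collapse to a plain value with a default.
    assert_eq!(parse("x").unwrap_or(0), 0);
    // ok(): convert Result -> Option, discarding the error.
    assert_eq!(parse("7").ok(), Some(7));
    // Iterator + Result: collect stops at the first error.
    let all: Result<Vec<i32>, _> = ["1", "2", "3"].iter().map(|s| parse(s)).collect();
    assert_eq!(all, Ok(vec![1, 2, 3]));
}
```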

On the flip side, the places where C++ gets more complex are all the little bad decisions that can't be fixed for backwards compatibility like the stupid numeric conversion rules inherited from C which can easily bite you. And both the C and C++ string/stream libraries suck in their own ways.


> This might be influenced by the fact that I've used C/C++ for much longer, but my impression is that Rust is an even larger language with more details to learn than C++. The difference is that in Rust if you get the details wrong (in safe code) you get a compiler error or at worst a logic bug, while in C++ sometimes you get a compiler error and sometimes you get memory corruption or integer overflows, or undefined behavior.

I disagree. Rust seems less complicated in many ways. For example, move semantics are a lot simpler; they're always a byte copy, whereas with C++, you have to remember rvalue references, lvalue references, and universal/forwarding references (which are usually rvalue references but sometimes are lvalue references). You also have to be careful not to mess with a moved-from object, as it's in an unspecified state.

C++ also makes a distinction between trivially copyable types and non-trivially copyable types (a distinction Rust doesn't make). It's difficult to remember the rules for non-trivially copyable classes, but they boil down to "if it has a user-provided constructor, destructor, or copy-assignment operator; or if it has virtual functions; then it's not trivially copyable."

In Rust, all you have to remember is that types are by default moveable (and moves are a memcpy), deep copies are implemented through the Clone trait, and if your type can't be memcpy'd (e.g., a self-referential struct) it needs to only be accessible through Pin pointers, which ensure that it isn't memcpy'd in safe code.
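
A minimal sketch of the move/Clone distinction just described, with a hypothetical `Buffer` type: moves are always a plain byte copy of the value's representation, and deep copies only happen when asked for via Clone.

```rust
#[derive(Clone, Debug, PartialEq)]
struct Buffer(Vec<u8>);

fn main() {
    let a = Buffer(vec![1, 2, 3]);
    let b = a; // move: the Vec's pointer/len/cap are byte-copied; `a` is dead
    // println!("{:?}", a); // error[E0382]: borrow of moved value: `a`

    let c = b.clone(); // explicit deep copy, only because Buffer derives Clone
    assert_eq!(b, c);
}
```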

Another thing: you can cause a use-after-free error by combining coroutines with lambdas [1]. An error like this only happens because C++'s rules around coroutines and lambdas are complicated enough that the committee didn't foresee this happening. This seems indicative of higher complexity in C++ than Rust, to me.

[1]: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=95111#c23


On the other hand, I find it annoying in Rust that an assignment or an unadorned function parameter might result in a move or a copy, and you can't tell by the function signature or call site. Instead it depends on whether the type implements the Copy trait, which is a big semantic difference based on "spooky action at a distance".
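
A small sketch of the call-site ambiguity being described, with hypothetical helper functions: the two calls look identical, but one moves and one copies, decided only by whether the type implements Copy.

```rust
fn take_vec(v: Vec<i32>) -> usize { v.len() }
fn take_int(n: i32) -> i32 { n }

fn main() {
    let v = vec![1, 2, 3];
    take_vec(v); // Vec is not Copy: `v` is moved and unusable afterwards
    // println!("{:?}", v); // error[E0382]: borrow of moved value: `v`

    let n = 5;
    take_int(n); // i32 is Copy: `n` is silently copied and still usable
    assert_eq!(n, 5);
}
```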

> You also have to be careful not to mess with a moved-from object, as it's in an unspecified state.

This is a great example of Rust being easier, even when it just as complex. Both languages don't allow you to use an object after it is moved, so it is the same amount to learn and to think about when writing code, but Rust will give you a helpful compile-time error while C++ will let you blow your foot off.


> but Rust will give you a helpful compile-time error while C++ will let you blow your foot off.

But Rust can actually analyze trivially when this happens, while it is not possible with C++ due to language semantics.


>You also have to be careful not to mess with a moved-from object, as it's in an unspecified state.

This is somewhat incorrect. It's "unspecified" in the sense that the standard doesn't mandate that user-defined move constructors and assignment operators leave source objects in any particular state. All standard library classes are left in well-defined states when moved from (if they can be moved), and you can choose to define your classes to do the same. The usual rule of thumb is that a moved-from object should be in the same state as if it had just been default-constructed. This is what all the standard classes do.

Move semantics in C++ and in Rust are more or less equivalent. The major differences are that C++ copies by default and rust moves by default, and that Rust doesn't allow using a moved object while C++ does.

>In Rust, all you have to remember is that types are by default moveable (and moves are a memcpy), deep copies are implemented through the Clone trait, and if your type can't be memcpy'd (e.g., a self-referential struct) it needs to only be accessible through Pin pointers, which ensure that it isn't memcpy'd in safe code.

If defining value semantics was so simple you wouldn't have had to bring up Pin pointers (which I assume are not just pointers, but something special that needs to be kept in mind), or the fact that Rust understands that there are two different kinds of code, which C++ doesn't distinguish. It's suddenly so obvious that one is simpler than the other.

>Another thing: you can cause a use-after-free error by combining coroutines with lambdas [1]. An error like this only happens because C++'s rules around coroutines and lambdas are complicated enough that the committee didn't forsee this happening. This seems indicative of higher complexity in C++ than Rust, to me.

It's trivially easy to cause use-after-free errors with lambdas. Return an std::function<void()> that captures and reads a local std::unique_ptr by reference and then call operator()() on the object. This is not a complex interplay between features; lambdas necessarily introduce dynamic lifetimes into a language that was originally not designed to support them, so using them requires care.


> Move semantics in C++ and in Rust are more or less equivalent.

That's not my experience at all. The big annoyance for me with C++ move semantics is that it forces me to allow an invalid or default state (representing a moved-from object), which subverts a major premise of RAII: non-default constructors establish class invariants which are maintained for the lifetime of the object, so that all other code can assume those invariants. There should be no such thing as an invalid or partially initialized object. When I'm forced to allow an invalid state to support move semantics, all my code has to either check for this invalid state (if it's logically possible) or assert that it's not present (if it's not logically possible). That's a major source of gratuitous complexity that simply isn't inherent to move semantics, as Rust demonstrates. (The invalid state is necessary because there's no way to prevent destructors from running in C++, so a destructor needs some way to know that an object isn't properly initialized.)

C++ move semantics have brought us back to the bad old days of checking isInitialized flags and calling initialize() methods, which is what non-default constructors were supposed to solve.


If you find yourself doing if (!initialized) initialize(); that's a sign that you should have just called initialize() on the moved object while still inside the move constructor, if initialize doesn't need any additional parameters. If there's no way to construct or initialize the class in a default state (e.g. something equivalent to an empty std::string) with no additional parameters, it's probable that the class shouldn't have been movable, and instead the object should have been wrapped either in std::unique_ptr or std::optional. Not every class needs to be movable.

Again, this has nothing to do with C++'s move semantics and everything to do with how you define your object's state transformations.


Let me make it a bit more concrete. I have a hazard pointer class, where the constructor registers the provided pointer for GC protection, and the destructor removes GC protection. I would like to be able to dereference this hazard pointer object freely, without doing null checks everywhere. RAII is the perfect fit for these semantics: the constructor establishes the invariant (GC-protected non-null pointer) and all other code can assume the invariant. Until I needed move semantics, that is.

In the constructor of a class with a hazard pointer member I needed to be able to initialize a hazard pointer on the stack and then move it into the member variable. (Because the hazard pointer constructor is fallible, I needed to catch exceptions thrown from the hazard pointer's constructor and retry from within the containing class's constructor, so I couldn't just use an initializer list.) In order to support move semantics, I had to give up the invariant that any hazard pointer instance is properly initialized (I needed to use a null pointer to represent the invalid state). That complicated all the clients, which now had to either check for or assert against the invalid state.

None of these gymnastics would have been necessary in Rust. Sure, it doesn't have constructors, but it's easy enough to write a factory method that establishes constructor invariants, and then you know that any object returned from that factory method will satisfy those invariants for the entire lifetime of the object. Since it is impossible to accidentally use a moved-from object (unlike C++), there is no need to introduce an invalid state to prevent misuse. I could just freely dereference my hazard pointers, with no checks or asserts necessary.
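
The factory-method pattern described above can be sketched with a hypothetical guard type (not a real hazard-pointer implementation): `new` is the only way to obtain a value, so the invariant is established once, and because Rust statically forbids use-after-move, no "moved-from" invalid state needs to exist.

```rust
struct Guard {
    value: u32, // invariant: always non-zero once constructed
}

impl Guard {
    // Fallible factory method: the sole way to construct a Guard.
    fn new(value: u32) -> Result<Guard, &'static str> {
        if value == 0 {
            Err("value must be non-zero")
        } else {
            Ok(Guard { value })
        }
    }

    // Callers can rely on the invariant: no checks or asserts needed.
    fn get(&self) -> u32 {
        self.value
    }
}

fn main() {
    let g = Guard::new(7).expect("non-zero");
    let moved = g; // move; using `g` again is rejected at compile time
    assert_eq!(moved.get(), 7);
    assert!(Guard::new(0).is_err());
}
```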


It seems like the obvious answer is to have a nullable_hazard_ptr and a hazard_ptr, where hazard_ptr wraps a nullable_hazard_ptr. nullable_hazard_ptr is movable and default-constructible, while hazard_ptr can only be constructed with arguments and cannot be moved, but can be constructed from a nullable_hazard_ptr &&.

So if you need to return a hazard pointer from a function you return a nullable_hazard_ptr, and the caller can choose to assign that value to auto or to hazard_ptr. In the latter case, the caller will have the guarantee that the object is valid, because if the function returned a null pointer the constructor will have thrown an exception. Furthermore the pointer will remain valid until it goes out of scope, because there's nothing that can be done to it to make it invalid (UB notwithstanding). Of course, anyone who chooses to use nullable_hazard_ptr will need to check for validity.

Unfortunately this does mean that it's the responsibility of the callers to choose the right pointer.
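A minimal sketch of that two-type scheme (the names follow the comment above, but the acquisition logic is hypothetical; a plain pointer slot stands in for the real hazard-pointer machinery):

```cpp
#include <cassert>
#include <stdexcept>
#include <utility>

// Movable, default-constructible type; null means "empty / moved-from".
// Clients of this type must check valid() before use.
class nullable_hazard_ptr {
    void* slot_ = nullptr;
public:
    nullable_hazard_ptr() = default;
    explicit nullable_hazard_ptr(void* slot) : slot_(slot) {}
    nullable_hazard_ptr(nullable_hazard_ptr&& o) noexcept
        : slot_(std::exchange(o.slot_, nullptr)) {}
    nullable_hazard_ptr& operator=(nullable_hazard_ptr&& o) noexcept {
        slot_ = std::exchange(o.slot_, nullptr);
        return *this;
    }
    bool valid() const { return slot_ != nullptr; }
    void* get() const { return slot_; }
};

// Non-movable wrapper: if construction succeeds, it stays valid for its
// entire lifetime, so clients never need to check.
class hazard_ptr {
    nullable_hazard_ptr inner_;
public:
    explicit hazard_ptr(nullable_hazard_ptr&& src) : inner_(std::move(src)) {
        if (!inner_.valid()) throw std::runtime_error("null hazard pointer");
    }
    hazard_ptr(hazard_ptr&&) = delete;
    void* get() const { return inner_.get(); }  // no null check needed
};
```

The caller picks the guarantee they want at the point of assignment, which is exactly the trade-off described above.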

>In the constructor of a class with a hazard pointer member I needed to be able to initialize a hazard pointer on the stack and then move it into the member variable. (Because the hazard pointer constructor is fallible, I needed to catch exceptions thrown from the hazard pointer's constructor and retry from within the containing class's constructor, so I couldn't just use an initializer list.)

This particular case would be handled by calling a helper function in the constructor's initialization list for each hazard_ptr member. As I said, this function should return nullable_hazard_ptr (always non-null; you will have already ensured this inside the function. You still need the nullable type because it's the only one that can be moved).

Ultimately what you have is something analogous to std::lock_guard and std::unique_lock. You are acquiring and releasing a resource, and in some cases you need to tie the acquisition to the program structure while in other cases you need to be able to untie it. There's no way to specify that in C++'s type system other than by having two separate types.


Thanks for this instructive example.


Here's a much simpler example of something impossible in C++ and trivial in Rust: how about a non-nullable unique_ptr? The constructor should just be able to check for null and then no code need ever check for null again, right? Sorry, you need an invalid state to represent a moved-from instance, so this is impossible.

Are you telling me that having to accommodate invalid states that are semantically both unnecessary and undesirable is not a serious limitation of C++ move semantics?


Following the previous example, you wrap std::unique_ptr in another class that has no move constructor and forwards constructor parameters to std::make_unique(), and can also be constructed from an std::unique_ptr. Now you have a heap-allocated smart pointer class that can't possibly be null.

Alternatively, you make it movable, and if someone tries to call operator*(), operator->(), or get() on the null value, you throw an exception. Not as clean, but, hey, it's safe.
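A rough sketch of the first variant (the name not_null_unique is made up): forward constructor arguments to std::make_unique() and delete the move operations, so no null state can ever arise after construction.

```cpp
#include <cassert>
#include <memory>
#include <stdexcept>
#include <utility>

// Non-nullable heap-allocated smart pointer: construction either succeeds
// with a valid object or throws; there is no moved-from state to check for.
template <typename T>
class not_null_unique {
    std::unique_ptr<T> p_;
public:
    // Forwards to std::make_unique, which never returns null.
    template <typename... Args>
    explicit not_null_unique(Args&&... args)
        : p_(std::make_unique<T>(std::forward<Args>(args)...)) {}

    // Adoption from a unique_ptr must be checked once, here, and never again.
    explicit not_null_unique(std::unique_ptr<T> p) : p_(std::move(p)) {
        if (!p_) throw std::invalid_argument("null unique_ptr");
    }

    not_null_unique(not_null_unique&&) = delete;  // no invalid state exists

    T& operator*() const { return *p_; }  // never null, no check needed
    T* get() const { return p_.get(); }
};
```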


>If defining value semantics was so simple you wouldn't have had to bring up Pin pointers (which I assume are not just pointers, but something special that needs to be kept in mind) [...]

Pin<P> is basically a wrapper for any pointer type P, whether it's Box<T>, &mut T, &T, etc. Its sole purpose is to keep the object that's pointed at from being moved, byte-copied, or byte-swapped. For self-referential structs, that's all you can do; there's no equivalent to move constructors. See this link at the bottom [1].

> This is incorrect, somewhat. [...]

You're right, but I really just meant that you have to keep track of more. If you use a moved-from unique_ptr, for example, you'll dereference a null pointer. Sometimes you do want to use a moved-from object, though, so it's not like this can be disallowed. It's something you have to keep track of. I just think this is more complex than Rust's unconditional rule that moves are bytewise copies, and moved-from objects aren't allowed to be used. I've heard C++ programmers complain about Rust being less powerful than C++ in this respect, but it is simpler.

> It's trivially easy to cause use-after-free errors with lambdas.

I don't fully understand it, but from what's described at the link, this causes the use-after-free bug:

  [x] () -> future<T> {
      co_await something();
      co_return x;
  }
The fix is to write it like this, I'm pretty sure:

  [x] () -> future<T> {
      auto xx = x;
      co_await something();
      co_return xx;
  }
The reason is that coroutines copy their inputs into their own frames, but a lambda's captures are accessed through the lambda object's address, so once the closure is destroyed the coroutine dereferences a dangling pointer. Again, it just seems more complex than anything in Rust.

[1]: https://doc.rust-lang.org/nightly/std/pin/index.html


>If you use a moved-from unique_ptr, for example, you'll dereference a null pointer.

You can still check if the pointer is null first, which would be UB if the object were simply in an undefined state.

>It's something you have to keep track of.

You don't have to keep track of it, you just have to design your classes such that moving a value leaves the source in a usable state. At the point of use, moving is no different from any other operation on the object. There's no intrinsic difference between moving an std::unique_ptr and reset()ing it. You do have to keep in mind the possible values of the object as you operate on it, but this has nothing to do with move semantics. In any language, you wouldn't want to attempt to access the 10th element of a list after clearing the list.
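For std::unique_ptr specifically, the standard guarantees the moved-from source is empty, so it behaves exactly like a reset() pointer: it can be inspected and reused freely.

```cpp
#include <cassert>
#include <memory>
#include <utility>

// Demonstrates that a moved-from std::unique_ptr is in a well-defined
// (empty) state: it compares equal to nullptr and can be reused,
// just like one that was reset().
inline bool moved_from_is_null() {
    auto a = std::make_unique<int>(1);
    auto b = std::move(a);           // a is now empty, not undefined
    bool was_null = (a == nullptr);  // legal to inspect
    a = std::make_unique<int>(2);    // legal to reuse
    return was_null && *a == 2 && *b == 1;
}
```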

Now, if you design your class such that the moved-from state is invalid and distinct from any state that the object could reach by any other means, then yes, you will need to treat moves on that type differently. However, that's a problem you created for yourself.

>The reason is because coroutines copy their input to their own stack frames, but lambdas are passed by address, and so the coroutine later dereferences a dangling pointer.

Yes, like I said, lambdas decouple lifetime from lexical scope. You have to use them carefully to avoid running into issues. This is not because C++ lambdas are a more complex feature than Rust lambdas. The opposite is true: Rust lambdas are more difficult to use incorrectly because Rust's type system is more complex and can track the lifetimes of objects more closely.


What is the equivalent of that Forrest Gump GIF of the guy listing the 20+ C++ initializers in Rust?


Dramatically more so. Rust is complex, but most (not all!) of the complexity is there to support a specific, modern, safe style of programming.

C++ adds fifty years of cruft to that.

A lot can be said about the surrounding social environment, but as far as the languages go, I don't think it's far wrong to say that Rust is "C++: The good bits".


Yeah I guess I'd phrase it as 3 kinds of "complexity"

Rust is complex because the compiler has many ways to say "No I won't compile that, it might be wrong".

C++ is complex because it requires the programmer to understand every feature used in the codebase. (e.g. Will the compiler warn me if I use inheritance and dynamic dispatch wrong? I'm not sure. I find code with inheritance hard to read.)

Go _code_ is complex because the language is too simple. (if err != nil is waterbed complexity)


> Rust is complex because the compiler has many ways to say "No I won't compile that, it might be wrong".

Doesn't Rust have an escape hatch in the form of the "unsafe" keyword for cases where you're reasonably sure the code is safe and correct?

I haven't used Rust and most of my knowledge of it comes from HN, so I could easily be wrong.


It does, which is where the social aspect comes in: You're expected to make every reasonable effort not to use it, even if that comes at slight costs in performance, because practically speaking people are terrible at making those judgements.

Sometimes works well. Sometimes, unfortunately, it leads to bullying.


I always pictured that in a professional setting, engineering management would have an edict that `unsafe` is simply not allowed, or possibly, they allow it with extensive code review by multiple engineers.

And that there would also be a lot of people trying to implement a double-linked list and finding themselves having to sprinkle "unsafe" all over the place to satisfy the borrow checker.


Both of those happen. Though in practice, unless you’re specifically implementing basic infrastructure, I’ve never yet had any need to use unsafe.


> C++ adds fifty years of cruft to that.

If "that" refers to the support of a modern, safe style of program, then no, C++ has only the cruft, not that.


Honestly, there's a "lot" of extra stuff, but the standards committee has a general policy that things that can be done in a library don't need to be supported at the language level, meaning 90% of the new stuff will be invisible to you except in making the language more ergonomic to use.

An easy example is ranged-for. It depends on so much complexity internally that the end-user basically will never see. All they'll see is that as long as std::begin and std::end are defined for a container you can just `for (auto item : container)` it. Stuff like overload resolution is essential to it but you don't need to know overload resolution rules to use ranged-for.
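For illustration, a minimal hand-rolled type becomes usable in a ranged-for just by exposing begin()/end(); the lookup and desugaring rules stay out of sight:

```cpp
#include <cstddef>

// Any type whose begin()/end() are findable (as members or via
// std::begin/std::end) works in a ranged-for; no knowledge of the
// underlying lookup rules is required.
struct IntSpan {
    const int* data;
    std::size_t len;
    const int* begin() const { return data; }
    const int* end() const { return data + len; }
};

inline int sum(const IntSpan& s) {
    int total = 0;
    for (int v : s) total += v;  // desugars to begin()/end() iteration
    return total;
}
```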

Or the way initializer lists make initialization so much simpler. The way you can leave constructor writing to the compiler for so many different types of constructors.

How the compiler handles elision so well because of how the constructors are designed. You don't need to write a single rvalue reference move constructor for simple types. They're generated for you.
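For example, a plain aggregate gets brace initialization and compiler-generated memberwise copy/move special members with zero boilerplate (the "rule of zero"):

```cpp
#include <string>
#include <utility>
#include <vector>

// No user-written constructors, destructor, or assignment operators:
// the compiler generates correct memberwise copy and move operations.
struct Record {
    std::string name;
    std::vector<int> values;
};

inline bool moves_work() {
    Record a{"hi", {1, 2, 3}};   // aggregate (brace) initialization
    Record b = std::move(a);     // compiler-generated move constructor
    return b.name == "hi" && b.values.size() == 3;
}
```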

The current ongoing push to make large parts of the standard library constexpr so you can have seriously complex things going on directly at compile time to result in a minimal output that can just put in equivalent constants to constexpr function calls.

Like, the reality is, it is pretty much a core principle of the language that you can write C++03 if you really want to, and everything on top of it just makes writing things easier. But if you piecewise substitute your code with the latest stuff, it more often than not becomes terser and more straightforward because of the evolution of the std library.


I’ve got over a decade in C++ and I won’t use it unless forced (i.e., paid more money than I can reasonably refuse). It’s only gotten worse and worse. I would and have used C before I’d use C++, and now I’d use Rust before either of them.


C++ lets you write anything you can imagine, and the language features and standard library often facilitate that. The committee espouses the view that they want to provide many "zero [runtime] cost" abstractions. Anybody can contribute to the language, although the committee process is often slow and can be political. With each release, the surface area and capability of the language get larger.

I believe Hazard Pointers are slated for C++26, and these will add a form of "free later, but not quite garbage collection" to the language. There was a talk this year about using hazard pointers to implement a much faster std::shared_ptr.

It's a language with incredible depth because so many different paradigms have been implemented in it, but also has many pitfalls for new and old users because there are many different ways of solving the same problem.

I feel that in C++, more than any other language, you need to know the actual implementation under the hood to use it effectively. This means knowing not just what the language specifies, but can occasionally require knowing what GCC or Clang generate on your particular hardware.

Many garbage-collected languages are written in, or have parts of their implementations in, C++. See JS (https://github.com/v8/v8) and Java GC (https://github.com/openjdk/jdk/tree/36de19d4622e38b6c00644b0...)

I am not an expert on Java (or C++), so if someone knows better or can add more please correct me.


There are Java implementations in Java like Jikes RVM.

Garbage-collected languages can be easily bootstrapped; it is a matter of what intrinsics are available, and what mechanisms are available beyond plain heap allocation.

Oberon, Go, D, Nim, Modula-3, Cedar are some examples.


But mainline Java is written in C and some C++, right?


This. It always amuses me when people complain about complexity in Rust. It feels like they have no idea what they don't know about the alternatives.


Apples to oranges. Rust's borrow system is something you couldn't implement in C++, meanwhile C++ has far better allocator and compile-time support and probably more features in total (things like concepts, intrinsic bitfields, etc.).

Importantly (afaik), Rust has far fewer features that are deprecated and/or in the specification but barely implemented.


Arguably, Rust is “just” a C++ subset with RAII made into a compile-time guaranteed primitive, and the only possible way to manage objects.

Add to that the fact that it is a new language with the hindsight of C++'s warts, and no backwards compatibility with C or its own legacy syntax, and it would be very surprising to see Rust come out more complex.


There are many, many, many things that are regular classroom exercises in C++ that are extremely difficult or impossible to do in rust. Rust programmers tend to just pretend like anything rust doesn’t do well is an antipattern. Making the simple case simpler and the complex case way more complex is not desirable for most people.


I've used both professionally (Rust for the last 4 years) and I 100% agree. Rust, even with stuff like async, is a much, much, much simpler language than C++.



