> A conforming C++ compiler is forbidden from telling you in some† unknown number of cases that it suspects what you've written is nonsense, it just has to press on and output... something.

It's not quite that bad - a conforming C++ compiler is permitted to error out and not compile the program. It just doesn't have to.




Many of the cases of IFNDR (ill-formed, no diagnostic required) are violations of semantic constraints, especially in C++20 and beyond. Because the constraints are semantic, it's generally impossible to diagnose them with no false positives, and the ISO standard forbids such false positives, so...
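
To make that concrete, here's a toy sketch of my own (nothing from any real codebase): a type can satisfy a standard concept syntactically while breaking its semantic requirements, and the standard makes a program that relies on the concept being modeled ill-formed, no diagnostic required. No compiler flags this.

    #include <concepts>

    // Flaky satisfies std::equality_comparable *syntactically*, but its
    // operator== is not an equivalence relation (x == x is false), so the
    // concept's semantic requirements are violated.
    struct Flaky {
        bool operator==(const Flaky&) const { return false; }
    };

    static_assert(std::equality_comparable<Flaky>); // passes: syntax-only check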


Can you give a more concrete example of the kind of thing you're talking about? Like, if you try to sort something and your comparator implementation for that type is not transitive, the compiler can silently produce a broken binary?

Surely in the undecidable cases the compiler is allowed to produce a binary that errors cleanly at runtime if you did in fact violate the semantic constraint, and any sane implementor would do that. (Not that any sane person would ever write a C++ compiler...)


> Like, if you try to sort something and your comparator implementation for that type is not transitive, the compiler can silently produce a broken binary?

It's not merely about whether your comparisons are transitive, the comparison must be a strict weak ordering (the standard's Compare requirement) or your sort may do anything, including a buffer overflow.
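
For example (a minimal sketch; whether it actually overflows depends on the implementation and input, but this is the classic failure mode):

    #include <algorithm>
    #include <vector>

    int main() {
        std::vector<int> v(64, 1); // many equal elements
        // Bug: <= is not a strict weak ordering, since cmp(x, x) must be
        // false. With enough equal elements, a typical introsort's unguarded
        // partition loop can walk a pointer past the end of the buffer:
        // undefined behavior, seen as a crash or silent heap corruption.
        std::sort(v.begin(), v.end(), [](int a, int b) { return a <= b; });
    }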

> Surely in the undecidable cases the compiler is allowed to produce a binary that errors cleanly at runtime if you did in fact violate the semantic constraint,

I don't think I know how to prove it, but I'm pretty sure it's going to be undecidable at runtime too in many of these cases. Rice's theorem ties these problems to the Halting problem, which I'd guess means at runtime you'd end up trying to decide whether some arbitrary piece of code will ever halt, and yeah, that's not helpful.
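
As a toy illustration of why (my example, ignoring integer overflow): whether this comparator even terminates for all inputs is the Collatz conjecture, an open problem, so no check at compile time or runtime can decide in general whether an arbitrary comparator meets its contract.

    #include <cstdint>

    // Halts for every n anyone has checked; proving it halts for all n is open.
    uint64_t collatz_steps(uint64_t n) {
        uint64_t steps = 0;
        while (n > 1) {
            n = (n % 2 == 0) ? n / 2 : 3 * n + 1;
            ++steps;
        }
        return steps;
    }

    // A perfectly good strict weak ordering -- if it terminates.
    bool compare(uint64_t a, uint64_t b) {
        return collatz_steps(a) < collatz_steps(b);
    }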

I've written about it before, but I should spell it out: The only working alternative is to reject programs when we aren't sure they meet our constraints. This means sometimes we reject a program that actually does meet the constraints but the compiler couldn't see it.
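
One cheap way to get that posture in today's C++ (a hypothetical sketch; these names aren't in any standard or library): since the compiler can't verify the ordering claim, make the programmer state it explicitly, and reject at compile time any type that hasn't.

    #include <algorithm>
    #include <vector>

    // Opt-in promise: "my type really is totally ordered". The compiler
    // can't verify it, but absence of the promise is now a compile error
    // instead of silent undefined behavior.
    template <typename T>
    inline constexpr bool promises_total_order = false;

    template <> inline constexpr bool promises_total_order<int> = true;

    template <typename T>
        requires promises_total_order<T>
    void safe_sort(std::vector<T>& v) {
        std::sort(v.begin(), v.end());
    }

    int main() {
        std::vector<int> a{3, 1, 2};
        safe_sort(a); // fine: int opted in
        // safe_sort on a type without the promise: compile error
    }

And yes, some correct programs get rejected this way (a type that really is totally ordered but never opted in), which is exactly the trade I'm describing.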

I believe this route is superior because the incentive becomes to make that "should work but doesn't" set smaller so as to avoid annoying programmers, whereas the C++ incentive is to make the "compiles but doesn't work" set larger, since, hey, it compiles. I see Rust's Non-Lexical Lifetimes and Polonius as evidence for this on one side, with C++20 Concepts and the growing number of IFNDR mentions in the ISO standard on the other.


> the comparison must be a strict weak ordering or your sort may do anything, including a buffer overflow.

Sure. But there's no requirement for the compiler to be a dick about it, and hopefully most won't.

> I don't think I know how to prove it, but I'm pretty sure it's going to be undecidable at runtime too in many of these cases. Rice's theorem ties these problems to the Halting problem, which I'd guess means at runtime you'd end up trying to decide whether some arbitrary piece of code will ever halt, and yeah, that's not helpful.

At runtime can't you just run the code and let it halt or not? Having your program go into an infinite loop because the thing you implemented didn't meet the requirements is not unreasonable.
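
Something in that spirit already exists: debug-mode standard libraries wrap comparisons and assert on the violations they happen to observe. A rough sketch of the idea (mine, not any vendor's actual code):

    #include <algorithm>
    #include <cassert>
    #include <vector>

    // Spot-check a strict-weak-ordering axiom on the pairs the sort
    // actually compares. Catches many violations, provably not all.
    template <typename Cmp>
    auto checked(Cmp cmp) {
        return [cmp](const auto& a, const auto& b) {
            bool ab = cmp(a, b);
            bool ba = cmp(b, a);
            assert(!(ab && ba) && "comparator is not asymmetric");
            return ab;
        };
    }

    int main() {
        std::vector<int> v{3, 1, 2, 2};
        // With a typical implementation this fires the assert as soon as
        // two equal elements are compared: 2 <= 2 is true both ways.
        std::sort(v.begin(), v.end(),
                  checked([](int a, int b) { return a <= b; }));
    }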

I'm sympathetic to the idea that there could be a problem in this space, but without a real example of a case where it's hard for a compiler to do something reasonable I'm not convinced.

> I've written about it before, but I should spell it out: The only working alternative is to reject programs when we aren't sure they meet our constraints. This means sometimes we reject a program that actually does meet the constraints but the compiler couldn't see it.

> I believe this route is superior because the incentive becomes to make that "should work but doesn't" set smaller so as to avoid annoying programmers, whereas the C++ incentive is to make the "compiles but doesn't work" set larger, since, hey, it compiles. I see Rust's Non-Lexical Lifetimes and Polonius as evidence for this on one side, with C++20 Concepts and the growing number of IFNDR mentions in the ISO standard on the other.

Meh. I'm no fan of the C++ approach, but I'd still rather see C++ follow through on its strategy than half-assing it and becoming a watered-down copy of Rust. People who want Rust know where to find it.


Historically, correctness wasn't seen as an important goal in C++, so no, I don't think any of the three popular C++ stdlib implementations will do something vaguely reasonable for nonsense sort input. It's potentially faster (though bad for correctness) to just trust that this case can't happen, since the programmer was required to supply a valid ordering. So yes, I'd expect it to result in bounds misses in real software.


I wouldn't think you could get bounds misses without doing extra comparisons, so I'd expect real-world sort implementations to just fail to sort, which doesn't seem particularly unreasonable. But in any case, it's a huge leap from "existing C++ stdlib implementations behave badly in this case" to "the C++ standard requires every implementation to behave badly in this case".



