
> Many of the most damaging recent security breaches happened to code written in MSLs (e.g., Log4j) or had nothing to do with programming languages (e.g., Kubernetes Secrets stored on public GitHub repos).

I’m surprised Herb is so defensive here. He normally strikes me as level-headed, but he’s arguing in bad faith here. There’s no way a language can prevent arbitrary code execution if the programmer intentionally allows it as a feature and then doesn’t think through the threat model correctly, or fails to manage infrastructure secrets (the latter, btw, is mitigated by Microsoft’s own efforts with GitHub secret scanning, although there should be more of an industry effort to make sure that all tokens are identifiable).

But C/C++ is a place where 60-80% of the vulnerabilities are regularly basic things the language could mitigate out of the box. No one is talking about perfection. But it’s disappointing to see Herb stuck arguing “there’s other problems, and even if memory safety is an issue Rust has problems too”. The point is that using Rust is a step-function improvement and provides programmers with the right tools so they can focus on the other security issues. A Rust codebase will take more $ to exploit than a C/C++ one, because it will be harder to find a memory vulnerability (which is easier to chain into a full exploit) than to attack higher-level stuff, which is more application-specific.

EDIT: And language CVEs are a poor way to measure the impact of switching to Rust, because it’s the downstream ecosystem CVEs that matter. I’m really disappointed this is the attitude from the C++ community. I have a lot of respect for the people working on it, but Herb’s & Bjarne’s responses feel like unnecessary defensive missives to justify that C++ is still relevant, instead of actually fixing the C++ ecosystem’s problems (namely, a standards approach that makes them move too slowly to ever shore up their weaknesses).




Their defensiveness makes sense in the current context where in the last few weeks organisations like the White House [1] and Google [2] are explicitly calling out the importance and imminent need of moving away from memory unsafe languages like C and C++. If everyone focussed on this one issue, it is possible that we might actually start moving away from C and C++ in the next 5-10 years.

Sutter pointing out that memory safety isn't the only vector for system vulnerabilities would have the effect of spreading cybersecurity efforts and budgets across all of them. On that framing, memory safety isn't the foremost problem it's being portrayed as, and so it isn't worth migrating away from C and C++.

[1] - https://www.whitehouse.gov/wp-content/uploads/2024/02/Final-...

[2] - https://research.google/pubs/secure-by-design-googles-perspe...


The White House reports also acknowledge that memory safety isn't the only issue, but instead, is a large one that movement can be made on. From page 8 of that report:

> To be sure, there are no one-size-fits-all solutions in cybersecurity, and using a memory safe programming language cannot eliminate every cybersecurity risk. However, it is a substantial, additional step technology manufacturers can take toward the elimination of broad categories of software vulnerabilities.

And of course, Google as well.

Not even the most fervent memory safety advocates believe that it is the sole thing that will make software secure, so arguments like this come across a bit strange.


No I completely agree, fixing memory safety is not the only thing that needs doing, far from it. I agree with the White House report in particular, which spends time talking about the responsibilities of C-suite execs in ensuring their software is vulnerability free. That's good, actionable advice.

I called out those two reports because for the first time in forever, there's actual impetus to move away from C and C++. That challenges the standards committee's usual stance that the status quo is acceptable. That's why we see Herb Sutter actually engaging with the issue of memory safety here. Compare that with Bjarne Stroustrup's earlier glib dismissal of these concerns, where his talk started with "The Case Against Switching Languages". Kinda shows where his priorities lie.


But he’s not engaging with the issue of memory safety here.

> Since at least 2014, Bjarne Stroustrup has advocated addressing safety in C++ via a “subset of a superset”:

> As of C++20, I believe we have achieved the “superset,” notably by standardizing span, string_view, concepts, and bounds-aware ranges. We may still want a handful more features, such as a null-terminated zstring_view, but the major additions already exist.

Sounds like Herb too believes that C++ is making good progress and that it’s a library issue. This is problematic when the default `[]` API that everyone uses has no bounds check. So then you change the compiler to have an option to always emit a bounds check. But then you don’t have an escape hatch when performance is important.
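To make the default concrete, here's a minimal sketch (the helper names are mine, purely illustrative): the subscript everyone writes does no checking, while the checked spelling has to be requested explicitly.

```cpp
#include <cstddef>
#include <stdexcept>
#include <vector>

// The default: no bounds check, an out-of-range index is undefined behavior.
int unchecked_get(const std::vector<int>& v, std::size_t i) {
    return v[i];
}

// The opt-in: .at() throws std::out_of_range on a bad index.
int checked_get(const std::vector<int>& v, std::size_t i) {
    return v.at(i);
}
```

The asymmetry is the complaint: safety is the spelling you have to remember, not the one you get by default.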

Herb is always defending against switching away from C++ and that C++ will solve the problems in a back compat way. They’ve been disrupted and they’ve taken a classical defensive approach instead of actually trying to compete which would require a massive restructuring of how C++ is managed as a language (e.g. coalescing the entire ecosystem onto a single front-end so that new language features only need to be implemented once). They need to be more radical in their strategy but that doesn’t gel with design by committee.


Fedora & downstream build with -D_GLIBCXX_ASSERTIONS, which enables bounds checking for many of those operator[] calls (including std::vector). For tight loops, GCC can often eliminate the bounds checks, at least if you use size_t (or size_type) for the loop iteration variable, not unsigned.
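A sketch of the loop shape this refers to (the function is illustrative; the flag itself is real): built with -D_GLIBCXX_ASSERTIONS, libstdc++ checks vector::operator[] at runtime, and with a size_t induction variable bounded by v.size(), GCC can usually prove the accesses are in range and elide the checks.

```cpp
// Build with: g++ -O2 -D_GLIBCXX_ASSERTIONS sum.cpp
#include <cstddef>
#include <vector>

long sum(const std::vector<long>& v) {
    long total = 0;
    // Index is a size_t bounded by v.size(), so the optimizer can
    // typically prove every v[i] is in range and drop the check.
    for (std::size_t i = 0; i < v.size(); ++i)
        total += v[i];
    return total;
}
```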


> This is problematic when the default `[]` API that everyone uses has no bounds check.

The default [] API can be replaced with C++ classes that do bounds checks. C++ 20 provides the std::array class to do precisely that, and std::span to implement fat pointers.

All that's missing to implement the subset-of-a-superset mode is a compiler option to disable native arrays in C++ code (but not in extern "C" code).


> The default [] API can be replaced with C++ classes that do bounds checks

Which means you subtly break the performance guarantees of existing code, which makes migration to a new version more annoying.

> C++ 20 provides the std::array class to do precisely that

What? https://en.cppreference.com/w/cpp/container/array/operator_a...

> Returns a reference to the element at specified location pos. No bounds checking is performed

> option to disable native arrays in C++ code

Yeah, no, native arrays aren't the only place where bounds checking shows up. Lots of places use pointers as iterators, because the language lets you. So even if you shut off those avenues, code that uses pointers as iterators would remain exploitable. Of course it's a step improvement, but there's just no way to close the barn door for C++ unless you sacrifice performance to such a degree that the obvious question becomes "why restricted C/C++?", when it still has a bunch of footguns, is now slow, and has a really inconsistent API and language surface.
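For instance (illustrative function, not from the thread), this is the idiom in question. Nothing ties the pointer to the extent of the allocation it points into, so if `len` is wrong there is no check anywhere to catch the walk off the end:

```cpp
#include <cstddef>

int sum_range(const int* p, std::size_t len) {
    int total = 0;
    // Raw pointers used as iterators: the bounds live only in `len`,
    // which the type system neither knows about nor verifies.
    for (const int* it = p; it != p + len; ++it)
        total += *it;
    return total;
}
```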


He’s suggesting adding bounds checks automatically by the compiler, which is vastly more than Stroustrup was recommending. He reckoned merely running sanitizers was sufficient. He wasn’t even taking the problem seriously, as if it’s a given that the world will continue using C++ no matter what.

The fact that Sutter is willing to sacrifice performance for safety means at least he has woken up to the reality that the future may hold less C++ code than the past.


But without a way to recoup the performance when you need it, C++ potentially becomes as slow as things like Go or Java, with extra footguns and slower developer speed. That's why Rust has `unsafe` and `unchecked` API methods that you can use in unsafe blocks to bypass bounds checks. And it's an extremely consistent API surface to deal with (not to mention a much better thread-safety story, which Herb hand-waves away as "not important because other languages also have thread safety issues" even though he admits no one is as bad as C++ here).


I feel someone asserting this is the first time there's been an impetus coming from the government to move away from C and C++ must be entirely unfamiliar with the history of Ada.


People citing Ada's failure are unfamiliar with the history of Ada, otherwise they would acknowledge that the reasons it did not take off outside the DoD weren't technical at all: the hardware requirements (Rational started as an Ada Machine company), the price of the compilers, Ada not being part of the standard UNIX SDK (UNIX vendors sold their Ada compilers as an additional expense), the hacker culture against bondage languages (as usually discussed on Usenet), and so on.


While they are similar, they are different: the move towards Ada was scoped purely at the Department of Defense. This situation is one where the government is also trying to work with and encourage practices in general industry.


I wonder how well that's going to work out. The software industry isn't exactly noted for taking technical advice from the White House...


In the request for comments before this was published, there was broad support from wide swaths of industry. Many organizations you've heard of are on board. Or at least, the public positions of their companies are; I don't know how well that translates to the rank and file.

I have to write up a post about this...


Please post it here when you do. I'd love to read it.


In this case the problem's a little harder than that. The advice is good advice that the industry is already happy to give to itself, but less happy to actually apply when there's some cost or education required.


Rust has an industry and a hobbyist ecosystem, whereas I’m not sure if Ada ever had the hobbyists on board.


It's perhaps a small community, but there sure are hobbyists using Ada (it's not a bad choice of a language for certain applications). With GNAT, Ada is quite accessible even. See, e.g. https://pico-doc.synack.me/


Only after GNAT came to be. Note that besides AdaCore, there are six other vendors still in business, with the typical defence contract prices; hardly easy to get hobbyists that way.


> Not even the most fervent memory safety advocates believe that it is the sole thing that will make software secure

You must be new to HN, yes?

;0


I sure see a lot of people claim that others do advocate that, but I rarely if ever see anyone actually advocate for that, and if they do, it's not someone who's representing any of the organizations advocating for this issue. It's a "make up a guy to get mad about" kind of situation.


The White House report should scare the shit out of anyone invested in better software. People think the report is a step in the right direction; it's not.

You do not want the blob of D.C. putting their eyes on anything that looks like a bottomless money pit for consultants and self-proclaimed experts. Once that happens and is codified to some degree in law, it's very difficult to change or remove.

This potentially affects every government system in existence, and those systems are already some of the most legacy systems today. The US Navy still pays Microsoft to maintain support for Windows XP, so the idea that this will happen on any less than a 25-year horizon is absurd. And even then, the dates can be extended. Why put a stop to the gravy train when it can keep going? It's not like the public is even aware of just how enormous the federal government really is. Once you understand that this is an opportunity to extract billions of dollars from large organizations, you then have to ask what their lobbyists will do to change the laws in their favor and completely neuter any legislation codified into law.

I haven't seen anyone even consider this highly likely, if not almost certain, outcome.


I've read the report and it seems balanced and fair. I liked that they tackled how improvements could be made on many fronts, taking different approaches in each one. They didn't go overboard with any assertion or recommendation.

On one hand you're saying the problem is intractable and it'll take 25 years to solve. Then why are you criticising an effort to get the ball rolling?

You're frustrated by the Government using old, outdated and possibly insecure software. Then surely the White House exhorting the Federal government to fix these issues and procure software without issues is a good thing?

Of course any change is an opportunity for consultants to make money, but that doesn't mean the change isn't needed or that the White House is wrong for starting it.


Because there is no clear objective success criterion.

Further, once consultants are being paid, they're disincentivized to actually accomplish this incredibly broad and nebulous goal. It allows politicians to campaign on more secure computing while never actually accomplishing anything, except profiting from a spouse being one of these consultants. And by never solving the problem, it continues, which only further justifies spending more money to "fix" the problem.

You might call this cynical, it's not, it's realistic. I challenge you to find an example where this level of corruption isn't taking place, and explain what makes you so confident that will be the case for this specific issue, drawing specifically on areas of contrast.

If you can overcome the government corruption, you still have to overcome the lobbyists. You can't do both except in situations where the corporations are in cahoots with the government.


You are so terrified of government, and yet you don't recognise how powerful government actually is. You're scared of some overreaching law and excessive waste, but actually that's not what's happening here. This is the White House using its Bully Pulpit to effect change. That is at once more effective in this particular case (because a law forcing the use of a language would be unconstitutional) and less harmful (because people can choose to ignore it).


This is a gross mischaracterization. I would encourage you to re-read both of my replies so as to best respond to the points about how government involvement does not actually solve problems, but instead perpetuates problems because of institutional corruption.

Ad hominem attacks are not a counterargument to these inescapable facts, they're also against the community guidelines and do very little to persuade anyone to your position: https://news.ycombinator.com/newsguidelines.html


I’ve read far more coherent anti-government polemics than the one you’ve written. Those didn’t convince me, and I doubt re-reading yours will. They’re so greyed out I can barely make them out anyway.

What you’ve mistaken as an ad hominem attack was me trying to tell you - even though you’re concerned of what the government may do here by passing laws, they’re doing much more with much less effort. I’m surprised that you were unable to grasp that.


What exactly do you think this report accomplishes?

It's not legally binding like Congressional legislation or an executive order.

Nobody has been prevented from using memory safe languages prior to the report being published. I'm sure there are plenty of instances where consultants and contractors have been required to use C or C++ because it's written into hundreds of thousands of pages of antiquated government contracts, but this report isn't a magic wand that's going to change those contracts. You have to convince the most stuffy lawyers imaginable to change them, which is an expensive endeavor that no reasonable business is going to undertake unless it affects their bottom line, and that only happens after Congress passes a law. And prior to a law, there's going to be an army of lobbyists ready to carve out a waiver system, rendering any hope of improving software quality moot.

I'm of the opinion it's merely a clarion call for D.C. parasites to invent ever-more creative ways to waste tax dollars. Without clear, objective success criteria defined by the government, the problem will persist indefinitely. And if you're a bureaucrat with friends and family making millions consulting on this problem, you're disincentivized to solve anything. Why cut off the hand that feeds you?

I'm eager to hear your thoughts about these undeniable problems.


This is what I mean. I said you had no idea how the government gets things done and you took it to heart, quoting the HN guidelines and everything. The government doesn't need to pass laws to get its way. Think on that for a second - we're taught that changes can only be made by laws, and yet the government is doing something here that involves no law being passed, no regulation issued. A simplistic libertarian who distrusts government might view this as a simple waste of time, but it's actually an effective way to get things done.

Like I tried to tell you, this is jawboning. Here's an example of various elected officials using it against a social media company (https://knightcolumbia.org/blog/jawboned). It really works, which should scare you more.

Next, I'll assume you've read the report [1] in full, every page, like I did. But I'll add relevant excerpts that demonstrate that this isn't about starting some "War on Memory Unsafety" (my words), but rather encouraging the software industry to adopt better practices, at no cost to the taxpayer.

- Building new products and migrating high-impact legacy code to memory safe programming languages can significantly reduce the prevalence of memory safety vulnerabilities throughout the digital ecosystem.xi To be sure, there are no one-size-fits-all solutions in cybersecurity, and using a memory safe programming language cannot eliminate every cybersecurity risk. However, it is a substantial, additional step technology manufacturers can take toward the elimination of broad categories of software vulnerabilities.

- Formal methods can be incorporated throughout the development process to reduce the prevalence of multiple categories of vulnerabilities. Some emerging technologies are also well-suited to this technique.xxvi As questions arise about the safety or trustworthiness of a new software product, formal methods can accelerate market adoption in ways that traditional software testing methods cannot. They allow for proving the presence of an affirmative requirement, rather than testing for the absence of a negative condition.

Then it talks about the role the CTO, CIO and CISO can play in an organisation to improve cybersecurity readiness.

- The CTOs of software manufacturers and the CIOs of software users are best leveraged to make decisions about the intrinsic quality of the software, and are therefore likely most interested in the first two dimensions of cybersecurity risk. In the first dimension, the software development process, the caliber of the development team plays a crucial role. Teams that are well-trained and experienced, armed with clear requirements and a history of creating robust software with minimal vulnerabilities, foster a higher level of confidence in the software they produce.xxxvi The competence and track record of the development team serve as hallmarks of reliability, suggesting that software crafted under their expertise is more likely to be secure and less prone to vulnerabilities.

- A CTO might make decisions about how to hire for or structure internal development teams to improve the cybersecurity quality metrics associated with products developed by the organization, and a CIO may make procurement decisions based on their trust in a vendor’s development practices.

- The CISO of an organization is primarily focused on the security of an organization’s information and technology systems. While this individual would be interested in all three dimensions of software cybersecurity risk, they have less direct control over the software being used in their environments. As such, CISOs would likely be most interested in the third dimension: a resilient execution environment. By running the software in a controlled, restricted environment such as a container with limited system privileges, or using control flow integrity to monitor a program at runtime to catch deviations from normal behavior, the potential damage from exploited vulnerabilities can be substantially contained.

So you, unlike the people who never read the report, would know that this report was all about educating firms on ways that they can become more secure. At no point does it talk about what the federal government might or might not do. It doesn't involve any spending, any corruption, any laws, any lobbyists, anything that people scared of Big Government might worry about. Not a single dollar spent.

And already it is having results. A few days later, Google published a report that broadly agrees with everything the White House is saying, and talking about their implementation plan. Especially for the millions of lines of C++ in the most used software among regular people - Android and Chrome. [2]

[1] - https://www.whitehouse.gov/wp-content/uploads/2024/02/Final-...

[2] - https://security.googleblog.com/2024/03/secure-by-design-goo...


Virtually everyone wants to improve software and make it more secure. We're approaching it from different angles and it's being confused for disagreement on the topic.

There's a lot of value in having good faith discussion from each perspective so we can mitigate downside risk while enhancing the upside goal. I'm eager to hear your thoughts on the undeniable problems restated in my previous replies.

Google is not a good example: they're technically competent and were already doing work in Rust. Small software shops still writing C++98-era code seem like a better place to focus mental energy. Are there any examples of those kinds of businesses using this WH report to steer their roadmaps or technology directions?


I can explain it to you but I can't understand it for you.

I've tried to show you how jawboning works, but you're still steeped in a mindset where the government "undeniably" coerces through legislation and regulation.

> I'm sure there are plenty of instances where consultants and contractors have been required to use C or C++ because it's written into hundreds of thousands of pages of antiquated government contracts

Could you show some instances of these contracts? That's on you, I can't prove a negative.

You're imagining that there must be legislation requiring C++ and therefore it's impossible to get a change away from C++ by just talking.

> there's going to be an army of lobbyists ready to carve out a waiver system, render any hope of improving software quality moot.

Now you're beginning to understand why they didn't go with a coercive law or regulation. When people are coerced, they demand carve-outs. When you ask nicely, like the White House has here, they may consider it. And there's nothing wrong with carve-outs per se. For example, thousands of ships, planes, and other systems are going to use SQLite as their database, and that's written in C. No sense in demanding a database in Rust because, frankly, SQLite is proven software, deployed on billions of devices in use today. It deserves a carve-out.

> Without clear objective success criterion defined by the government, the problem will persist indefinitely.

Why would you think this is the last you're ever hearing of this? This problem would take a decade to solve, at the most optimistic. Why are you demanding a perfect solution on day one? All they've done so far is point out ways the industry can do better. Maybe next year they change the procurement criteria for some defence contracts. Maybe the year after that they change the procurement criteria for all government contracts. They can try different things, iterate on them.

They can look at the success of industry initiatives like https://memorysafety.org in a couple of years and see if that's something they should invest in themselves.

> Small software shops still writing C++98-era code seem like a better place to focus mental energy. Are there any examples of those kinds of businesses using this WH report to steer their roadmaps or technology directions?

Are you asking if there are some small shops who have responded to this 3 week old report, completely changed the direction of their business and published a report about it? Even if they had, how would I have heard about it? Google's report reached the front page of HN and that's where I saw it, a small company would struggle to reach that kind of exposure.

Your clear distrust of government makes you unable to see that what they've done is a small, effective step in the long march towards improving software security. That's why you set impossible standards for them ("objective success criterion", "proof of small shops adopting it") and then immediately think you're correct when you see they fail to meet those standards. I can't change how you feel about government, so there's not much left to say. If you feel your "undeniable" points haven't been addressed, I'm not going to attempt it again.


Politicians are not campaigning on this at all. This is a niche topic that only impacts software developers. Nor is this setting out milestones for switching. It’s just advice saying “hey guys, consider other alternatives to C/C++”. It’s a social pressure - there’s no force of law behind this yet. And at most the government can only compel what their own vendors do.


So what else do you propose?


The table stakes here is automatic bounds checking. This is something that pretty much every newer language does already, and even several older languages figured out how to do well.

The problem in C/C++ is that pointers don't inherently communicate their bounds, so your options for adding automatic bounds checking are a) fat pointers and consequently (severe) ABI break; b) some sort of shadow memory to store bounds info (ASAN, generally considered inadvisable to use in production); or c) change the language to communicate what the bounds of a pointer are. The good news is that most interfaces will provide the bounds of a pointer as another member of the struct or the function parameter it's part of; the bad news is that actually communicating that information requires a scope lookup change that is hard to get through the committees.
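Option (a) can be sketched in a few lines (a hypothetical FatPtr type, roughly a bounds-checked std::span). The pointer and its length travel together, which also shows why deploying it everywhere is an ABI break: the type is twice the size of a raw pointer.

```cpp
#include <cstddef>
#include <stdexcept>

// Hypothetical fat pointer: address plus extent, so every access
// can be checked. Passing this where a T* is expected changes the
// calling convention, hence the ABI break.
template <typename T>
struct FatPtr {
    T* ptr;
    std::size_t len;

    T& operator[](std::size_t i) const {
        if (i >= len) throw std::out_of_range("FatPtr: index out of range");
        return ptr[i];
    }
};
```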


Things like CHERI, Fil-C, and CCured make pointers just carry their bounds.

It’s not an unfixable problem.

I wish we were talking about fixing it, not making excuses.


The problem with fixes on things this low-level is that they carry the potential to break lots of code. Since broken code has to be fixed, you then get into the "why not just rewrite it in <insert new hotness here>?" argument, which is headed off by just not fixing it.

C/C++ maintainers knew this and didn't want to see their lives' work made less significant. Now the issue's been forced by (among other things) one of the world's most influential software customers, the US Federal Government, implying that contract tenders for software written in languages like Rust will have an advantage over those written in languages that don't take memory safety as seriously.


CHERI claims that the amount of changes are exceedingly small.

Fil-C is getting there.

So, C has a path to survival.

> The problem with fixes on things this low-level is that they carry the potential to break lots of code. Since broken code has to be fixed, you then get into the "why not just rewrite it in <insert new hotness here>?" argument, which is headed off by just not fixing it.

“Lots” is maybe an overstatement.

Also, if there was a way to make C++ code safe with a smaller amount of changes than rewriting in a different language then that would be amazing.

The main shortcoming of CHERI is that it requires new HW. But maybe that HW will now become more widely demanded and so more available.

The main shortcoming of Fil-C is that it’s a personal spare time project I started on Thanksgiving of last year so yeah


> CHERI claims that the amount of changes are exceedingly small.

Oh, man. Yes, they do. Many people have been claiming that for decades.

When can we expect one of them to claim it's done?

(To be fair, the amount of changes required has been diminishing through those decades.)


I think the hardest part about CHERI is just that it's new HW. That's a tough sell no matter how seamless they make it.


CHERI has hardware in the form of ARM Morello and CHERI RISC-V running FreeBSD, so it's easy to check their claims.


CHERI is effectively a mix of option a and b in my categorization, necessitating hardware changes and ABI changes and limited amounts of software changes. I'm not familiar with the other options in particular, but they likely rely on a mix of ABI changes and/or software changes given the general history of such "let's fix C" proposals.

ABI breaks are not a real solution to the problem. When you talk about changing the ABI of a basic pointer type, this requires a flag day change of literally all the software on the computer at once, which has not been feasible for decades. This isn't an excuse; it's the cold hard reality of C/C++ development.

There is no solution that doesn't require some amount of software change. And the C committee is looking at fixing it! That's why C23 makes support for variably-modified types mandatory--it's the first step towards getting working compiler-generated bounds checks without changing the ABI and with relatively minimal software change (just tweak the function prototype a little bit).


Wouldn’t you have to recompile all your dependencies or run into ABI issues? For example, let’s say I allocate some memory & hand it over to a library that isn’t compiled with fat pointers. The API contract of the library is that it hands back that pointer later through a callback (e.g. to free or do more processing on). Won’t the pointer coming back be thin & lose the bounds check?


Compile everything memory safely and then no problem.


Fil-C sounds like an amazing project!

Do you have any guesses on whether it could easily target WebAssembly? I'd imagine many people would like to run C code in the browser but don't want to bring memory unsafety there.

link: https://github.com/pizlonator/llvm-project-deluge/blob/delug...


How much code out there does stuff to the effect of

  union MyObject {
    void* ptr;
    unsigned long data;
  };
  (...)
  MyObject obj;
  obj.ptr = (void*)some_function;
  (...)
  store_context(obj.data);
And what would happen to such code if pointers are suddenly fat?


CHERI handles that by dynamically dropping the capability when you switch to accessing memory as int.

Fil-C currently has issues with that, but seldom - maybe I've found 3 such unions while porting OpenSSL, maybe 1 when porting curl, and zero when porting OpenSSH (my numbers may be off slightly but it's in that ballpark).


The reason they don't communicate their bounds is also a performance optimisation. You can certainly opt in to checks in C++; use a std::vector, for example, and index into it with the .at() method and it'll throw an exception, unless you disable that with a compiler flag.

The thing is, it's fine to take that risk if you're writing HPC simulation software, but it's much less fine if you're writing an operating system or similar.


The performance and power use cost to checking bounds is trivial!

Apple has tested this, on mobile devices even, when working on -fbounds-safety. From the slides:

    System-level performance impact
    • Measurement on iOS
    • 0-8% binary size increase per project
    • No measurable performance or power impact on boot, app launch
    • Minor overall performance impact on audio decoding/encoding (1%)
    • System-level performance cost is remarkably low and worth paying for the security benefit
Some more specific synthetic benchmark suites reported ~5% runtime cost for bounds checking.

https://www.youtube.com/watch?v=RK9bfrsMdAM https://llvm.org/devmtg/2023-05/slides/TechnicalTalks-May11/...

Bounds checking being omitted due to performance is mostly a myth, the only time this should ever be believed is in very specific circumstances such as performance critical code and when the impact has actually been measured!


Whether it's trivial or not depends totally on the workflow. A 5% runtime cost can be enormous - when I was in academia I was running thousands of simulations on big clusters like ARCHER, some of which could take up to a fortnight to run. In those cases, a 5% cost can add a whole other working day to the runtime!


> Whether it's trivial or not depends totally on the workflow.

People here are talking about language defaults, and that the default should be safe, and while, yes, technically you can construe a workflow they're not going to work for, they work for most.

That doesn't prevent your ARCHER simulation from calling — hopefully only at sites that profiling indicates need it — .yolo_at(legit_index_totes) (or whatever one might call the method) & segfaulting after burning a few days worth of CPU time away.


Do you believe that is a common case, or an exceptional one?


I don't think it's particularly exceptional for the sorts of people who are still using C++ (and making a conscious decision to do so over Rust, for example).

If you're writing 'standard' C++ these days, you're probably already making use of std::array, std::vector, etc. anyway. The only modern codebases where I've not seen so much of that are HPC stuff and embedded.


Yeah, “also” a performance optimization.

It’s also just legacy. We’ve always done it that way so we still do it that way for ABI compat and because it’s hard to find a compiler that does it any other way.

Imagine if the story was: “you totally can have a bounds on your ptrs if you pass a compiler flag and accept perf cost”.

I bet some of us would find that useful.


> You can certainly do it in C++

You can do it in C as well, although it's a lot clunkier. I've been doing so for decades when the effort is appropriate to the task.


The problem is fixable in C++. std::span is the fat pointer; std::array is the checked array. All that's missing is a compiler option that gives warnings/errors when the legacy native [] features are used.

C is probably unfixable. But that's a different language.

Presumably compilers would allow conversion of spans to native pointer arguments when calling functions declared as extern "C".


Visual Studio does exactly that, yet most devs don't care until the government steps in.


The problem is existing practice. GCC solved this problem for function parameters long ago with parameter forward declarations. But other compilers didn't copy this GNU extension, and nothing else really emerged... This makes it hard to convince the committee to adopt it.

In structs there is no existing extension, but a simple accessor macro that casts to a VLA type works already quite well and this can be used by refactoring existing code.

There are still some holes in UBSan, but otherwise I think you can write spatially memory-safe C more or less today without problem. The bigger issue is temporal safety, so the bigger puzzle piece still missing is a pointer ownership model as enforced by the borrow checker in Rust.


> There are still some holes in UBSan, but otherwise I think you can write spatially memory-safe C more or less today without problem.

I wouldn't call it a solved problem until gcc and clang have an auto-inserts-bounds-check flag that does the equivalent of a Rust panic on every array access if it's out-of-bounds, is considered usable on production code [1], and works on most major projects (that care enough to change their source to take advantage of this flag). Overall, the problem isn't so much that we don't know how to write safe C code, it's that the compiler doesn't quite have enough information to catch silly programmer mistakes, and the current situation is juuuuust bad enough that we can't feasibly make code that doesn't tell the compiler enough error out during compilation.

> The bigger issue is temporal safety, so the bigger puzzle piece still missing is a pointer ownership model as enforced by the borrow checker in Rust.

Temporal safety is interesting in part because it's not clear to me that there currently exists a good solution here. The main problem, like existing partial solutions for spatial memory safety, is that the patterns to make it work well are known, but programmers tend to struggle to apply all of the rules correctly. Rust's borrow checker is definitely a step up from C/C++, but at the same time, there are several ownership models that it struggles to be able to express correctly, even if you ignore the many-readers-xor-one-writer rule that it also imposes. Classic examples are linked lists or self-referential structs, but even something like Windows' IOCP can trip up Rust's lifetime system.

Although, at the very least, a way to distinguish between "I'm only going to use this pointer until the end of the function call" and "I'm going to be responsible for freeing this pointer you give me, please don't use it any more" would be welcome to have, even if it is a very partial solution.

[1] Don't get me wrong: the development of the sanitizers is an important and useful tool for C/C++, and I strongly encourage their use in test environments to catch issues. It's just that they don't meet the bar to consider the issue solved.


Sanitizers without a runtime, i.e. -fsanitize=bounds -fsanitize-trap=bounds, can be used in production. And I think they can be used on existing projects by refactoring. Catching this at compile time would be better, but Rust also can't do it, and it is not needed for memory safety. And I think the solution C converges to (dependent types) would actually allow this in many cases in the future, while it is difficult without them. I fully agree with your other points.


The defensiveness is entirely understandable. There's a very vocal contingent of the industry who is increasingly hostile to anyone who dares to say that C/C++ isn't pure evil. Defensiveness is the natural reaction to that sort of thing.


Instead of defensiveness, why not talk about the ways in which the C++ committee is changing how they're operating (or even leaving ISO) and changing their culture to shore up these things.

Look at past Scott Meyers talks (e.g. [1]). He highlights how the committee has an arbitrary set of principles that can be applied to justify or reject any proposal, and how the inconsistencies in the language are a reflection of this.

This isn't a problem of the language itself but rather of design by committee with 4 major front-end implementations (Intel, MSVC, Clang, GCC, although I believe Intel is standardizing on the Clang front-end at least). Organizational issues like that are tough to spot, but it's been clear for several years now that Rust is going to beat C++ silly if the C++ committee doesn't clean up its act and steer the C++ community a bit better (e.g. still no ABI specification, no improvements to macros, no standardized build system, modules are a joke, no standardized package system, etc.). They're not effective stewards, not least because they can't even take good, impactful ideas from Rust and copy them.

[1] https://www.youtube.com/watch?v=KAWA1DuvCnQ


C and C++ could both easily add this feature:

https://www.digitalmars.com/articles/C-biggest-mistake.html

which is adding slices to the native language. This will eliminate buffer overflow errors (if the user uses that feature). D has had this from the beginning, and while it doesn't cover everything, that feature alone has resulted in an enormous reduction in memory safety errors.

BTW, D also has a prototype ownership/borrowing system.


C++20 added std::span, which is essentially that.


Without bounds checking.


Hey. Be reasonable. That's coming in C++26 via the `.at` interface that no one actually uses, because `[]` is more natural, shorter (2.5x shorter), and more convenient.


Yeah, that is exactly the problem. :)


> although there should be more of an industry effort to make sure that all tokens are identifiable

There is, BTW: https://datatracker.ietf.org/doc/html/rfc8959

Getting people to use the standard is another matter.



