Improvements to static analysis in GCC 14 (redhat.com)
396 points by dmalcolm | 143 comments



To me, -fanalyzer is one of GCC's killer features over Clang. It makes programming in C much easier by explaining errors. The error messages have also begun to feel similar to Rust's in terms of being developer friendly.
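For illustration, this is the kind of bug -fanalyzer diagnoses with a full event trace (a minimal sketch; the function name is made up):

    #include <stdlib.h>

    /* build with: gcc -fanalyzer -c example.c */
    int use_after_free(void)
    {
        int *p = malloc(sizeof *p);
        if (!p)
            return -1;
        *p = 42;
        free(p);
        return *p;  /* -fanalyzer warns about the use after 'free' of 'p' */
    }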


I know Rust (esp on HN) is very hyped for its memory safety and nice abstractions, but I really wonder how much Rust owes its popularity to its error messages.

I would say the #1 reason I stop learning a technology is frustrating or unclear errors.

EDIT: Getting a bit off topic, but I meant more that I love C and would love it even more with Rust-level error messages.


Clang already had decent error messages by the time Rust stabilized. There's simply not much you can do at runtime to explain a segfault.


Not when you called templated functions and were greeted with compile-time template stack traces. Or you called overloaded functions and were presented with 50 alternatives you might have meant. The language is inherently unfriendly to user-friendly error messages.


I agree, and I'd go a step further:

In my opinion, the complexity of the interactions between C++'s {preprocessor, overload resolution, template resolution, operator overloading, and implicit casting} can make it really hard to know the meaning of a code snippet you're looking at.

If people use these features only in a very limited, disciplined manner it can be okay.

But on projects where they don't, by golly it's a mess.

(I suppose it's possible to write a horrible mess in any language, so maybe it's unfair for me to pick on C++.)


[flagged]


I’m talking about C++. You wrote that Clang already had friendly error messages. While they were less unfriendly than GCC, calling them friendly is a stretch.

Rust having traits instead of templates is a big ergonomic improvement in that area.


Funnily enough, trait bounds are still a big pain in the neck to provide good diagnostics for, because of the amount of information that needs to be tracked across stages of the compiler that, under normal operation, don't need to talk to each other. They got better in 2018, as async/await put them even more front and center and focused some attention on them, and a lot of work on keeping additional metadata around has been done since then (search the codebase for enum ObligationCauseCode if you're curious). Now, with the new "next" trait solver, they have a chance to get even better.

It's still easier than providing good diagnostics for template errors though :) (although I'm convinced that if addressing those errors were a high priority, common cases of template instantiation could be modeled internally in the same way as traits, purely for diagnostics, and materially improve the situation. I understand why it hasn't happened; it is hard and not obviously important).


ASan seems to do quite a lot.


That's what makes me wary of modifying my NixOS config. A single typo and you get an error dump comparable to C++03 templates.


That's definitely the most painful part of iterating on Nix code for me, even in simple configs. You eventually develop an intuition for common problems and rely more on that than on deciphering the stack traces, but that's really not ideal.


Actually, that's a reason why I never even touched Nix. Besides being functional and all the hype, the syntax and naming of the language feel ad hoc enough that it never caught on for me...


It's what got me pissed off enough with xmonad to discard it.


... but you do get an error. That's a lot better than what you typically get with C or C++. Assuming it's valid syntax, of course.

This is veering off topic, but I do agree that Nix-the-language has a lot of issues.

(You might suggest Guix, but I don't want to faff about with non-supported repositories for table stakes like firmware and such. Maybe Nickel will eventually provide a more pleasant and principled way to define Nix configurations?)


My favourite Nix error message is

    infinite recursion encountered, at undefined position


I tried some kind of BBC micro at a computer museum, and found out that if you had an error anywhere in your BASIC program, it would just print "error". No line number, no hint at what the problem was.


I could understand some ancient system not having the detail or knowledge to explain what happened, but this is something that still happens in a lot of Microsoft software in particular.

Outlook has a consistent tendency to give you errors like "Couldn't get your mail for some reason", or Windows saying "Hey networking isn't working". No "connection timed out" or "couldn't get an IP address" or "DNS lookup failed" or any other error message that is possible to diagnose. Even the Windows network troubleshooting wizard (the "let us try to diagnose why things aren't working for you" process) would consistently give me "yeah man idk" results, when the error is that I'm not getting an address from DHCP and should be extremely easy to diagnose.

I get that in a lot of cases, problems cut across lots of errors or areas of responsibility, and getting some other team making some other library to expose their internals to your application might be difficult in an environment like Microsoft, but it's just inexplicable that so much software, even these days, resorts to "nope, can't do it" and bails out.


Haha, reminds me of some Scheme interpreter that would just say something like 'missing paren' at position 0 or EOF depending on where the imbalance was :)

... but, yeah... I'm pretty sure there could be some hints as to whereabouts that infinite recursion was detected.


Arguably Rust got good error messages by learning from Elm: https://elm-lang.org/news/compiler-errors-for-humans


Elm is acknowledged as being the initial inspiration for focusing on diagnostics early on, but Rust got good error messages through elbow grease and focused attention over a long period of time.

People getting used to good errors and demanding more, is part of the virtuous circle that keeps them high quality.

Making good looking diagnostics requires UX work, but making good diagnostics requires a flexible compiler architecture and a lot of effort, nothing more, nothing less.


Rust's eye towards errors predates Elm entirely.



> I would say the #1 reason I stop learning a technology is frustrating or unclear errors.

Overly verbose error messages that obscure more than they illuminate are a chief complaint against C++.

Honestly, they can just sap all the energy out of a project.


"You violated a template rule. Here's a novel on everything that's broken as a result"

It's why the Constraint system was important for C++.


Yeah Rust is popular because it's a practical language with a nice type system, decent escape hatches, and good tooling. The borrow checker attracts some, but it could have easily been done in a way with terrible usability.


> The borrow checker attracts some, but it could have easily been done in a way with terrible usability.

Why would anyone use the resulting language over C? What you're describing is C with a slightly friendlier compiler.


I have never heard C described as having a good type system.


To this day, many C programmers believe that strong typing just means pounding extra hard on the keyboard.

Peter van der Linden, "Expert C Programming"


"Strongly typed, weakly checked". Which is a funny way to say "Not strongly typed" or perhaps more generously "The compilers aren't very good and neither are the programmers but other than that..." (and yes I write that as a long time C programmer)

But hey, C does have types:

First it has several different integers with silly names like "long" and "short".

Then it has the integers again but wearing a Groucho mask and with twice as many zeroes, "float" and "double".

Then an integer that's probably one byte, unless it isn't, in which case it is anyway, and which doesn't know whether it's signed or not, "char".

Then a very small integer that takes up too much space ("_Bool" aka bool)

Finally, though, it does have types which definitely aren't integers; unfortunately they participate in integer arithmetic anyway, and many C programmers believe they're integers, but the compiler doesn't, so that's... well, it's a disaster. I speak of course of the pointers.


You could try to argue this is the only source of Rust's popularity... or you could admit that the borrow checker is in fact a reason why folks use Rust over C.


The hard problem with C is that it's hard to tell if what the programmer wrote is an error. Hence warnings... which can be very hit or miss, or absurd overkill in some cases.

(Signed overflow being a prime example where you really either just need to define what happens or accept that your compiler is basically never going to warn you about a possible signed overflow -- which is UB. The compromise here by Rust is to allow one to pick between some implementation defined behaviors. That seems pretty sensible.)
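For what it's worth, GCC and Clang also provide overflow-reporting builtins that give fully defined behavior without a sanitizer; a minimal sketch:

    #include <limits.h>
    #include <stdio.h>

    int main(void)
    {
        int a = INT_MAX, b = 1, sum;
        /* __builtin_add_overflow returns true if the result wrapped */
        if (__builtin_add_overflow(a, b, &sum))
            puts("overflow detected");
        else
            printf("%d\n", sum);
        return 0;
    }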


For signed overflow I use -fsanitize=signed-integer-overflow.


Good. I wonder how many people do and also if their compilers support it. (One would hope so, of course. I assume clang and GCC do.)

... but the question is really what you ship to production.

Btw, possible signed overflow was just an example of things people do not want warnings for. OOB is far more dangerous, obviously... and the cost of the sanitizer in that case is HUGE... and it doesn't actually catch all cases, AFAIUI.


For OOB you can enable bounds checking in the C++ standard library. That's relatively cheap. Of course it won't help with raw C pointers and C arrays.


For production one could use -fsanitize-undefined-trap-on-error, which turns the checks into traps. I would not describe the cost of -fsanitize=bounds as huge. The cost of ASan is huge.
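To illustrate, a minimal sketch of the kind of OOB access these options catch (the file and function names are made up):

    /* oob.c: build with gcc -fsanitize=bounds -fsanitize-undefined-trap-on-error oob.c */
    int read_past_end(int i)
    {
        int a[4] = {1, 2, 3, 4};
        return a[i];  /* i == 4 is out of bounds; with the flags above this traps at runtime */
    }

    int main(void)
    {
        return read_past_end(4);
    }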


Clang has a similar tool, the Clang Static Analyzer: https://clang-analyzer.llvm.org/


I've found it to have quite poor defaults for its analysis (things like suggesting "use the Annex K strcpy_s instead of strcpy"). -fanalyzer is still by far the easiest to configure.


And it has had it for much, much longer than GCC.


I have had the exact opposite experience: clang constantly gives me much better error messages than GCC, implementations of some warnings or errors catch more cases, and clang-tidy is able to do much better static analysis.


"Copilot explain this error" has made this whole discussion irrelevant for me.


An issue is immediacy: problems are better the earlier they are pointed out (which is why in-editor errors are better than compile errors, which are better than CI errors, which are better than runtime errors). Having to copy-paste an error adds a layer of indirection that gets in the way of the flow.

Another is reproducibility and accuracy: LLMs have a tendency to confidently state things that are wrong, and to say different things to different people. The compiler has the advantage of being deterministic, and it generally has a better understanding of what's going on, so it can produce correct suggestions (although we still have cases of incorrect assumptions producing invalid suggestions, I believe we have a good track record there).

If those tools help you, more power to you, but I fear their use by inexperienced rustaceans being misled (an expert can identify when the bot is wrong, a novice might just end up questioning their sanity).

Side note: the more I write, the more I realize that the same concerns I have with LLMs also apply to the compiler in some way, and I am trying to bridge that cognitive dissonance. I'm guessing that the reproducibility argument (ensuring the same good error triggers for everyone who makes the same mistake) and the lack of human curation are what make me uneasy about LLMs for teaching languages.


FYI, in VS Code, you highlight the error in the terminal, right click and select "copilot explain this." One less layer of indirection. In C++, I ultimately only end up using it for 10% of the errors, but because it's the type of error with a terrible message, copilot sees through it and puts it in plain English.

I was so impressed with GPT-4's ability to diagnose and correct errors that I made this app to catch Python runtime errors and automatically have GPT-4 inject the correction: https://github.com/matthewkolbe/OpenAIError


Certainly for the only new diagnostic I wrote for Rust, I expect an LLM's hallucinations are likely to have undesirable consequences. When you write 'X' where we need a u8, my diagnostic says you can write b'X', which is likely what you meant, but the diagnostic deliberately won't do this if you wrote '€' or '£' or numerous other symbols that aren't ASCII, because b'€' is an error too, so we wouldn't be helping you by advising you to write that; you need to figure out what you actually meant. I would expect some LLMs to suggest b'€' there anyway.


This reminds me of one of the reasons I hated C++ so much: 1000+ lines of error messages about template instantiation instead of 'error: missing semicolon'.


In our programming class in high school we were using Borland C++; I had a classmate call me over to ask about an error they were getting from the compiler.

> "Missing semicolon on line 32"

I looked at it, looked at them, and said "You're missing a semicolon on line 32". They looked at line 32 and, hey! look at that! Forgot a semicolon at the end. Added it and their program worked fine.

Even the best error messages can't help some people.


I'm quite surprised to hear this. What do you get from GCC's analyser that Clang's static analyser doesn't already report?

I tried to use GCC's analyser several times, but I couldn't find any good front ends to it that make the output readable. Clang has multiple (reasonably good HTML output, CodeChecker, Xcode integration, etc.). How do you read the output?

Furthermore, I find that GCC produces many more false positives than Clang.


While I wish GCC would implement integrations and/or a language server, I usually do C programming in the terminal (with entr to trigger automatic rebuild on save).

I do find some false positives, but there haven't been enough of them to be a deal breaker for me. Aside from what I mentioned about the errors being descriptive, I do like the defaults and that it's part of the compilation process.

For example, the possible-NULL-from-malloc warning is on by default (which I don't think is the case in Clang).


I'm quite surprised that clang doesn't have static analysis! That doesn't seem right, but I don't program much in C anymore.


It does. However, it catches somewhat different things.


36 more comments in this other thread:

https://news.ycombinator.com/item?id=39918278 ("GCC 14 Boasts Nice ASCII Art for Visualizing Buffer Overflows (phoronix.com)", 2 hours ago)


A few months ago I made a neat little linux utility.

It was a drop-in replacement shim for an arbitrary executable: it would pretend to be the original when invoked, fork off the original, and hook up to its stdout and stderr.

The error output was then fed to a custom GPT assistant that knew what program the errors came from. That assistant was tasked with turning the original errors into friendly human readable form. The output from the assistant was then sent out of the shim stderr.
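A minimal sketch of the mechanism, using popen as a stand-in for the fork/pipe plumbing (the path to the renamed original is made up, and real argument forwarding would need proper quoting):

    #include <stdio.h>

    int main(void)
    {
        /* Run the renamed original binary, merging its stderr into a pipe we can read */
        FILE *out = popen("/usr/local/libexec/gcc-real 2>&1", "r");
        if (!out)
            return 127;
        char line[1024];
        while (fgets(line, sizeof line, out)) {
            fputs(line, stderr);  /* pass the original output through... */
            /* ...and here the captured text would go to the LLM for rephrasing */
        }
        return pclose(out);
    }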

It worked very well, but then I got really sick and wasn't able to work on it anymore.

I was using it for GCC / Clang errors because I had become tired of staring at heavily nested compiler dumps for concept/template issues, but you could use it for anything of course.

It would be a nice project for someone to build again, do it properly and generalize it since it doesn't look like I am going to be bouncing around again for a while.


I wish there was a better output format for the analysis, because this is hell for screen readers.


FWIW I implemented SARIF output in GCC 13, which is viewable in e.g. VS Code (via a plugin), though the ASCII art isn't.

You can see an example of the output here: https://godbolt.org/z/aan6Kfxds (that's the first example from the article, with -fdiagnostics-format=sarif-stderr added to the command-line options)
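For reference, an invocation would look something like this (the source file name is made up):

    gcc -fanalyzer -fdiagnostics-format=sarif-stderr -c infoleak.c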

I experimented with SVG output for the diagrams, but didn't get this in good enough shape for GCC 14.


I find that the biggest practical issue with using GCC's analyser is that it's just so darn difficult to set it up and have readable output. Have you considered focusing on this a bit more? Writing documentation (or even producing tools) for integrating analysis into one's usual workflow, integrating with common build systems (e.g. CMake), making the output properly readable in the context of the source code? I feel that at this point this would be much more helpful than ASCII art or more kinds of warnings ...

I did attempt to use the SARIF output just after your previous blog post a year ago, with a CMake-based project, but after an hour I still wasn't able to get the warnings to show within the context of the source code.

The -fdiagnostics-format option isn't even in the GCC documentation, your blog post is the only place where I saw it mentioned.

In short: I'd love to use GCC's analyser, and tried it several times, but my bottleneck is usability, ease of setup, and a proper interface to help sort out the many false positives from the true issues.


    if (nbytes < sizeof(*hwrpb))
        return -1;
    
    if (copy_to_user(buffer, hwrpb, nbytes) != 0)
        return -2;
The fix that was done was:

    if (nbytes > sizeof(*hwrpb))
But I think the correct fix is:

    if (copy_to_user(buffer, hwrpb, sizeof(*hwrpb)) != 0)
It never makes sense to copy out of the hwrpb pointer any size other than sizeof(*hwrpb).


If the caller passes nbytes = 4 and sizeof(*hwrpb) is now 16 bytes, then you will be copying 12 bytes too many into the caller's buffer, potentially writing into memory it doesn't own. I would say that should be avoided.

The better solution, I believe, would be to copy only the minimum number of bytes supported by both caller and callee. So:

    nbytes = MIN(nbytes, sizeof(*hwrpb));

Which should ensure backwards and forwards compatibility; assuming the version info in hwrpb->size is respected, the fact that part of the hwrpb struct isn't initialized shouldn't matter.
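Put together, a sketch of that approach in the context of the snippet above (a kernel fragment, not standalone code):

    /* Copy at most what both caller and callee support */
    size_t n = nbytes < sizeof(*hwrpb) ? nbytes : sizeof(*hwrpb);
    if (copy_to_user(buffer, hwrpb, n) != 0)
        return -2;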


Right, but the size of the buffer is given, it doesn't make sense to stomp over end of the callers buffer either, so you can't use pass in something longer than `nbytes` either.


That's what the original check is for:

    if (nbytes < sizeof(*hwrpb))
If the buffer isn't large enough to hold *hwrpb, then it already fails. The original check was good; only the number of bytes copied needed to change, to sizeof(*hwrpb).


No, because if nbytes > sizeof(*hwrpb), your version causes the kernel to only write part of the buffer, and then when the app accesses fields at the end of the struct, it would read uninitialized data, which is very bad.

Recall that the API is intended to be used like this:

    struct hwrpb buf;
    getsysinfo(GSI_GET_HWRPB, &buf, sizeof(buf), /* .. */);
At first glance, it might seem unnecessary to pass the buffer size at all, because in theory the user and kernel should agree on what sizeof(struct hwrpb) is. But it is passed because there are various reasons why the separately compiled kernel and user binaries might disagree (e.g., incorrect compiler flags, the wrong header file being used, the struct having changed between versions, etc.), and it's useful to detect that. So you can make an argument that the most conservative check is:

    if (nbytes != sizeof(*hwrpb)) return -1;
After all, if the user and kernel disagree on the correct size of the struct, then something is wrong! But allowing nbytes < sizeof(*hwrpb) has the benefit that the kernel developers can add fields at the end of the struct without breaking backward compatibility with older applications.

I would agree with you if the kernel had some other mechanism to pass the size of the buffer that was actually filled to the client (like e.g. the read() syscall does) but the getsysinfo() API doesn't return that data, so the kernel must either fill the buffer entirely or return failure.


> No, because if nbytes > sizeof(*hwrpb), your version causes the kernel to only write part of the buffer, and then when the app accesses fields at the end of the struct, it would read uninitialized data, which is very bad.

> I would agree with you if the kernel had some other mechanism to pass the size of the buffer that was actually filled to the client (like e.g. the read() syscall does) but the getsysinfo() API doesn't return that data, so the kernel must either fill the buffer entirely or return failure.

As you mention, this struct is versioned. Userspace can tell how much of the struct was filled by checking the size field (hwrpb->size).

> But allowing nbytes < sizeof(*hwrpb) has the benefit that the kernel developers can add fields at the end of the struct without breaking backward compatibility with older applications.

That's a related but separate issue. Backward compatibility can be handled by switching on nbytes or by copying fewer bytes with a carefully designed struct. It's not clear that backward compatibility was the original intention of this code, the original intention more seems to be sanitizing tainted input. This struct has not changed in at least 16 years.


The original less-than check was deemed incorrect, and was replaced entirely. For good or for ill, it seems the author deems it valid to pass in a value smaller than sizeof *hwrpb, and that many bytes will be dutifully copied. This might form part of some barebones API versioning mechanism.


> The original less-than check was deemed incorrect

It was only deemed incorrect because of an information leak. Not because it's a valid use-case for user space to copy smaller portions of *hwrpb into user space. https://github.com/torvalds/linux/commit/21c5977a836e399fc71...


It's really great. The sheer amount of work is huge. It seems the difficulty level is on par with introducing fat pointers/array views into the stdlib and the C standard.


-Wstringop-overflow is the first warning I disable because of all the false positives.

I doubt the analyze variant would fare any better.


Isn't that sort of like pulling the battery out of your carbon monoxide detector because the constant beeping is giving you a headache and making you sleepy?


No. -Wstringop-overflow is really broken with a huge amount of false positives.

At $JOB we disable it on a line by line basis, but I'm not sure it is worth the effort.


I disable it in normal builds, but track new occurrences in builds made specifically for warnings with a high false-positive rate that are still potentially useful.


Very cool stuff!

I haven't done much C development lately, so I'm curious how often `strcpy` and `strcat` are used. Last I checked they're almost as big no-nos as using goto. (Yes, I know goto is often preferred in kernel dev...) Can anyone share on how helpful the c-string analyses are to them?


The use of goto is unambiguously correct and elegant in some contexts. Unwavering avoidance of goto can lead to unnecessarily ugly, convoluted code that is difficult to maintain. It usually isn't common but it has valid uses.

While use of functions like `strcpy` are less advisable, there are contexts in which they are guaranteed to be correct unless other strong (e.g. language-level) invariants are broken, in which case you have much bigger problems. In these somewhat infrequent cases, there is a valid argument that notionally safer alternatives may be slightly less efficient for no benefit.


strcpy and friends don't really have any benefits beyond just being there. The "safer" versions are still unsafe in many cases, while being less performant and more annoying to use.

Writing a strbuffer type and associated functions isn't particularly hard and the resulting interface is nicer to use, safer, and more efficient.
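A minimal sketch of such a type (the names are made up, and it assumes you start from a zero-initialized strbuf):

    #include <stdlib.h>
    #include <string.h>

    struct strbuf {
        size_t len;
        size_t cap;
        char *data;  /* kept NUL-terminated for easy interop with C APIs */
    };

    /* Append n bytes of s, growing geometrically; returns 0 on success, -1 on OOM */
    static int strbuf_append(struct strbuf *sb, const char *s, size_t n)
    {
        if (sb->len + n + 1 > sb->cap) {
            size_t cap = sb->cap ? sb->cap * 2 : 16;
            while (cap < sb->len + n + 1)
                cap *= 2;
            char *p = realloc(sb->data, cap);
            if (!p)
                return -1;
            sb->data = p;
            sb->cap = cap;
        }
        memcpy(sb->data + sb->len, s, n);
        sb->len += n;
        sb->data[sb->len] = '\0';
        return 0;
    }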


I argue strview (non-owning) is almost always what is needed. Most of string operations are searching and slicing.
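A sketch of the idea (names made up); slicing is just pointer arithmetic, with no copies:

    #include <stddef.h>

    struct strview {
        size_t len;
        const char *data;  /* NOT NUL-terminated; always use len */
    };

    static struct strview sv_slice(struct strview sv, size_t start, size_t end)
    {
        struct strview out = { 0, sv.data };
        if (start <= end && end <= sv.len) {
            out.data = sv.data + start;
            out.len = end - start;
        }
        return out;
    }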


You also need a strview. Not really relevant for avoiding strcpy and strcat though.


> The use of goto is unambiguously correct and elegant in some contexts.

For C, absolutely. For C++, it's likely a footgun.


It has fewer use cases in C++ but it still has use cases where the alternatives are worse.


What is a C++ use case where RAII doesn't solve the problem better? I imagine one exists, but I've never encountered it in 20 years. Conversely, I've seen it used inappropriately for cleanup many times (which would be fine in C).


Some usage of goto is still idiomatic in C if used in ways logically equivalent to structured programming constructs C lacks. It requires some care, but I mean, it's C.

(I'm not however fond at all of longjmp)


> (I'm not however fond at all of longjmp)

I don't think there is any justifiable reason to use setjmp/longjmp in modern C code. At best it's a crude imitation of throw/catch semantics; if you really want that, C++ has a real implementation.


There's nothing wrong with simple usages of goto.

The strxcpy family on the other hand is complete garbage and should never be used for any reason. I'm horrified that they're used in the kernel at all. All of those functions (and every failed attempt at "fixing" them) should have been nuked from orbit.


This is the approach taken in git https://github.com/git/git/blob/master/banned.h


> There's nothing wrong with simple usages of goto

Indeed, I like a few gotos here and there for doing cleanup toward the end of a function.
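For instance, the classic cleanup ladder (a minimal sketch; names made up):

    #include <stdio.h>
    #include <stdlib.h>

    int process_file(const char *path)
    {
        int ret = -1;
        char *buf = NULL;
        FILE *f = fopen(path, "rb");
        if (!f)
            goto out;
        buf = malloc(4096);
        if (!buf)
            goto close_file;
        if (fread(buf, 1, 4096, f) == 0)
            goto free_buf;
        ret = 0;  /* success */
    free_buf:
        free(buf);
    close_file:
        fclose(f);
    out:
        return ret;
    }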


Or to break out of nested loops. The problem is with unstructured goto spaghetti making the code impossible to follow without essentially running it in your head (or a debugger).

Goto + Switch (or the GCC computed goto extension) is also a wonderful way to implement state machines.
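A sketch of the nested-loop case (values made up):

    /* Search a 3x3 grid; goto is the cleanest way out of both loops at once */
    static int find(int grid[3][3], int target, int *row, int *col)
    {
        for (int i = 0; i < 3; i++)
            for (int j = 0; j < 3; j++)
                if (grid[i][j] == target) {
                    *row = i;
                    *col = j;
                    goto found;
                }
        return 0;  /* not found */
    found:
        return 1;
    }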


What's wrong with `strncpy`?


strncpy won't always write a trailing nul byte, causing out of bounds reads elsewhere. It's a nasty little fellow. See the warning at https://linux.die.net/man/3/strncpy

strlcpy() is better, and is what most people think strncpy() is, but it still results in truncated strings if not used carefully, which can also lead to big problems.
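A tiny demonstration of the missing terminator (fragment; includes omitted):

    char buf[4];
    strncpy(buf, "hello", sizeof buf);  /* buf is now "hell" with NO terminating NUL */
    printf("%s\n", buf);                /* reads past the end of buf */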


Speaking of strlcpy, Linus has some colorful opinions on it:

> Note that we have so few 'strlcpy()' calls that we really should remove that horrid horrid interface. It's a buggy piece of sh*t. 'strlcpy()' is fundamentally unsafe BY DESIGN if you don't trust the source string - which is one of the alleged reasons to use it. --Linus

Maybe strscpy is finally the one true fixed design to fix them all. Personally, I think the whole exercise is one of unbelievable stupidity, when the real solution is obvious: using proper string buffer types with length and capacity for any sort of string manipulation.


> the real solution is obvious

If it were obvious it would have been done already. Witness the many variants that try to make it better but don't.

> using proper string buffer types with length and capacity

Which you then can't pass to any other library. String management is very easy to solve within the boundaries of your own code. But you'll need to interact with existing code as well.


> If it were obvious it would have been done already. Witness the many variants that try to make it better but don't.

Every other language with mutable strings, including C++, does it like that. It is obvious. The reason it is not done in C is not ignorance, it is laziness.

> Which you then can't pass to any other library. String management is very easy to solve within the boundaries of your own code. But you'll need to interact with existing code as well.

Ignoring the also obvious solution of just keeping a null terminator around (see: C++ std::string), you should only worry about it at the boundary with the other library.

Same as converting from utf-8 to utf-16 to talk to the Windows API for example.


> The reason it is not done in C is not ignorance, it is laziness.

Of course not. C has been around since the dawn of UNIX and the majority of important libraries at the OS level are written in it.

Compatibility with such a vast amount of code is a lot more important than anything else.

If it were so easy why do you think nobody has done it?

> Ignoring the also obvious solution of just keeping a null terminator around

That's not very useful for the general case. If your code relies on the extra metadata (length, size) being correct and you're passing that null-terminated buffer around to libraries outside your code, it won't be correct since nothing else is aware of it.


> If it were so easy why do you think nobody has done it?

People have done it, there are plenty strbuf implementations to go around. Even the kernel has seq_buf. How you handle string manipulation internally in your codebase does not matter for compatibility with existing libraries.

> That's not very useful for the general case. If your code relies on the extra metadata (length, size) being correct and you're passing that null-terminated buffer around to libraries outside your code, it won't be correct since nothing else is aware of it.

You can safely pass the char* buffer inside a std::string to any C library with no conversion. You're making up issues in your head. Don't excuse incompetence.


> People have done it, there are plenty strbuf implementations to go around.

Precisely!

Why plenty and why is none of them the standard in C?


The TL;DR on that is basically "lazy, security unconscious assholes keep shutting it down".

Dennis Ritchie strongly suggested C should add fat pointers all the way back in 1990. Other people have pointed out the issues with zero-terminated strings and arrays decaying into pointers (and the ways to deal with them even within backwards-compatibility constraints) for years.

One of the most prominent was Walter Bright's article on "C's Biggest Mistake" back in 2009 and he was a C/C++ commercial compiler developer.

There is no excuse.


It is easy to document mistakes in hindsight, since hindsight is 20/20.

It is very easy to write your own one-off secure string handling library. This is a common assignment in intro to C programming classes.

So why isn't it standard in C already?

You offer a theory that there is a gang of "security unconscious assholes [who] keep shutting it down". This gang is so well organized that they have managed to block an easy improvement for many many decades for unknown reasons. That's a pretty wild theory.

Or Occam's razor suggests a different answer: It's actually difficult.

No, not the writing code part, that's easy. It's the seamlessly integrating with ~60 years of mission critical codebases part that's hard.


There's no need to integrate with 60 years of mission critical codebases, you're making up a problem in your head that doesn't exist.

Nothing needs to be fixed, all it takes is to stop doing the stupid thing.

It does not take a "coordinated gang" to shut down C standard proposals, them getting shut down is the default.

You seem to be familiar with neither the nature of the problem nor the struggle that is getting anything passed through ISO standardization. I don't mean to belittle you by saying this; I just hope to make you understand that you are assuming things that are simply not based in reality.

It doesn't even need to be in the standard, btw. Just write your own. It's a few lines of code. As you say, a beginner exercise. Yet there is code written after the year 2000 that still uses the strxcpy family, long after the issues and the solution have been known.

"Backwards compatibility" is a total red herring. C++ has the solution right there in its standard library. A backwards compatible string buffer implementation.


> Nothing needs to be fixed, all it takes is to stop doing the stupid thing.

Well we'll just agree to disagree I suppose, as I'm equally convinced that you're not grasping what the problem actually is.

All I can say is that if this were as easy to fix as you assert and "all it takes is to stop doing the stupid thing" and we both agree that writing code for the better thing is super easy, then consider why it has not been possible to fix in the C universe.


I don't know what to tell you. Look at the git codebase, they downright ban any usage of the strcpy family, going so far as to hide them under macros so people can't use them.

Banning them outright was not possible in old codebases from before the internet got really popular and people were pointing out how bad these functions were, but they sure could have stopped using them in any new code written in those codebases. That's what code review is for.

Any C code written after 2010 has absolutely no excuse to use these functions. They are inefficient, unsafe and more annoying to use than a strbuf implementation that takes half an hour to write.

So why have people continued to use them?

Option a) they were already there, the codebase is over 30 years old, and replacing the code entirely would be too much work. This is a valid reason.

Option b) ignorance, they don't know how to write a strbuf type. This one is downright impossible, any C dev knows how to do it, and like I said, literally every other language does it the same way.

Option c) laziness. This is for me the only real reason. As awful as these functions are, they're in the stdlib. You still see people saying "simple usages of strncpy are fine". They are not fine.

If you can think of an option d) I'd love to know, because I honestly can't think of anything else. Note that interfacing with existing 30-year-old codebases does not count, as how you internally manipulate strings has no bearing on that; all you need to ensure is the 0 terminator at the end.

You get a mutable char* from the old function. You shove it in a struct strbuf { size_t capacity; size_t length; char *data; }. Done.

You get a constant char* from the old function. You call strlen followed by malloc and memcpy into a new buffer for the strbuf. Or, if you don't need to actually mutate the string, you store it in a non-zero-terminated struct strview { size_t length; char *data; }.

So what is the challenge here? Why is usage of strcpy not banned in any codebase less than 20 years old?


> you should only worry about it at the boundary with the other library.

If this mitigation were applied consistently, it would solve all problems with nul-terminated strings: do strict and error-checked conversions to nul-terminated strings at all boundaries of the program, and then nul-terminated strings and len-specified strings are equivalently dangerous (or safe, depending on your perspective).

The problem is precisely that unsanitised input makes its way into the application, bypassing any checks.


It's impossible to avoid "sanitizing" input if you have a conversion step from a library provided char* to a strbuf type. Any use of the strbuf API is guaranteed to be correct.

That's very different from needing to be on your toes with every usage of the strxcpy family.


> It's impossible to avoid "sanitizing" input if you have a conversion step from a library provided char* to a strbuf type. Any use of the strbuf API is guaranteed to be correct.

I agree: having a datatype beats sanitising input (I think there's a popular essay somewhere about parsing input vs sanitising input which makes pretty much the same point as you do), but it's still only partially correct.

To get to fully correct you don't need a new string type, you need developers to recognise that the fields "Full Name" and "Email address" and "Phone number", while all being stored as strings, are actually different types and to handle them as such by making those types incompatible so that a `string_copy` function must produce a compilation failure when the destination is "EmailAddressType" and the source is "FullNameType".

Developers in C can, right now, do that with only a few minutes of extra typing effort. Adding a "proper" string type is still going to result in someone, somewhere, parsing a uint8_t from a string into a uint64_t, and then (after some computation) reversing that (now overflowing) uint64_t back into a uint8_t.

If you're doing the right thing and creating types because "Parse, Don't Validate", a better string type doesn't bring any benefits. If you're doing the wrong thing and validating inputs, then you're going to miss one anyway, no matter the underlying string type.


Sure but now we're talking about a universal problem across languages, rather than a C-specific problem.


> Sure but now we're talking about a universal problem across languages, rather than a C-specific problem.

Of course, but that's my point - C already gives you the ability to fix the incorrect typing problem, using the existing foundational `str*` functions.

A team who is not using the compiler's ability to warn when mixing types are still going to mix types when there is a safe strbuf_t type.

The problem with the `str*` functions can be fixed today without modifying the language or its stdlib.

Most C programmers don't do it (myself included). I think that, in one sense, you are correct in that removing the existing string representation (and functions for them) and replacing them with len+data representation for strings will fix some problems.

Trouble is, a lot of useful tokenising/parsing/etc string problems are not possible in a len+data representation (each strtok() type function, for example, needs to make a copy of what it returns) so programmers are just going to do their best to bypass them.

Having programmers trained to create new string types using existing C is just easier, because then you solve the whole 'mixing types' problem even when looking at replacements for things like `strtok`.

Or ... maybe I'm completely off-base and the reason that programmers don't create different types for string-stored data is because it is too much work in current C-as-we-know-it.


For me the "real" solution looks something like this:

    ssize_t strxcpy(char* restrict dst, const char* restrict src, ssize_t len)
Strxcpy copies the string from src to dst. The len parameter is the number of bytes available in the dst buffer. The dst buffer is always terminated with a null byte, so the maximum length of string that can be copied into it is len - 1. strxcpy returns the number of characters copied on success, but can return the following negative values:

    E_INVALID_PARAMETER: Either dst or src is NULL, or len < 1; no data was copied
    W_TRUNCATED: len - 1 bytes were copied, but more characters were available in src.
strxcat would work similarly. I have not decided if the return value should include the terminating null or not.
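A sketch of an implementation matching that contract (the error values are made up, and this version's return count excludes the terminator):

    #include <stddef.h>
    #include <sys/types.h>  /* ssize_t (POSIX) */

    #define E_INVALID_PARAMETER (-1)
    #define W_TRUNCATED         (-2)

    ssize_t strxcpy(char *restrict dst, const char *restrict src, ssize_t len)
    {
        if (dst == NULL || src == NULL || len < 1)
            return E_INVALID_PARAMETER;
        ssize_t i = 0;
        for (; i < len - 1 && src[i] != '\0'; i++)
            dst[i] = src[i];
        dst[i] = '\0';  /* always terminate */
        return src[i] != '\0' ? W_TRUNCATED : i;
    }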


How is this useful though? I mean yes, it is useful in avoiding the buffer overruns. But that's not the only consideration, you also want code that handles data correctly. This just truncates at buffer size so data is lost.

So, if you want the code to work correctly, you need to check the return code, reallocate dst, and call the copy again. But if you're going to do that, you might as well check the length of src and allocate dst correctly before calling it, so it never fails. And if you're already doing that, you can call strcpy just fine and never have a problem.


Sometimes truncation is fine or at least can be managed. Yes, strdup() is a better choice in a lot of situations, but depending on how your data is structured it may not be the correct option. I would say my version is useful in any situation where you were previously using strncpy/cat or strlcpy/cat.


Wow, yeah, this seems to summarize well the usual API flakiness and churn of C.

It seems people keep coming up with "one more improvement" that's broken in one way or another.


The problem with strlcpy is the return value: it returns strlen(src), so it has to walk the entire source string. You can be burned badly if you are using it to, for example, pull out a fixed chunk of string from a 10TB memory-mapped file, especially if you're pulling out all of the 32-byte chunks from that huge file and you just wanted a function to stick the trailing 0 on the string and handle short reads gracefully.

It's even worse if you are using it because you don't fully trust the input string to be null-terminated. Maybe you have reason to believe that it will be at least as long as you need, but can't trust that it is a real string. For a function that was theoretically written as a "fix" for strncpy, it is worse in some fundamental ways. At least strncpy is easy enough to make safe by always over-allocating your buffer by 1 byte and stuffing a 0 in the last byte.


strncpy() also zero-pads the entire buffer. If the buffer is significantly larger than the copied string, you're wasting cycles on pointless move operations for normal, low-security string handling. This behavior is for filling in fixed-length fields in data structures; it isn't suitable for general-purpose string processing.


    #define strncpyz(d,s,l) *(strncpy(d,s,l)+(l))=0

Of course this one is unsafe for macro expansion. But well, it's C :)


I'd rather put the final nul at d+l-1 than at d+l, so that l can be the size of d, not "one more than the size of d":

  strncpyz(buf,src,sizeof buf);
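A sketch of that variant, with the arguments parenthesized for safer expansion:

    /* l is the full size of d; d[l-1] is always the terminator */
    #define strncpyz(d, s, l) ((void)(*(strncpy((d), (s), (l)) + ((l) - 1)) = '\0'))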


As others have already pointed out, it doesn't guarantee that the result is null-terminated. But that's not the only problem! In addition, it always pads the remaining space with zeros:

    char buf[1000];
    strncpy(buf, "foo", sizeof(buf));
This writes 3 characters and 997 zeros. It's probably not what you want 99% of the time.


It's not possible to use it safely unless you know that the source string fits in the destination buffer. Every strncpy must be followed by `dst[sizeof dst - 1] = 0`, and even if you do that you still have no idea if you truncated the source string, so you have to put in a further check.

    strncpy (dst, src, (sizeof dst) - 1);
    dst[(sizeof dst) - 1] = 0;
    /* zero when nothing was lost; negative (i.e. nonzero) when src was truncated */
    int truncated = strlen (dst) - strlen (src);
Without the extra lines after every strncpy, you're probably going to have a hard-to-discover transient bug.


if you really want to use standard C string functions, use this instead:

    int ret = snprintf(dst, sizeof dst, "%s", src);
    if (ret < 0 || (size_t)ret >= sizeof dst)
    {
        /* failed (or truncated) */
    }
or as a function:

    /* needs <stdio.h> and <stdbool.h> */
    bool ya_strcpy(const char* s, char* d, size_t n)
    {
        int cp = snprintf(d, n, "%s", s);
        bool ok = cp >= 0 && (size_t)cp < n;
        return ok;
    }


snprintf only returns negative if an "encoding error" occurs, which has to do with multi-byte characters.

I think for that to possibly happen, you have to be in a locale with some character encoding in effect and snprintf is asked to print some multi-byte sequence that is invalid for that encoding.

Thus, I suspect, if you don't call that "f...f...frob my C program" function known as setlocale, it will never happen.


> Thus, I suspect, if you don't call that "f...f...frob my C program" function known as setlocale, it will never happen.

Of all the footguns in a hosted C implementation, I believe setlocale (and locale in general) is so broken that even compilers and library developers can't workaround it to make it safe.

The only other unfixable C-standard footgun that comes close, I think, are the environment-reading-and-writing functions, but at least with those, worst-case is leaking a negligible amount of memory in normal usage, or using an old value even when a newer one is available.


I see that in Glibc, snprintf goes to the same general _IO_vsprintf function, which has various ominous -1 returns.

I don't think I see anything that looks like the detection of a conversion error, but rather other reasons. I would have to follow the code in detail to convince myself that glibc's snprintf cannot return -1 under some obscure conditions.

Defending against that value is probably wise.

As far as C locale goes, come on, the design was basically cemented in more or less its current form in 1989 ANSI C. What the hell did anyone know about internationalizing applications in 1989.


I actually do use `snprintf()` and friends.


Except no one does that return-code check, and worse, they often use the return code to advance a pointer when concatenating strings.


`strncpy` is commonly misunderstood. Its name misleads people into thinking it's a safely-truncating version of `strcpy`. It's not.

I've seen a lot of code where people changed from `strcpy` to `strncpy` because they thought that was safety and security best practice. Even sometimes creating a new security vulnerability which wasn't there with `strcpy`.

`strncpy` does two unexpected things which lead to safety, security and performance issues, especially in large codebases where the destination buffers are passed to other code:

• `strncpy` does NOT zero-terminate the copied string if it limits the length.

Whatever is given the copied string in future is vulnerable to a buffer read overrun and junk characters appended to the string, unless the reader has specific knowledge of the buffer length and is strict about NOT treating it as a null-terminated string. That's unusual C, so it's rarely done correctly. It also doesn't show up in testing or normal use, if `strncpy` is used "for safety" and nobody enters data that large.

• `strncpy` writes the entire destination buffer with zeros after the copied string.

Usually this isn't a safety and security problem, but it can be terrible for performance if large buffers are being used to ensure there's room for all likely input data.

I've seen these issues in large, commercial C code, with unfortunate effects:

The code had a security fault because, under some circumstances, a password check would read characters after the end of a buffer due to the lack of a zero-terminator that authors over the years had assumed would always be there.

A password change function could set the new password to something different than the user entered, so they couldn't login after.

The code was assumed to be "fast" because it was C, and avoided "slow" memory allocation and a string API when processing strings. It used preallocated char arrays all over the place to hold temporary strings and `strncpy` to "safely" copy. They were wrong: It would have run faster with a clean string API that did allocations (for multiple reasons, not just `strncpy`).

Those char arrays had the slight inconvenience of causing oddly mismatched string length limits in text fields all over the place. But it was worth it for performance, they thought. To avoid that being a real problem, buffers tended to be sized to be "larger" than any likely value, so buffer sizes like 256 or 1000, 10000 or other arbitrary lengths plucked at random depending on developer mood at the time, and mismatched between countless different places in the large codebase. `strncpy` was used to write to them.

Using `malloc`, or better a proper string object API, would have run much faster in real use, at the same time as being safer and cleaner code.

Even worse, sometimes strings would be appended in pieces, each time using `strncpy` with the remaining length of the destination buffer. That filled the destination with zeros repeatedly, for every few characters appended. Sometimes causing user-interactions that would take milliseconds if coded properly, to take minutes.

Ironically, even a slow scripting language like Python using ordinary string type would have probably run faster than the C application. (Also Python dictionaries would have been faster than the buggy C hash tables in that application which took O(n) lookup time, and SQLite database tables would have been faster, smaller and simpler than the slow and large C "optimised" data structures they used to store data).


It doesn't guarantee that the output is null terminated. Big source of exploits.


gotos are fine if used judiciously. strcpy and strcat are “fine” in that they work when you know your code is correct and you have big problems if you don’t. But this describes most of C, unfortunately.


> gotos are fine if used judiciously

Is there a language feature that is not? :)


If you use trigraphs in your code I will be very upset


> Last I checked they're almost as big no-nos as using goto.

I don't think so. Gotos are fine, strcat and strcpy without a malloc with the correct size in the same scope is a code smell.


> Last I checked they're almost as big no-nos as using goto.

Huh? Why is goto a no-no? It is there for good reason. I think we all agree with Dijkstra that, in his words, unbridled gotos are harmful, but C's goto is most definitely bridled. I doubt any language created in the last 50+ years has unbridled gotos. That's an ancient programming technique that went out of fashion long ago (in large part because of Dijkstra).


Languages other than C give you options for flow control so that you don't need goto for that. It is a spectrum: if you only use goto to jump to the end of a small function on error, it is okay, though I prefer something better in my language. I've seen 30,000-line functions with gotos used for flow control (loops and if branches) - something you can do in C if you are really that stupid, and which I think we will all agree is bad. This kind of 30,000+ line function with gotos as flow control was a lot more common in Dijkstra's day.


We all agree that you shouldn't write bad code. Not using goto, not using any language construct.

But when unbridled gotos were the only tool in the toolbox, bad code was an inevitability in a codebase of any meaningful size. Not even the best programmer was immune. This is what the "Go to statement considered harmful" paper was about.

It was written in 1968. We listened. We created languages that addressed the concerns raised and moved forward. It is no longer relevant. Why does it keep getting repeated in a misappropriated way?


In 1968 they had better languages and programmers were still using goto for control in them despite better options.


Of course. The ideas presented in said paper went back at least a decade prior, but languages were still showing up with unbridled gotos despite that. But that has changed in the meantime. What language are you or anyone you know using today that still has an unbridled goto statement?


> Languages other than C give you options for flow control so that you don't need goto for that.

The idiom `if (error) goto cleanup` is about the only thing I see goto used for. What flow control replaces that other than exceptions?


Jumping out of nested loops. Implementing higher level constructs like yield or defer. State machines. Compiler output that uses C as a "cross-platform" assembly language.

All of them are better served with more specialized language constructs but as a widely applicable hammer goto is pretty nice.

I don't expect C to have good error handling or generators any time soon but with goto I can deal with it.


I'm actually familiar with this, having used libprotothreads in production for about 4 years.

Something like libprotothreads can't actually be implemented in a language that doesn't have gotos, so yeah, I see the need for it.


Compiling HLL constructs in some of those scenarios ultimately produces a jump statement. So, it makes sense that a higher-level version of a jump would be helpful in the same situations.


> What flow control replaces that other than exceptions?

defer has gained in popularity for that situation.


RAII + destructors

Though gcc supports cleanup functions, just not very ergonomically.


> 30,000 line functions with gotos

The problem there is the 30K line function, not the goto!


30k-line functions are a problem, but they are manageable if goto isn't used in them. I prefer not to, but I have figured them out.


Wow! Longest single function I can think of having written is ~200 lines. I always feel bad when editing it but there's no useful way to break it down so I let it be. But a single 30,000 line function? Wow.


I'll take a 30k line function that does one thing over 30 1k line functions that are used once...


Agreed! Breaking into multiple functions for no reason other than style isn't smart either.


goto used in certain idiomatic ways (e.g. to jump to cleanup code after an error, or to go to a `retry:` label, or to continue or break out of a multiply nested loop) is fine. What's annoying is bypassing control flow with random goto spaghetti.


Very nice. I’m glad to see these all have detailed reports explaining what’s wrong!


Hard to believe, but more and more compiler writers are realising that language lawyering alone isn't going to improve anything but the runtime of an unchanging set of microbenchmarks. I still remember the bad old GCC 4.x error messages, and those defending them explaining why they should stay like that, despite a single template error easily filling ten unintelligible screen pages.

When Clang was new, users switched to it just for the error messages and the promise not to fuck them over too hard, e.g. by starting to exploit that signed integer overflow is "ackchyually undefined". Which is of course correct, but not what users complained about. They complained that what they considered a bugfix release broke code, because the defaults changed and -fwrapv didn't even catch all the cases that used to compile to what the user needed/expected.


now we want a GCC language server!


You may joke, but Stallman actually tried to convince them in 2017 to modify GCC to make one out of it:

https://lists.gnu.org/archive/html/emacs-devel/2017-04/msg00...


A few years before that, Stallman personally sabotaged this kind of tooling "because someone might abuse it". LWN did a write-up: https://lwn.net/Articles/629259/

So it's not surprising that GCC devs weren't especially interested in it, since Lord Stallman can come in and decree it unethical on a whim, out of misguided fears.




