I have the C74 manual. char was an ASCII character occupying the lower 7 bits of a signed two's-complement 8-bit number, int was a signed two's-complement 16-bit number, and there were no unsigned integers.
I wish WG14 would publish the TeX sources for the latest working draft. None of the PDF-to-HTML generators work well, and as a result the most up-to-date version you can conveniently view in your browser is C99.
I don't understand why the PDF documents specifying the C/C++ standards are sold for $$$ when I was able to find them on the respective WG websites.
Might be just good ol' market segmentation - a company with the money to spend on a spec isn't likely to settle for a draft, but the average random dev doesn't care much about the differences.
Right, good point, all you get from the WG website is the working draft, which, as you pointed out, is almost always virtually identical to the published one that costs $$$.
Since everyone is recommending other resources, here are some C developers whose code I enjoy very much (or whose software written in C I really like):
Just want to let people know that antirez (creator of Redis) writes so many insightful code comments that you almost always learn something new when you go through his projects.
He has also written an article on how he writes code comments [0]; I've been using Design, Why, and Teacher comments a lot in my recent projects and they have been very helpful, especially when you come back to a project after a long time or introduce a coworker to the code base.
Totally agree, I went through some of the code in his GitHub repo and it’s a joy to read through, making it clear what everything is for, how and why it works, which is rare especially for plain C.
I try to comment my code in a similar way, often ending up with a comment-to-code ratio of up to 30%. I know some (many?) people are very dismissive of this style and reject the idea you have to write any comments where the code is ‘self explanatory’, but I don’t care. It doesn’t take me a lot of extra time to document the general idea behind a piece of code, and it makes my future life and that of other devs reading the code so much easier that it’s easily worth the effort to write these comments and keep them up to date.
Like I said, it’s rare to see OSS code written that way though. On multiple occasions I’ve even suspected that some projects automatically strip all code comments for whatever reason, considering the total absence of any form of code documentation in them.
> reject the idea you have to write any comments where the code is ‘self explanatory’
I'm one of those, but in your defense I think we need to maintain a very high bar for "self explanatory".
At the same time, I think it's important to be aware that comments not immediately adjacent to what they describe are usually not visible in code review, are often not visible to the person editing the code in question, will fall out of date without an active process to keep them up to date, and at that point will do more harm than good.
I'm curious if you have any particular mechanisms for addressing the above?
I'm trying in my current setting by introducing a mechanism for cross-references (anywhere in the repo, ^^{label} refers to @@{label}) that my CI setup can recognize and surface during code review. It's provided a bit of benefit, but it's underused, and it's too soon to have a good sense of long term impact.
>> I'm one of those, but in your defense I think we need to maintain a very high bar for "self explanatory".
I put 'self explanatory' in scare quotes exactly because of that: at the time of writing the code, many things may seem self explanatory to the person writing it, but even seemingly simple things can be very confusing for people who read the code for the first time. Obviously I'm not advocating comments that simply describe in words what the next line of code does mechanically; those are useless. But I do like comments that break up the algorithm into sections that are clearly marked, even if some of those comments may seem trivial ('loop over all objects of type x and process those with property y', etc.). For me the value is in being able to read the comments first and get an idea of the structure of the code and the thought process behind it, before actually having to interpret the code itself.
>> At the same time, I think it's important to be aware that comments not immediately adjacent to what they describe are usually not visible in code review [..] I'm curious if you have any particular mechanisms for addressing the above?
Not really. In our team we don't really use code review tools for feedback about algorithms or the higher-level design of the code; in my experience they are pretty awful for that anyway. We use Bitbucket primarily to point out local improvements and ask why sections were added or changed, but for higher-level discussion we prefer offline tools like whiteboarding, and rotating people so everyone works on different parts of the codebase.
This, 100%. I've been working on a side project that is essentially rebuilding Redis but in Ruby, only to learn how things work (https://redis.pjam.me/) and the Redis code base is fantastic. As someone with basically no professional C experience, the clarity of the code base makes diving in fairly easy
I agree. It makes his code very approachable, as well as providing an excellent model for complex projects. I try to follow this as well to provide good examples within our corporate codebase.
The main challenge I have is that not all engineers are equally willing/comfortable/able to communicate their ideas in this way. Sometimes because of ingrained preference (comments are never necessary), other times because of language barriers.
There’s a very vocal group that tells you not to write in C because it’s unsafe, immoral, and you’ll burn in hell if you do, but the fact is... I would never have been able to understand the problem Rust is trying to solve until I’d recreated those problems in C. By now I’ve hit a good number of segmentation faults.
Would really love to know if there’s a list of all the possible ways to get one of these in C.
I really like C as well; it's the smallest language I know. From bare metal you can bootstrap a basic C runtime in like 10 lines of ASM (zero the BSS, set up the stack, jump to main). I really love Rust these days, but there's no denying that it's a much more complex language.
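For a flavor of what that looks like, here's a minimal sketch in C rather than ASM (the symbol names are assumptions that would come from your linker script, and setting the stack pointer itself still needs an instruction or two of real assembly before any C runs):

    /* Hypothetical bare-metal startup. Assumes the stack pointer was
       already set by a tiny asm stub (or by hardware), and that the
       linker script defines _bss_start/_bss_end around the BSS. */
    extern unsigned char _bss_start, _bss_end;
    extern int main(void);

    void _start(void) {
        /* zero the BSS so globals start out as the C standard requires */
        for (unsigned char *p = &_bss_start; p < &_bss_end; p++)
            *p = 0;
        main();
        for (;;) ;  /* nowhere to return to on bare metal: hang */
    }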
Segfaults are not something C is aware of; they're usually the consequence of triggering undefined behaviour. But UB being... undefined, all sorts of things can and do happen. For instance, a buffer overflow can segfault, but it can also overwrite some other variable, or the return address of the function, or do nothing at all.
That's really the problem with C. If UB meant segfault, then it wouldn't be so bad, because at least you'd immediately know that something went wrong. Instead it's very easy for small issues to go unnoticed for a long time, until your application finally crashes in a seemingly unrelated location or, worse, some clever guy or gal manages to exploit the flaw for fun and profit.
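A toy illustration (the variable names are mine; what actually happens depends entirely on the compiler, optimization level, and stack layout):

    #include <stdio.h>

    int main(void) {
        int canary = 42;
        int buf[4];
        buf[4] = 7;  /* one past the end: undefined behaviour */
        /* may print 42, may print 7, may crash, may do something else */
        printf("canary = %d\n", canary);
        return 0;
    }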
Ummm... Is that the language? Sounds like the preamble in the code generator. The equivalent Forth VM is also just a handful of instructions, but one would not say that's the language.
The C startup routines are not part of the language, but you can think of them as the bridge between your actual machine and a state that matches the C abstract machine. (For a virtual machine this might be two memsets: one to the registers and one to memory.)
I don’t self-promote usually, but I asked some people on Reddit[1] if they would like an educational stream on embedded systems, and a lot of people said yes. I’m doing one on Sunday most probably. Feel free to join in because that’s one of the things I will be talking about.
Hey kahlonel! Just wanted to tell you that I watched your stream and it was really good, I learned a lot. I don't have a twitch account so I couldn't use the chat when you were waiting for someone to show up. So I just wanted to tell you here that everything was great from sound quality to the content you provided! :)
OSDev[1] has a full page on how to do this and more, as well as links to sample code, etc. Unfortunately it's down right now, so here's a Web Archive link from September 1st[1.5].
A resource you'll find immensely useful is Ctyme's interrupt jump table[2][3], which is easier than reading Ralf Brown's ASCII-formatted notes[4]. Of course there are easier ways to do it, but printf() debugging (or the assembly equivalent) is still by far the easiest to set up.
If you want to write code for modern systems (UEFI), you can find a tutorial here[5] (I didn't write it, though, so I don't know how good it is; I can't find the resources I used :c), and the EFI spec here[6].
Well, it's pretty platform dependent, but basically all you need to do is make sure that you're in a consistent state with regard to IRQs and other low-level configuration, then clear the BSS, then set up the stack pointer, then you can jump to C.
Here's some random bootloader init code I wrote a while ago for an ARM SoC (probably heavily inspired by U-Boot):
The vectors are a table that must be placed in a specific location in memory and are used by the CPU to know what code to execute when certain events occur. Here I only fill the "reset" entry to call my init code when the CPU boots up.
Everything in my "reset" function up to line 93 is low-level ARM stuff: making sure the CPU is in a well-defined mode, disabling IRQs, initializing the cache (optional, but good for performance), etc. All that stuff is usually described in the docs for the CPU/SoC/controller. Note that this SoC is fairly complex; on a smaller microcontroller it's usually much more straightforward.
Then after that you have a loop to clear the BSS (the _bss_start and _bss_end are generated automatically by the linker script, normally when you load an executable the operating system clears those regions for you, but obviously here we're on our own).
Then I set the stack pointer register SP to _stack_base which is also defined in the linker script to point at some memory location (typically the top of the RAM). And then we're good to go, we can do the rest in C with only small bits of ASM here and there for special instructions.
I think the best way to learn that stuff is by doing: get some ARM chip with a U-Boot loader and dig into it, break it, see what the code does, etc. Obviously you need a good knowledge of C to begin with, and at least a basic understanding of assembly, so start with that if you're not there yet.
Just read the source to crt0, which contains the entry point called by the kernel (a symbol called _start or __start) and which finishes up by calling main(). It should be in the source of your C runtime library (just look at any of the open source ones), with different versions for each architecture and slight variations per OS.
Also: run global initializers. Getting kinda fuzzy on the boundaries, but I'd also include the step that copies the initial contents of the .data segment from read-only to read-write storage.
"baked into the binary" covers a lot of ground. You've got to get the initial contents of .data initialized somehow. In a typical microcontroller that's a copy from flash to ram.
That's true. I forgot about that because the last controller I used had a bootrom that copied the bootloader straight to RAM, so .data was already read-write and there was nothing to do.
I'm in both camps at once. I hate C's footguns, I hate the difficulty of doing some things that are simple in other languages (iterators, coroutines, etc). I love C's speed and flexibility. I love the availability of C: any chip I need to program (that isn't an FPGA or such) I can use C.
So I push for Rust (and sometimes Zig), because they're better than C in many ways, but they do lose the availability and some of the flexibility.
I see them as filling fundamentally different (if occasionally overlapping) use-cases. Rust is a language for high-performance programs which require strong safety guarantees. C is a language to make the computer do what you tell it to.
So for instance, if what I need to do is just manipulate memory with the least overhead possible, C is a really good tool for the job where Rust adds unnecessary ceremony. For instance, if I'm just walking arena-allocated memory, the borrow checker does nothing for me.
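To illustrate the kind of thing I mean, here's a toy bump allocator (not any particular library's API):

    #include <stddef.h>

    /* carve allocations out of one block by bumping a pointer */
    typedef struct {
        unsigned char *cur;
        unsigned char *end;
    } Arena;

    static void *arena_alloc(Arena *a, size_t n) {
        if ((size_t)(a->end - a->cur) < n)
            return NULL;        /* arena exhausted */
        void *p = a->cur;
        a->cur += n;
        return p;
    }

Freeing is just resetting cur back to the start of the block; there are no per-object lifetimes for a borrow checker to reason about.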
Also I think C plays an important pedagogical role. When writing C you learn a lot about what a computer actually does, whereas with Rust you're working with an abstract system.
I think a lot of the reason C gets a bad rap is because it has been used historically in a lot of use-cases where a language with better safety guarantees would have been a better fit, but that alternative didn't exist until maybe quite recently.
> Also I think C plays an important pedagogical role. When writing C you learn a lot about what a computer actually does, whereas with Rust you're working with an abstract system.
That has not been true in decades, if ever. C is very much defined in terms of an abstract machine, and the behaviour of C is not that of the underlying assembly, which itself is an abstraction over the µops which actually tell the hardware how to work.
That's nitpicky though. If you put C and the resulting assembly code side by side on godbolt.org you can see which lines of C code translate to which lines of assembly code, and you can see how this changes when enabling the various optimization levels. Higher level languages which rely more on "zero cost abstractions" are much further removed from CPU and memory.
Sure if you dive down into memory-model and CPU-internal details like micro-ops and pipelining then this model quickly falls apart, but on the surface, the "C abstract machine" still maps pretty well to what's happening under the hood.
> Higher level languages which rely more on "zero cost abstractions" are much further removed from CPU and memory.
Higher level languages are much further removed from C's model of the CPU and memory. And despite that "distance" there is virtually no difference between C's and C++'s (and often Rust's) performance. And at the end of the day, it doesn't matter how low level your language is if it isn't getting you any performance uplift.
> Sure if you dive down into memory-model and CPU-internal details like micro-ops and pipelining then this model quickly falls apart, but on the surface, the "C abstract machine" still maps pretty well to what's happening under the hood.
A core part of the "C abstract machine" is sequential execution. Instruction-level parallelism is par for the course in modern processors, many of which have hundreds of instructions executing at any given time. So C hasn't been reflective of typical desktop/server CPUs for the last 30 years at least.
Hardware vendors spend a lot of money to make C code execute quickly, but we're now seeing the absolute limits of that.
This is silly. ILP doesn't matter because the HW has kept the same basic ISA with the same memory guarantees since forever. A lot goes on under the hood to make C code execute quickly (also to make Java code execute quickly, etc.). Hundreds of in-flight instructions don't matter when they all commit in order.
> And despite that "distance" there is virtually no difference between C's and C++'s (and often Rust's) performance.
I'm not seeing that in real-world projects though, and it's not only about performance. These are just two random examples from my own experience of where high-level language features get in the way (I guess the TL;DR is: yes, it's possible to write high-performance software in high-level languages, but it's often more effort, and may require writing unreadable and "non-idiomatic" code, because you need to "appease" the compiler much more than in a lower-level language):
Yes, I mean this will always be true unless you work with raw machine code. The point is that the C abstraction is lower level: give me memory, write to memory, etc., whereas with Rust you’re interacting with the compiler in a more declarative way.
> For instance, if I'm just walking arena-allocated memory, the borrow checker does nothing for me.
The borrow checker is most useful when you are using it in a team or just using external Rust code. By forcing code to be explicit the borrow checker ensures that most, if not all, implicit restrictions are propagated through the codebase. Naturally this makes Rust far easier to compose than C.
> Also I think C plays an important pedagogical role. When writing C you learn a lot about what a computer actually does, whereas with Rust you're working with an abstract system.
C is to modern hardware as the Intel 8086 is to an Intel Xeon Platinum. The mere fact that ISO C runs on so many different types of architectures and platforms proves that C is far removed from any particular computer's mode of operation (which isn't a bad thing). Even programming in machine code wouldn't reflect a CPU's true mode of operation because of things like macro-op fusion, register renaming, TLBs, etc. Processors nowadays are probably the most dynamic and complex "programs" in the industry.
If C and Rust were graded on "similarity" to hardware, then C would be neighbours with Rust and hardware would be located ten timezones ahead.
> I think a lot of the reason C gets a bad rap is because it has been used historically in a lot of use-cases where a language with better safety guarantees would have been a better fit, but that alternative didn't exist until maybe quite recently.
Ada is only eight years younger than C, Pascal pre-dates C, etc. There were alternatives but people used what they knew and what their systems were written in. But to be fair, it is far easier to find new languages and tools today than it was pre-internet. However, the problem with C being used in inappropriate places hasn't gone away. People still push to write new applications and software in C despite the safety problems and developer ergonomics.
I would have said that Zig is to Rust as C is to C++. But I would be wrong. Zig has all the power of C++, but it doesn't hide anything from you; it is not as complex as C++, and it is unsafe. In the future Zig will probably be the choice for things that require lots of unsafe components, and Rust otherwise. There was also an article about this in which the author said that "unsafe zig is safer than unsafe rust".[1]
Moreover, I am also interested in F*; the ability to generate efficient C, ASM, and WASM code from it looks promising.
> There was also an article about this in which the author said that "unsafe zig is safer than unsafe rust".
This is the wrong conclusion to derive from one example where Zig checks alignment and Rust chooses not to. (Nor is the conclusion particularly profound or useful: comparing languages by an attribute (safety, in this case) in cases where they have explicitly said they do not guarantee the semantics of that attribute is not useful.)
The biggest problem C has is the standards committee. They appear to have given up on making any significant improvements to the language.
C could have an optional bounds checking mode. It is possible to design such a feature in a way that allows progressive adoption without breaking compatibility with existing libraries.
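To sketch the flavor of progressive adoption (a hypothetical library-level pattern, not an actual committee proposal): a checked slice type can be introduced one API at a time without breaking existing callers:

    #include <assert.h>
    #include <stddef.h>

    /* hypothetical fat pointer: the pointer and its length travel together */
    typedef struct {
        int   *ptr;
        size_t len;
    } int_slice;

    static inline int slice_get(int_slice s, size_t i) {
        assert(i < s.len && "index out of bounds");  /* checked in debug builds */
        return s.ptr[i];
    }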
C could also choose to move some forms of Undefined Behavior to implementation-defined or even defined.
Many of the current standards committee members are active impediments to progress. We'd all be better off if we hit the reset button on that. Just look at what the C++ group has been doing by comparison...
> There’s a very vocal group that tells you not to write in C because it’s unsafe
I get the reasoning for learning it. Just, after you do learn it, stop and use something else, unless you absolutely have to. I don't think anyone has attempted to compute the amount of money spent on buffer overflow hacks. But I'm guessing it's in the 10s to 100s of billions of dollars if you count all the viruses that took advantage of it in the past in the Apache web server or Windows NT's NTFS (or sharing, etc.) codebases. And that applies to C++ too.
And hey, I know that sounds extreme - but there really are situations of "unless you absolutely have to" but I wouldn't expect anyone to write a brand new queuing system or database in C or C++ anymore given the alternative languages available.
> but I wouldn't expect anyone to write a brand new queuing system or database in C or C++ anymore given the alternative languages available.
Then you expect wrong, at least about database management systems. New performant ones are often (mostly?) written in C++. At least that's what it's like for analytical DBMSes.
And you might also be surprised that many problems originally carried over from C to C++, like null dereferences and memory leaks, can be done away with, not by being super-careful, but by sticking to newer language facilities and using some static analysis. Others, like buffer overruns, are easier to avoid with things like spans and ranged-for loops.
If you're writing a web server or an OS, yes you need to be careful and it might be easier to be careful in other languages (allegedly). But if you're making a game or playing around in WASM, it's fine and you're not going to cost anyone billions of dollars.
> I don't think anyone has attempted to compute the amount of money spent on buffer overflow hacks. But I'm guessing it's in the 10s to 100s of billions of dollars...
Why even bother stating this? “I don’t have any data to back this up but look at this huge number!!!!”
One of the tiny gifts you get from Hacker News is that when you put an idea out there, sometimes someone acts on it. It may be wishful thinking, but what if someone attempts to quantify all the C/C++/Java/Perl hacks from history because of this post? :)
I know the $ amount is greater than that of all other security risks in the history of software. Even if I don't know the number, the only reason NOT to post something about that history would be to defend C/C++, when in fact I am attacking C/C++'s record here.
For C I mostly agree, but from what I understand there's still a ton of new C++ being written, even replacing "legacy" alternative languages that may offer advantages.
The list would be infinite; and it's a feature, not a bug. You pick up strategies for avoiding problems along the way, to a higher degree than in less flexible languages.
Valgrind is your friend, I couldn't imagine writing C without it anymore.
Many compilers also ship with verifiers for undefined behavior, which have saved me a couple of times.
Also: enable all warnings the compiler offers and compile the code with different compilers (Clang, GCC, MSVC; they all have different warnings), add copious amounts of assert checks, compile and test with the Clang sanitizers enabled (AddressSanitizer, UndefinedBehaviorSanitizer, ThreadSanitizer), run the code through static analyzers (e.g. the Clang static analyzer and the one in Visual Studio), and minimize dynamic allocation and pointer use.
With all those combined it's quite hard to hit segmentation faults and memory corruption and the source code will generally become "cleaner" and more robust.
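As a small example of what the sanitizers catch at runtime, build something like this with "cc -g -fsanitize=address,undefined demo.c" and AddressSanitizer will report the bad read the moment it happens:

    #include <stdlib.h>

    int main(void) {
        int *p = malloc(4 * sizeof *p);
        if (!p) return 1;
        int x = p[4];   /* heap buffer overflow: ASan flags this read */
        free(p);
        return x;
    }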
I really wish compilers had a -Wstandard that warned on standard-breaking things, as opposed to -Wall or -Wextra, which include compiler-specific junk like -Wparentheses.
The thing about segfaults is that with a good integrated debugger like Visual Studio, the bugs that produce them are often very easy to diagnose. As a C programmer I would say they are my favorite class of bugs, because they are so easy to fix, and they show themselves so clearly.
A segfault is really just the CPU and OS working as intended. Not always obvious, but an ill-formed access should not fail silently.
The issue with C is the silent issues like buffer overruns: even if you program defensively and with a proper security model, you're still open to the possibility of leaking your missile codes by accident because you committed bad code.
I agree. Silent issues are the problem that needs addressing.
Personally I think bugs should be addressed, with better debug modes and tools. Take this example:
    int i, array[10];
    for (i = 0; i < 11; i++)
        array[i] = 0;
A clear bug, right? You could write this type of bug in any language. Many languages would stop you from writing outside the array. In C it's undefined behavior.
Undefined behavior means that the compiler doesn't have to do anything, and therefore it doesn't have to test at run time whether the code is correct. It can assume that the programmer knows what they are doing. That's why C has great performance, but also why bugs can be silent and hard to find.
But it doesn't mean that the compiler has to ignore the issue. It's undefined behavior, so the compiler is free to do all the checks that a language that doesn't let you write outside an array does. Sure, then C becomes just as slow as other languages, but you can now find and remove the silent bug.
This is why I think that debug mode should be very different from release mode; the two compilers have entirely different design goals. Some compilers are starting to do this. Visual Studio, for instance, will break in debug mode if you access uninitialized values, something it will ignore in release mode.
There are loads of things that could be done to improve debugging that don't require the language to change or the release runtime to slow down.
> As a C programmer I would say they are my favorite class of bugs, because they are so easy to fix,
I know what you're trying to say, because at least 95% of segmentation faults are easy to fix, but I take issue with this. The hardest bug that I've ever fixed was a segmentation fault!
You've obviously never experienced random crashes where the best idea anyone has is to bisect code changes in production and see when the crashes stop.
And you've obviously never experienced random crashes on customer machines where you've tested the crashing function for weeks, pored over crash dumps, and run the exact same function with the exact same data and could never reproduce it.
I think you're leaving out the option that the person you're replying to has seen even worse bugs because they have better ideas of how to deal with random crashes than bisecting code changes. For example, with the details you provided (of which there isn't much) I would suspect a thread safety issue.
Thank you! I find that the worst bugs are usually the ones involving the logic of the algorithm, where the language has nothing to do with the bug itself. This is why I think a language that is very explicit, without side effects, and therefore easy to step through using a debugger is desirable.
Buffer overruns can be trickier, but I have plenty of tooling to find those issues.
Thread safety issues are what nightmares are made of.
For most of my career since the 1980s, I have been a Common Lisp and Java developer. However, I did have an eight year stretch of using C++ at SAIC and later for PacBell, Disney and Nintendo projects.
When I was done with my professional C++ stint, I fell in love with the simplicity of C. It was like a breath of fresh air. I stopped using C largely by accident: everyone who paid for my work wanted Common Lisp or Java, and that lasted until the start of the deep learning craze seven years ago.
I thank the author for making a CC licensed PDF of his book available.
I didn't enjoy this book, to be blunt. K&R still holds up as an interesting read, despite not being updated for the latest C standards. It is probably not a good first book.
A few C books I like:
* C: A Reference Manual
* C Interfaces and Implementations
* Deep C Secrets
* C Traps and Pitfalls
I can also recommend Deep C Secrets. For me, a lot of C didn't really click until I read that book. In particular, having mainly done Python and other very high-level languages up until that point, its discussion of pointers and memory was what finally made C click for me.
Sure, but that is meant as a personal take, not a general statement. Everyone should evaluate whether the book (any book) works for their needs.
The main reason is that it was a very dry read, with various rules and syntax, etc., and I quickly got bored looking at it. It almost felt like I was reading an abridged version of the standard itself. The exercises didn't appeal to me much either.
I didn't mean to dis the efforts of the author at all. There is no other book that covers the modern parts of the standard well. I am not aware of any other C book that covers wide chars.
I had the same experience with this book. I set out as an intermediate C programmer and ended up just bored to tears with things I felt were obvious and frustrated that questions about “real” C code were not being answered. Also the exercises were... very involved for just trying to learn potentially simple points. I ended up giving up on the book for now, maybe I’ll pick it back up later or I’ll peruse your list.
I used K&R as my first C book. You can learn C from it. However, it teaches a C that is not really used in the wild, specifically old-style function definitions and trigraphs. Its memory management is kind of hand-wavy, and it does not stress enough the importance of getting the types right (like the nightmare of printf), both of which can cause memory issues very quickly. If you start with that book I would recommend one of those others to go along with it. Using C is easy. Getting it right takes time, practice, and a few stubbed toes.
I too learned C from K&R and really loved it. I did not have any other thing to do and there was no internet and no one telling me that C is bad. Nor was I aware of other cool languages. I did not even have a computer at that time, so I wrote most of the exercises on paper. Later my father bought me a 486DX66 and I typed in most of the programs in the book. Those were fun times!
This sounds like an interesting organizing principle, but in practice it is implemented quite rigidly, and produces, in my opinion, two serious weaknesses.
First of all, organizing by levels of detail, rather than treating individual topics in depth all at once, means that a fair amount of related information gets spread around the book.
This might be OK if the book had a comprehensive index, or consistently used forward-references to point the reader to a detailed discussion, but the book is generally weak in these sorts of cross-references.
For example, suppose you want to look up formatting strings for use with `printf()`. The index (I am looking at the Manning version) lists exactly one entry for `printf()`, which is on page 8 -- and the actual text there basically just mentions that `printf()` is a function that exists.
You could also look up "format", which gives one reference, to page 5, where the term is introduced but hardly even defined. The index does have more entries for `snprintf()` and `sprintf()`, if you happen to think to look for those. But after you play this game for a while, it becomes clear that the index takes a quite mechanical view of these functions, content to list locations where the function gets called out in the text, but without any great concern for where they are discussed as a cohesive collection of related functionality.
To me, this levels-of-experience approach makes sense for some things, like deferring discussion of threads and setjmp/longjmp till later chapters. But it doesn't make sense for everything, which is why I say that the book errs by sticking to the game plan too rigidly.
The second major shortcoming of this rigid levels-of-experience approach is that the book delays discussing pointers for a long time. They aren't really taken up until chapter 11; literally, the book doesn't show a basic "swap" function until page 170. This means that a lot of the code examples that come before chapter 11 do quite a dance to avoid using pointers.
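(For reference, the "swap" in question is the classic two-pointer function, something like:)

    /* the canonical early pointer example most C books reach for */
    void swap(int *a, int *b) {
        int tmp = *a;
        *a = *b;
        *b = tmp;
    }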
When, finally, pointers are discussed, I felt that the text was somewhat lackluster. There is nothing wrong with it, but if this chapter was a blog post by some random C programmer, and got posted to HN, I doubt it would garner a lot of upvotes by enthusiasts saying it was the best description of pointers they had ever read.
Thus it had an anti-climactic effect for me -- I expected that deferring the discussion of pointers for so long would have some kind of pedagogical payoff, but there wasn't anything to it that, to me, justified putting it off for so long.
I had a few other problems with the book, but nothing that, by themselves, would keep me from recommending it.
I might recommend the book to somebody who already knows C, and wanted a deep dive on certain topics. However I would steer a beginner away from it.
Somebody else on this thread mentioned another new entry to this field, "Effective C" from No Starch Press. I am in the middle of this one, and so far like it a bit more (though I am not yet a full-throated enthusiast for it). I would note that "Effective C" introduces a swap function on page 16. That would be wildly premature to the author of "Modern C", but in the end I think it will serve the beginning reader somewhat better.
I learned C from K&R. It was lent to me by an English teacher (!) from my high school and I can still recall the smell of it (stale coffee and cigarette smoke). C was my fourth language after BASIC, Pascal and 6502 Assembler.
I use C11 atomics pretty regularly. While lock-free data structures are tough to get right, there are other uses of atomics that are easy. For example, portable lock-free multithreaded statistics counters.
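A minimal sketch of such a counter (the names are illustrative):

    #include <stdatomic.h>

    /* one global counter, safely incremented from any number of threads */
    static atomic_ulong events_seen;

    void count_event(void) {
        /* relaxed ordering suffices: we only need the count itself, not
           any ordering with respect to other memory */
        atomic_fetch_add_explicit(&events_seen, 1, memory_order_relaxed);
    }

    unsigned long events_total(void) {
        return atomic_load_explicit(&events_seen, memory_order_relaxed);
    }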
Indeed, C11 and C18 don't bring a lot. I'm glad that they don't because backwards compatibility and roughly predictable development of the language is important to me.
The best book for me was the King book.
Clear. Thorough. And, most importantly, the chapter topics are in the right order.
And definitely do the projects to practice.
The best C book is the Kernighan and Ritchie book "The C Programming Language", hands down. It is also one of the best books on practical computer science available today. Other books can help provide a more modern take on how to adapt the language to 2020, but the original K&R book is still the best way to really learn the language.
I've been programming in C for twenty years now, and K&R has a special place in my heart; however, I do not think it is the best book for a beginner to learn C from.
If you're coming from COBOL and wanting to learn C (like the target audience at the time the book was written), then perhaps it is a good book for you. But if you're coming from something like Python or Java or JavaScript (or even no language at all), there are better options, such as K. N. King's C Programming: A Modern Approach.
K&R is a fantastic book in its own right and I certainly think once you feel more comfortable with C it is a superb book to read and more importantly complete as many of the exercises as you can.
It's just that I have seen too many people come from higher-level languages, or with no programming knowledge at all, and find K&R frustrating due to its assumption that the reader is already a programmer in some other (1970s) language with a fundamental understanding of certain programming concepts.
K&R is a worthy historical artifact on its own, documenting the then prevailing programming styles and problems of interest, and the overall cultural vibe of the programming community back then. It just takes you back to the creative and somewhat unorderly 1970s atmosphere in Bell labs, with its former hippies now wearing big glasses and colourful sweaters.
I think it’s a great book to read first; it is concise and correct and gives a very good idea of what C is about. However, I would immediately follow it up with the newer parts of C, going more into the modern definition of undefined behavior, practices to keep your code correct in larger applications, how to debug issues, etc.
Ideally C should be avoided as much as possible outside kernel code.
However, it has gotten so deep into IT infrastructure, thanks to the hegemony of UNIX/POSIX clones, that even if, starting today, no more greenfield software were written in C or its copy-paste-compatible languages, Objective-C and C++, it would take generations to clean it up, and it would never be 100% replaced, as proven by mainframe environments and their languages.
So for the use cases where C isn't going away no matter what, we should strive for newer generations to improve their code quality and not to repeat bad practices from the past.
Not going to buy another copy just to make someone happy on the Internet.
But still, most examples don't do proper error checking, it doesn't teach the use of bounds-checked strings and vectors, and if I remember correctly there are examples with gets().
I hear there are PDFs floating around on the internet, not that I would know anything about this of course ;)
My copy has no examples that use gets, although it is mentioned, and I would agree that any such mention without a disclaimer that the function is impossible to use safely is a defect. Error handling, however, is generally present (or left out for brevity and noted). The functions in the standard for dealing with bounds checks are a new addition and a pox on the language regardless, so they're not the best example of something new that the book should cover.
Those are both modern additions to the language, the latter of which I would say is a necessary part of any formal C education (I always mention "K&R with supplements" as the go-to way to learn C). Thus, I wouldn't call it "outdated" but maybe "incomplete"; all the information in the book is fairly up to date, but it is missing things that modern C programmers should know.
For best practices you're probably better served by looking at standards and guidelines specific to embedded systems, for example MISRA C. There are also the JPL C Coding Standards, which are based on MISRA and are freely available.
Because there is nothing inherently modern about this code; this is how C has been written for decades, if not from the very start (I think you would struggle to find patterns in this code that are not covered by K&R).
That being said, there are books that cover relatively new developments in the C language and its ecosystem, such as "21st Century C" by Ben Klemens (but even this book is something like 5-10 years old).
There isn't really anything specific to embedded; you can apply the same testing practices as on other systems, where you ensure you abstract out the functions in such a way that the logic can be tested.
The complexity with embedded starts when you want to incorporate testing with things like IO, which is more end-to-end testing than TDD.
Just skimmed it real quick. It actually seems pretty good and I would check it out again more deeply. https://floooh.github.io/2018/06/17/handles-vs-pointers.html (and the other articles there) has some good ideas about organizing plain C well, plus niceties from "modern" C too.
I'm not exactly sure why it's "modern" C, but it is an introduction to C via establishing a somewhat rigorous and complete fundamental understanding of it.
He makes a point of including all the newer features of C, including fixed-size integers, the thread API, restrict qualifiers, and atomics, and dedicates a very significant part of the book to them. On top of that he gives a lot of best-practice advice that you could call "modern", as it is often violated in 20+ year old C code (and often by people who learned C in the '80s, in my experience).
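A couple of those features in one fragment (illustrative, not from the book):

    #include <stddef.h>
    #include <stdint.h>

    /* fixed-width types say exactly what you get; restrict promises the
       compiler the two buffers don't alias, which helps vectorization */
    void saxpy(size_t n, int32_t a,
               const int32_t * restrict x, int32_t * restrict y) {
        for (size_t i = 0; i < n; i++)
            y[i] += a * x[i];
    }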
I love this book in PDF and I would love to buy the printed version. Unfortunately, the publisher, Manning, does not want to sell the print version on its own; they sell you a bundle containing the printed book and some digital versions: an "ebook", a "livebook", and whatnot. I asked them, and they refused to sell me the printed book only.
You can buy the printed book alone on Amazon or almost any other book seller. Manning's thinking is that the eBook costs them nothing to distribute so if you buy the print book, they might as well give you a bonus to make you even more happy by giving you the eBook. Source: I have worked with them.
They simply offer you a free eBook version with the purchase of every paperback book. This is true whether you buy the book directly from them or from a bookstore:
>"Free eBook With Every pBook! If you are an owner of a Manning pBook you can get a free eBook at any time easily from your account. If you prefer NOT to have a Manning account no problem, we won't be offended: we will send you a one-time download link after purchasing the pBook at manning.com. If you did not buy the pBook from manning.com, you can still get the free eBook in all available formats by setting up a Manning account, and registering your copy."[1]
It's also worth noting that you are supporting a small independent publisher that's DRM-free, as well as supporting independent authors.
They also regularly have sales, often with 50% discounts, often on holidays. I wouldn't be surprised if there was a sale on Monday.
EDIT: Besides, one of the electronic versions in the bundle is sold separately for $24. Assuming that the other electronic version has the same price, the print version alone is worth $12. I don't want to pay $48 for something that will be "thrown away".
Probably aboard the "hate on C" hype train. If you are curious, it could make sense to not let it get in your way, and continue investigating / learning. There's a lot to be gained from such an exercise.
FWIW if you're looking for practical projects; part III of https://craftinginterpreters.com/contents.html is in C, and if you're interested in graphics stuff SDL is really fun. It all runs in the web / browser too with Webassembly.
Who knows? Maybe I'm a C greybeard and the culture I worked in was one where you can laugh at yourself and your work without feeling like everything has to be so serious all the time.
Refusing to explain in-jokes, even when explicitly asked, is exclusionary and insular, and leads to low influx of new people, resulting in a slow death of a community. A culture, if it wants to survive, must share itself freely to new potential members.
Humor doesn't have to be obvious to be good; and a magician doesn't have to ever give away the secrets of their illusions, no matter how loudly their audience cries out.
But I'm terribly sorry if you didn't get it. It was just a simple play on words. It's certainly not insular or community-destroying; an accusation like that is simply hyperbolic. Perhaps, like most living things, growth only comes when there's an opportunity to be tested and stressed.
Integer underflow is undefined behaviour. The joke could be funny, it could be lacking in humour, or daemons might fly out of your nose (for added jocular expressiveness). Anything can happen with undefined behaviour.
In case someone is interested, here are the various C standard drafts:
* C89/C90: http://port70.net/~nsz/c/c89/c89-draft.html
* C99 (N1256): http://www.open-std.org/jtc1/sc22/wg14/www/docs/n1256.pdf
* C11 (N1570): http://www.open-std.org/jtc1/sc22/wg14/www/docs/n1570.pdf
* C17/C18 (N2176): https://web.archive.org/web/20181230041359if_/http://www.ope...
Also, here is a nice Stack Overflow answer with links to C and C++ standards documents:
* https://stackoverflow.com/a/83763/303363
This answer usually gets updated as new revisions of the standards appear.