I've touched Zig a few times over the last year or so, and I like it, but I am still itching for something else that is not as complex as Rust, but not too close to C. I am now playing with Scala Native[0] just to see how it works on the low-level stuff. Zig is amazing especially since you can use the compiler as an alternative C compiler too[1].
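(If you haven't tried the C-compiler angle: it really is a drop-in swap. A minimal sketch, assuming a hello.c in the current directory:

    # use Zig's bundled Clang as a drop-in C compiler
    zig cc -O2 -o hello hello.c

    # cross-compile the same file; no separate toolchain to install
    zig cc -target aarch64-linux-musl -o hello-arm64 hello.c

zig targets lists the available -target triples.)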
I like Lisp and APL, so I have been learning April[2], a subset of APL in Lisp. It is very cool, if you like Lisp and APL. The game in Zig in the article reminds me of the Dyalog APL simulation of a boat navigation app developed in Finland using APL[3].
> ...something else that is not as complex as Rust, but not too close to C.
Go might be what you want. It's boring, which means it just works, with healthy tooling and libraries. I call it the true successor to C, with a 1990s Bell Labs feel.
Funny you should say that. I barely touched Go, but I felt immediately at home with it from my C experience. Boring is good most of the time. I don't know about goroutines, etc., but the GC differentiates it from C, Zig, Rust, and Scala Native (AOT). I hear a lot of good things about Go, including salaries ;)
I had tried vlang last year in June. Has it changed much? It seems syntactically close to Rust and Zig. I tried building it with MSVC back then, and it failed (closed issue #5399). I'll have to try again, since they seem to have fixed it on March 27, or at least showed it compiling correctly in the same manner I had tried.
Nim is good, and I was on board back when it was Nimrod (liked that name better!). Here's where my supposedly polyglot objectivity falls prey to subjective syntax likes/dislikes: I am not a fan of Python's syntax, so Nim rubs me the same way. I really love APL/J and Lisp. If Haskell had more uptake in the pragmatic world, with a cohesive ecosystem and more libraries, I would probably be all in. Nim has the ability to compile with --gc:none, so maybe I should take a look again. I found a few libraries, but there didn't seem to be many in machine learning/optimization.
D seems to have a lot of strengths while being less complex than C++, and I have heard great things about it. It seemed to be a better C++ rather than a low-level C. Is that mostly a true statement? Is Bartosz Milewski still a member of the design team?
The way I'd describe it is, it can easily be as low-level as C - stack-allocated arrays, unions, pointer arithmetic etc, even inline assembler - but the object-oriented layer on top of that is more similar to C# or Java than to C++; hence why I thought about it when you mentioned Scala.
It's in a spot similar to Go, in that it doesn't have a VM, but is normally used with a garbage collector (the core language can do without it - the libraries are another matter, though). However, unlike Go, it doesn't adopt the "fewer features and more boilerplate is better" approach to PL design.
OTOH it does have templates (not just generics) every bit as powerful as C++ ones and then some - but also with fewer footguns, and overall easier and more pleasant to use.
You might also find it interesting that the D compiler has a mode that is specifically called "BetterC", which basically removes all features that are dependent on the D runtime library (GC etc): https://dlang.org/spec/betterc.html
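If you want to kick the tires, it's a single switch on the reference compiler (a minimal invocation; app.d is a placeholder file):

    dmd -betterC app.d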
Thanks for the informative reply. I'll have to try the BetterC mode. Scala Native allows you to do all the low-level stuff in the syntax of Scala, with Java and C libs available that logically work at that level. Obviously, JVM-dependent libs will not work when you're bit banging with Scala Native. I just picked up the "Systems Programming with Scala Native" book, which is promising for what I want to do. I just don't have any experience with it on a real systems project yet...
I played with Zig recently and loved it. The right tools at the right level with the right degree of utility.
It was very frustrating getting started, though, as just about anything Zig-related online you read/download/run is out of date with the latest version of the language. Small breaking changes go in that can easily stump a newbie like me, and there's not enough accumulated online experience with these errors to offer guidance.
IMHO I really hope the language stabilises soon. Otherwise it will be a frustrating experience.
I don't think that makes much sense to say until they actually encounter a scenario where violating semver would be possible, and even reasonable, and they actively choose not to.
Pre-1.0 semver is fairly meaningless, so taking it seriously really amounts to… doing anything at all. There's literally nothing to be backwards compatible with, or to announce incompatibility with. They've had no opportunity to show how seriously they take it.
Is this, like, actually true? I was under the impression semver is particularly useful during that stage because it provides (or is supposed to provide) guarantees around which things break and when. I know, for example, many foundational Rust libraries are technically pre-1.0, but because of the semver conventions, users are able to use them consistently because they have an understanding of which updates are important and/or likely to break their existing code, even when the long-term 1.0 guarantees don't necessarily exist.
TBH I had looked it up when writing the prior post to make sure I wasn't just running off my own naivety.
From the spec[0]:
> 4. Major version zero (0.y.z) is for initial development. Anything MAY change at any time. The public API SHOULD NOT be considered stable.
> 5. Version 1.0.0 defines the public API. The way in which the version number is incremented after this release is dependent on this public API and how it changes.
Which directly matches my expectations — the primary reason any project uses a version 0 (regardless of semver) is to specifically claim that they promise nothing — “no one should rely on this, and anyone who does is doing so at their own risk…”
Version 1 is I think near-universally agreed to mean “this is production-ready” — where production is vague but roughly implies “If it doesn’t do what it says on the tin, it’s a bug”.
The meaning of versions beyond 1 varies depending on the versioning scheme, but 0 & 1 are fairly universal.
Of course, qualify everything with the standard “it’s open source so I don’t actually owe you jack shit”
Any promise/expectation of stability in 0.x.y is outside the scope of semver; it rests on community convention. Bevy, for example, breaks their APIs every 0.x release, and that's perfectly fine & expected under semver.
In this scheme, 0.x.y is treated as compatible with 0.x.z, so developing pre-1.0 software is just as strict about breaking changes as post-1.0; only the numbers are different. For this very reason, many crates don't rush into 1.0 territory and are comfortable in the pre-1.0 zone.
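Concretely, that's how Cargo interprets a 0.x requirement (a sketch; Bevy is used here only as an example of a well-known pre-1.0 crate):

    [dependencies]
    # "0.9" means "^0.9": it may resolve to any 0.9.z,
    # but never to 0.10.0, which is treated as a breaking release.
    bevy = "0.9"

So in 0.x land the minor number plays the role that the major number plays after 1.0.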
Right; but my point is that “they take it seriously” is something you can only evaluate when they have the chance not to. Right now, they can do whatever they want (except mark it 1.0.0) and uphold all semver rules. They can talk about it, but there’s nothing to do anyways.
The real test is whether they bump it to v2 or withhold an important change because it would violate semver.
Yes, of course that is true, but Andy is pretty vocal about encouraging people to iterate quickly now precisely because there is a desire to honor semver later; at least it's not an afterthought.
I don't think they want to stabilize for quite a while -- I think they want it to be exactly the language Andy wants before it gets stabilized, with self-hosted backends and everything working.
ElixirConf speaker here. I think those are supposed to be two separate items (the talk is not directly related to the Burrito work - which is amazing too).
Thanks for the feedback. I keep meaning to finish it up, but things keep coming up (moved across the country, ElixirConf talk, major work deadlines, etc.). I now have my office set up to continue it, so the next entry should drop... soon. (but I have house guests for the next three weeks)...
I was intrigued so I went to hunt for the Burrito repo[1].
I thought it was some sort of Erlang native compiler written in Zig (which sounds like an incredible pain in the ass), but it's really "just" a cross-platform installer. Still useful!
Nearly every time I've seen Zig mentioned it's been in relation to game dev, but I still don't really understand why. I understand the advantages of a framework over an engine and totally agree with those, but I still don't know why specifically Zig, if anyone could fill me in.
Can't speak for others, but Zig scratches an itch for people like me who started to get fed up with where C++ is heading, but also understand that C doesn't necessarily cut it for higher level gameplay code (especially when working in a team). Zig looks like the perfect way out of this dilemma.
Also, most gamedev middleware is written in C++ or C, and Zig can build those out of the box and directly import C headers so it's trivial to build mixed Zig/C/C++/ObjC projects (and without the hassle of C/C++ build systems).
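For the curious, the import side is about this small (a minimal sketch; you still need to link libc, e.g. zig build-exe main.zig -lc):

    const c = @cImport({
        @cInclude("stdio.h");
    });

    pub fn main() void {
        // call straight into C without hand-written bindings
        _ = c.printf("hello from C, via Zig\n");
    }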
For me personally, if Rust's features were limited to the safety-focused features of the language, I could probably be alright. Although I do not think the concept of ownership is the right semantic basis for a programming paradigm. For me the real deal breakers are the following attributes: the package management system, the generics implementation, the focus on ML-styled pattern matching syntax and semantics, and the heavy focus on abstraction and building towers thereof. I am aware that maybe the usage of those things is optional, but in my second-hand experience the Rust community is absolutely devoted to the exclusive idiomatic usage of the language, and I wouldn't want to spend any time arguing about how I just want to program what I want.
tl;dr nearly all of Rust is geared towards trying to solve problems I am not interested in, but it isn’t really the ‘borrow checker’ area that I personally find most off putting.
On the off-chance that you're asking because you're involved in the package manager, here are a few things I'd really like to see as a crusty graybeard / software carpenter / package maintainer. Maybe some/all of these are already true for Zig's package manager.
1. Don't assume that packages come from GitHub. Or from any specific URL, for that matter. Or that they're managed in Git. Make it controllable with a config file, and ship a default config file. Let a project-specific config file override a site-wide one.
2. Make sure packages can come from multiple sources.
3. Make it possible to use packages from the local filesystem. Support relative paths.
4. Don't try to do network access unless I specifically ask for a fetch/download/update/etc. I want to be able to extract a few source tarballs somewhere, and then run a build that doesn't require me to be online.
5. Have a good story for working with non-Zig libraries. Like if I want to make a FUSE filesystem or something, it probably depends on having libfuse and libfuse-dev installed from my distro package manager. Zig's package manager doesn't need to interact with apt-get or whatever, but it would be great if it knew how to find local headers and libraries.
6. Include a way for me to generate a source-tarball that bundles the sources for all of my program's dependencies, so that I can archive it and reproduce a build later even if the upstream sources go away.
7. Include a license tracking/reporting mechanism, so that I know the licenses of all software I've pulled in as a dependency (whether directly or indirectly). I don't want a situation where I have some dependency that has another dependency on another dependency, and 3 levels down there's something that's got a license I can't ship. The Yocto project did this in a very smart way, and it's a good model to copy.
8. Have a way to express build-time dependencies and run-time dependencies. Those aren't necessarily the same thing, and might not even target the same platform.
I am certain the following is going to be very much a niche opinion, but I’ll give it anyway. There are two primary reasons I dislike the package management (and yes, this applies to Rust, Zig, Python, Go, basically everything JavaScript, etc) system both in theory and practice.
First, I am of the opinion that any centralized package management inevitably leads to heavy reliance on those packages. The comments of 'there is a crate for that' or 'just look on NPM' or 'it's on PIP' are often the first answers anyone gives to people looking to write code to do a certain thing. That leads to towers of imports and dependencies. I am fundamentally opposed to not writing the simplest and most specific code for any given problem. Package management is a type of institutional environment directly opposed to that fundamental opinion. I am not interested in generic, allegedly re-usable solutions; I want the most simplistic/direct solution/implementation I can code. Now sometimes that solution is a library, and sometimes that library is complex, but that is a decision made purposefully and with the knowledge that the dependency MUST be managed as though the code were written by you. I think the package management paradigm used/targeted by most modern languages assumes that using a dependency means I am outsourcing a portion of my code to others and that I am willing and able to change my code according to what changes with the dependency. I tend to only use direct source dependencies that I can make completely local, so that code just becomes part of my program, and yes, that means there are no updates to that code I do not make myself. An update of the dependency itself will only be included after reviewing the update as though it were a new dependency, i.e. purposefully and by me, not whenever the vendor wants and not whatever the vendor chooses.
I am aware that nothing requires you to use package management in the ways I find negative, but I truly believe that the environment and common usage patterns push development practice towards the type of coding practices I dislike.
Second, for work purposes, I can not use any code I can not warranty as fit for purpose and conforming to the terms of a given contract. This means that any code which has a 'no warranty' statement in the license (which is almost all library code in every language, regardless of distribution mechanism) I can only use once I have reviewed enough of it, and am confident enough in it, that I can put specific functional guarantees in place regarding specific performance of contract terms. Package management systems, and their accompanying practices, are not particularly appealing when the strategy is to continually push updates and to stack reliance on large dependency trees. I am often better off designing and programming something myself than attempting to review, to an acceptable degree, a tangled mess of dependencies.
I find that, while often an explicit non-goal for most programming languages, I would best be served by explicitly bare source code libraries, which are stored and compiled locally, and are incorporated into a project purposefully and in the same manner as files authored by me. Again, I am aware my requirements are not industry standard and my opinions are not mainstream, but there ya go.
With you on package management issues. The thing is, modern devs reach for a package for the simplest things, just to feature-load the code. Often you don't need all the functionality of the dependency. You'll probably use 0.1% of it. But JavaScript and Rust repos' dependency trees are extremely deep on average.
This in fact _hurts_ maintainability, as you need to manage this heavy, recursive dependency tree. Ultimately it's not hackable. You can't just change that function to print a warning; rather, you have to edit an entire library and work through its taxonomy.
I'd note that Python is much better than the others you list at this. Idiomatic Python tends to be "batteries included" and shouldn't use too many dependencies other than those that are crucial to the project (e.g. NumPy). It's probably also because Python's package management story isn't as polished as others.
In the elixir ecosystem, we have private package management (plus software BOM support), which probably would be useful to you as it would let you share code internally for your own team. As long as zig's PM system gives you the flexibility to repoint the package manager to your own domain, you'd probably find the availability of some package manager better than bare code management, if for no other reason than it will make it easier to have internal consistency with component versions and dependencies.
That's an argument against having any package management at all. I respect that, but I don't really think it's viable in open source communities -- even Perl had CPAN decades ago.
You want the rural living version of programming languages. Package management is a dense urban core.
The full write-up shows that the exploit is based on an integer underflow that would be easily caught by Zig or any other programming language with sane runtime checks.
Ironically enough, this is the one class of bugs that Rust would not have prevented out of the box, since release builds wrap integers by default (debug builds panic on overflow) and you need to manually opt into runtime checks in release mode.
The main thing that the underflow is getting them is a pointer manipulation which lets them do a stack smash in the ZIP-parsing code. Needless to say, safe Rust does not allow you to screw up pointer arithmetic and perform a stack smash, even if you can't be bothered to use checked arithmetic and so you underflow.
The equivalent underflow mistake in Rust probably panics. However, the RCON trick at the heart of Steam which is powering this lets you run any command as the victim if they accept your "invite", so crashing their game is almost certainly already an option even without the underflow.
That depends on what you mean by "just like." Zig does not make sound static guarantees about use-after-free, because those guarantees have other costs that Zig would rather not pay. This means that detecting the problem might require some dynamic analysis. But that doesn't mean that it's "just like" such analyses in C, because Zig does make guarantees that are fundamentally different from C's. In particular, unless you resort to syntactically delineated constructs -- analogous to unsafe in Rust -- Zig does not have pointer arithmetic or arrays of unknown size. Unlike in C, in (the safe subset of) Zig every pointer is known and every object is known. This means that it's easier to build more precise and more effective mechanisms that detect such errors in Zig than in C.
In other words, if you mean that use-after-free is handled in Zig in a manner that's very different from how it's handled in either Java or Rust and more similar to the general approach of how it's handled by various tools for C, then that's correct. If you mean that that general approach is just as (in)effective in Zig as it is in C, then that is incorrect. Because Zig is fundamentally more precise than C, analyses of Zig, even if they are conceptually similar to those done for C, can be more precise and effective.
It's hard to figure out exactly what you're saying, but I think you're trying to imply that bounds checks will allow for the development of some kind of novel use-after-free mitigation. Without a specific proposal I don't know why that would be the case. Zig is not meaningfully different from C++ when it comes to UAF.
> Zig is not meaningfully different from C++ when it comes to UAF.
Without pointer arithmetic, unsafe casts (Zig is much easier to write without such casts than C, at least), unsafe unions, and unknown buffer sizes, the set of pointers in a Zig program can be well-defined, as pointers can come into being only in very specific ways (they have a simple provenance). Because the set of pointers is well defined, it can be precisely tracked and analysed. This means that 1. pointers can be traced even with arena allocators or perhaps even other kinds of pools (provided they cooperate with the tool) and 2. dangling pointers can be detected even without being dereferenced. This is simply not something that analysers for C or C++ can do (at least not nearly as easily).
The way to think about it is that in Zig, when an object (including an allocator) is deallocated, you can invalidate the full set of pointers pointing to it, and that set is the only way of generating more pointers into it. That's not the case in C or C++. For one, pointers aren't well defined (unions); for another, a valid pointer could be used to create an invalid one.
What you've described is a tracing garbage collector (to be pedantic, one where weak pointers are the norm, but the infrastructure and algorithm are essentially the same). In fact, I absolutely agree with you that Zig should adopt tracing garbage collection (a state-of-the-art generational concurrent one with bump allocation in the nursery), and doing so would eliminate most of my complaints about it. Unfortunately, it's unlikely that Zig will ever do this, given everything that I've seen about its design goals of being low-level with no runtime.
You've misunderstood me. My point was that Zig's properties allow a kind of precise analysis that is not possible (or very hard) in C or C++, and so it is not true that it is "not meaningfully different re UAF". As a simple concrete example, I hinted at a hypothetical algorithm similar to that of a tracing GC, that could be used not to collect garbage (I specifically mentioned the use of arenas), but to promptly detect all dangling pointers during testing, including those that are not dereferenced, and those pointing at arenas. That alone is already more effective than tooling you could make for C or C++. But those guarantees allow for other kinds of analysis, perhaps static analysis, that will also be more effective than what's affordable in C or C++.
So while the general approach -- of dynamic and static analysis, as opposed to those of Java or Rust -- is in the same broad category as tools for C or C++, their effectiveness, due to Zig's properties, is significantly increased, and their entire cost/benefit profile is different. I.e. it could find more bugs for a lower price, so much so that the approach, while underwhelming when applied to C or C++, could well compare favourably with others when applied to Zig.
(A GC -- whether tracing or ref-counting -- might well be adopted for Zig's comptime, but that's a whole other matter)
> As a simple concrete example, I hinted at a hypothetical algorithm similar to that of a tracing GC, that could be used not to collect garbage (I specifically mentioned the use of arenas), but to promptly detect all dangling pointers during testing, including those that are not dereferenced, and those pointing at arenas.
I'm highly skeptical that there won't be too many false positives with such a tool. Systems programmers routinely create temporary dangling pointers and let the values go dead without dereferencing those pointers. It happens most every time you call free, in fact.
I also see no reason why you couldn't create such a thing for C and C++. In fact, it exists: the Boehm GC can operate in such a checking mode. The fact that everyone uses ASan instead is a strong indicator that ASan is in fact a better approach.
Finally, precise tracing GC stack/register maps are a lot of work, especially in LLVM which has poor support for them. (I heavily looked into this for Rust.) Without a serious effort (and it is a lot of work) to generate them for Zig I have to consider it vaporware.
> I'm highly skeptical that there won't be too many false positives with such a tool.
I'm not advocating for a specific algorithm. I merely used a hypothetical one to demonstrate that there is, indeed, a fundamental difference between Zig and C/C++, even when it comes to UAF.
> I also see no reason why you couldn't create such a thing for C and C++.
Because in C and C++ there is no precise set of pointers, and bad pointers can be created from good ones.
> In fact, it exists: the Boehm GC can operate in such a checking mode.
Which would not work effectively for the reasons I mentioned.
> The fact that everyone uses ASan instead is a strong indicator that ASan is in fact a better approach. ... I have to consider it vaporware.
So from your asserted premise that Zig is no different from C in this regard you conclude that what doesn't work well for C must not work well for Zig and use that conclusion as further evidence of your premise? Your premise is exactly what I contest. Zig is fundamentally different because pointers are known and you cannot manufacture bad pointers from good ones. To demonstrate that difference I sketched a hypothetical algorithm that could work well in Zig but not in C. To point out that my hypothetical example is "vapourware" completely misses the point.
You could try to argue that the fact that pointers in Zig are well-defined and can be created only in carefully controlled ways -- and so are fundamentally different from pointers in C -- cannot be effectively exploited, but I don't think that's an easy argument to make.
--------
(> especially in LLVM
Even if we were to talk about specific tools, there is absolutely no need to base them on LLVM. Zig is specifically designed so that backends are easy to write, and an analysis tool need not use the same backend used by the compiler for production code; in fact, switching backends is meant to be commonplace in Zig development, and it is expected that different ones will be used for development and production)
Boehm GC's checking mode works fine in C and C++. The reason why nobody uses it has nothing to do with the fact that it's conservative and everything to do with the fact that Address Sanitizer is just plain better at solving programmers' needs. ASan is about as good as you can do as far as developer tools that find use-after-free problems in memory-unsafe languages like C/C++/Zig go. It would not be a better tool if it were precise at identifying pointers, because of the inevitable false positives that come with trying to scan the whole object graph for dangling pointers in memory-unsafe languages. ASan got so popular precisely because it tries very hard to avoid false positives.
Zig does seem to have some properties that make precise pointer identification possible. But the right conclusion to draw from this is that Zig should use a tracing garbage collector. It's well-known how to use the pointer provenance properties you're talking about to achieve UAF protection: just implement a GC! Trying to get by with things like quarantining memory forever is not going to work in production, and the reasons why Zig programs will supposedly not be vulnerable to UAF problems are unconvincing. It is going to want a GC eventually.
> It would not be a better tool if it were precise at identifying pointers, because of the inevitable false positives that come with trying to scan the whole object graph for dangling pointers in memory-unsafe languages.
I've repeated several times that I was merely demonstrating why Zig and C are fundamentally different when it comes to pointers. You're trying to poke holes in a straw man, and worse, you're doing that while drawing on the very premise which I'm refuting, i.e. that Zig programs and C programs are essentially the same. You could just as likely have assumed that Zig programs behave more like Rust programs (or Go programs), and at the time of deallocation there is just one pointer to the object, and voila, zero false positives.
I am not saying that a Zig program should behave like a Rust/Go/Java program, just that your arguments are begging the question. You start with the assumption that Zig and C are essentially the same and then use the resulting conclusions to shore up that very assumption. But Zig programs are about as different from C as they are from Rust.
You insist that if you're not Java or Rust then you must be C. Zig is so revolutionary precisely because it works like none of those. Now, I don't know if Zig's revolutionary design is revolutionarily good. It's far too early to tell. But basing your criticism on a false premise completely misses the mark.
Of course, it's okay to be skeptical of a new idea, just as I'm skeptical that a type-level shrine for accidental complexity is the way to go.
> But the right conclusion to draw from this is that Zig should use a tracing garbage collector.
I disagree, but that's a whole other discussion. But since you seem to claim that such a thing could exist and effectively work for Zig and not for C, it seems you accept both of my points: 1. that pointers in both languages are fundamentally different, and 2. that this can be exploited for analyses that are completely different in their effectiveness than those feasible for C or C++.
Except that what the user ends up receiving is not a debug build, but a full release, and we can safely assume that at Valve there was no test case where someone would craft a malicious invite with rcon parameters.
This is why Zig maintains these checks in ReleaseSafe mode and gives you wrapping operators (+%=, etc) when that's what you want.
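A minimal sketch of the distinction (syntax as in recent Zig versions; details have shifted over time):

    const std = @import("std");

    test "underflow is checked unless you opt into wrapping" {
        var len: u32 = 0;
        // len -= 1;  // Debug/ReleaseSafe: panics with "integer overflow"
        len -%= 1;    // explicit wrapping subtraction
        try std.testing.expectEqual(@as(u32, 0xFFFF_FFFF), len);
    }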
If you want these checks in "ReleaseSafe" mode of your Rust, you can choose to make such a build profile:
[profile.ReleaseSafe]
inherits = "release"   # custom profiles must inherit from a built-in one
overflow-checks = true
And of course Rust has both Wrapping types and wrapping versions of each integer operation if you want those, although I think we can be confident that the sort of person who is shipping release builds without overflow checking isn't thinking far enough ahead to have decided what should actually happen when their variables under/overflow, so giving them the option is redundant.
Given that the problem here is they're Wrangling Untrusted File Formats and, unsurprisingly, they failed to do so Safely, they likely should have used WUFFS rather than Rust, or Zig, or any general purpose language.
I think there are a number of disenfranchised "close to the metal" programmers. C has stagnated and won't likely move much, C++(++++...) has reached bonkers levels of complexity, D is... D who, and Rust is highly opinionated, which is a double-edged sword.
D has garbage collection (it can be avoided, but perhaps not everyone is aware of that, or maybe there isn't enough benefit left then), which I think some game devs want to avoid.
Rust is complicated by its very strict memory safety. While that's nice, I don't think memory safety is game developers' #1 priority. A crash is annoying but acceptable.
Zig also doesn't assume a global allocator anywhere (malloc/free), and I think game developers often like to use custom allocators for performance reasons.
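In practice that means the allocator is an explicit parameter rather than ambient state. A minimal sketch (the Allocator interface has changed signatures across Zig releases):

    const std = @import("std");

    // The caller picks the strategy: arena, fixed buffer, GPA...
    fn makeScratch(allocator: std.mem.Allocator, n: usize) ![]u8 {
        return allocator.alloc(u8, n);
    }

    pub fn main() !void {
        var arena = std.heap.ArenaAllocator.init(std.heap.page_allocator);
        defer arena.deinit(); // free everything at once - a common per-frame pattern
        const scratch = try makeScratch(arena.allocator(), 4096);
        _ = scratch;
    }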
Nitpick: With memory (un)safety, a crash is the optimal outcome when something goes wrong. The real problem is when it doesn't crash and the program continues on thinking everything is fine...
I'm quite amazed how resilient games are to this. Like how SMB1 for the NES is capable of things like the Minus World, where you're more or less playing levels that don't exist because the game doesn't expect you to be reading memory past a certain point - but as long as the values are valid, it can still use them to some degree.
There's more extreme situations like Super Mario Land 2 for the Game Boy, where it's possible for the game to present its own memory on the screen, and if you understand how it works you can do things like ask the game to execute the ending sequence [0]. At its most extreme, a series of TASes was presented in 2017 wherein multiple different hardware systems were linked together via RCEs in different games to make a VOIP streaming setup from a bunch of consoles that don't actually have internet connections [1].
I'm sure there's all kinds of reasons this kind of UB is undesirable, and could be hazardous in some environments, but people have also done some really cool stuff with it!
The resilience is because (1) those titles ran on systems with no MMUs; (2) there was no heap and no malloc implementation, therefore no way to get use-after-free. None of these points apply to modern systems, including consoles.
>"C++(++++...) has reached bonkers level of complexity"
It has and it has not. It can offer a level of complexity matching that of the programmer. One does not have to start coding concepts as the introductory course to programming. Take some numbers, stuff them into an array, sort, and print - a simple piece of cake anyone can grasp.
Would you mind mentioning any noticeable improvements in those versions? Because C seems to have remained exactly the same for decades now.
I only know of the _Generic thingy, which is hardly ever used, and maybe variable-length arrays?
C only adding a few new features once every decade or so is actually a feature, not a problem (IMHO of course). C99 was the last big release, while C11 and C17 were mostly minor course corrections and spec cleanup (VLAs have been degraded to 'optional' for instance, which makes a lot of sense, because they shouldn't have gone into the standard in the first place). So far, C23 seems like a bigger release again, though.
With no ill intent, could you please tell me why would anyone start a new program in C? I am seriously interested in that.
It of course has to remain with us due to the insane amount of code already written in it and some tooling (e.g. for verification, certain subsets of C), but it is not any closer to the metal than other systems programming languages, the "standard lib" is full of foot guns, it has a very weak type system, and it has no way of enforcing basically anything. Also, due to its lack of expressivity it relies on text-based macros (and those I hate with a passion, as there is hardly anything worse than text-replacing source code with no knowledge of syntactic elements).
Languages are taken way too seriously, and there are way too many bad languages out there that try to dictate the shape of your solutions in a certain way. A good language shouldn't do that in my opinion, except for boring and "solved" problems. The problem with languages is (apart from LISP maybe) that you can't abstract over their syntax - so while languages should give you useful building blocks to implement a solution, they should stay out of your way as much as possible.
To me, C is the easiest way to interface with the platforms that I develop for. It's also totally sufficient to implement the ideas that I'm working on - figuring out the data layout, the flow of code and data... It offers a concise syntax for the frequent operations - load / store, arithmetic, dereferences, and function calls. Most languages are already disqualified by not offering or discouraging nesting of data types a.k.a value types, making it way too complicated to simply copy data "anonymously", using memcpy() or similar.
Think about that - the language that people can't stop bitching about because it doesn't offer "generics" is actually the one that allows you to read and write data generically without a tremendous amount of complexity and/or slowness.
That's why my personal route is to learn ways to not need the type based polymorphism offered by more complicated languages, and to get along with just straight code that can push _any_ data payload (type) to the next endpoint - without requiring the multiplication of junk in the type system and/or in the compiled binary.
Macros aren't used that much but they would be missed a lot if they weren't there to help in some cases where we need to hack an abstraction over (lexical) syntax instead of data. I said above that you can't abstract over languages' syntax, and C macros don't really do that... They're a hack, and sometimes tremendously useful.
...and "modern C" is often misunderstood by C++ people who only know the fork of C known as the "common C/C++ subset" (which is basically an outdated, non-standard dialect of C that's stuck in the mid-90s):
I've written a few CLI utilities in C over the past couple of years. Here's why I keep choosing it for tools that I want to distribute.
C is the lowest common denominator. It has good compilers that just work, with no drama or surprises. It's portable across CPUs and operating systems. It will continue to work for all Unix-like platforms long after trendier alternatives have turned to dust.
It has good code formatters and analyzers. It's an easy language to completely understand, and it's easy to read if I don't try to get too clever with macros or type tricks.
I don't have to remember whether ${language_feature} is Considered Harmful yet, like C++ or Rust where the current best practices constantly become Bad Code in favor of the next trendy feature.
The C ABI is the standard for libraries on Unix-likes, and I usually want to use a couple of special-purpose libraries. Calling libraries requires no wrappers, FFIs, or anything else.
Yes, C has limitations. Lots of them. But it also has an enduring momentum that's not going away any time soon.
C is a very simple language. I think Zig will get close to replacing it, but Zig also happens to be more complex, so I'm sure some people who really prefer simplicity will stick with C.
I wrote firmware recently for a low-power microcontroller. No memory allocations in the code. The language is simple and easy to read unless one is purposely trying to be fancy. It is very performant - both the end result and the compile steps. And there are megatons of C libs for microcontrollers. I would not dream of using anything else for this particular task.
I am a practical man. The last thing I worry about is security issues with my code on that specific device. Or should I say non-issues, since it works like a charm and I just can't see how C "insecurity" can bring any problems to my particular situation.
Sure, let the WG invest their efforts where needed, but for my own case I do not care.
I know nothing about Zig, but looking up the main author (Andrew Kelley) they seem to be interested in game development, judging by some blog posts on their website (https://andrewkelley.me/) so probably at least influenced by that a bit.
I think he started it in order to do real-time audio processing or something like it. There's a good interview with him on the CoRecursive podcast where he mentions it (iirc). It's a great interview.
It's basically the only language that is modern, simple AND has manual memory management (Rust is complicated, Nim has managed memory, ditto with Crystal and Go, D is all over the place and mostly dead, etc...). Some game devs want full control but something more pleasant than C/C++. Essentially the same framework/engine argument applied to languages.
It's a low-level, non-garbage-collected language, which is what you need for game development. There are currently very few alternatives in this space. Simple as that.
I could be wrong, but there was an individual very early on in the development of Zig who focused on an engine and a tooling system entirely in Zig, and has been pretty consistent since. It kind of sparked interest in the game-dev aspect of it.
How does that work, exactly? Is Zig statically linking the libs? Does it have its own package manager with build routines for common libraries across platforms? How does it build SDL or libpng, for example?
The Zig compiler is also a C, C++ and Objective-C compiler (basically by linking Clang into the Zig compiler executable), and by extension the Zig build system (which is also integrated into the compiler executable) can also be used to build C/C++ projects. AFAIK a package manager solution is also planned, but I don't know the details there.
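To give a flavor of mixing C into a Zig build - a sketch against a recent-ish Zig; the build API has had breaking changes between versions, so the exact calls may differ:

    // build.zig
    const std = @import("std");

    pub fn build(b: *std.Build) void {
        const exe = b.addExecutable(.{
            .name = "demo",
            .root_source_file = b.path("src/main.zig"),
            .target = b.standardTargetOptions(.{}),
            .optimize = b.standardOptimizeOption(.{}),
        });
        // compile a C file with the bundled Clang and link libc
        exe.addCSourceFile(.{ .file = b.path("src/util.c"), .flags = &.{"-std=c99"} });
        exe.linkLibC();
        b.installArtifact(exe);
    }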
I thought most game devs were into the paradigm of: check external libraries straight into your source control, update them never, and generally treat them on par with your own code. What is it that a package manager does that's particularly useful for game dev? (I don't do game dev, so maybe I'm missing something.)
The open source world will never converge to a single language, so the question is ill posed. Rust seems to have succeeded at becoming a mainstream language, Zig has yet to prove if it will ever be able to get there.
It's somewhat annoying that every Zig or Go conversation becomes a Rust conversation. Outside of being somewhat niche languages, they don't really compare all that well.
They're designed to tackle many of the same problems, but the languages themselves are different enough to appeal to different kinds of developers. So I don't think it's really the same niche in terms of audience.
At least for heap allocations that's taken care of by the 'General Purpose Allocator'. AFAIK it's still possible to return a dangling reference to stack memory, though, but this should be fixable without going full Rust; even C compilers have a warning for this nowadays.
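A minimal sketch of the kind of bug it catches (API details vary by Zig version):

    const std = @import("std");

    pub fn main() !void {
        var gpa = std.heap.GeneralPurposeAllocator(.{}){};
        defer _ = gpa.deinit(); // also reports leaks at exit
        const allocator = gpa.allocator();

        const p = try allocator.create(u32);
        p.* = 7;
        allocator.destroy(p);
        // allocator.destroy(p); // double free: with safety on, the GPA
        // panics here instead of silently corrupting the heap
    }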
memcheck is Valgrind (it's one of the main modules).
I use sanitizers, but they're dynamic as well. There's also additional work to make them work with custom allocators, but that's not too surprising. I am excited for the day I can finally use the ARMv8.3 (or is it v9?) feature that adds essentially HW acceleration so the overhead becomes ~1.2x instead of 5x.
So what if it is proprietary? There are other alternatives if you don't want to pay for tooling.
As for the rest, I really don't get the point, given the plethora of such tooling for C and C++, both as free beer and commercial for the last 25 years.
Yet we have basic vulnerabilities in C and C++ code ALL THE TIME. Empirically, even if perfect tooling exists, you still have to go out of your way to use it.
The right place to enforce these things is in the language and compiler (which I think you agree with). Make it impossible to make mistakes. Which is also why I'm not that excited about Zig. But the point is, the tooling story for C isn't a land of puppies and roses, even in 2021.
Zig's "general purpose allocator" achieves use-after-free protection by quarantining memory pages forever: i.e. it never reuses heap addresses. This is unsuitable for production because keeping even a single 16 byte allocation alive on a 4kB page will leak the whole page.
If it were possible to solve use-after-free this way, Microsoft/Apple/Google would have done it long ago for C++.
If it's designed to be a development tool, then Address Sanitizer for C/C++/Rust does everything that Zig's general purpose allocator does and more.
If you follow the strategy behind Zig's "general purpose allocator" to its logical conclusion, you get Google's HWASAN for C++ [1]. It is much more advanced than an allocator that simply quarantines memory forever and uses probabilistic checking to mitigate the problems of quarantining, trading soundness for performance.
Correct me if I'm wrong, but the code is there and licensed under MIT (Expat https://directory.fsf.org/wiki/License:Expat), which is GPL-compatible and "free". What's missing for it to be open source?
Edit: I might misunderstand your comment, ignore me if so.
[0] https://github.com/scala-native/scala-native
[1] https://andrewkelley.me/post/zig-cc-powerful-drop-in-replace...
[2] https://github.com/phantomics/april
[3] https://www.dyalog.com/case-studies/simulation.htm