
Yeah, that's a bit of a turnoff. The rest of the article (until the paywall) was alright.

Zig's use of comptime is fairly revolutionary in a C-like language. Compared to the shenanigans required by C++, the entire language* is (should be) available at comptime. Compared to the backtick/quote/macro magic available in Lisp... the awesomeness isn't that it's comptime, the awesomeness is that it's type safe.

But, I'd say it isn't even the use of comptime that's particularly noteworthy, it's the lack of pretty much every feature that isn't comptime. This wouldn't be a good idea without comptime being as robust and featureful as it is.

* nope, see puffoflogic's reply




> Zig's use of comptime is fairly revolutionary in a C-like language.

The D programming language popularized it in 2006 or 2007. Here it is in action:

    int square(int x) { return x * x; }

    int[square(3)] a; // declare array `a` of 9 ints

The embedded C compiler in D, ImportC, can also do CTFE:

    int square(int x) { return x * x; }

    _Static_assert(square(3) == 9, "Error, Will Robinson!");

You'll see a lot of that in the test suite for ImportC, because the test suite runs a lot faster if it doesn't need to link and load.


Here's a more complex example. Suppose we want to initialize an array of squares:

    int[4] squares1 = [0, 1, 4, 9]; // doing it by hand

    int[4] squares2 = () {
      int[4] a;
      foreach (i; 0 .. 4)
          a[i] = square(i);
      return a;
     } ();  // doing it with CTFE

squares2 declares a lambda, which returns the initialized array, and then executes the lambda.


D sure looks like the language everybody writing C++ actually wanted, if only they knew they wanted it.

I actually include myself in that camp, so thanks for taking the stage :D


The reason D didn't really gain traction over the years was that it required a garbage collector. Recently it's been going the other direction via betterC (a mode where you ditch the GC), but many parts of the standard library still require the GC to run properly.


Very, very little of the library requires a GC.

You can ensure your D code doesn't use the GC by adding the `@nogc` attribute.

But really, all this concern about the GC is misplaced. It's just another tool available. You can use it in D, or use RAII, or your own allocation system. It's your choice as a D programmer.


C# hasn't had the same issues:

https://www.wildernesslabs.co/

https://nikolayk.medium.com/getting-started-with-unity-dots-...

Had the Remedy experiment worked out, maybe DOTS would be using D instead of having its own compiler infrastructure for HPC#.

Nor has Java:

https://www.ptc.com/en/products/developer-tools/perc

https://www.aicas.com/wp/products-services/jamaicavm/

The main reasons are the lack of a big-name company that would push D no matter what, and a lack of focus on where D should aim.


D is a general purpose programming language, it does not aim at a niche.


It's called a killer feature, for market adoption.


It is a shame that the software industry still has not figured out how to reliably perform automatic memory management, i.e. GC. It's unlike the auto industry, where the manual transmission is going the way of the dinosaur compared to the now-default automatic transmission.

I am giving this analogy since Walter has a degree in mechanical engineering and correctly made the GC the default in D, but not mandatory; somehow, though, that choice is not popular in the software industry.


Sadly it took too much time, and lost momentum. The C++ version:

    int squares1[4] = {0, 1, 4, 9}; // doing it by hand

    std::array squares2 = [] {
      std::array<int, 4> a;
      for (int i = 0; i < 4; i++)
          a[i] = i * i;
      return a;
     } ();  // doing it with CTFE

Is it as pretty? Not quite, but it does the job.


Ok, try this one:

    string cat(string s) {
        return "int " ~ s ~ ";";
    }

    int test() {
        mixin(cat("i"));
        return i + 3;
    }


For that one D gets a point; even though Circle C++ might be able to do it, Circle isn't ISO C++.

So what if D wins at code golf against C++?

What matters is being good enough for most compile-time use cases while enjoying the large ecosystem and IDE tooling; perfect is the enemy of good.


The ability to use CTFE to generate strings that can then be fed back into the compiler is a heavily used feature, not just a golf point. It enables the creation of mini-DSLs.


You tend to praise D features, while ignoring that a language on its own doesn't make an ecosystem.

Indeed, C# and C++ don't give me all the nice features from D; in a language comparison table, D wins on feature listings.

Yet they give me their library ecosystem, graphical IDEs, and first-day availability in any major company's SDK.

I will take that productivity win over the few features that the D language happens to have.

Why is Andrei Alexandrescu helping to improve C++ at NVIDIA instead of using D, when the language is so great?

Ecosystem, that is why.


The thing that makes Zig comptime radically different is that types are comptime values, so you can make functions that take types as arguments and return new types as results - it's like C++ templates on steroids, yet conceptually simpler.
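For example, a minimal sketch (the `Pair` name is made up, not from Zig's standard library):

    // A function evaluated at compile time that takes a type
    // and returns a brand-new struct type.
    fn Pair(comptime T: type) type {
        return struct { first: T, second: T };
    }

    const p = Pair(i32){ .first = 1, .second = 2 };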


Well, D does, too:

    template MakePtrType(T) { alias MakePtrType = T*; }
    MakePtrType!int p;

    pragma(msg, "p is type ", typeof(p));

Compiling with `dmd -c test2.d` prints:

    p is type int*

or perhaps I am misunderstanding you?


You aren't returning the new type there; it looks like you just inline that code. Zig lets you create and return new types that way, while your example looks like regular template substitution.

Edit: So the main difference seems to be that types are first-class citizens in Zig: you can write a normal comptime function that returns a type, then either use it to do more metaprogramming or use it anywhere you would declare a type for a function or variable. I am not aware of any other statically typed languages that let you do this with normal language syntax.

C++ does it with templates, but Zig lets you do what C++ templates can using plain if statements and loops, as sketched below.
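For instance (a hypothetical sketch; `IntFitting` is a made-up name):

    // Ordinary comptime control flow in place of template
    // specialization: returns the smallest unsigned integer
    // type that can hold `max`.
    fn IntFitting(comptime max: u64) type {
        if (max <= 255) return u8;
        if (max <= 65535) return u16;
        return u32;
    }

    var counter: IntFitting(1000) = 0; // counter is a u16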


It is returning a new type.

Please show me an example where it differs. D's metaprogramming is very good at constructing types. A member of the D community tried very hard to implement type functions, but it turned out to be redundant with D's existing capability.


Just use the Circle compiler and you have the full C++ language at compile time, available today.


To be fair, now we're well out of the realm of CTFE. (Though that edges into the macros debate.)


It reminds me of Idris, where the types can actually run programs (at compile time) as long as they can be proved to terminate, which is more often than I thought.


If anything it is only a simpler syntax for C++ template metaprogramming, with no need for knowledge of SFINAE and tag-dispatching tricks, most of which are no longer required anyway with the improvements made across C++17, 20, and 23.


Zig evaluates compile-time expressions using the semantics of the target; does D do that? (I genuinely don't know; not trying to "gotcha".)

    const std = @import("std");

    export fn square(x: c_long) c_longlong {
        comptime std.debug.assert(@sizeOf(c_longlong) > @sizeOf(c_long));
        return @as(c_longlong, x) * x;
    }

If I run `zig build-obj` on that on my i9 Mac, compilation fails with an assertion error. If I run `zig build-obj -target arm-freestanding-gnu` instead, it compiles, since sizeof(long) is 4 and sizeof(long long) is 8.


> does D do that?

Yes:

    c_longlong square(c_long x) {
        static assert(c_longlong.sizeof > c_long.sizeof);
        return cast(c_longlong)x * x;
    }


"Zig's use of comptime is fairly revolutionary in a c-like language."

I might phrase that as "statically-typed language". Most (if not all) of the dynamically-typed languages are technically programs that run at the only "compile time" there is and can do arbitrarily whacky things as a result. It is generally a good idea not to think of them that way. I've removed quite a few top-level Perl statements in my time and enforced rules that a module really should be nothing but declarations of constants and subroutines unless there's an absolute emergency. But dynamically typed languages will generally let you read some files off a disk, query a database schema, and hit an HTTP API to build a "module" if you want to.

And in this case the entire language really is available. The limitations tend to come with the increasingly fragile order of operations the modules impose on each other, rather than technical capability.

I don't say this to slag on Zig. Increasing the range of what static languages can do is a legitimately interesting topic. This is to shed light on the general space of programming languages.


The Java Syntactic Extender, which I think was 2001, should be interesting to you then.

https://people.csail.mit.edu/jrb/papers/jse-oopsla-2001.pdf


Yeah, there are certainly antecedents. There's not a lot new in 2022.

Sometimes it's legitimately an innovation just to put it into a standard library from the very beginning. I've discussed on HN a couple of times how useful it was for Go to ship with an "io.Reader" definition for "things that emit bytestreams" in the standard library. It has very little to do with language features; many other languages could theoretically have done it. Many languages have a sort of half-usable "file" abstraction, but you can never quite tell what calls will work on which type of thing. Having a single concrete interface in the standard library from day one of the Go language drove the entire ecosystem in a single coherent direction, though, and in Go, you can pretty much count on that interface and can nest it quite deeply without hitting a corner case.

I expect it may be something like that for Zig... it's not that nobody has ever integrated code generation at the compile step for a static language before; it's merely that I'm not sure anyone has pushed it this far from the very beginning. I may be wrong. I can come up with several examples of much more dynamic languages doing it. I'm sure out there in the world you can combine Lisp macros with some static type system for the same language. But that would be a bolt-on, not something that was in the standard lib from day one.

Thinking of languages not just in terms of features, but in terms of what community the standard library affords is probably the next major frontier in language design over the next 20-30 years. Yeah, I hope that we still see some sort of revolutionary language that solves all our problems, but I expect to see a lot of languages that don't necessarily have any new features per se but are just better standard libraries from day one.


Just an FYI, in 2008 PLT Scheme had a language with static typing and macros capable of comptime-like behavior, called Typed Scheme. It lives on today in Racket.

But yeah, dynamically typed languages have been doing this for decades, particularly prevalent in Perl circles.


> I might phrase that as "statically-typed language".

It is not revolutionary in the context of such languages. Out of the mainstream, but not revolutionary.


It just copied D, which did it more than 10 years before Zig existed...


Virgil [1] has had compile-time initialization since 2006. There is no restriction on the code that can be run at compile-time. It needs no specific designation--it is simply the initializers of components (globals) and their variables. The entire reachable heap is optimized together with the program to produce a binary.

[1] https://dl.acm.org/doi/10.1145/1167515.1167489


I think Haxe has pretty much the same feature, which it just calls "macros" (https://haxe.org/manual/macro.html).

I haven't used Zig (yet, maybe one day), but does anyone know if there's a difference between Haxe macros and Zig comptime? AFAIK Circle also offers something similar for C++ (https://www.circle-lang.org/)


Zig's comptime isn't like "macros" in Haxe or Rust or Scala or Lisp which transform one AST into another AST. It's just code that runs when the code is compiled, rather than when the resulting binary is run.

This makes it less expressive, but it also means it is easier to learn and use. (It's a lot like C++'s constexpr)

The interesting part comes from recognizing that by making constructs like structs first-class-values in comptime, you can achieve most of the common things that templates, macros, conditional compilation, etc give you, but with a much simpler feature (which is just running code at compile time)
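For instance, a hypothetical sketch of a generic container built with nothing but ordinary comptime code (the `Stack` example is made up):

    // A generic container built by an ordinary function that
    // returns a struct type; no template or macro machinery.
    fn Stack(comptime T: type, comptime capacity: usize) type {
        return struct {
            items: [capacity]T = undefined,
            len: usize = 0,

            pub fn push(self: *@This(), item: T) void {
                self.items[self.len] = item;
                self.len += 1;
            }
        };
    }

    // Usage: var s = Stack(i32, 8){}; s.push(3);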


Lisp and Scheme macros are just code that happens to execute at compile time, too. They don't necessarily need to transform the AST, though that's what they're often used for.

Like with Zig, you are limited mostly by what is available in the environment at macro expansion time; unlike Zig, you can do things that generate allocations.


> It's just code that runs when the code is compiled, rather than when the resulting binary is run.

That’s exactly what Haxe macros are. It’s regular Haxe code that runs during compile, and it has access to all Haxe language features and the standard library.

The only special thing about them is that they can directly manipulate the AST before it gets passed on to the compiler for final code generation.

In a sense, they’re like “shaders” for the compiler.


Macros in Lisp often extend the compile-time environment during compilation; another purpose is to register information in the Lisp IDE (recording source code, source locations, ...).

> but with a much simpler feature (which is just running code at compile time)

Common Lisp does that with the EVAL-WHEN construct, which allows running arbitrary code at compile time.


Zig's comptime variables had this fairly bonkers behavior where they basically evaluate by themselves regardless of the underlying runtime control flow [1]. I thought it was cool, but it seems to be disabled in trunk builds.

[1] https://godbolt.org/z/dbnG9eev7


From testing, it seems like it executes both branches, i.e. `if (b) { v += x; } else { v += y; }` is effectively `v += x + y`. That doesn't make much sense, so I can see why it was fixed.


> Zig's use of comptime is fairly revolutionary in a C-like language

Not revolutionary. Before Zig came out in 2016, Jonathan Blow was talking about it and arguably repopularizing it. Lots of people (who knows how many developers) and languages were influenced by him too.

Starting around 2014, Blow made lots of videos and gave lots of talks, before going on to officially promote Jai (2016). It appears he was playing around with prototypes and talking about them even earlier.

https://youtu.be/UTqZNujQOlA (Demo: Base language, compile-time execution)


Not very useful to us when Jai has not been released to the public. I would like it to be, but currently Zig is more complete than Jai, at least with respect to being usable by everyone.


By the time Zig reaches 1.0, Jai may have been publicly released too. And Jai's constraints on getting access to its beta appear to have been loosening over time.



> Zig's use of comptime is fairly revolutionary in a C-like language.

It's not; it's just fairly insecure. Compile-time execution has been known forever, in static and dynamic languages. Better languages know about the security and usage implications and do something about it.


Very little of the language is available at comptime. Allocation is unavailable and the zig developers have declared this a non-feature. That either locks out or requires redundant implementations of a great many things.


If you look at puffoflogic's comment history you can see they are maliciously claiming incorrect things about zig over and over. Here I am refuting them again.

https://github.com/ziglang/zig/issues/1291

This is an open issue from 2018 that I opened that specifically acknowledges this as a valuable use case that should work.

Don't let puffoflogic put words in my mouth. They are a troll account.


D's Compile Time Function Evaluation can use operator new to allocate memory. It's an extremely useful non-feature.

One major use is to create strings at compile time, which are then fed back into the compiler as D code. I.e. metaprogramming.


Is it the plan that FixedBufferAllocator will never be usable at comptime, or just allocators normally backed by syscalls?


FBA was usable last time I tried zig, but I don't count it because if you knew ahead of time how much you needed to allocate, well then you didn't need an allocator at all. I.e. I define "allocator" to mean "dynamic allocator". Note that the buffer provided to FBA would have to be provided at the very top.

Fixed-size allocators are not composable. You cannot write any kind of comptime routine which calls further routines which need to do allocation, and how much allocation they do is based on arguments to your routine - except by forcing your caller to calculate how much allocation needs to be done. But that leaks your implementation details to your caller, largely invalidating the point of making some kind of comptime routine in the first place. Therefore comptime can be used as a toy, or in C-style language-enforced NIH mode (where you [re]write every single routine you need yourself, or at least know the intimate implementation details of all of them), but not in a serious programming stance.


If I remember right, FixedBufferAllocator requires you to set an upper bound for the amount of memory you allocate -- you don't have to get the number exactly right or anything.

Runtime allocators also have a limit to how much memory you can allocate. It's usually higher, but there's still a limit. Your computer doesn't have an infinite amount of memory.
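For reference, the usual FixedBufferAllocator pattern looks something like the sketch below (written as a runtime test; whether comptime accepts it has varied across Zig versions, as this thread notes):

    const std = @import("std");

    test "fixed buffer allocation" {
        var buf: [1024]u8 = undefined; // upper bound, not an exact size
        var fba = std.heap.FixedBufferAllocator.init(&buf);
        const allocator = fba.allocator();

        const slice = try allocator.alloc(u8, 100);
        defer allocator.free(slice);
    }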


> if you knew ahead of time how much you needed to allocate, well then you didn't need an allocator at all.

Sure, if it's all code you're writing yourself then it's doable to avoid using an allocator. If you want to reuse existing code that does expect an allocator, however, then being able to use a FBA is handy.


You can just return arbitrarily large arrays or whatever, but this is probably not what you want because it's a separate implementation from the one that takes an allocator. I think if FBA works then you have the building blocks to make one that does not fail by running out of memory e.g. by doing the FBA thing but getting a bigger backing array whenever you need it.

Overall I am not the right guy for this conversation because I am too used to never calling mmap at runtime either.


> it's a separate implementation from the one that takes an allocator

I want to give zig a fair assessment so yes, this is absolutely right. This is why I have elsewhere said that zig is basically two separate but superficially similar languages. You can probably accomplish just about anything in comptime zig, if you're willing to completely reimplement it from the core language up. The limitations I mention come in if you try to write code to be used in either comptime or runtime zig. So comptime zig has no growable arrays, no dictionaries, no string searching, no regexes, etc. etc. until these are reimplemented specifically for comptime zig. Mostly this is just sad, that a decent idea was implemented with such a terrible downside that would be so easy to fix: just add allocators. Just. Add. Allocators. But no, the zig team has explicitly rejected this.


> Just. Add. Allocators. But no, the zig team has explicitly rejected this.

I'm torn on this. To me, comptime is the C preprocessor but sane. Anything you can do above that is bonus. I don't need comptime to be totally Turing complete.

And, I do NOT want macros in the language. Macros are a bottomless well of suck. Lisps/Schemes still tie themselves into knots with macro hygiene. Rust's proc macros are terrible and debugging them is worse; just try figuring out what some intermediate expansion did.

The point of Zig is to NOT be Rust/C++/etc. The point is to be C with a bunch of the sharp edges and corners filed off. It's not for everybody. If you want web services, use Go. ML/AI--Python for ecosystem. Rust for strong safety guarantees.

Zig has some nice things while still remaining relatively small. It, like most modern languages, understands that ecosystem is important. It does a better job than almost everybody at cross compiling. comptime gets you a lot of the benefits of templates while dodging lots of the suckage.

However, at some point, you have to declare "Stop, that isn't going to be part of the language" or you wind up with the ungodly mess that is C++.


I infamously declined to implement macros in D. Macros are like giving a machine gun to a self-aware robot.


No allocations? So you can't do string formatting? Or is that a special case that is somehow supported?


You can string format using `std.fmt.comptimePrint`[1]. For example, combined with features like `@compileError`[2], it allows you to make your own error messages (useful for generic functions)

[1]: https://ziglang.org/documentation/master/std/#root;fmt.compt... [2]: https://ziglang.org/documentation/master/#compileError
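For example, a small sketch combining the two (the `Vector` function is hypothetical):

    const std = @import("std");

    // Rejects invalid instantiations with a formatted,
    // comptime-built error message.
    fn Vector(comptime n: usize) type {
        if (n == 0) {
            @compileError(std.fmt.comptimePrint(
                "Vector requires n > 0, got {d}", .{n}));
        }
        return [n]f64;
    }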


Ah, so it's not macro-expansion but rather some "comptime" DSL?


It's like C++ constexpr, but you can do a million times more things. Not everything, though; this isn't Scopes or Jai or whatever.


You don't need those shenanigans when using the Circle compiler, where the entire language is indeed available at compile time.



