I think Zig is great and I'm following it with interest. But I think people should be aware that it is not really stable yet, at least in my experience from trying it earlier this year.
I ran into a compiler crash: https://github.com/ziglang/zig/issues/7865. I attempted to fix the compiler crash myself (https://github.com/ziglang/zig/pull/8372), and I found a fix that worked, but apparently it was not the right fix, and I was told the correct fix would be "quite hard" and "To fix some of the bugs you would need to rewrite quite a bit of code."
It sounds like a big milestone will be when Zig's stage1 compiler is replaced by a self-hosted one, written in Zig instead of C++. I got the sense that the C++ code I was modifying is somewhat crufty, and that the Zig version of it will be better designed in many ways (i.e. https://github.com/ziglang/zig/issues/89#issuecomment-382110...), which will make fixing bugs like the one I encountered more tractable. I personally will be interested to revisit Zig when this migration occurs.
I also found that the language was changing from release to release in ways that made me update the code.
This is all totally reasonable for a language in development, and I mean no disrespect to the Zig devs -- it's an extremely interesting language. I only say this because stories I saw on HN gave me an impression that Zig was farther along than it is, at least in my experience.
As a counterpoint, we've written several thousand lines of code in Zig for a new financial accounting database called TigerBeetle [1], and we've only hit two compiler bugs. The first had already been fixed a few hours before we reported it, and the second was not a showstopper and something we could easily work around.
On the other hand, because Zig has incredible velocity, we're also seeing that std lib contributions or fixes we make (e.g. io_uring) land quickly, and that the community has some of the most talented programmers we've seen, who are also well aware of where the language is at (and, like you, are also looking forward to self-hosted).
As for TigerBeetle, when we made the decision to go with Zig, the only alternative for us was C, since we needed a safe way to handle memory allocation failure and also wanted a simple language with high orthogonality and power, plus C interop. However, C's toolchain even then was still not as "ready" as Zig's, and Zig also presented a unique approach to safety (e.g. the compiler can check at compile time that all syscall errors are handled, and integer arithmetic is overflow-checked by default), plus comptime, which is a force multiplier.
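To make those two safety properties concrete, here is a minimal, hypothetical sketch (not TigerBeetle code, and Zig syntax has shifted a little between releases, so treat it as illustrative rather than version-exact):

```zig
const std = @import("std");

pub fn main() !void {
    // Opening a file returns an error union; `try` either unwraps the result
    // or propagates the error. A call site that simply ignores the error does
    // not compile -- that's the "all errors are handled" property.
    const file = try std.fs.cwd().openFile("build.zig", .{});
    defer file.close();

    // Integer arithmetic is checked in safe build modes: if the file is longer
    // than 255 bytes, this u8 counter overflows and the program panics with a
    // stack trace instead of silently wrapping as it would in C.
    var count: u8 = 0;
    var buf: [64]u8 = undefined;
    while (true) {
        const n = try file.read(&buf);
        if (n == 0) break;
        count += @intCast(u8, n);
    }
    std.debug.print("read {} bytes\n", .{count});
}
```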
To be fair, given the days of debugging that Zig's safety saves us compared to C's undefined behavior, we can easily afford to spend a few hours contributing small fixes back to the language. The breaking changes between releases have also been fairly tame, and the changes welcome. We appreciate the velocity here.
The final insight for us was realizing that it would also take us some time to get our project to production, and that our roadmap would probably intersect with Zig's stability. We go into this decision some more in the Q&A at the end of our Zig SHOWTIME talk [2].
We could have skated to where the puck was at, or where we believed it would be, and we picked the latter. Technological waves can be incredible if you can spot the value ahead of the curve. In hindsight, this decision has paid off again and again. We would never go back to C. For new projects, there are a good many reasons to pick Zig, and for projects that may have a long half-life, this decision may be all the more important.
This is going completely off topic, but TigerBeetle is really interesting. Is it the first of its kind, or are there many other financial accounting databases?
Hey, thanks for the question! I'm answering over in our Discord just so I don't hijack the thread. Pretty awesome what Michal is doing for Zig gaming, and I hope you'll also consider sponsoring him on GitHub sponsors: https://github.com/sponsors/michal-z
For the sake of curiosity, did you consider Rust, and if so, why did you decide it was not suitable for your needs? I know Zig and Rust are quite different languages, but it is my understanding that they share the same low-level approach without sacrificing safety.
I tried, and enjoyed, Zig. My enjoyment rapidly fell off a cliff when I started getting deeper into the stdlib. It feels (though this may not be true) like people contributed random stuff to it, a bit like C++ prior to Boost. The inconsistency hampered my enjoyment, and I headed back to Rust.
I think that's ok for being so early in the development cycle. Currently the Zig stdlib seems to be mostly stuff that's needed by the compiler and build system. At least it's already much more useful than the C stdlib.
For all the reasons that Zig is a better language than C++:
* Debugging is easier, faster, and more straightforward
* Fewer bugs due to footguns in the language
* Typical Zig code runs faster than typical C++ code. Part of this is due to Zig's safety features - there are design optimizations for performance that I won't even attempt in C or C++ because it's footgun city. Meanwhile in Zig it's actually completely safe in debug builds; you get a compile error or runtime panic if you mess up something.
* Floating point operations work on f16 and f128 without a dependency on the SoftFloat library
* Cross compilation is trivial
* Comptime features make build options feasible and maintainable, such as enabling/disabling logging, whether to link libc, whether to link LLVM, whether to enable Tracy profiling, etc. (see the sketch after this list)
* Zig's error handling makes correct code the default instead of the other way around in C++
* The std lib HashMap, ArrayHashMap, and ArrayList data structures are really nice
* If the compiler is written in Zig, then adding a new target backend to the Zig compiler makes the Zig compiler work on that target. With another language we are stuck only supporting whatever that language supports.
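As a rough illustration of the comptime-driven build options mentioned above, here is a minimal, hypothetical sketch; the `enable_tracing` constant is made up for this example, and in a real project such a flag would typically be fed in from build.zig rather than hard-coded:

```zig
const std = @import("std");
const builtin = @import("builtin");

// Hypothetical option; in a real project this usually comes from the build
// script (e.g. via a build options package) instead of being hard-coded.
const enable_tracing = (builtin.mode == .Debug);

// Because the condition is known at compile time, the disabled branch is
// compiled out entirely -- tracing code vanishes from release builds.
fn trace(comptime fmt: []const u8, args: anytype) void {
    if (enable_tracing) {
        std.debug.print(fmt, args);
    }
}

pub fn main() void {
    trace("starting up in {} mode\n", .{builtin.mode});
}
```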
Not only that, but the Zig compiler available for download today is already backed by a majority of lines of code written in Zig rather than C++; what is available today is already partially self-hosted. For example, the following features are already written in Zig, not C++: `zig fmt`, `zig translate-c`, the command line interface, `zig cc`/`zig c++`, `zig build`, the cache system, compiler-rt, the macOS linker, and compile error checks that operate before type checking.
> * The std lib HashMap, ArrayHashMap, and ArrayList data structures are really nice
Reading Zig's std lib HashMap code the first time was an unforgettable experience for me.
I've spent literally months on some state of the art hash table papers, implementations and tons of hash functions, and Zig's HashMap implementation has got to be right up there with some of the best and most performant out of the box.
However, what impressed me most was just the level of control offered by the interface, and how clean and readable the code is.
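For anyone curious what that interface looks like in practice, here is a minimal usage sketch (the exact allocator-handle syntax has moved around between Zig releases, so treat this as illustrative rather than version-exact):

```zig
const std = @import("std");

pub fn main() !void {
    // The allocator is passed in explicitly -- part of the "level of control"
    // mentioned above -- and reports leaks on deinit in safe builds.
    var gpa = std.heap.GeneralPurposeAllocator(.{}){};
    defer _ = gpa.deinit();

    var map = std.AutoHashMap(u32, []const u8).init(gpa.allocator());
    defer map.deinit();

    try map.put(7, "seven");
    try map.put(42, "forty-two");

    if (map.get(42)) |name| {
        std.debug.print("42 -> {s}\n", .{name});
    }
}
```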
There are a couple of reasons why creating a self-hosting compiler can be a good idea:
1. It shows others that your language is capable of a project of moderate complexity, and demonstrates what "idiomatic" code looks like.
2. It removes dependencies on parts you can't control (once you rewrite it in Zig rather than C++, you don't need to worry about new C++ features or deprecations between versions).
3. Writing code in the language helps catch bugs in the language specification and in the compiler's implementation of the language.
These are just the ones off the top of my head, but people with more PL-Design experience may be able to elaborate.
Contributions are another one I think. Both the authors and potential collaborators might find it more attractive to use their language of choice. Compilers also tend to pose a variety of challenges that force one to really dig into a language.
I know it's at the entirely other end of the language spectrum, but I think that's why TypeScript got successful while Flow did not; TypeScript is in TypeScript, while Flow is in OCaml. And nobody knows OCaml.
OTOH, esbuild is making waves now and it's in golang, so. Who knows.
I'm not a golang fan, but one thing I've noticed about go is that it is possibly the most approachable yet real programming language for beginners I've ever seen. I think this is part of why go is doing so well in the devops space. Sys admins who are not really well versed in programming but know enough to write some scripts here and there are able to bring their domain specific knowledge of admin tasks and write code to contribute.
I'm building a relatively small game; I don't really need a full game engine.
I try to write simple code that does only what I really need.
I want to push Zig forward, I want to promote it.
I worked with UE5 and Unity at AMD - those engines are really huge and bloated, and they are changing very fast. I want to keep control over my code. Again, I don't need 90% of their features.
I like low-level stuff and I want to master my skill.
> Why zig-gamedev project?
I want to help others learn Zig. I want to create demos, audio-visual experiments and even some art stuff. I'm interested in the intersection of art and science.
I want to prototype and experiment with ideas that could potentially be used in the game (tech, algorithms, etc.).
> Why Windows and DX12?
DX12 is really well supported by Microsoft. They have great tools. They have interop with Direct2D and DirectWrite. They have DirectML (built on top of DX12). Windows itself has built-in decoders for most video formats (I will need this and I don't want to include the entirety of ffmpeg).
I'm most familiar with DX12. DX12 is a bit less verbose than Vulkan.
I don't have the bandwidth to care about other platforms. Focusing only on Windows lets me write simpler and often more optimized code.
Also, drivers are really mature and stable.
> Why take the risk?
I've been working for big corporations for ~12 years now. The money is good but you don't really get to realize your ideas there. You don't own what you build. Now I want to create my own stuff, my way, using my ideas. Money will come later.
I'm using the Zig language full-time to create an indie game. I've also created the zig-gamedev project to build a game development ecosystem for this language. I'm using DirectX 12 for rendering and Bullet for physics. I'm an ex-AAA game developer and graphics programmer.
If the OP is an ex-AAA graphics programmer, DX12 is probably more familiar than Vk. AAA likely means consoles which at the very least means you end up doing DX12x for Xbox, and at that point you may as well also do DX12 for PC.
Could be personal preference. I worked with OP at Intel during the early days of Vulkan and he was the go-to person for anything Vulkan related, so he's certainly familiar with the API.
Michal is a seasoned AAA dev who has worked at AMD, Frostbite, EA DICE, Intel - I'm sure he knows exactly what tradeoffs he's making.
Different folks have different goals, and that's OK. My goal with WebGPU is to aim for a future where we have truly cross-platform graphics. I suspect your goal is similar.
But I don't think it's wrong to aim for the present, to use an API you're familiar with, or to want to get as low-level as you can get. That's pretty admirable, honestly, and I'm happy people are doing that in the Zig ecosystem.
Heck, maybe one day Michal's work with DirectX here leads to a DirectX backend for a Zig WebGPU implementation. :)
Note that it is easier to ship a game for one platform. I don't have the resources to test several platforms.
WebGPU is just 3D rendering. I will also need vector graphics, machine learning stuff, video playback, etc. Windows gives me that for "free" (DirectML, Direct3D 12 Video, Direct2D, DirectWrite).
My main goal is to ship a game and build interesting demos/mini-games as part of zig-gamedev project. I don't want to spend my time supporting and maintaining code for several platforms.
Also, using wrappers like WebGPU adds another layer of abstraction. I want to write my shaders in HLSL using dxc.exe, I want to directly see messages from the DX debug layer, and I want full control over my code. WebGPU would just slow me down.
My game will run only on Windows 10+ and I'm fine with that.
I am using WebGPU from D, it's an interesting solution and works well, but it has some drawbacks. First of all, because you have to match so many APIs, it only offers a common subset of functionality. Secondly, because it's primarily a Web API, it has a lot of validation, which wouldn't be needed for a desktop application. Thirdly, while it's still a low-level API, it doesn't offer the kind of control that AAA engine devs would like. It takes care of synchronization, memory management, and transition barriers. AAA devs would prefer manual control over that.
- Fast compilation: isn't it LLVM? So how is it faster compared to C++/Rust/others that use it?
- Ownership rules/ECS: what about Zig makes this especially good? The only language that comes to mind in this respect is Rust and its borrow checker; is it similar in Zig?
1. Both Zig and Rust compile to intermediate representations which are then compiled to LLVM IR. Most of Rust's compilation woes are in the early-stage IR.
2. Zig doesn't have ownership rules, which can get in the way of important data structures for gaming.
Part of Rust's problem, last I knew, was that it throws a lot more at the LLVM backend than it necessarily has to. So a front end that takes the time to lessen that burden could run faster despite using the same tooling.
Part of it (perhaps the majority) is excessive intermediate code. But at least some of it is that Rust's abstractions are more complex/involved. The "zero cost" doesn't refer to compilation time.
That's true of borrowing, but I'm not sure if it's true for other abstractions. I imagine generics (due to monomorphization) add to the burden on the LLVM side.
Memory safety is a pain point of C, and Zig doesn't fix that. There is no free lunch: you either accept garbage collection or deal with something that resembles the borrow checker.
I believe Zig takes a more nuanced and balanced approach to memory safety as a spectrum, rather than the extremes you present of either GC or borrow checker.
For example, Zig offers spatial memory safety, and provides test allocators to catch temporal memory safety issues. That's already an order of magnitude improvement over C.
Memory safety is also just one aspect of safety, whereas sometimes programmers conflate the two. It's important, but things like checked arithmetic should also be right up there, and should be enabled by default in safe build modes. I think Zig's approach here is also spot on, having worked a little in security, where an integer overflow can be almost as dangerous as a buffer overflow. Yet I don't see many other languages taking checked arithmetic as seriously as Zig does.
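For readers unfamiliar with what "test allocators" means in practice, here is a minimal, hypothetical sketch (not from any real project): running `zig test` on something like this reports the allocation as leaked, with a stack trace, if the `free` is forgotten.

```zig
const std = @import("std");

test "the test allocator reports leaks" {
    // std.testing.allocator tracks every allocation made during the test and
    // fails the test with a leak report if anything is still live at the end.
    const buf = try std.testing.allocator.alloc(u8, 32);
    defer std.testing.allocator.free(buf); // delete this line and the test fails

    // Spatial safety: indexing out of bounds (e.g. buf[32]) panics in safe builds.
    buf[0] = 1;
}
```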
A language isn't just a compiler and a spec. I don't know the full history of all these projects, but IIRC Ada's compiler wasn't free for a long time.
How you structure the financials and the community around the language also has a gigantic impact on the final result, and this is an area where Zig is bringing something completely new to the table.
Although most people associate Object Pascal with Borland due to their Turbo Pascal branding, the language was originally created to write the Lisa OS (as Clascal). Then, when that project was superseded by Mac OS, Clascal became Object Pascal with input from Niklaus Wirth, and it remained the main language until the C++ rewrite that took place in the early 90's.
Outside Apple computers, the dialects created by Borland gained such a following, especially in Europe, that Turbo Pascal effectively became the official Pascal dialect, even though Extended Pascal fixed most of the original design flaws.
Naturally, as they went enterprisey, they lost the crowd to the VB and VC++ folks (and later .NET).
Modula-2 did have some nice offerings, especially on the Amiga, but on the PC and Mac, Turbo/Object Pascal made it irrelevant, as it offered all the improvements Modula-2 brought to the table (no one cared about co-routines on home computers back then).
Ada was the only one of those that was, yeah, actually quite expensive, and I think only SGI and Sun had UNIX compilers for it, with HP offering BASIC and Pascal compilers for their OSes.
Then there was the Amsterdam Compiler Kit, the "LLVM" for the 1980's, which had support for C, Pascal, Modula-2, Occam, and BASIC.
Looking forward to seeing how Zig evolves, especially regarding issues like #2301.
You say "minus compile time execution"... but when you take comptime away from Zig, you lose both generics and compile-time reflection. The remaining language is C without a preprocessor. So, yeah. Strip out the most useful and innovative feature from the language, and it looks primitive.
The languages I mentioned all got generics during their lifetime, so yeah they were all more powerful than C.
C won due to UNIX. Had UNIX not been a kind of free beer that companies could build their workstations on, and that let universities avoid paying for commercial OSes like VMS, history would have taken a different path.
Also, Lisp has pretty much always been more powerful than C. But what Zig brings to the table is an extremely simple language which is also very powerful. Much of the power results from the entire language being available at compile time. You seem so ready to throw that away in favor of could-have-been nostalgia, I wonder if you've taken the time to understand what you're criticizing.
I was quite clear that compile-time type reflection was the only thing missing.
Zig isn't the only AOT compiled language with compile-time type reflection in 2021, and exactly because of could-have-been nostalgia, we don't need newer systems programming languages that don't have an answer for use-after-free in safe code.
Adding bounds checks and ASan is not an order of magnitude improvement over C. Chrome, for example, is doing all of this already in C++ in a more advanced way than anything I've seen in Zig. Clang offers UBSan [1], which is extremely advanced. Yet it is not enough.
It is not a "nuanced and balanced" approach: Zig is simply not memory safe.
If you're shipping a game that runs on a player's computer, you're most likely going to make production builds with -O ReleaseFast (safety checks off). Rust is overkill for this use case. The only benefit memory safety brings to this use case is making debugging easier. But if we're measuring how debuggable a language is, there are many more factors, such as iteration speed due to compilation times.
Memory safety also adds reliability, by catching bugs statically that you didn't catch during (automated or manual) testing. It's the same argument as for static typing.
It's of course true that some developers may judge the tradeoffs differently for their individual projects—that's why they're tradeoffs! But there are benefits to memory safety that go beyond security.
the only memory safety gamedev needs are bound checking and use after free protections
everything else is just bloat and noise that hurts iteration time
and even if one would still value them, you'd need to check only once for whatever memory check you want to run, when you build your allocators for example, and not at every build, and you could even write the logic yourself and have a debug allocator to ensure memory safety
you want sub second and not "double digit seconds" build times
it takes trying to make a game to truly understand why iteration time is far more important than anything else (other than performance of course)
you don't want to wait multiple seconds everytime you change the speed of your character, or tweak the rendering/AI code
that's why some devs then end up using a scripting language and lose all the advantages of their native language, because they want to speed up iteration time
that's why i personally stick to D for my game, my engine + game fully rebuild in under 1 second
you don't get to create memory bugs when you work on your gameplay code ;)
> the only memory safety gamedev needs are bound checking and use after free protections
Probably if your game is single-threaded …
Rust's raison d'être was to type-check thread-safety, and even if we don't talk about this aspect much anymore, it's still the domain where it has no competitors (Pony could have been, but didn't get traction).
Is GeneralPurposeAllocator the one that quarantines memory forever? That isn't practical, as I've mentioned before. To name one problem, allocating 1 byte in a 4kB page leaks the entire page.
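For context, a minimal sketch of how the allocator under discussion is typically used (the exact API has moved around between Zig releases, so treat this as illustrative): with safety enabled it reports leaks and double-frees with stack traces, and the quarantining of freed memory being questioned here is part of how it tries to catch use-after-free.

```zig
const std = @import("std");

pub fn main() !void {
    var gpa = std.heap.GeneralPurposeAllocator(.{}){};
    // With safety on (the default in Debug), deinit() reports anything that
    // leaked, with the stack trace of the allocation site.
    defer _ = gpa.deinit();

    const allocator = gpa.allocator();
    const block = try allocator.alloc(u8, 1);
    allocator.free(block);
    // Freeing `block` a second time here would be reported as a double free
    // instead of silently corrupting the heap.
}
```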
Zig fixes many more memory safety issues than C or C++ though, simply by being less "sloppy" and enforcing more correctness (e.g. no implicit type conversions, no over/underflows, proper range-checked arrays and slices etc...) - IME most memory corruption issues in C and C++ are actually secondary effects of such simple correctness issues. Zig just isn't quite as "extremist" as Rust (also, Rust is a great language for writing a sandbox, but if a memory-safe language must be used inside the sandbox to prevent damage outside the sandbox, then it simply isn't a sandbox).
"Widely used" might be a bit of an exaggeration, MSVC only added address sanitizer support very recently but doesn't support any of the other sanitizers, and none of the sanitizers are enabled by default in Clang.
The biggest companies on the planet with massive C++ codebases are all very aware of the clang sanitizers and use them regularly.
UBSan has far more deployment than Zig.
As much as I love Delphi, the language syntax hasn't aged well. Single-pass compilation was great to have in the 1980's, but nowadays even the humblest Raspberry Pi will chew through thousands of lines of C++ and link an optimized executable within seconds.
We deserve and can afford a lot more comfort than Pascal (and siblings) provides. Incidentally, modern language ergonomics also adds much in the way of coding safety, making usage of explicit allocation control a more manageable problem than it was 30 years ago.
You could build the same argument for all other types of correctness: either your program is formally validated to be correct, or it's broken.
While there is some truth to that, if that's the only analysis you're willing to do, then you're going to fail spectacularly hard at recognizing the advantages of TypeScript over JS, for example. Zig and C share a similar relationship across many axes, including memory safety.
Personally I dislike the @ love, it feels a bit like being in Objective-C land, imports feel like JavaScript AMD modules, and there are still use-after-free issues.
Otherwise it is pretty much Modula-2, with C flavoured syntax and compile time execution.
Well there are still quite a lot of compiler bugs. I personally have open issues going back as far as 2018. Knowing the compiler has bugs and that new ones are constantly being discovered makes debugging a pain because you have a lot less confidence that the problem is your code. I have many times blamed the compiler for my mistakes, something I've almost never done with C.
This is really looking great! You say you left your job... do you have income yet, or is that an item on the todo list? Any clues about the game you're working on?
No, sorry, it will stay Windows 10+ only. I want to focus on building interesting stuff (demos, games, audio-visual experiments); I don't have the bandwidth to support and maintain code for several platforms.
Zig is a really interesting language for game dev.
I was thinking about using Zig with raylib for my next project. The show stopper was that C ABI compatibility is not yet complete: https://github.com/ziglang/zig/issues/1481
I could have probably worked around it but decided to go the C and Lua for scripting route for now which works great but I can totally see Zig as a decent C replacement.
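For what it's worth, the basic C-interop path already works; here is a rough, untested sketch of what calling raylib from Zig typically looks like (assuming raylib's header and library are wired up in the build) - the linked issue is about specific C ABI cases rather than this common path:

```zig
// Hypothetical sketch: assumes raylib's header and library are available to
// the build (e.g. via exe.linkSystemLibrary in build.zig).
const ray = @cImport({
    @cInclude("raylib.h");
});

pub fn main() void {
    ray.InitWindow(800, 600, "zig + raylib");
    defer ray.CloseWindow();

    while (!ray.WindowShouldClose()) {
        ray.BeginDrawing();
        ray.EndDrawing();
    }
}
```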
I am not interested in Windows but I am wishing you best of luck with your project. Seems quite interesting.
One of the nicer things about Zig is that the unoptimized debug binaries are still pretty fast, often only 2 or 3 times slower than the optimized ones rather than 10+ times slower. Presumably because Rust and C++ have a lot of abstractions that have to be sent through the optimizer to be truly "zero-cost".
That, combined with generally really fast compiles (again relative to C++ and Rust) is a nice combination for game devs.
I don't really get the "I Left My Job to write $FOO in $BAR language" reason.
I can understand if someone left their job to write $FOO - you see an opportunity and you take a risk on it. What does the language have to do with it?
To me, taking such a risk, and then adding further risk (using a language you weren't using daily) seems to be a recipe for disaster.
If I were taking a risk (by leaving my job) on a new product, I would do my best to reduce all the other risks (use a familiar and popular tech stack, for one).
As someone who did something similar in my youth, my argument was that the new language presented unparalleled productivity opportunities vs. the “old”.
The real reason was of course that I wanted to play with the new thing.
You are right, market fit and the idea matter most (when assessing a new company).
But not all ideas should be weighed purely on profit. If op is in a healthy financial state and can shoulder the risk for a year, it’s hard to weigh the opportunity risk for him as a person. It might reinvigorate a love for programming or lead to a job in this new language. And ofc it may also just work ;)
You're not factoring in the fairly huge opportunity cost of missing out on a major technological wave, not being one of the early domain experts, not to mention all the benefits that come with riding the wave.
Zig is perfect for game developers, and being one of the first in the space is going to be an advantage.
Also, the product here is actually defined as "Building gamedev ecosystem for @ziglang!" — Zig itself is part of the strategy here.
> I can understand if someone left their job to write $FOO - you see an opportunity and you take a risk on it. What does the language have to do with it?
There are no business opportunities in independent games - they are passion projects, with little chance of payoff. So, if someone is passionate about his game idea and also passionate about Zig, it makes sense to combine both motivations in one project. He might not have enough passion to do the game in C++ or some other language he already knows and hates :)
I think it's interesting, because product/market fit is really important, and you have one camp of people saying that the programming language is almost irrelevant. But that isn't always the case, because choosing the right language constructs can make or break your software depending on the domain, and can mean a quicker time to market. I think the problem lies in a lack of scientific literature, and in a collision of cultures between results-oriented business people and geeky, explorative tech people. I would love to find a way to bridge the gap of understanding, but even approaching this problem in a correct way seems difficult.
Agree. If you want to create games for a living then you need to think game design/business first and tools second. The hardest part of doing games is coming up with compelling game designs that people actually want to pay $ for. Tech is very low on the list of what's important.
Sometimes you just need to do something different than working for a boss. We are in a privileged position where some of us can work 1 month and live comfortably off it for 2 months. Why not take some risks in life?
Is anyone using Zig on embedded micros yet? M0/M3 or so? I do a bit of C code on ST micros and Atmel chips, and I'd love to have a modernized C in this world.
I do a bit of game dev, and whilst I've been really interested in getting into a framework rather than using an engine, so far I haven't been able to budge from Godot.
So my question is, why is Zig so good for game dev specifically (that's by far the most common use I've seen for the language so far, in my limited dealings with it)?
How is Zig compared to Nim, where you can also disable safety (even disabling the GC)? To me Nim seems more "ergonomic", but they both try to hit the same problem space.
Asking since I want to build a ray tracer and voxel engine in Nim as a side project at some point. But I really like the approach of high-level safety and optional low-level performance from both languages. I was considering Rust, but I don't like having to deal with the borrow checker all the time.
Nim is a little bit more straightforward for writing application code, by dint of simply having more features to throw around. Both languages can be pretty darn fast, enough that I would not rate it as a factor for most applications, including rendering side projects - the speed difference we are talking about really has to do with how detailed your resource allocation methods are, and if you want that form of speed you have to invest time researching memory access patterns for your use case to get beyond educated guessing. So you would have to turn your voxel engine into full-time work to get to the point where it has both usefully broad functionality and top-end performance.
Expect some maturity/stability issues with either one. Zig is much earlier in its lifecycle and I have seen showstoppers, but it's also been moving faster. Nim has struggled in the past with stabilizing its high-concept features - use it mostly like C and it's fine.
Zig's biggest appeal is in how close it is to a fully bootstrapping environment - low level, clearly defined control over all resources is just what you want if you are writing a kernel.
I think most people in that situation would have saved up a lot of money so that they can support themselves properly during that period of time between jobs or between sources of income.
If anyone thought this was an oddly specific remark (as I did), I looked it up. The 1989 Namco/Toaplan side-scrolling arcade shooter "Zero Wing" [1] has two notable facts that make it a fun reference here:
1. It's the game from which the "all your base" meme comes.
2. The fighter craft that the player controls is called "the ZIG".
I cannot believe I had an opportunity to make an AYB reference in 2021 but here I am. The fact that nobody else had made one, or has made one, in reply to this post leaves me feeling old.
As a former game developer, I don't get why people start from scratch by developing their own rendering engines or even game engines. Using Unity, Unreal or even Godot helps you launch your game 100x faster.
I know it might be fun to develop your own engine but it will take a lot of time and it still won't be as usable as an engine developed by 100x more engineers.
That's it right there. You can spend years pretending to write a game while optimizing your from-scratch engine for features that probably no-one will ever use [1]. But I think it depends on what your goal is.
If it's a small hobby project, and you enjoy the process of writing your own engine (which, granted, can also be a great learning experience), then why not? You're probably in for the fun anyway.
I know some indie devs who learned the hard way that if your goal is in fact to eventually release a game, doing it without using an existing engine can reaaally drag things out.
I think it's tempting: you can get started on a new "engine" pretty easily, and get something up and running in a day or two. But then comes the long tail of adding all the other stuff, which often isn't so much fun.
A lot of the game development process isn't actually programming, so by writing everything from scratch, a programmer can artificially shift the focus into the direction they're more comfortable with.
Well, I've also seen people do the same thing inside of existing engines, for example trying to use Unity but becoming so frustrated with its design that they begin spending their time building their own external systems and tools. It seems more of a mindset problem than a question of which engine you actually use (whether you build your own or not).
If you have some amount of self-discipline and implement your custom engine to be minimal and only tailored to your game (as many indie games have demonstrated), I don’t see how it can’t be done.
Unity is used more than any other game engine in the world to release successful commercial games. If you can’t figure out how to use Unity to make commercial games then the problem is you not Unity. I have released commercial games using 3D game engines I wrote from scratch years ago before Unity existed. I would never recommend doing that today when Unity is available and so easy to use compared with having to write everything from scratch.
It depends, writing a very targeted engine for your game isn’t necessarily that much slower than using a general purpose engine off the shelf.
The problems usually start when people are more interested in writing the engine than making a game. Then they gravitate towards all sorts of fun technical problems that would be useful in a general purpose engine but not really for their specific set of problems. I've also seen this tinkering trap with general purpose engines on quite technical projects.
Have you ever written a 3D engine from scratch used for a commercial game that needed to work on Xbox, PlayStation and PC? I have. You clearly haven’t. I would highly recommend using Unity or some other off-the-shelf game engine instead. However I do agree with your next point. People think that the 3D engine is the game. That is 100000% wrong. Top selling games often have unimpressive tech. Amazing “games” that are actually more like tech demos fail to sell.
Sure if you shift the goal posts by adding more detail to the context then the situation is different. There's a reason the first two words of my reply were "it depends".
Please don't make assumptions about my experience based on that though. It's pretty impolite before I'm even able to respond to the new criteria and makes you look like you're not actually responding in good faith.
It is quite obvious to me from your comments that you have zero experience writing commercial 3D games. However if I am wrong then you are welcome to elaborate.
How about you make your arguments stand for themselves? I have no idea of your experience either, but I don't go around insinuating you are useless rather than responding to your points. Instead, I clearly lay out why you're wrong, and I note that you're clearly not actually able to respond to those points, instead doubling down on baseless accusations.
Be kind. Don't be snarky. Have curious conversation; don't cross-examine. Please don't fulminate. Please don't sneer, including at the rest of the community.
Comments should get more thoughtful and substantive, not less, as a topic gets more divisive.
When disagreeing, please reply to the argument instead of calling names. "That is idiotic; 1 + 1 is 2, not 3" can be shortened to "1 + 1 is 2, not 3."
When exactly did I insult you? As far as I know I didn’t. Pointing out that you don’t have the experience to know what you are talking about is not an insult. It’s just a fact. You can choose to make all the mistakes yourself, or be smart and skip years of wasted time by listening to people who have already been there done that. Your choice.
I do game development consulting now and then (speedrun mode, leaderboards, randomizers, supporting infrastructure, etc.) and it's really nice to see Godot mentioned here. I don't often make projects from scratch but it is genuinely a pleasure to use.
In regards to developing your own engine - it's almost never more efficient to do so nowadays, but it is almost always significantly more fun. If the author has enough wealth accumulated to quit their job and work on a passion project full time, I imagine they can afford to do something just for fun as the baseline.
For me personally, as a hobbyist game dev, Unity and the like feel like I'm having to relearn how to do something I already know how to do if I just write code instead of relying on someone else's gigantic abstraction. And I also very much like the idea that I can make my small games super small and port them to literally anything without having to hope some third party adds support for it.
There are some great titles where the whole game was built around a unique feature provided by a custom engine. It probably wouldn't have been possible to do "Teardown" in Unity, for example; not without making some serious compromises, at least.
Maybe it would be feasible now, since Unity introduced the new job system, a scriptable render pipeline, and HPC# (high performance C#) with the Burst compiler. But these features didn't exist when Teardown started development, and right now they still seem to have some warts (the DOTS stack still seems to be a bit unstable and subject to frequent change)
This really depends on the game and your personal goals. Engines like Unity, Unreal or Godot must cover the entire game development landscape, but the 'engine feature slice' needed for one specific game may be so small that such generalist engines don't provide much help or are even actively harmful because they force you into building the game in a certain way. The most useful engine-part in such cases might be the tiny platform abstraction layer to work around driver- and OS-API bugs - but usually this platform abstraction layer can't be used alone, and alternatives like SDL also have accumulated lots of such fixes and workarounds over the decades.
As with everything it would depend on the type of thing you’re making and whether you rather deal with problems of an existing solution or the ones you imposed yourself.
Even ignoring that. I think there is a more fundamental question here as well. What type of maker do you want to be?
100% agree. I don’t know why you were downvoted. I assume that some people hate it when you dare to talk reality. I have run my own successful game biz for 5+ years and have written more than one commercial game engine that was used to release commercial games on consoles and PC. And I 100% agree with you. Unity (for example) will make you at least 10 times more productive than writing a 3D engine from scratch.
It depends on many factors. You are describing your experience and then extrapolating it to everyone. My game will be pretty small, not fully 3D. I don't need 90% of Unity's features, or Unity's bugs :)
Before making my decision to go with Zig, I built a small game in UE5 (https://michalziulek.itch.io/upacman). I've also used UE5 and Unity at AMD. I simply don't like those engines.
Different people have different approaches and mindsets. I'm much faster building my own small, minimal codebase that does only what I need. Also, I'm improving my low-level programming skills and experimenting; doing things differently than most people (not using Unity) might lead to a very original game.
It’s great that you have some experience with commercial game engines. Let me caution you though that successful games are 99% choosing the right gameplay and 1% choosing the tech. In other words, I recommend getting a gameplay prototype working first without thinking tech at all and then picking the right tech for your chosen gameplay. The big mistake that developers always make is to think that the 3D engine is your game. That is false. Take a look at the top selling indie steam games. None of them use unusual/advanced tech.
I'm perfectly aware. I don't want to build advanced tech, it's not my goal. I want to build simple tech that lets me build my gameplay easier than it is in UE5. In my opinion gameplay programming in UE5/Unity isn't nice at all. Way too abstracted and too generic for my usecase.
In my case, simplicity is the key for productivity.