I think there is a mistake when calling the setup_camera function in main. You call setup, not setup_camera, which I think is incorrect. Thanks for writing that up! Great article.
I was wondering if there is any low hanging fruit for WebGL build size improvements.
I did a Unity test build for comparison against your rect example, with a cube and a single script that does nothing (so they can't e.g. strip out Mono).
Listing sizes uncompressed, 7z Ultra (a decent proxy for Brotli), and wasm-opt + 7z Ultra respectively:
Rect example: 13473kb, 1964kb, 1720kb
Basic Unity: 9528kb, 2304kb, 2291kb
Bevy is still leading, but it's getting uncomfortably close.
The uncompressed size matters a lot more here than for any other asset type, as it's all code that has to be compiled and runs poorly until the optimizing compiler has finished with it. On a Chromebook, my experience is that this much code gives you 30-60s of poor FPS before it comes good.
I understand this is not necessarily a focus of Bevy, but I do think that you have an opportunity here to build the first game engine without significant downsides on any single platform. Filling the code base with a bunch of feature flags may not be worth it to achieve that goal, though.
The modularity of Bevy is super exciting (I've asked you about WebGL builds on here before and you mentioned this). But Unity ships quite a bit of stuff in a basic build and is probably fairly similar in scope to Bevy with all crates.
I've been sponsoring for close to a year, but haven't kept up with the Discord in many months. Currently I've just been checking for updates every couple of weeks, and sometimes checking the state of GitHub issues and milestones.
However, is there any better way for me to keep up with progress? If not, has there been any thought about having regular status updates? They don't have to take a lot of your (or someone else's) time, but just a few sentences on what's been decided, what's happened, etc, would be nice! Once every 30 or 60 days would definitely be enough.
Going forward, we will be doing a curated This Month In Bevy newsletter. We've had templates in draft mode for a while, but getting 0.6 out the door has taken precedence. I'd like to release our first newsletter this month!
That's great to hear! Thank you for all the hard work, I'm looking forward to seeing Bevy grow, and really hope it'll end up becoming a worthy competitor to Unity/UE/Godot for commercial games some day!
I spent a few days making a (very) basic 2D game engine in Rust for fun. One thing I found is that Rust wasn't that helpful in preventing bugs, due to the nature of game engine code. For instance, I was using a generational array to store components. At some point I had a use-after-free bug in the generational array code that the Rust compiler could never catch. Also, components tend to have circular references to one another, which was very annoying to program in Rust. I'm curious if you've experienced something similar and what your thoughts are now, after many months in the trenches, on Rust's suitability for a game engine.
Although I'm not _cart, I had a similar experience in making a (hobbyist) 2D game engine both in Rust and C++, and the problem you're facing (similar to the ABA problem) is not a memory safety error but more of a logical bug inherent in naively programmed object pools, and it is totally language-agnostic. When you create an object pool as Vec<Option<T>> and use a single array index as a resource ID, you risk this scenario: "X has a reference to resource A from the object pool, A is destroyed and its slot is later reused by the object pool for resource B, now X has a reference to resource B". The problem is that resource IDs become invalidated as the object pool reuses its slots. The incrementing generational counter is a way to check object lifetimes in object pools at runtime, and it is a solution to a logical error (which can be applied regardless of whether your language has a borrow checker). If you've had weird errors while using generational arrays, chances are that 1) you've exhausted your generational counter and it has overflowed, or 2) your generational array code is incorrect.
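To make this concrete, here's a minimal sketch of a generational arena (the `Arena`/`Handle` names are mine, not from any particular crate); the generation check is what turns the "X still points at A's old slot, which now holds B" scenario into a recoverable None instead of a silent mix-up:

```rust
/// A handle is an index plus the generation it was created under.
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
struct Handle {
    index: usize,
    generation: u64,
}

struct Slot<T> {
    generation: u64,
    value: Option<T>,
}

struct Arena<T> {
    slots: Vec<Slot<T>>,
    free: Vec<usize>,
}

impl<T> Arena<T> {
    fn new() -> Self {
        Arena { slots: Vec::new(), free: Vec::new() }
    }

    fn insert(&mut self, value: T) -> Handle {
        if let Some(index) = self.free.pop() {
            // Reuse a freed slot. Its generation was bumped on removal, so any
            // handle minted before the removal no longer matches.
            let slot = &mut self.slots[index];
            slot.value = Some(value);
            Handle { index, generation: slot.generation }
        } else {
            self.slots.push(Slot { generation: 0, value: Some(value) });
            Handle { index: self.slots.len() - 1, generation: 0 }
        }
    }

    fn remove(&mut self, handle: Handle) -> Option<T> {
        let slot = self.slots.get_mut(handle.index)?;
        if slot.generation != handle.generation {
            return None; // stale handle
        }
        slot.generation += 1; // invalidate every outstanding handle to this slot
        self.free.push(handle.index);
        slot.value.take()
    }

    fn get(&self, handle: Handle) -> Option<&T> {
        let slot = self.slots.get(handle.index)?;
        if slot.generation == handle.generation {
            slot.value.as_ref()
        } else {
            None // the slot has been reused for something else
        }
    }
}

fn main() {
    let mut arena = Arena::new();
    let a = arena.insert("resource A");
    assert!(arena.remove(a).is_some());
    let b = arena.insert("resource B"); // reuses A's slot...
    assert_eq!(arena.get(a), None);     // ...but the stale handle no longer resolves
    assert_eq!(arena.get(b), Some(&"resource B"));
}
```

The same structure works just as well in C++; the only Rust-specific part is that a stale access comes back as None instead of risking undefined behavior.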
The verdict: Rust's lifetimes do not make you safe from non-memory-safety-related bugs. It still gives you some really powerful abstractions to fight these bugs (like enums, traits, Option<T> and Result<T, Err> types), but other than that you're on your own.
(About circular references between components... doesn't this also get solved by generational indices? With Arc<T> types you're going to have circular dependencies that don't get freed because of reference counting, but with generational indices you're free from that issue since you're manually managing resource lifetimes anyway. And if you're having trouble figuring out how to manage these dependencies, the solution might be to refactor your code. My experience of using generational arrays was that it will naturally move your code-base towards centrally managing resources in a unified fashion, which is rather different than the usual Rust/C++ model of every object having its own independent ownership. After embracing it I tend to have less of those resource management dependency headaches.)
I want to point out to unfamiliar readers that use-after-free in a Rust generational array is a logic bug, whereas use-after-free in just about any other non-Rust context is undefined behavior. Rust's safety is helping here, not by preventing the bug altogether, but by severely limiting the damage it can cause.
Congrats on the release, it looks great! Not a criticism, but model-projected shadows (not prebaked into lightmaps) are very jagged or really low-res, which reminds me of early Unity shadows. I know Unity worked hard to improve them for years and acquired experts on shaders, because users complained and even moved to Unreal for that reason alone. Not sure if there are now well-known solutions to get highly defined shadows, but I'm curious about your plans to improve them.
Yup shadows are notoriously difficult to make pretty in all situations. Fortunately we do have a number of "shadow quality" improvements in progress: https://bevyengine.org/news/bevy-0-6/#more-renderer-features. And we'll probably be improving shadows indefinitely from now on. The tech in that space is constantly evolving.
I'd like to see irregular Z buffers deployed in a modern graphics engine, as they actually provide pixel-perfect shadows without the need for raytracing hardware. They're pretty experimental, though. Maybe I'll take a stab at implementing them in Bevy sometime :)
I'm using Rend3 for a Second Life/Open Simulator client.
The current system with Rend3 has one thread doing nothing but refreshing the screen, while other lower-priority threads are independently changing the scene based on incoming messages from the network. Is that something the Bevy architecture can handle? Should I consider switching to Bevy?
Pipelining allows rendering and app logic to run in parallel. You can adapt to changes from the network when the app logic runs, so in that sense, it seems like a good fit. And you can use bevy_tasks to spawn background tasks (and spawn tasks across different thread pools, if that is required). I think it is worth considering Bevy, but I also think Rend3 is a great piece of software. We discuss rendering with that project's lead constantly :)
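As a rough sketch of that pattern (adapted from the async_compute example; `SceneUpdate`/`PendingUpdate` are placeholder names, and futures_lite is pulled in directly), as of 0.6 it looks something like this:

```rust
use bevy::prelude::*;
use bevy::tasks::{AsyncComputeTaskPool, Task};
use futures_lite::future;

// Placeholder for a decoded network message.
struct SceneUpdate;

#[derive(Component)]
struct PendingUpdate(Task<SceneUpdate>);

// Kick off background work without blocking the frame.
fn spawn_network_task(mut commands: Commands, pool: Res<AsyncComputeTaskPool>) {
    let task = pool.spawn(async move {
        // ... await and decode an incoming message here ...
        SceneUpdate
    });
    commands.spawn().insert(PendingUpdate(task));
}

// Each app update, poll finished tasks and fold their results into the world.
fn apply_network_updates(
    mut commands: Commands,
    mut query: Query<(Entity, &mut PendingUpdate)>,
) {
    for (entity, mut pending) in query.iter_mut() {
        if let Some(_update) = future::block_on(future::poll_once(&mut pending.0)) {
            // ... apply the completed update to the scene here ...
            commands.entity(entity).despawn();
        }
    }
}

fn main() {
    App::new()
        .add_plugins(DefaultPlugins)
        .add_startup_system(spawn_network_task)
        .add_system(apply_network_updates)
        .run();
}
```

The render world then picks up whatever the app world has produced so far, so network-driven scene changes simply flow in on the next app update.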
Looks really nice! If one was to implement a simple "3d arcball rotator" viewer for 3d formats to get familiar with the engine (i.e. load obj, view) can you suggest which examples would be most relevant (presuming bevy architecture is the only unfamiliar topic here)?
I noticed the short section on changes to Bevy's UI. I expect both GPU rendering and ECS will be important capabilities to consider for new GUI systems, and Bevy does both well. How do you imagine Bevy fitting into the budding Rust GUI story a la https://areweguiyet.com ? An entrant itself? The backend for one? Would you consider pure GUI applications an important use-case for Bevy now, or perhaps some time in the future?
So, I'm not Cart, but I'm a major contributor. My vision for this is as both an entrant and a logic-backend for GUI. In terms of use cases, I expect that: games > servers > scientific simulation > pure GUI, but all four are something we're trying to be actively mindful of. Accessibility matters, but native-look-and-feel is effectively a non-goal at the engine level; there's a lot of other frameworks that can provide this and cross-platform unification is more important.
We're planning to follow Godot and dogfood the UI to build out the editor, so support for pure GUI applications is something that we'll be actively testing.
If I'm a dev looking at engines to use, would now be the time to jump to Bevy, or are significant changes to the API and data structures still expected moving forward?
Things have started to stabilize, but unless you're willing to get your hands dirty and deal with regular breaking changes, I don't yet recommend staking your livelihood on it. That being said, by the end of the year I think we will be _much_ closer to our first stable release, and I expect the number of "serious projects" adopting Bevy to increase throughout the year. Now is the right time to start _evaluating_ Bevy for future projects (and experimenting with it).
Thanks for your work. What is the support for low-end and mobile devices? I have tried developing with Bevy on my old MacBook with an Intel Iris graphics card; maybe not ideal, but that is what I have and I want to support most hardware. Last time I tried, the simple cube example made my fans spin. Is this something that can be alleviated?
We support low-end devices well (although this is a constant balancing act)! Bevy runs well on iOS currently (and we have WIP Android support ... we're working on re-enabling that asap). Fans spinning is a known issue, but it is generally decoupled from framerates (which tend to be fine). On some machines we are over-engaging cores, even with empty apps that do nothing ... my current theory is that we need to adjust how our task pools are managed.
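(For anyone who wants to experiment in the meantime: the pool sizes are configurable today. Something like the sketch below should cap the thread counts, though double-check the exact path and constructor against the 0.6 docs.)

```rust
use bevy::core::DefaultTaskPoolOptions;
use bevy::prelude::*;

fn main() {
    App::new()
        // Force the compute/async/IO task pools to share at most 4 threads.
        // The resource has to be inserted before the default plugins set up the pools.
        .insert_resource(DefaultTaskPoolOptions::with_num_threads(4))
        .add_plugins(DefaultPlugins)
        .run();
}
```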
iPhones have insanely performant GPUs for the form factor. I wouldn’t be surprised if the newer ones get better performance in Bevy than a lot of perfectly usable laptops.
I haven’t tried bevy, so this question may sound naive. Does the new renderer support shadows for 2d rendered games? Something I’ve played around with in Godot and can look quite nice.
Also, I tried to search for the answer to this: is there any information on ways to make a game build with bevy moddable?
I can't speak for cart here, but from my perspective as a contributor to `bevy`, it's extremely useful having freedom from backwards compatibility concerns to improve our APIs.
For example, we want to rework our input handling, asset handling, and UI; supporting the current UI solution indefinitely would be frustrating when we hope to make something much better once we can focus on that. As a case in point, we massively changed our rendering APIs for this release, which allowed the improvements listed in the blog post to be implemented.
I wondered why the Rust community seemed so averse to 1.0 releases, but Rust for Rustaceans makes the interesting point that there are a lot of cases in the Rust language where adding new features technically qualifies as a breaking change. There doesn't seem to be a lot of room for the minor version number - everything is either a revision or a major release, and people aren't ready to commit to that level of strictness until they're feature-complete.
It looks great in a lot of respects, but I can't help but notice the aliasing. In the example scene at the top, when in motion, it probably looks like there's a snow storm going on in the foliage and on most of the wall/floor textures.
To me, a good anti-aliasing strategy is the single most important factor when it comes to graphics. I don't really care about things looking realistic or whatever, but I do care about my screen not being full of moving, weird, and distracting grainy patterns.
This involves more than just softening the edges around polygons. There are also textures that contain shapes, and the borders in textures with transparency (usually foliage), to think about.
My go-to solution in games with lackluster anti-alias is to just render them at a super-high resolution (4k/8k) and downscale, but that's not great for performance usually. You can compensate a bit because you won't need as much anisotropic filtering and such if you downscale, but even on expensive hardware that's not a solution in all games.
To get to the point: In some older games I've seen blurring (possibly due to hardware limitations) of further-away detailed textures reducing aliasing. I'd love to see that technique revived in more modern games, possibly with the help of AI to detect patterns/edges on textures that may be susceptible to aliasing and to selectively run those through a filter - basically shifting the work from artists having to design textures that don't create artifacts to something automatic.
In general good AA tends to be a second-class citizen in many modern game engines, partly because of how their renderer is configured by default (looking at you, Unreal). I really wish it wasn't so.
The aliasing you're seeing is "texture aliasing" from a lack of mipmaps. We have an implementation in the works, it just didn't make it into this release. This is a straightforward and well understood problem. Thankfully our mesh edges are already crisp (thanks to MSAA) and we have a number of other anti-aliasing implementations in the works (TAA, SMAA, FXAA), each with their own strengths and tradeoffs.
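(For anyone who wants to poke at the mesh-edge side today, MSAA is just a resource you insert before the default plugins; a minimal example, assuming your backend supports 4 samples:)

```rust
use bevy::prelude::*;

fn main() {
    App::new()
        // 4x MSAA; a value of 1 disables it. Higher sample counts cost fill rate and memory.
        .insert_resource(Msaa { samples: 4 })
        .add_plugins(DefaultPlugins)
        .run();
}
```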
That's cool - as are mipmaps, but even with those you'll still have patterns emerge that don't exist on the original texture. Humans are really good at noticing patterns, so they're distracting (a tiled floor doesn't look like it has u-shaped curves in reality). Those patterns not having sharp/grainy edges anymore already helps a lot though.
Maybe what I'm asking for is impossible, because fighting those patterns any further would just go too far into blurry territory. It might just be an inherent property of projecting something on a grid of neatly ordered pixels, which is very unlike the receptors in our eyes.
Though to be honest, I didn't even consider that it might just be missing mipmaps making it look like that. My perspective is very much that of a consumer who has spent a lot of time trying out various settings/technologies and sorted them into "looks bad" and "looks better".
(May have been featured on HN recently, I can't remember how I got there).
Mipmaps are pretty much automatic from the player's perspective; they're usually baked in by the developers when processing the texture files. They can probably be disabled, but I can't remember seeing that option in a game for a while.
I'm curious to see how Bevy's doing it. I'm making a game in Godot at the moment, and there is an option there to generate mipmaps or not for an imported texture, and you can choose bilinear or trilinear filtering for the mipmaps, but that's about it (maybe there's more in the API, I haven't checked).
> Maybe what I'm asking for is impossible, because fighting those patterns any further would just go too far into blurry territory. It might just be an inherent property of projecting something on a grid of neatly ordered pixels, which is very unlike the receptors in our eyes.
Basically it solves this exact problem - that rendering a high-res image at a smaller size can lose detail at points you don't want it to (e.g. because some part of the texture, like a line, is entirely in subpixel "space") - by pre-processing lower-res images to switch to at different distance thresholds from the camera. The result is actually better detail at scaled-down sizes, with much less flickering, even though the actual rendered texture at a distance can be much lower quality.
They can also save a bit of GPU power and possibly VRAM, as the lower res, distant textures stream much more quickly than the ultra high res, near ones.
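If it helps to see it concretely, a mip chain is just the same texture repeatedly halved and pre-filtered ahead of time. A naive box-filter version looks roughly like this (plain Rust, grayscale only to keep it short; real pipelines use better filters and work in linear color):

```rust
/// One grayscale image level: `data[y * width + x]`.
struct Level {
    width: usize,
    height: usize,
    data: Vec<u8>,
}

/// Build a mip chain by repeatedly averaging 2x2 blocks until we reach 1x1.
/// Each level is pre-filtered, so sampling a distant surface reads a few
/// already-averaged texels instead of skipping over detail and shimmering.
fn build_mip_chain(base: Level) -> Vec<Level> {
    let mut chain = vec![base];
    while chain.last().unwrap().width > 1 || chain.last().unwrap().height > 1 {
        let prev = chain.last().unwrap();
        let width = (prev.width / 2).max(1);
        let height = (prev.height / 2).max(1);
        let mut data = Vec::with_capacity(width * height);
        for y in 0..height {
            for x in 0..width {
                // Average the corresponding 2x2 block in the previous level (clamped at edges).
                let x0 = (x * 2).min(prev.width - 1);
                let x1 = (x * 2 + 1).min(prev.width - 1);
                let y0 = (y * 2).min(prev.height - 1);
                let y1 = (y * 2 + 1).min(prev.height - 1);
                let sum = prev.data[y0 * prev.width + x0] as u32
                    + prev.data[y0 * prev.width + x1] as u32
                    + prev.data[y1 * prev.width + x0] as u32
                    + prev.data[y1 * prev.width + x1] as u32;
                data.push((sum / 4) as u8);
            }
        }
        chain.push(Level { width, height, data });
    }
    chain
}

fn main() {
    // A 4x4 checker pattern collapses into ever smoother levels: 4x4 -> 2x2 -> 1x1.
    let base = Level {
        width: 4,
        height: 4,
        data: vec![
            0, 255, 0, 255,
            255, 0, 255, 0,
            0, 255, 0, 255,
            255, 0, 255, 0,
        ],
    };
    let chain = build_mip_chain(base);
    for (i, level) in chain.iter().enumerate() {
        println!("mip {}: {}x{}", i, level.width, level.height);
    }
}
```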
This release primarily constitutes a massive rendering API rework. You can think of it more as a foundation that makes it easier to implement better rendering features in the future.
If you check the release notes, you'll see HDR/Bloom support was dropped at the last minute, and several other major rendering features are still pending implementation. [0]
The great part about Bevy is that it's modular, so you can swap out the renderer if you like. For example, there are already several people using Bevy with Embark's kajiya renderer.[1]
You're the first person I've seen who also has this position! Anti-aliasing makes the difference between an image looking like a rendered set of data structures from a computer, or like a virtual world you happen to have an image of. All other aspects of graphics - light, shadow, texture resolution, polygon count - can be considered endearing style, but not aliasing and bad texture filtering.
> To me, a good anti-aliasing strategy is the single most important factor when it comes to graphics. I don't really care about things looking realistic or whatever, but I do care about my screen not being full of moving, weird, and distracting grainy patterns.
I don't think I've ever heard this perspective before. Thank you for sharing your thoughts. This one is pretty unique, I think.
> My go-to solution in games with lackluster anti-alias is to just render them at a super-high resolution (4k/8k) and downscale, but that's not great for performance usually.
By "lackluster anti-alias" do you mean some sort of home-grown MSAA that's garbage?
Since you're essentially hand-rolling SSAA/FSAA, can't you just override the AA settings in the graphics driver to force FSAA or something?
> To get to the point: In some older games I've seen blurring (possibly due to hardware limitations) of further-away detailed textures reducing aliasing.
It sounds like your complaint might actually be with anisotropic texture filtering [0] which reduces the blurring you're describing, and is generally considered to improve quality.
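For the curious, both knobs (mip filtering and anisotropy) live on the texture sampler at the wgpu level that Bevy 0.6 sits on. Roughly like this - field names as of wgpu ~0.12, and the anisotropy field's type has changed in newer wgpu releases, so treat it as a sketch:

```rust
use std::num::NonZeroU8;

/// Request trilinear filtering between mip levels plus 16x anisotropic filtering.
fn detail_preserving_sampler(device: &wgpu::Device) -> wgpu::Sampler {
    device.create_sampler(&wgpu::SamplerDescriptor {
        label: Some("trilinear_aniso_sampler"),
        mag_filter: wgpu::FilterMode::Linear,
        min_filter: wgpu::FilterMode::Linear,
        // Linear here = trilinear: blend between mip levels instead of snapping to one.
        mipmap_filter: wgpu::FilterMode::Linear,
        // Anisotropic filtering keeps surfaces seen at grazing angles sharp
        // without reintroducing shimmering. Field type is per wgpu ~0.12.
        anisotropy_clamp: NonZeroU8::new(16),
        ..Default::default()
    })
}
```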
Curious if others are able to see the WebGL2 examples working on iOS. They do not render for me, but WebGL2 definitely works in general, at least on 15.1+ (as per other sites and my own dev work)
Instead of doing all the operations required to render the game state within a single frame, you spread them out across multiple frames. This means that the output represents the state of the game some number of frames ago, increasing the time between your control inputs and what you see changing on screen.
This kind of pipelining is really about efficiency. Without it, either only the game logic is busy (utilizing just the CPU, most often) or only the renderer is busy (utilizing both CPU and GPU). By starting the next frame's game logic sooner (concurrently with the current frame's rendering), one can keep a steadier load on both CPU and GPU with less idle time.
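A stripped-down illustration of the trade-off outside of any engine (plain threads and a bounded channel, nothing Bevy-specific): the simulation thread works on frame N+1 while the render thread is still presenting frame N, so both stay busy, but what reaches the screen is always one completed tick old.

```rust
use std::sync::mpsc;
use std::thread;
use std::time::Duration;

// Stand-in for the game state produced by one simulation tick.
struct Frame {
    number: u64,
}

fn main() {
    // A bounded channel of size 1 is the "pipeline": at most one finished
    // frame waits to be rendered while the next one is being simulated.
    let (tx, rx) = mpsc::sync_channel::<Frame>(1);

    let sim = thread::spawn(move || {
        for number in 0..5u64 {
            thread::sleep(Duration::from_millis(8)); // pretend this is game logic
            // Blocks only if the renderer falls more than one frame behind.
            tx.send(Frame { number }).unwrap();
        }
    });

    let render = thread::spawn(move || {
        for frame in rx {
            thread::sleep(Duration::from_millis(8)); // pretend this is rendering
            // What gets presented is always the previous completed tick,
            // which is where the extra frame of input latency comes from.
            println!("presented frame {}", frame.number);
        }
    });

    sim.join().unwrap();
    render.join().unwrap();
}
```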
You can either use rust-analyzer[1] (for which VSCode is the best-supported editor) or IntelliJ Rust[2]. Personally, I use `rust-analyzer` with VSCode as a Bevy developer.
I use Neovim 0.6.0 with its built-in LSP and some plugins, including simrat39/rust-tools.nvim. It uses rust-analyzer. It works extremely well, but I don't use the debugger at all. I've used IntelliJ with a Rust plugin too, and it has debugger support of sorts, but the overall experience was sluggish and I never went back.
Seconded! It all works incredibly well. Rust with Visual Studio Code and the relevant suggested extensions (CodeLLDB, rust-analyzer) must be the least hostile expert system I've ever used (and I've tried a few).
I _love_ the Rust plugin for IntelliJ. I have used VSCode a ton as well, but never for Rust. In general, I think IntelliJ is head and shoulders above VSCode, but it is possible that rust-analyzer makes all the difference (I might check it out sometime).
Yep, and I've used Bevy. Just get VSCode and rust-analyzer. It's remarkable how well it works together with the language. There's also a thing that prints the errors and warnings inline to make them stand out more; sometimes you'll miss a squiggle otherwise.
But also, the suggestions are spot on quite often, e.g. finding the right import that you can insert with a click without scrolling up to the top, or adding/removing the & and * operators. You'll almost never come across a situation where you think it will compile but it doesn't.
I've tried Bevy v0.5 and it was very immature (as a matter of fact, all the Bevy "games" are single-screen). Looking at the release notes, it's not clear whether it now supports OpenGL, which means that on some setups it's not even possible to run it correctly at all.
It may be fun to write a hello world, but in this case, there are much easier (and more stable) engines.
A typical setup is VSCode + Rust Analyzer. RA wasn't very stable around 6 months ago, but it was stable enough to be used regularly.
What do you mean by single screen? There's no technical reason for that to be the case and there are already games that have multiple screens, even past the basic main menu + main game loop ones. You can find multiple functioning and fun, if small, games here https://bevyengine.org/assets/#games.
> it's not clear if it now supports OpenGL
Why do you need opengl when you have support for D3D, Vulkan, Metal, OpenglES, and WebGL2? What amount of the market are you missing?
The reason you don't see many Bevy games is 1) yes, it's new and relatively immature and 2) it takes a long time to make a game. That said, I've had a lot of fun building out a tbs game with it and have been very impressed with the foundation so far. On top of that you get to write in Rust which for me is a huge productivity boost when compared to writing in say C# for Unity, but that's subjective.
> What do you mean by single screen? There's no technical reason for that to be the case
I've programmed on v0.5, and SystemSet, which is the base for state management, was very immature - it didn't even work with FixedTimestep (because they compete over RunCriteria); I think it's this one: https://github.com/bevyengine/bevy/issues/2233.
I've tried implementing multiple states in a game, and it just didn't work. And I don't even know if it was a bug in my code or in Bevy, because state support is not stable, and the documentation is lacking.
Without proper state management, you'll end up running either a large number of systems, or fewer systems with code for all the cases inside.
Due to the problematic RunCriteria design, even synchronous asset loading is an issue. There is a plugin for that, but what if another plugin causes RunCriteria conflicts? Who knows.
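For context, the state-driven SystemSet pattern I'm describing looks roughly like this (written from memory against the 0.5/0.6-era API, so treat it as a sketch); each of these sets already carries a run criterion derived from the state, which is exactly what collides with FixedTimestep and other RunCriteria:

```rust
use bevy::prelude::*;

#[derive(Debug, Clone, Eq, PartialEq, Hash)]
enum AppState {
    Menu,
    InGame,
}

fn setup_menu() { /* spawn the menu UI */ }
fn menu_input() { /* switch to AppState::InGame on click */ }
fn run_game() { /* gameplay systems */ }

fn main() {
    App::new()
        .add_plugins(DefaultPlugins)
        .add_state(AppState::Menu)
        // Each SystemSet below only runs while (or when) the state matches.
        .add_system_set(SystemSet::on_enter(AppState::Menu).with_system(setup_menu))
        .add_system_set(SystemSet::on_update(AppState::Menu).with_system(menu_input))
        .add_system_set(SystemSet::on_update(AppState::InGame).with_system(run_game))
        .run();
}
```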
> Why do you need opengl when you have support for D3D, Vulkan
Because Vulkan may be broken for one reason or another, and then one ends up with a system where OpenGL is not supported and Vulkan doesn't work. My (modern) system is one such case.
> Why do you need opengl when you have support for D3D, Vulkan, Metal, OpenglES, and WebGL2? What amount of the market are you missing?
My take: Software isn't used exclusively on new computers.
I was just reading a forum thread full of complaints that a certain emulator won't work on 4th-gen i5 laptops. I want the software I write to be accessible to owners of machines at least as old as that, and I hope I can do that with Bevy.
Bevy is still in its very early days. Using their limited development time to develop for hardware that is already old today doesn't make much sense, considering Bevy's peak is likely another 5+ years in the future. By the time Bevy is very mature, the hardware that can only run OpenGL will be even less relevant and a very small niche.
That being said, as far as I know Bevy is modular enough to write third party render backends. If it keeps growing, I'm sure someone will do that.