I think it always has had a trial version. Back in the early 2000s, my friend and I would mess around with the settings, hit render, and then go upstairs to play with Lego or something for a couple of hours. When it was done, it would just be fancy terrain without details like grass, trees, or even rivers, I think.
This project generates synthetic computer vision training data. The arXiv paper has more detail, including some cool pictures of the random creatures it can generate. The images are nice, but all of them are nature settings, so I assume you'd have to supplement this type of data with another dataset when training a computer vision model.
Maybe this IS the demoscene of today? I've seen more insanely beautiful computer-generated pictures in the last couple of years than I saw in the previous ten, AI or no AI.
This feels like we've got all the pieces of the puzzle to create a reality experience. I'm pretty sure that with visuals like this and haptic feedback, your brain will fill in any gaps once you adapt, given enough time.
You could use this with a VR headset, monitor heart rate and temperature, and adapt the environment based on the experiencer's desires.
It feels like we're on the precipice of recreating an experience of reality, which may reveal more about our existing reality than we ever expected.
From the homepage it sounds like they've prioritised geometry fidelity for CV research rather than performance:
> Infinigen is optimized for computer vision research, particularly 3D vision. Infinigen does not use bump/normal-maps, full-transparency, or other techniques which fake geometric detail. All fine details of geometry from Infinigen are real, ensuring accurate 3D ground truth.
So I suspect the assets wouldn't be particularly optimised for video games. Perhaps a good starting point though!
Epic did say that in some situations you might forgo normal maps with Nanite and save disk space even though you have super-detailed models, so it DOES fit in this context.
Also, it's common in video games to take a high-poly model and bake a corresponding normal map onto a lower-poly model, so these assets might be used that way too. I think Doom 3 was the first game to showcase the technique?
With Nanite, normal maps are less necessary than otherwise because the detail is preserved in the mesh itself. You could argue that micro-detail normal maps are still useful, but those aren't always generated from the mesh anyway, especially if they're tiling.
I doubt it was a matter of prioritization. To get normal maps you usually need a high-resolution mesh first, but then you need further steps to get good decimation for LODs and the normal bake. That's mostly extra work, not alternative work that went unprioritized. If by transparency they mean faking aggregates, you also need the full geometry there before sampling and baking it down into planes or some other impostor technique.
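For anyone curious what that extra bake step actually looks like, here's a minimal sketch of a selected-to-active normal bake using Blender's Python API. The object names are hypothetical, and it assumes the low-poly mesh is already UV-unwrapped with a node-based material:

```python
import bpy

# Hypothetical object names; in a real scene these would be your own
# high-resolution source mesh and decimated target mesh.
high = bpy.data.objects["HighPoly"]
low = bpy.data.objects["LowPoly"]

# Baking requires the Cycles engine.
bpy.context.scene.render.engine = 'CYCLES'
bake = bpy.context.scene.render.bake
bake.use_selected_to_active = True   # project detail from high onto low
bake.cage_extrusion = 0.05           # ray offset; tune per asset

# Create a target image and make it the active Image Texture node on the
# low-poly material so the bake has somewhere to land.
img = bpy.data.images.new("LowPoly_Normal", 2048, 2048)
img.colorspace_settings.name = 'Non-Color'  # normal data, not color
mat = low.active_material
tex_node = mat.node_tree.nodes.new('ShaderNodeTexImage')
tex_node.image = img
mat.node_tree.nodes.active = tex_node

# Select high-poly, make low-poly active, then bake tangent-space normals.
bpy.ops.object.select_all(action='DESELECT')
high.select_set(True)
low.select_set(True)
bpy.context.view_layer.objects.active = low
bpy.ops.object.bake(type='NORMAL', normal_space='TANGENT')

img.filepath_raw = "//lowpoly_normal.png"
img.file_format = 'PNG'
img.save()
```

Even in this stripped-down form you can see the point above: the decimated mesh, the UV layout, and the cage tuning are all work on top of the high-res geometry, not instead of it.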
This looks extremely similar to something Unreal already supports natively. Here is a demo video from them: https://youtube.com/watch?v=8tBNZhuWMac
It looks great, but I'm missing what's innovative about this. AAA procedural foliage has been done for 20 years, and terrain too. Blender has had procedural geometry nodes for a long time as well. What is so interesting here?
The whole point of this is to generate diverse training data with accurate labels for model training. If you actually want to create nice scenes, use regular Blender with some plugins and free online assets.
Blender is pretty approachable. How low are we setting the bar for the average Joe here? If it's "a few hours of poking around," then sure. But you can create some decent procedural nature scenes with a few free plugins and a weekend or two of learning the basics of the program.
Can we not let creative things take a bit of minimal effort, when we already have tools that make it nearly trivial as is?
True, but that's very easy to do as long as the LLM can generate the code, which shouldn't be difficult. Blender's scripting API is actually Python (and Godot's GDScript is very Python-like), which LLMs are heavily trained on, so with a few-shot prompt it should be possible to get one to generate working Blender scripts.
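As a rough illustration, this is the kind of script an LLM could plausibly produce for Blender. It's a minimal procedural terrain sketch, with arbitrary placeholder values for the noise and displacement parameters:

```python
import bpy

# Add a dense grid to act as the terrain surface.
bpy.ops.mesh.primitive_grid_add(x_subdivisions=128,
                                y_subdivisions=128,
                                size=20)
terrain = bpy.context.active_object

# Procedural noise texture to drive the displacement.
noise = bpy.data.textures.new("TerrainNoise", type='CLOUDS')
noise.noise_scale = 2.0

# A Displace modifier turns the noise into height variation.
disp = terrain.modifiers.new("TerrainDisplace", type='DISPLACE')
disp.texture = noise
disp.strength = 3.0

# Smooth shading so the result doesn't look faceted.
bpy.ops.object.shade_smooth()
```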