
A few weeks ago I was planning to design a model I could send to a local 3D printer, to replace a broken piece in the house for which I knew it would be impossible to find an off-the-shelf part that fit exactly.

I looked around at a couple of open source/free offerings and found them all frustrating. Either the focus on ease of use was too limiting, the focus was too much on blobby, clay-like modeling rather than strong parametric models (many online tools), the tool was too pushy about making you pay, or the UI was unintuitive (FreeCAD).

OpenSCAD was the one that allowed me to get the model done, and I loved the code-first, parametric-first approach and way of thinking. That said, I also found POV-Ray enjoyable to play around with back in the 2000s. Build123D looks interesting as well, thanks for recommending it.


The major advantage of Build123D for your use case -- sending it to someone else to fabricate it -- is STEP output support.

This really expands your options for what you can make and who you can ask to make it. There are now some online fabrication places that will do CNC from mesh formats, but really the only way to have proper control is sending them a STEP file.
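
For what it's worth, a rough sketch of what that looks like in Build123D (the part and dimensions are made up; export_step is the exporter name as I remember it from the docs):

    from build123d import *

    # Hypothetical replacement part: a plate with a single through-hole
    with BuildPart() as part:
        Box(40, 30, 5)
        Hole(radius=4)  # through-hole by default

    # STEP preserves the exact parametric geometry, unlike mesh formats like STL
    export_step(part.part, "replacement_part.step")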


I follow RL from the sidelines (I have dabbled with it myself) and have seen some of the cool videos the article also lists. I think one of the key points the article makes (and a bit of a personal nitpick of mine) is this:

> Thus far, every attempt at training a Trackmania-playing program has trained the program on one map at a time. As a result, no matter how well the network did on one track, it would have to be retrained - probably significantly retrained

This is a crucial aspect when talking about RL. Most Trackmania AI attempts focus on one track at a time, which is not really a problem, since the goal is to outperform the best human racers on that individual track.

However, it is this nuance that a lot of more business-oriented users don't get when being sold on some fancy new RL project. In the real world (think self-driving cars), we typically want agents that can generalize far better.

Most of the RL techniques we have do rather well in these kinds of constrained environments (in a sense, they eventually start overfitting on the given environment), but making them behave well in more varied environments is much harder. A lot of beginner RL tutorials also fail to make this explicit, and will e.g. show how to train an agent to find the exit of a maze without ever trying it on a newly generated maze :).
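
To make that concrete, here's the kind of evaluation loop those tutorials tend to skip (a minimal sketch; make_maze, env, and agent are hypothetical stand-ins):

    import random

    def evaluate(agent, make_maze, n_episodes=100):
        # Average return over mazes the agent has never seen during training
        total = 0.0
        for _ in range(n_episodes):
            env = make_maze(seed=random.randrange(10**9))  # fresh maze each time
            obs, done, ret = env.reset(), False, 0.0
            while not done:
                obs, reward, done = env.step(agent.act(obs))
                ret += reward
            total += ret
        return total / n_episodes

    # Reporting the return on the single fixed training maze measures
    # memorization; only newly generated mazes measure generalization.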


By the end of the article, and in the subsequent article, they're no longer doing it one track at a time.


At first I thought you were talking about some Rocket League AI stuff haha


Very disheartening. HF is doing so much good in the AI community, much more than regulators understand at the moment.


What does this comment mean? Why is it disheartening? What do regulators have to do with it?


Have a look at https://arxiv.org/pdf/2306.11695.pdf, which also uses the norm of inputs based on calibration.


Wow, this brought back memories. I could swear I wrote a blog post about this years ago but couldn't find it.

A quick search on the local file system revealed `vnccrawl/crawler.py` from 2016 [1], using what looks like a Shodan data dump and calling out to `vncviewer.exe`. I remember randomly logging into some instances and seeing a lot of cool random systems, including many controlling industrial equipment. Guess I never ended up writing that post.

One would think that on today's Internet it would take only a couple of seconds for those to get compromised, but security through obscurity, perhaps?

[1]: A random tip from that file: Using a password of 12345678 gives access to way more 'weakly secure' instances.


This reminds me of a short story by Ken Liu, The Message, about a xeno-archaeologist digging into a site full of radiation. The main character doesn't understand the warning message until it is too late, and almost loses his daughter.

Googling it now, it seems at one point it was going to get a film adaptation [1], but that apparently went nowhere.

[1]: https://reactormag.com/ken-lius-the-message-to-get-big-scree...



v1 used a very limited (albeit very easy and already quite impressive) form of transfer learning: take a pretrained network's 1000-dim vector outputs (since the original was trained on ImageNet) for a bunch of images belonging to three sets, and then just use k-NN to predict which set a 'new' image falls into.
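
Roughly like this, sketched in Python with torchvision and scikit-learn standing in for the JS libraries the demo actually used (train_images, train_labels, and new_images are placeholders):

    import torch
    from torchvision import models, transforms
    from sklearn.neighbors import KNeighborsClassifier

    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
    preprocess = transforms.Compose([
        transforms.Resize(256), transforms.CenterCrop(224),
        transforms.ToTensor(),
        transforms.Normalize([0.485, 0.456, 0.406],
                             [0.229, 0.224, 0.225]),
    ])

    @torch.no_grad()
    def embed(images):  # list of PIL images -> 1000-dim ImageNet outputs
        return model(torch.stack([preprocess(im) for im in images])).numpy()

    knn = KNeighborsClassifier(n_neighbors=5)
    knn.fit(embed(train_images), train_labels)  # three known sets
    print(knn.predict(embed(new_images)))       # which set does a new image fall into?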

v2 actually finetunes the weights of a pretrained network. At the time, it was a nice showcase of how fast JS ML libraries were evolving.
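
The v2-style recipe, again sketched in Python rather than JS; one common variant freezes the backbone and trains only a fresh head, though you can also unfreeze more layers:

    import torch
    from torch import nn
    from torchvision import models

    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    for p in model.parameters():
        p.requires_grad = False                    # freeze pretrained weights
    model.fc = nn.Linear(model.fc.in_features, 3)  # new 3-class head

    opt = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    def train_step(batch, labels):  # batch: preprocessed image tensor
        opt.zero_grad()
        loss = loss_fn(model(batch), labels)
        loss.backward()
        opt.step()
        return loss.item()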


Came here to cite your work; I still mention "CloudForest" in my slides as "an interesting implementation that is also capable of handling NaNs in DTs in a slightly different way." Crazy that this has already been 10 years.


Very cool work.

As a bit of a tangent (but wondering whether someone can answer): the article also mentions point-based rendering, and indeed it has been a staple of particle systems for a long time. However, especially in recent games, I have noticed (purely subjectively) a very subtle shift to a new style of particle system which is, on the one hand, fully point oriented (as opposed to (textured) fragments) but, on the other, behaves more like a physics system.

Examples:

- Hogwarts Legacy (heavily): https://www.gamespot.com/a/uploads/original/1816/18167535/40...

- Forspoken (heavily): https://oyster.ignimgs.com/mediawiki/apis.ign.com/project-at...

- Starfield (though more rarely): https://dotesports.com/wp-content/uploads/2023/08/temple-loc...

- AC6

- FF16 (heavily)

It's more obvious when you see it 'in motion'. The common denominator seems to be particles rendered as colored transparent points with physics. Especially on consoles, developers seem to be using this for very cheap (CPU-wise; it all runs on the GPU) effects.

Anyone in gamedev who has some insight in this?


I'm not in gamedev anymore (and didn't do graphics when I was), but my vague impression is that real-time fluid simulation has become more common over the past decade. I think that's what you're describing as "physics" here, and it's what makes point-based VFX actually look cool.

Without a fluid sim, point-based particle systems just look like fireworks. It was a cool effect in, like, the early 90s, but it is passé today.

The next step up from that is having each particle move independently with its own physics (some momentum, maybe a little wandering around from "wind", etc.) but then rendering them using little texture billboards. That's what games did up until relatively recently and looks pretty good for explosions, smoke, etc.

But now machines are powerful enough and physics algorithms clever enough to actually do fluid simulation in real-time where the particles all interact with each other. I think that's what you're seeing now.
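
A toy contrast between the two styles (illustrative Python; a real engine would do this on the GPU with proper neighbor search instead of the O(n^2) loop below):

    import random

    N, DT, GRAVITY = 200, 1 / 60, -9.8
    pos = [[random.random(), random.random()] for _ in range(N)]
    vel = [[0.0, 0.0] for _ in range(N)]

    def step_independent():
        # Billboard-era particles: each one integrates its own motion
        for p, v in zip(pos, vel):
            v[1] += GRAVITY * DT
            p[0] += v[0] * DT
            p[1] += v[1] * DT

    def step_interacting(radius=0.05, stiffness=40.0):
        # Fluid-style particles: close neighbors push each other apart,
        # which is what produces the swirling, liquid-like motion
        step_independent()
        for i in range(N):
            for j in range(i + 1, N):
                dx = pos[j][0] - pos[i][0]
                dy = pos[j][1] - pos[i][1]
                if 0 < dx * dx + dy * dy < radius * radius:
                    f = stiffness * DT
                    vel[i][0] -= f * dx; vel[i][1] -= f * dy
                    vel[j][0] += f * dx; vel[j][1] += f * dy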


Very interesting; indeed, they seem to be driven by better fluid simulations. Remarkable that they're finding their way into games. I was always under the impression that Navier-Stokes was hard in 3D, but there do seem to be performant solutions now that are easily offloaded to the GPU, e.g. https://github.com/chrismile/cfd3d (and NVIDIA also has some blog posts about it).

Edit: I also just found this: https://www.youtube.com/live/569oSOSoKDc?si=8V5buRMoI3IKqLQp... -- which is very close to what you describe and fully matches the kind of particle systems I was hinting at, thanks!


Do you have any examples of particle systems without fluid sim? (e.g. videos on YouTube from old games, or names of old games that used them?)


It's an engine rather than a game, but this video about UE4's particle system shows a lot: https://www.youtube.com/watch?v=OXK2Xbd7D9w


The first thing that came to my mind was Portal 2, the "you can't portal here" shower of sparks when you're spamming the gun in the elevators. :)


I'm not exactly up-to-date on current techniques (I've been out of AAA game making for a while now), but here are some general observations that might be useful.

Way back during the transition from Doom to Quake, which in some ways really marked the transition from 2D to 3D, Quake's particle systems also relied overwhelmingly on small flat-colored particles and their motions, rather than larger textured sprites and their silhouettes. (Quake did use a few sprites, but they were few and far between.)

And I think the reasoning was pretty straightforward even back then: in a 3D game world, there are a lot of conceptual and architectural benefits to working only with truly 3D primitives, and point sprites can often be treated like nearly infinitely small 3D objects.

Whereas putting 2D sprites into a 3D scene introduces a bunch of kludges. In particular, 2D sprites with any partial transparency need to be sorted back to front for certain major blend modes, which gets really troublesome as there are more and more of them. They don't play nice with z-buffers. And because they need to be sorted, they don't always play nice with the more natural order you might prefer for batching draw calls to keep the GPU happy. Likewise, they have a habit of clipping into 3D surfaces in ways that reveal their 2D-ness. There are probably more things I'm forgetting.
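
The sorting issue in a nutshell (hypothetical sprite/camera objects; squared distance is enough for ordering):

    def draw_transparent(sprites, camera):
        # Alpha blending only composes correctly back to front,
        # so transparent sprites must be drawn farthest-first
        def depth(s):
            dx, dy, dz = s.x - camera.x, s.y - camera.y, s.z - camera.z
            return dx * dx + dy * dy + dz * dz
        for s in sorted(sprites, key=depth, reverse=True):
            s.draw()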

These are all issues that have had lots of technical workarounds and compromises thrown at them over time, because 2d transparent textures have been so important for things like effects and grass and trees. Screen door transparency. Shaders to change how sprites are written into zbuffers. Alpha testing. Alpha to coverage. Various follow-on techniques to sand down the rough edges of these things and deal with aliasing in shaders. And so on.

And then there's the issue of VR (or so a cursory skim suggests). I haven't spent time doing VR development myself, but a quick refresher skim of articles and forum posts suggests that 2d image based rendering in 3d scenes tends to stick out a lot more, in a bad way, in VR than on a single screen. The fact that they're flat billboards is much more noticeable... which is roughly what I had guessed before I started writing this comment up.

All of those reasons taken together suggest why people would be happy to move on from effects based on 2d texture sprites in 3d scenes, to say nothing of the other benefits that come from using masses of point sprites specifically themselves (especially in terms of physics simulations and such).


I've certainly seen game engines advertising much more involved particle systems over the last few years, that is for sure. ;PPPP

Which in some ways I guess makes sense, as the shift towards AI has heavily pushed GPUs almost towards a "sizeable-reduced-instruction-set-GPU-in-GPU" approach with RT cores/tensor cores, etc.... ;P

Side note as well: in my experience at least, I think these systems may not be as hard to render as they were for the author. A few years ago, someone (at NVIDIA, I think?) wrote kernels to move SE(3) transforms to tensor cores, so I wouldn't be shocked if some of that could be ported to the spherical harmonics portion of Gaussian splatting during both compression and runtime: https://developer.nvidia.com/blog/accelerating-se3-transform...

Also, side side note: Gaussian splatting should be quite efficient... I think? Since the splats technically always have support in 3D space (and hopefully having good support in 3D space isn't too much of a problem). This should mean that even 'sloppy', quick-conversion calculations should work pretty decently in the end.

I say all of this knowing very little about most optimizations like billboarding, how things like nanite work, etc, etc. I do like it tho! ;PPPP


I know that in Unity they added a VFX Graph several years ago, which allows for these kinds of fine-grained particle effects. You can create a beautiful vortex of glowing points relatively simply.

I'm assuming Unreal Engine has something similar, but I don't have much experience with it.


I remember that kind of "particle effect" originally being shown off ~11 years ago by the Unreal Engine 4 demo called "Elemental": https://youtu.be/dD9CPqSKjTU

I don't think they're "from" that, but that's the first landmark I can point to.


I think this is just an artifact of people using much higher particle counts these days. When you have more of them, it's more noticeable that they are points.


Well... Valve is a very interesting study in that regard. They have voted for violence, topics bordering on meme hate speech, and pornography, but have voted against crypto shit and now AI.

I'm not making a verdict either way, but I find it interesting. I'd like to know more about the internal discussion(s) that took place to establish their frameworks, especially given that the company is private.


For what it's worth, I think I would vote the same way, perhaps with the difference of voting against the hate speech, depending on how bad it was.

Games with significant crypto and AI-art components bring significant risks to Valve in both legal and social contexts (99.9% of modern crypto-related projects are intentional scams, and AI art is a legal minefield right now).

On the other hand, violence and pornography are much more accepted by society (in the context of fictional entertainment).


I think it makes perfect sense that they care more about their users not falling for a crypto scam on their platform than about the actual content of the games.

