Hacker News | ianthehenry's comments

https://bauble.studio/ is a programmatic 3D art playground that I've been working on for a while now, and I'm pretty excited about it! It's based around signed distance functions, which are a way to represent 3D shapes as, well, functions, and you can do a lot of like weird mathematical distortions and operations that give you cool new shapes. Like average two shapes together, or take the modulo of space to infinitely repeat something... it's a really fun and powerful way to make certain kinds of shapes.

SDFs are very cool in general, and widely used in the generative art communities, but kinda hard to wrangle when you're writing shader code directly. They really are functions, but GLSL doesn't support first-class functions, so if you want to compose shapes you have to manually plumb a bunch of arguments around. So Bauble is essentially a high-level GLSL compiler that lets you model SDFs as first-class values, and as a result you can make a pretty cool 3D shape in just a few lines of code. And then 3D print them!
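To make "SDFs as first-class values" concrete, here's a minimal sketch in JavaScript (Bauble itself is Janet-based and compiles to GLSL; the names `circle`, `move`, and `union` here are illustrative, not Bauble's API):

```javascript
// An SDF maps a point to its signed distance from a surface;
// composition operators are just higher-order functions.

// A 2D circle of radius r centered at the origin.
const circle = (r) => ([x, y]) => Math.hypot(x, y) - r;

// Translate any SDF by shifting the point it samples.
const move = (sdf, [dx, dy]) => ([x, y]) => sdf([x - dx, y - dy]);

// Union is a pointwise min over distances.
const union = (a, b) => (p) => Math.min(a(p), b(p));

// Two overlapping circles, composed as ordinary values.
const scene = union(circle(1), move(circle(1), [1.5, 0]));

console.log(scene([0, 0]));  // inside the first circle: negative
console.log(scene([10, 0])); // far outside both: positive
```

Doing the same composition in raw GLSL means manually threading the point argument through every helper function, which is exactly the plumbing described above.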

I need to do some actual work to promote and publicize it once I'm done with the documentation and implement a few more primitives, but it's very close!

The docs have lots of examples of the sorts of things you can do with SDFs: https://bauble.studio/help/

And for examples of some "art" that I've made with it recently:

https://x.com/ianthehenry/status/1839061056301445451

https://x.com/ianthehenry/status/1839649510597013592

https://x.com/ianthehenry/status/1827461714524434883


I really want to look at your art but it’s on twitter so I can’t!

oh yeah good point. i probably shouldn't link to that anymore. it's all on mastodon too! https://mastodon.social/@ianthehenry

Thank you, I really like the default tutorial how one can play with it. Is it possible to visualize data with this?

Depending on the data, maybe? SDFs aren't great at rendering large numbers of enumerated objects -- something like a point cloud would be prohibitively expensive, so I wouldn't think to use them for like traditional graphing.

this is really great: https://mastodon.social/@ianthehenry/113223607547344491

How did you do it? What's the shading factor?


Thanks! Here's the source:

    (color r2 (vec3
      (fbm 8 :f (fn [q] (rotate (q * 2) (pi * sin (t / 100)) + [t 0]))
        (fn [q] (cos q.x + sin q.y /)) q (osc t 30 30 10) | remap+)))
Basically taking the function (1 / (cos(x) + sin(y))) and adding it to itself 8 times, each time scaling and rotating the input a little more (:f).
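The octave-accumulation idea can be sketched in plain JavaScript (this is a simplified analogue for illustration, not Bauble's actual `fbm`; the rotation angle, scale factor, and the conventional amplitude halving per octave are assumptions here):

```javascript
// Each octave samples f at a rotated and scaled copy of the input,
// with halved amplitude, and the results are summed.

const f = ([x, y]) => 1 / (Math.cos(x) + Math.sin(y));

const rotate = ([x, y], angle) => [
  x * Math.cos(angle) - y * Math.sin(angle),
  x * Math.sin(angle) + y * Math.cos(angle),
];

function fbm(octaves, f, p) {
  let sum = 0;
  let amplitude = 1;
  for (let i = 0; i < octaves; i++) {
    sum += amplitude * f(p);
    p = rotate([p[0] * 2, p[1] * 2], 0.5); // scale and rotate the input
    amplitude *= 0.5;
  }
  return sum;
}

console.log(fbm(8, f, [0.3, 0.7]));
```

Note that `f` really can divide by zero when cos(x) + sin(y) vanishes, matching the caveat below.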

I'm curious if it looks the same on all GPUs because it kinda relies on floating point precision errors to give it that film-grainy textured effect. And it definitely divides by zero sometimes.


I think this is awesome and have already sent to a few people

hey thanks!

FYI, the default file errors out!

So if you've ever loaded Bauble before you might have a stale and no-longer-working version of the tutorial cached in localStorage -- if you just clear out the script and refresh it will restore the default one. If that's still erroring, please let me know!

Not OP, but I've also loaded Bauble before, and clearing localStorage didn't help (Firefox, macOS)...

    (torus :z 60 30 | twist :y 0.07 | rotate :pi :y t :z 0.05 | move :x 50 | mirror :r 10 :x | fresnel | slow 0.25)

    error: script:16:1: compile error: unknown symbol twist
      in evaluate [lib/evaluator.janet] on line 81, column 7
      in bauble-evaluator/evaluate [lib/init.janet] on line 8, column 12

Oh yeah if you clear localStorage like from dev tools, Bauble will re-save the script before refresh, putting it right back where it was. But if you like cmd-a backspace to empty the contents of the script and then refresh it’ll load the default.

https://bauble.studio/ is a lisp-based procedural 3D art playground that I hacked together a while ago. It's fun to play with, but it's a very limiting tool: you can do a lot to compose signed distance functions, but there's no way to control the rendering or do anything "custom" that the tool doesn't explicitly allow.

So lately I've been working on a "v2" that exposes a full superset of GLSL, so you can write arbitrary shaders -- even foregoing SDFs altogether -- in a high-level lisp language. The core "default" raymarcher is still there, but you can choose to ignore it and implement, say, volumetric rendering, while still using the provided SDF combinators if you want.

The new implementation is much more general and flexible, and it now supports things like 2D extrusions, mesh export for 3D printing, user-defined procedural noise functions... anything you can do in Shadertoy, you can now do in Bauble. One upcoming feature that I'm very excited about is custom uniforms and embedding in other webpages -- so you can write a blog post with interactive 3D visualizations, for example.

(Also as a fun coincidence: my first cast bronze Bauble arrived today! https://x.com/ianthehenry/status/1827461714524434883)


Am I right that the output of the lisp code is ultimately a plain GLSL shader (like one might find on shadertoy.com)?

I built an SDF-based rendering system (2D) for my game, and one of the big hurdles was making it data-driven, rather than needing a new shader for each scene or object.

Would be curious if/how you tackled that problem (:


Yep! It just outputs GLSL. It doesn't do anything smart -- it's a single giant shader that gets recompiled whenever you change anything, so it wouldn't really work for something like a game. I mean, it could handle like basic instancing of the form "union these N models, where N<256" but there's no way to change the scene graph dynamically.


I've done this for a project where the SDF functions are basically instructions: you build up instructions on the CPU to send to the shader, and then the fragment shader runs them like a mini bytecode interpreter. You can tile up the screen to avoid having too many instructions per fragment. Kinda wild idea, and performance may vary depending on what you're doing


That's pretty similar to what I'm doing!

The CPU builds an RPN expression (like "circle, square, union, triangle, subtract"), and the shader evaluates that in a loop.
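A CPU-side sketch of that stack-based evaluation, in JavaScript for illustration (the real thing runs the loop in a fragment shader over encoded instructions; the opcodes and shapes here are made up):

```javascript
const OPS = {
  // Primitives push a distance for the sampled point onto the stack.
  circle: (stack, [x, y]) => stack.push(Math.hypot(x, y) - 1),
  square: (stack, [x, y]) =>
    stack.push(Math.max(Math.abs(x), Math.abs(y)) - 1),
  // Combinators pop two distances and push one.
  union: (stack) => stack.push(Math.min(stack.pop(), stack.pop())),
  subtract: (stack) => {
    const b = stack.pop();
    const a = stack.pop();
    stack.push(Math.max(a, -b)); // carve b out of a
  },
};

function evalSdf(program, point) {
  const stack = [];
  for (const op of program) OPS[op](stack, point);
  return stack.pop();
}

// "circle, square, union": the union of a unit circle and a unit square.
console.log(evalSdf(["circle", "square", "union"], [0, 0])); // -1
```

In the shader version the program would be packed into a uniform or texture as numeric opcodes, but the evaluation loop is the same shape.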

I wasn't able to find examples of other people doing similar, but it seemed too useful to not be invented yet (:

Do you have any links to your work?

I'm writing some blog posts for my approach, but haven't finished them yet.

> You can tile up the screen to avoid having too many instructions per fragment.

I don't quite understand this part... If a given SDF needs N instructions to be evaluated, then how does tiling reduce N?

> performance may vary depending on what you're doing

Yeah, fill rate was not good enough with a straightforward approach, so I had to cache the evaluated distance values to a (float) texture atlas, then use those to render to screen. Luckily, standard bilinear filtering on distance values produces pretty decent results.


Yes sounds like the same thing! I also couldn't find anyone else doing it. Sounds super interesting what you're doing so I'd love to read your blog post when it's done if you want to drop me a message/email.

My project was using 2D SDFs for UI which meant you could use a bunch of primitive shapes and union/difference between them, and also add outlines, shadows, glows etc. This means that if you tile up the screen and use a union between two rectangles, only the tile with the overlap needs to calculate the union. It's a little more complicated in 3D with frustum culling.

I was doing it in WebGL, which doesn't have storage buffers, so I had to use uniforms to pass the data, which is a huge limitation. Apparently WebGPU could be better, so I will try to figure that out one day. But it's an early prototype, so no links or anything yet.


He made quite a few useful videos demoing it.

@ian Adding this to your help page would be helpful.

https://www.youtube.com/@ianthehenry/search?query=livecoding


Looks amazing, I was having fun with cssdoodle, and now I have two cool sites to do some programming+arts.


That's kinda cool ngl!


This is completely nuts. Well done.


This is phenomenal. Thanks for sharing.


This is amazing! Thanks for sharing


Stunning.

That site needs to be seen. That's great.


Wait this is very interesting but I don't follow -- how do the template arguments let you identify the callsite? I thought this was basically just syntax sugar for memoize(doSomethingVeryExpensive, x)([""]), but there's something extra on that argument list that's stable across invocations?


It looks like the first argument passed to tagged templates is always the same across all invocations for the same callsite.

> This allows the tag to cache the result based on the identity of its first argument. To further ensure the array value's stability, the first argument and its raw property are both frozen, so you can't mutate them in any way.

https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...
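The caching MDN describes can be sketched like this: since the same tagged-template callsite always passes the identical (frozen) strings array, that array works as a WeakMap key (the tag name `memo` and helper `expensive` are illustrative):

```javascript
let calls = 0;
function expensive(key) {
  calls++; // count how often the real work happens
  return key.toUpperCase();
}

// Cache keyed on the strings array's identity -- stable per callsite.
const cache = new WeakMap();
function memo(strings, ...values) {
  if (!cache.has(strings)) {
    cache.set(strings, expensive(strings.join(",")));
  }
  return cache.get(strings);
}

function lookup() {
  return memo`hello`; // same callsite => same strings array every time
}

lookup();
lookup();
console.log(calls); // expensive ran only once
```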


Ah, thanks! Got it. Okay that's wild. I would not expect that to be stable across dynamically-generated template functions, but it seems to work!


This is how lit-html implements efficient re-renders of the same template at the same location in the DOM. It uses the template strings array's identity to mark that some DOM was generated from a specific template, and if rendering there again it just skips all the static parts of the template and updates the bound values.

Tagged template literals are crazy powerful, and it would be really neat to add some of that magic to other language features. If functions could access the callsite, you could implement React hooks without the whole hooks environment. Each hook could just cache its state off the callsite object.
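The hook idea can be demonstrated with what we already have: a useState-like hook whose state cell is keyed on the callsite's strings array, with no surrounding hooks runtime. This is entirely illustrative -- React hooks don't actually work this way:

```javascript
const cells = new WeakMap();

// The tag receives the callsite's stable strings array and returns a
// hook bound to that callsite.
function state(strings) {
  return (init) => {
    if (!cells.has(strings)) cells.set(strings, { value: init });
    const cell = cells.get(strings);
    return [cell.value, (v) => { cell.value = v; }];
  };
}

function counter() {
  // The tagged template just serves as a stable per-callsite key.
  const [count, setCount] = state`c`(0);
  setCount(count + 1);
  return count;
}

console.log(counter()); // 0
console.log(counter()); // 1 -- state persisted across calls
```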


The key thing to note here is that it is completely _lexical_ in scope - this only generates one tagged template even when called multiple times:

  function lexicalMagic() {
    const callHistory = [];
  
    function tag(strings, ...values) {
      callHistory.push(strings);
      // Return a freshly made object
      return {};
    }

    function evaluateLiteral() {
      return tag`Hello, ${"world"}!`;
    }
    return { evaluateLiteral, callHistory }
  }
We can create multiple "instances" of our closures, just as we expect and they do in fact produce distinct everything-else:

  > const first = lexicalMagic()
  > const second = lexicalMagic()
  > first.evaluateLiteral === second.evaluateLiteral
  false
  > first.callHistory === second.callHistory
  false
Using them doesn't behave any differently than the unclosured versions (as expected):

  > first.evaluateLiteral() === first.evaluateLiteral()
  false
  > first.callHistory[0] === first.callHistory[1]
  true
  > second.evaluateLiteral() === second.evaluateLiteral()
  false
  > second.callHistory[0] === second.callHistory[1]
  true
However, there is only one instance of the template literal, not two as we might expect. This is really helpful for some cases, but it definitely is surprising given the rest-of-JS:

  > first.callHistory[0] === second.callHistory[1]
  true


Eh, by recursion I meant specifically the exponential "fib(n - 1) + fib(n - 2)" flavored definition. If you're writing the linear-time algorithm and happen to do the iteration via tail recursion, I don't think there's anything absurd about that
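The contrast between the two shapes, sketched in JavaScript (note that most JS engines don't actually perform tail-call elimination, so in practice you'd write the loop; the algorithmic shape is what matters here):

```javascript
// Exponential: recomputes the same subproblems over and over.
function fibNaive(n) {
  return n < 2 ? n : fibNaive(n - 1) + fibNaive(n - 2);
}

// Linear: the recursion is a single tail call carrying accumulators,
// so it's just an iteration in disguise.
function fib(n, a = 0, b = 1) {
  return n === 0 ? a : fib(n - 1, b, a + b);
}

console.log(fibNaive(10)); // 55
console.log(fib(10));      // 55
```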


Are there any existing compilers that could compile a function like this -- non-tail-recursive, and not tail-recursive modulo cons -- into something that executes in sub-exponential time? How would that even work? I've never heard of a compiler sufficiently smart to optimize a definition like this, and now I'm very curious.


Well, both Clang [0] and GCC [1] do compile the "insanely-recursive" fib into something less insane (or, in case of GCC, something that's insane in a different way). It looks like it's done with partial unrolling/inlining?

And, well, if you disregard heavy optimizations, then this "insanely-recursive" function is actually a somewhat decent way to measure the efficiency of the function calls and arithmetic.

[0] https://godbolt.org/z/3fce1qTdv

[1] https://godbolt.org/z/4jqa453qY


Yeah this is kinda like... this is definitely an advanced topic within the topic of macro writing and not very intelligible if you haven't written simpler macros for a while already.

I wrote a book whose third chapter is a much "gentler" introduction to macros. I don't know if it's actually intelligible (see: the monad tutorial fallacy) but it presents them the way that I was first able to understand them -- explicitly starting without quasiquote and working up to it. Easier for me than getting lost in the notation. https://janet.guide/macros-and-metaprogramming/


ha thanks, you are absolutely right. i updated the table :)


In that same article, you have some iterations

  8 / 41 = 0.1951219
  (8 + 41 = 49) / 8 = 6.125
  (49 + 8 = 57) / 49 = 1.16326531
  (57 + 49 = 106) / 57 = 1.85964912
  (106 + 57 = 163) / 106 = 1.53773585
That second line is screwed up, which also screws up the subsequent lines. It should look like

  (41 + 8 = 49) / 41 = 1.19512195
which then means the line after that should be

  (49 + 41 = 90) / 49 = 1.83673469
and so on


i think this is just very badly worded. the initial conditions are current=8, previous=41, not the other way around. i should make that more clear
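Under those initial conditions the iteration reproduces the article's figures. A sketch in JavaScript (the variable names follow the comment above; the scheme each step is next = current + previous, ratio = next / current):

```javascript
let previous = 41;
let current = 8;

for (let i = 0; i < 4; i++) {
  const next = current + previous;
  console.log(`(${current} + ${previous} = ${next}) / ${current} = ${next / current}`);
  previous = current;
  current = next;
}
// Prints the article's lines: 6.125, then 1.16326..., 1.85964...,
// 1.53773..., with the ratio settling toward the golden ratio
// (~1.618) regardless of the starting values.
```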


You point out a good general problem that I find when blogging -- like, you don't want this, right? The whole premise is absurd; the point is not to memoize an expression, but rather to demonstrate that you can share values between compile-time and runtime. But in order to do this you need some specific example of the idea so that readers have something concrete to hold onto and generalize from. And then the difficulty is trying to present that specific example in a way that gets the general idea across, right, without the reader overfitting to the specific example you presented. It's hard! I don't think this one really succeeded.


I call that the curse of examples. Often conflated with "being in the weeds." It's frustrating, as people will jump on you with X-Y-problem-style discussions. Which, fair: that is sometimes apt. Probably more often than makes sense, honestly.

Still, I did the callout that I did not mean that as an argument on if they really wanted it because I think it is fair to explore the intent as stated. And I appreciate how hard it is to make examples.


To your credit, you did explicitly call out your example in the blog post as something you wouldn't _actually_ want to do, so it didn't bother me. I've found that I'm more receptive to contrived examples to demonstrate a point if they aren't trying to hide the fact that they're contrived, so if I'm trying to convey a concept via example, sometimes I'll lean into the fact that the example is unrealistic to make it clear that the lack of utility shouldn't distract from the idea. As a silly example of this (see what I did there?), I might implement a trait with a `len` method that always returns 0 on strings to show how to resolve ambiguity when adding a method with name that a type already has in Rust.


this is a very funny way to do this, thanks! i was thinking of using the (deprecated but still widely supported(?)) `caller` property but was sad that it wouldn't admit multiple memoization dictionaries per calling function (also wouldn't work at the top-level but, like, who cares). but using the stack trace is great.

i mean, you know, this isn't really... this isn't really a thing that you would ever want to do, but i am glad that life found a way


it might be; you'd have to benchmark it to be sure


Can someone explain the backstory of what V is and why someone took the time to write this? To the uninitiated this sounds like someone criticizing some kid’s side project.

I’m picking up some context clues that V is widely used / famous / notable / significant somehow, but the only time I have ever heard of it before this is Xe Iaso’s similarly negative posts. Did V receive some huge funding grant that made it the target of ire? Is the author otherwise well-known? What am I missing?


When the V project started out, the creator of V made some big claims that raised a few eyebrows. They've gained a reasonable following over the years, have a pretty serious-looking website (https://vlang.io), a beer-money-level Patreon following, and some corporate partnerships/sponsors. However, they have also experienced some pretty brutal takedowns over the years, with some of the bolder claims about the language/compiler being exposed as untrue and some functionality being broken.

A word I keep seeing in relation to V is "aspirational" - the project aspires to be a serious language and it aspires to have some serious features, so I think it's fair to approach it with a more critical eye than one would a kid's side-project. I think HN would have been pretty understanding if they were open about the state of the various features and were a little less defensive when they encounter articles that review it like a Real Language.

If the authors don't want this kind of feedback they can just say front-and-centre (or on their FAQ @ https://github.com/vlang/v/wiki/FAQ) "this is a toy" or "this is pre-alpha" or "this is for research purposes". There are plenty of projects like this which are open about their intent and which don't have posts like this written about them. But I don't think that'll happen, so as it stands the pattern will continue - someone revisits the language every year or so, finds some things that doesn't meet expectations, writes about it and we discuss it on HN again.


There is a summary at the top of this post from 2019:

https://andrewkelley.me/post/why-donating-to-musl-libc-proje...


> To the uninitiated this sounds like someone criticizing some kid’s side project.

As someone uninitiated, you may not appreciate what a devastating burn that is.


We can’t just appreciate that someone took the time to evaluate the technical merits of a particular technology? What the hell does all of this peripheral stuff (“is the author otherwise well-known?”) matter?


"Sounds like someone criticizing some kid’s side project."

That's a really big "kid's side project". Looking at V's GitHub stats:

35.2k Stars

722 Contributors

2.1k Forks

