Hacker News
Write shaders for the (sim) Vegas sphere (whenistheweekend.com)
362 points by jjwiseman 11 months ago | 80 comments



Got the matrix working.

I tried adjusting the UV so we could see the top of the sphere, but I gave up. For what it's worth, the threejs uniforms "projectionMatrix" and "modelViewMatrix" can be referenced in the GLSL as long as you declare them up top.

  #define PI 3.14159265359
  #define SQRT_2 1.4142135623730951
  #define SQRT_5 2.23606797749979
  
  //uniform mat4 projectionMatrix, modelViewMatrix;
  uniform float time;
  varying vec2 vUv;
  varying vec3 vNormal;
  
  highp float randomFloat( const in vec2 uv ) {
    const highp float a = 12.9898, b = 78.233, c = 43758.5453;
    highp float dt = dot( uv.xy, vec2( a,b ) ), sn = mod( dt, PI );
    return fract(sin(sn) * c);
  }
  
  float wobble(float x) {
    return x + 0.3 * sin(SQRT_2 * x) + 0.2 * sin(SQRT_5 * x);
  }
  
  float getRainBrightness(float simTime, vec2 glyphPos) {
    float columnTimeOffset = randomFloat(vec2(glyphPos.x, 0.)) * 1000.;
    float columnSpeedOffset = randomFloat(vec2(glyphPos.x + 0.1, 0.)) * 0.5 + 0.5;
    float columnTime = columnTimeOffset + simTime * columnSpeedOffset;
    float rainTime = (glyphPos.y * 0.01 + columnTime) * 350.0;
    
    rainTime = wobble(rainTime);
    
    return 1.0 - fract(rainTime);
  }
  
  void main(){
  
    float t = fract(time / 14.487);
    vec2 animatedUv = fract(vUv + vec2(t * 0.002, 0));
  
    vec2 gridSize = vec2(3.14 / 2.0, 1.0) * 100.0;
  
    vec2 glyphUv = fract(animatedUv * gridSize);
    vec2 gridCoord = floor(animatedUv * gridSize) / gridSize;
  
    float brightness = getRainBrightness(t * 0.1, gridCoord);
  
    brightness = clamp(brightness * 1.6 - 1.2, 0.0, 1.0); // clamp(x, minVal, maxVal)
  
    float coverage = 1.3 - length(glyphUv - 0.5) * 3.0;
  
    gl_FragColor = vec4(brightness * coverage * vec3(0.2, 1.0, 0.05), 1);
  
  }


Here's a static penrose tiling.

https://gist.github.com/vjeranc/265db912d4004c7c0b0f16ae5fda...

Interestingly, the sphere is not infinite, so this aperiodic tiling is rather pointless (it still looks nice, though).

Though I never found out whether there's a similar set of tiles (or a monotile) that can tile an infinite cone.


> cone

For physical tiles, clearly not, as the curvature would be wrong. I'm sure there remain interesting questions, but it's not obvious to me what projection makes sense, so I can't get further into the problem.


I now realized I wanted to say cylinder, not cone haha.


That's also interesting!

Assuming it's possible at all, I think scale winds up mattering in a way that it doesn't for a plane, because you have to accommodate "meeting yourself" again some fixed distance away.


A cone has the same curvature as the plane.


Not for any definition of curvature that springs to mind. There may well be definitions for which that's true (let me know, I'm curious!), but they're not what I meant. As I said, I was considering covering a cone with physical tiles. My point was that the same tile cannot be placed at different heights along the cone, because a slice along the tile will have to conform to ever larger circles.


The Gaussian curvature is the same. That is the curvature that can be measured "from within" the surface.

What you wrote is of course right as well.


Ah hah, neat!


How do you define the curvature for a cone when it has a singularity at the apex? By the definition I know (angle defect), the tip of the cone will carry nonzero curvature concentrated at that point.
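For reference, with the angle-defect definition a cone that unrolls flat into a sector of angle θ has all of its curvature concentrated at the apex:

```latex
\kappa_{\text{apex}} = 2\pi - \theta
```

This is positive for any genuine cone (θ < 2π) and vanishes in the degenerate plane case (θ = 2π).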


Might be interesting to slowly 'move' the sphere through an infinite aperiodically tiled plane.


    vec2 st = (vUv.xy * h + time/100. + 1.);
This will move it with time.

The tiling is not infinite, it's inside a big triangle, so with enough time you will travel outside of it, but I guess with proper bounds you can move around or make the triangle bigger.
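One way to stay inside the bounds is to drive the offset with a triangle wave instead of raw time; a sketch (reusing the `h` scale factor from the gist, with a drift amplitude picked arbitrarily):

```glsl
// triangle wave in [0, 1]: ramps up, then back down, so the
// offset ping-pongs inside a fixed window instead of drifting forever
float tri = abs(fract(time / 100.) * 2. - 1.);
vec2 st = (vUv.xy * h + tri * 0.5 + 1.);
```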

   vec2 st = (vUv.xy * h/(1.+0.5*cos(time)) + time/100. + 1.);
Periodic zooming. Looks like the sphere gets dressed in a cape.


Off-topic, but what fascinates me is how big an actual engineering project The Sphere is.

From the structure itself to syncing thousands of LED panels inside and outside the sphere, keeping them cooled in the Vegas heat, and keeping the interior cool and ventilated for a seating capacity of 18,000 people.

There are run-of-the-mill office buildings which fail to provide adequate ventilation.


All that effort, all those resources, just to get more eyeballs on ads.


I'd normally be there on the barricades with you but this is maybe a _little_ too cynical? The Sphere is a big performance art space, a piece of cutting edge technology that's a canvas and a playground for boundary-pushing visionaries, not a giant billboard.

Oh. I forgot about the exterior surface. And five seconds on image search tells me you were right. Well goddamn. Hard to imagine it'd cover its costs without that stuff, but yuck all the same.


The show currently running inside the sphere is focused on the negative human impact to the environment. So clearly the owners of the sphere are interested in spreading that narrative, if that helps you feel better about their use of resources.

Obviously selling ads is a mechanism to pay the bills and defray the massive building costs. It’s not like the rest of Vegas is some sort of moral high ground, so ads on its exterior kind of fits right in.


>Obviously selling ads is a mechanism to pay the bills and defray the massive building cost.

what came first, the chicken or the egg?

>It’s not like the rest of Vegas is some sort of moral high ground, so ads on its exterior kind of fits right in.

that's certainly true, but driving next to the thing makes you realize that even for Vegas it's a tad excessive. The right time of night with the right animation leaves you essentially light-blinded for a few moments.


Actually not really; the ads on the rest of the strip are by far more obnoxious than the ones on the sphere. Whether it'll stay that way I have my doubts, but for now, in a city of gimmicky shit, this is in good taste.


The "impact" part was like 10 minutes of the usual stuff. With the resolution being "we'll just get off the planet into the space".

The current show is really just a fancy technical demo of Sphere's capabilities.

(Also, I get triggered by the picture of the exoplanet with a large moon and a ring system. ON DIFFERENT ORBITAL PLANES!!!)


> negative human impact to the environment

That's not why we care about this. Advertising is an assault on people first, environment second.


> The show currently running inside the sphere is focused on the negative human impact to the environment.

Which is extremely ironic. Or tone-deaf, depending on how you look at it.


Well that and also to house the gigantic concert venue that's inside of it.


C'mon, it’s more than that.

It’s true innovation.


> There are run-of-the-mill office buildings which fail to provide adequate ventilation.

They actually can create cold wind inside the sphere (for special effects)! It's super cool.


Fun project! Here's a simple smiley.frag: https://gist.github.com/nickbarth/4ec5147bd1288fd2bcfc4c5b46...


Very nice project! It would be great to have a separate url for each code (embed it in the url perhaps?) for easy sharing.


A big smiley face. :)

    uniform float time;
    varying vec2 vUv;

    // https://iquilezles.org/articles/distfunctions2d/ (Thanks IQ!)
    float sdRing( in vec2 p, in vec2 n, in float r, float th )
    {
        p.x = abs(p.x);
   
        p = mat2x2(n.x,n.y,-n.y,n.x)*p;

        return max( abs(length(p)-r)-th*0.5,
                length(vec2(p.x,max(0.0,abs(r-p.y)-th*0.5)))*sign(p.x) );
    }

    void main() {

        // create coordinates at visual centre (y coords doubled for circles)
        vec2 centre = vec2(0.25, 0.25);        
        vec2 uv = vUv * vec2(1.0, 0.5) - centre;

        // mirror x to get both eyes for the price of one, and find eye offset
        float eyes = length(vec2(abs(uv.x), uv.y) - vec2(0.035, 0.03));
        // carve them out using smoothstep
        eyes = smoothstep(0.015, 0.016, eyes);

        float mouth = sdRing(vec2(uv.x, -uv.y + 0.03), vec2(7), 0.65, 0.1);
        mouth = smoothstep(0.02, 0.025, mouth);

        float shade = min(eyes, mouth);
    
        vec3 yellow = vec3(0.9, 0.7, 0.0);
        vec3 color = shade * yellow;

        gl_FragColor = vec4(color, 1.0);
    }


Never played with shaders before, made a spinning watermelon.

    uniform float time;
    varying vec2 vUv;
    
    void main() {
        vec2 st = vUv.xy;
        float f1 = length(abs(2.0*st)/2.0 - .1*time);
        float stripe = fract(f1*10.0);
        float stripe2 = stripe*stripe*stripe;
        float stripe3 = fract(-f1*10.0);
        float stripe4 = stripe3*stripe3*stripe3;
    
        float f2 = length(abs(4.0*st)/2.0 - .2*time);
        float stripe5 = fract(f2*10.0);
        float stripe6 = stripe5*stripe5*stripe5;
        float stripe7 = fract(-f2*10.0);
        float stripe8 = stripe7*stripe7*stripe7;
    
        gl_FragColor = vec4(vec3(0.02, 0.3, 0.01)+vec3(0.3, 0.02, 0.0)*(stripe2 + stripe4 + 0.5*stripe6 + 0.6*stripe8), 1.0);
    }
I'm really not sure how it works. I gather that I'm doing something with RGB tuples, and fract() appears to be a remainder operation (which is why I cube the values and add them together to create some smooth bands), but how shaders actually work, what vUv is, and what gl_FragColor assigns to is rather opaque to me.


The GPU takes model-space objects (a long list of vertices representing the three corners of triangles), a camera position, and a vertex shader that transforms them into screen space. It then splits the triangles into fragments (pixels) and runs the provided fragment shader (also called a pixel shader in DirectX-land) on each fragment.
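For a concrete picture, the vertex-shader half of this pipeline is usually tiny. A minimal pass-through version (assuming three.js's ShaderMaterial, which injects `position`, `uv`, and the two matrices automatically):

```glsl
varying vec2 vUv; // handed to the fragment shader, interpolated per fragment

void main() {
    vUv = uv; // forward the texture coordinates
    // model space -> view space -> clip space
    gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
}
```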

The inputs are defined by the programmer. In this case they are a uniform float (the same value given to every fragment) called time, and a varying vec2 called vUv, which is just the coordinates of a pixel on the sphere. A varying gets a different value for each fragment: the vertex shader outputs a value at every vertex, and when the fragments are created, a value is linearly interpolated between the three relevant vertices based on the fragment's position within the triangle.

gl_FragColor is the single output, an RGBA vector that represents the color applied to that fragment.

The only things missing from the more typical shaders used in games and, for example, Google Earth are texturing and lighting. Normally you provide the fragment shader a texture handle, which it uses with texture coordinates from the vertex shader to sample the texture; for lighting, the vertex shader provides a surface normal, which you can combine with a uniform argument for the light source's location to correctly shade the fragment.
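A sketch of that texturing-plus-lighting step (the `map` sampler and `lightDir` uniform names are made up for illustration):

```glsl
uniform sampler2D map;  // texture handle supplied by the application
uniform vec3 lightDir;  // direction toward the light source
varying vec2 vUv;       // texture coords from the vertex shader
varying vec3 vNormal;   // surface normal from the vertex shader

void main() {
    vec3 albedo = texture2D(map, vUv).rgb; // sample the texture
    float diffuse = max(dot(normalize(vNormal), normalize(lightDir)), 0.0);
    gl_FragColor = vec4(albedo * (0.1 + 0.9 * diffuse), 1.0); // ambient + Lambert
}
```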

If running a program for each and every pixel individually sounds inefficient, it would be, but the programming model here is actually SIMT: you write the program as if you are working on a single pixel, but it is converted to run on a wide SIMD array, processing 32 (NVIDIA) or 64 (AMD) pixels at a time. The cost is that conditional branches are essentially converted into conditional operations, meaning that, to a first approximation, you always execute both sides of any if statement.
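That's why shader code often replaces branches with arithmetic; for example, a per-pixel if can become a `step`/`mix` pair that every lane executes uniformly:

```glsl
// divergent form: if (uv.x > 0.5) color = a; else color = b;
// branchless equivalent (step() is 1.0 when uv.x >= 0.5, else 0.0):
vec3 color = mix(b, a, step(0.5, uv.x));
```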


Thanks, that helped! The idea of massively parallel computations like that was kind of what I was leveraging, I was like “I guess I treat this vec2 as a NumPy array of vectors?”—but it's cool that the job gets batched behind the scenes


It runs your code for every pixel on the canvas. You might like this:

https://www.shadertoy.com/


Also check out the shaders at

https://www.glslsandbox.com

But beware: there are often idiot children posting NSFW shaders there, and the mods don't monitor it very closely.


Building up a body with a few hundred lines of GLSL is one thing, but constructing an orgy with just 11 is something else entirely.

I wonder what the Kolmogorov complexity of "NSFW GLSL Content" is.

    float orgy(vec2 p) {
        float pl=0., expsmo=0.;
        float t=sin(time*8.);
        float a=-.35+t*.02;
        p*=mat2(cos(a),sin(a),-sin(a),cos(a));
        p=p*.07+vec2(.728,-.565)+t*.017+vec2(0.,t*.014);
        for (int i=0; i<10; i++) {
            p.x=abs(p.x);
            p=p*2.+vec2(-2.,.85)-t*.04;
            p/=min(dot(p,p),1.06);  
            float l=length(p*p);
            expsmo+=exp(-1.2/abs(l-pl));
            pl=l;
        }
        return expsmo*1.4;
    }
https://www.glslsandbox.com/e#108133.0


Yeah if you’re making an orgy with shaders they should just let it stay up out of pure respect.

Also warning for the future reader: the link in the parent comment is NSFW.


> what the Kolmogorov complexity of "NSFW GLSL Content" is

Can you explain what you mean by the above?


The smallest GLSL program that generates content generally regarded as NSFW.

It's among the more interesting concepts in computer science: https://en.wikipedia.org/wiki/Kolmogorov_complexity
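Formally, for a fixed universal machine U, the Kolmogorov complexity of a string x is the length of its shortest generating program:

```latex
K_U(x) = \min\{\, |p| : U(p) = x \,\}
```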

Though it probably greatly benefits from a good lecturer if you do not have the background... https://people.csail.mit.edu/rrw/6.045-2020/


> idiot children posting NSFW shaders

This is pure genius.

(NSFW) https://www.glslsandbox.com/e#108230.3

  // chatGPT , give me some fat fat
  // chatGPT , make this bitch more fat
  // chatGPT , add some scrolling lines behind the bitch
  // chatGPT , change the colors bruh
  // chatGPT , add nipples


It's a big improvement over the idiots posting swastikas there.


That's very nice. Look at the solution someone contributed to my isocity project [1]. It lets you share a URL with your city creation.

Maybe you can implement something like that

1 - https://github.com/victorqribeiro/isocity/commit/c6588171e37...


The Sphere would be a great venue for hosting a Demo Party[0].

[0] https://en.wikipedia.org/wiki/Demoscene#Parties


Cool idea to overlay the real-time rendering on a video of the actual sphere. Someone ought to capture a Gaussian-splatting scene of the sphere; that would be even better.


Now we need to run these on a small spherical display like the Gakken Worldeye https://youtu.be/85rs0WHnUy0?feature=shared&t=53

Here's an animated eyeball (hat tip chatgpt):

    uniform float time;
    varying vec2 vUv;
    vec3 lighting(vec3 normal, vec3 eyeDirection, vec3 lightDirection) {
    float diffuse = max(dot(normal, lightDirection), 0.0);
    vec3 reflected = reflect(-lightDirection, normal);
    float specular = pow(max(dot(reflected, eyeDirection), 0.0), 16.0);
    return vec3(0.1) + diffuse * vec3(0.5) + specular * vec3(0.3);
    }
    void main() {
    float theta = vUv.x * 2.0 * 3.14159265359;
    float phi = (1.0 - vUv.y) * 3.14159265359;
    vec3 sphereCoord = vec3(sin(phi) * cos(theta), cos(phi), sin(phi) * sin(theta));
    vec2 irisPosition = vec2(sin(time * 0.5) * 0.1, cos(time * 0.7) * 0.1);
    float irisDistance = length(sphereCoord.xy - irisPosition);
    float irisRadius = 0.25;
    float pupilRadius = 0.1;
    float irisEdgeSoftness = 0.02;
    vec3 color;
    if (irisDistance < pupilRadius) {
        color = vec3(0.0);
    } else if (irisDistance < irisRadius) {
        float t = smoothstep(irisRadius - irisEdgeSoftness, irisRadius, irisDistance);
        color = mix(vec3(0.0, 0.0, 1.0), vec3(0.5, 0.25, 0.0), t);
        color += 0.05 * sin(20.0 * irisDistance + time);
    } else {
        color = vec3(1.0);
    }
    vec3 eyelidColor = vec3(1.4, 0.90, 0.60);
    float blinkSpeed = 0.5; // Reduce this value to slow down the blinking
    float blink = abs(sin(time * 3.14159265359 * blinkSpeed));
    float eyelidPosition = mix(0.4, 0.5, blink);
    float upperEyelid = step(eyelidPosition, vUv.y);
    float lowerEyelid = step(eyelidPosition, 1.0 - vUv.y);
    color = mix(color, eyelidColor, 1.0 - upperEyelid * lowerEyelid);
    vec3 normal = normalize(sphereCoord);
    vec3 lightDirection = normalize(vec3(0.0, 1.0, 1.0));
    vec3 eyeDirection = normalize(vec3(0.0, 0.0, 1.0));
    vec3 litColor = color * lighting(normal, eyeDirection, lightDirection);
    gl_FragColor = vec4(litColor, 1.0);
    }


Thank you for introducing me to the world eye, this will be the 5th discontinued japanese electronic doodad i've bought off ebay due to FOMO.


Nice job!


This reminds me of when Matt Parker let his viewers control his Christmas tree lights: https://youtu.be/v7eHTNm1YtU?si=GqK6oSZVHBJkjcib


How does this have anything to do with that?


It's also a way for the public to try visualizations in a 3D space. If you're interested in trying out some shaders and seeing what other people have created it's another vector to explore.


How are someone's Christmas lights a "visualization in 3D space"? You realize this is just a sphere model in OpenGL and doesn't affect anything in the real world, right?

By this definition, someone flipping their light switches on and off is a "visualization in 3D space".


Very well done. Bright lights even bleed (not sure the right word) beyond the border of the sphere to simulate atmosphere (or lens?) effects.


An effect called "bloom" in computer graphics


And called "halation" in photography & imaging

> the spreading of light beyond its proper boundaries to form a fog around the edges of a bright image in a photograph or on a television screen


And, without looking that up, "halation" presumably comes from this being an artifact of the chemical process in early photography involving silver halides!


I am learning to write shaders. I came across the Las Vegas Sphere thread on X and I love the instant feedback that Alexandre Devaux has created on his site. My attempt to create the Amiga BOING! resulted in a full sphere instead of being cut off at the base like all the other examples. Can someone clue me into why this has occurred and how to fix it? My code is very "hacky" and I only partially understand what I've done here. Any help would be appreciated.

    uniform float time;
    varying vec2 vUv;
    varying vec3 vNormal;

    float rows = 20.0,
          cols = 10.0;
  
    float direction = -1.0;
   
    vec4 colorOne = vec4(1.0,1.0,1.0,0.1),
         colorTwo = vec4(1.0,0.0,0.0,0.1);
    vec4 amiga =  vec4(1.0,0.0,0.0,0.1);

    vec4 amigaMe (vec2 uv, vec4 colorOne, vec4 colorTwo);

    void main()
        {
        vec2 uv = vUv + (direction *(5.0*sin(time) * 0.1));
        amiga = amigaMe (uv, colorOne, colorTwo);
        gl_FragColor  = vec4(amiga.x,amiga.y,amiga.z,0.1);
        }
 
    vec4 amigaMe (vec2 uv, vec4 colorOne, vec4 colorTwo) 
       {    
       vec4 finalColor;
       float patternX = floor(uv.x*rows),
             patternY = floor(uv.y*cols);
       if(mod(patternX + patternY, 2.0) == 0.0)
            {
            finalColor = colorTwo;
            }  
        else 
            {
            finalColor = colorOne + colorTwo ;
            }
        return finalColor;
       }


That's super cool! What shader language is this using?


GLSL, the OpenGL Shading Language.

Definitely check out ShaderToy to see many other examples: https://www.shadertoy.com/


Ah, thanks! I was trying to copy-paste some examples and they didn't quite work -- it looks like ShaderToy exposes a mainImage(out vec4, in vec2) function with a different signature than the simulation-Sphere site wants. For example there's no main() method in here [0] and I wasn't sure how to translate that to the sphere's expected inputs.

[0] https://www.shadertoy.com/view/Ms2SDc


Yeah, it looks like there are a couple of differences. If you look at the top left there are a few examples that come from ShaderToy. Example 5 (smiley face) shows some of the differences:

* the void main() declaration difference

* the inclusion of iTime vs time

* gl_FragColor vs fragColor

Also, the demo adds:

uniform float time;

varying vec2 vUv;

varying vec3 vNormal;

I think those are the biggest differences.
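One way to port a ShaderToy shader is a small adapter that maps those names onto the sim's; a sketch (the resolution constant is made up, since the sim doesn't appear to expose an iResolution uniform):

```glsl
uniform float time;
varying vec2 vUv;

#define iTime time
const vec3 iResolution = vec3(1024.0, 512.0, 1.0); // hypothetical canvas size

// paste the ShaderToy body here; a trivial stand-in for illustration:
void mainImage(out vec4 fragColor, in vec2 fragCoord) {
    vec2 uv = fragCoord / iResolution.xy;
    fragColor = vec4(uv, 0.5 + 0.5 * sin(iTime), 1.0);
}

void main() {
    // ShaderToy passes pixel coordinates, so scale the 0..1 vUv back up
    mainImage(gl_FragColor, vUv * iResolution.xy);
}
```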


Yeah, you will need to adapt the code a bit: https://gist.github.com/b0o/ac939eb1608f949765ac842fe28f53b9


Nice!


Here is a slowly rotating checkerboard.

  uniform float time;
  varying vec2 vUv;
  varying vec3 vNormal;

  vec2 rot(vec2 uv, float angle) {
    float c = cos(angle);
    float s = sin(angle);
    mat2 rot = mat2(c, s, -s, c); // column-major: rotation by `angle`
    return uv * rot;
  }

  void main(){
    vec3 black = vec3(0.0);
    vec3 white = vec3(1.0);
    vec3 color = black;

    vec2 uv = vUv;
    uv = rot(uv, sin(time) * 0.05);

    float boardSize = 24.0;
    uv = uv * boardSize;

    vec2 gridId = floor(uv);

    if (mod(gridId.x + gridId.y, 2.0) <= 0.01) {
      color = white;
    } else {
      color = black;
    }

    gl_FragColor = vec4(color, 1.0);
  }


The pilot on my flight to Vegas said the cost to advertise (or do whatever, I assume) was $470,000/minute. I didn't verify the claim but it isn't too hard to believe and is pretty crazy, even if it's remotely true.


It's 400K per day, not per minute.


Why would anyone pay that much if he gets into television cheaper? I doubt that number.


Note: if you see a solid dark-brown color, it means your shader didn't work (that's what OP put in the base MP4 video).


It's so fun to copy and paste these on the shader sim :)

It would be fun to make a little click and see microsite like: http://recodeproject.com/


Love this idea.


Fun with ChatGPT, just tell it to write some GLSL shader code for a shader that would look cool on the Las Vegas Sphere.


The sphere consumes so much solar energy that the sun dims a little when the sphere is switched on.


What is a shader?



really fucking cool! good job


[flagged]


Credits: ChatGPT

Please do not

Anyone on HN who wants to see GPT code is able to do it themselves. Using it to get attention from other people who are expecting to see your thoughts/experience/advice is not cool, any more than answering people's questions with cut-n-pasted search engine results would be.


I'm missing the part where the OP (and the rest of us) asked you.


I've made this request 8 or 10 times and it gets more positive than negative votes each time, so I think I'm on to something. I don't mind someone referencing ChatGPT, but using it to create whole comments is information pollution, unless it's unusually funny.


This doesn't work. Did you try it beforehand?


Very unfortunate! It must have something to do with the formatting, I uploaded the code to a paste: https://pastes.io/dmqjuyktfv


Wow, that's cool! I asked ChatGPT "The fragment coloring is currently greyscale, update the fragment coloring to create a rainbow effect." and it created this excellent animated rainbow effect, first try:

https://pastebin.com/raw/NtYuJv1m

It looks especially good with the changing colours highlighting the bloom effect


Did you actually try it? Bc it doesn't work for me


My bad, something went wrong with the formatting, I uploaded the code to a paste:

https://pastes.io/dmqjuyktfv


Indent the code two spaces for HN: https://news.ycombinator.com/formatdoc



