I tried adjusting the UV so we could see the top of the sphere, but I gave up. For what it's worth, the threejs uniforms "projectionMatrix" and "modelViewMatrix" can be referenced in the GLSL as long as you declare them up top.
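For anyone curious, a minimal sketch of what "declaring them up top" means, assuming the site is built on a three.js ShaderMaterial that fills these uniforms in automatically:

    uniform mat4 projectionMatrix;   // filled in by three.js once declared
    uniform mat4 modelViewMatrix;    // likewise
    uniform float time;
    varying vec2 vUv;

    void main() {
        // the matrices are then usable for view-dependent math, e.g. fading
        // the pattern by how far the model's up-axis is tilted away in view space
        float tilt = abs((modelViewMatrix * vec4(0.0, 1.0, 0.0, 0.0)).y);
        gl_FragColor = vec4(vUv * tilt, 0.5 + 0.5 * sin(time), 1.0);
    }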
For physical tiles, clearly not, as the curvature would be wrong. I'm sure there remain interesting questions, but it's not obvious enough to me what projection makes sense for me to get further into the problem.
Assuming it's possible at all, I think scale winds up mattering in a way that it doesn't for a plane, because you have to accommodate "meeting yourself" again some fixed distance away.
Not for any definition of curvature that springs to mind. There may well be definitions for which that's true (let me know, I'm curious!), but they're not what I meant. As I said, I was considering covering a cone with physical tiles. My point was that the same tile cannot be placed at different heights along the cone, because a slice along the tile will have to conform to ever larger circles.
How do you define the curvature for a cone when it has a singularity at the apex? From the definition that I know (angle defect), the tip of the cone will have all of the curvature concentrated at that single point, while the rest of the surface is flat.
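As a worked example (assuming the cone is made by rolling up a flat paper sector whose angle at the centre is θ < 2π):

    curvature concentrated at the apex = 2π − θ   (the angle defect)
    curvature everywhere else on the cone = 0     (the side unrolls flat onto the plane)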
The tiling is not infinite; it's inside a big triangle, so with enough time you will travel outside of it. But I guess with proper bounds you can move around, or make the triangle bigger.
vec2 st = (vUv.xy * h/(1.+0.5*cos(time)) + time/100. + 1.);
Periodic zooming. Looks like the sphere gets dressed in a cape.
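Roughly how that line could sit in the full fragment shader (a sketch; the scale h and the banding math are assumptions pieced together from the other comments here):

    uniform float time;
    varying vec2 vUv;

    void main() {
        float h = 10.0;   // assumed tiling scale; the real shader defines h elsewhere
        // periodic zoom: the effective scale swings between h/1.5 and 2h over time
        vec2 st = (vUv.xy * h / (1. + 0.5 * cos(time)) + time / 100. + 1.);
        vec2 f = fract(st);
        float v = f.x * f.x + f.y * f.y;   // squared fractional parts give the smooth bands
        gl_FragColor = vec4(vec3(clamp(v, 0.0, 1.0)), 1.0);
    }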
Off-topic, but what fascinates me is how big of an actual engineering project The Sphere is.
From the structure itself, to syncing thousands of LED panels inside and outside the sphere, to keeping those panels cooled in the Vegas heat, all while keeping the interior cool and ventilated for a seating capacity of 18,000 people.
There are run-of-the-mill office buildings which fail to provide adequate ventilation.
I'd normally be there on the barricades with you but this is maybe a _little_ too cynical? The Sphere is a big performance art space, a piece of cutting edge technology that's a canvas and a playground for boundary-pushing visionaries, not a giant billboard.
Oh. I forgot about the exterior surface. And five seconds on image search tells me you were right. Well goddamn. Hard to imagine it'd cover its costs without that stuff, but yuck all the same.
The show currently running inside the sphere is focused on the negative human impact on the environment. So clearly the owners of the sphere are interested in spreading that narrative, if that helps you feel better about their use of resources.
Obviously selling ads is a mechanism to pay the bills and defray the massive building costs. It’s not like the rest of Vegas is some sort of moral high ground, so ads on its exterior kind of fits right in.
>Obviously selling ads is a mechanism to pay the bills and defray the massive building cost.
what came first, the chicken or the egg?
>It’s not like the rest of Vegas is some sort of moral high ground, so ads on its exterior kind of fits right in.
that's certainly true, but driving next to the thing makes you realize that even for Vegas it's a tad excessive. At the right time of night, with the right animation, it leaves you essentially light-blinded for a few moments.
Actually not really, the ads on the rest of the strip are by far more obnoxious than the ones on the sphere. Whether it’ll stay that way, I’ve doubts, but for now in a city of gimmicky shit, this is in good taste.
I'm really not sure how it works. I gather that I'm doing something with rgb tuples, and fract() appears to be a remainder operation, which is why I square it and add it to itself to create some smooth bands. But what exactly is going on with how shaders work, or what vUv is, or what gl_FragColor assigns to, is rather opaque to me.
The GPU takes model space objects (a long list of vertexes representing the 3 corners of triangles), a camera position and a vertex shader that transforms them into screen space. Then it splits the triangles into fragments (pixels), and runs the provided fragment shader (also called pixel shader in dx-land) on each fragment.
The inputs provided are defined by the programmer. In this case they are a uniform float called time (the same value is given to every fragment), and a varying vec2 called vUv (a different value for each fragment: the vertex shader outputs a value at every vertex, and when the fragments are created, a value is linearly interpolated between the three relevant vertices based on the fragment's position within the triangle). Here vUv is just the coordinates of the pixel on the sphere.
gl_FragColor is the single output, an rgba vector representing the color applied to that fragment.
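Put together, a minimal version of that pipeline looks something like this (a sketch; three.js supplies position, uv and the matrices in the vertex stage):

    // vertex shader: runs once per vertex, emits a clip-space position and vUv
    varying vec2 vUv;
    void main() {
        vUv = uv;   // uv, position and the matrices are supplied by three.js
        gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
    }

    // fragment shader: runs once per pixel, receives the interpolated vUv
    uniform float time;
    varying vec2 vUv;
    void main() {
        // single output: an rgba color for this fragment
        gl_FragColor = vec4(vUv, 0.5 + 0.5 * sin(time), 1.0);
    }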
The only things missing, compared to the more typical shaders used in games and, for example, Google Earth, are texturing and lighting. Normally you provide the fragment shader a texture handle, which it uses together with texture coordinates from the vertex shader to sample the texture. For lighting, the vertex shader provides a surface normal, which you can combine with a uniform argument for the location of the light source to correctly shade the fragment.
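As a sketch, with the uniform and varying names being assumptions rather than anything the site defines:

    uniform sampler2D map;      // texture handle
    uniform vec3 lightDir;      // direction toward the light, same space as the normal
    varying vec2 vUv;           // texture coordinates from the vertex shader
    varying vec3 vNormal;       // surface normal from the vertex shader

    void main() {
        vec3 albedo = texture2D(map, vUv).rgb;   // sample the texture
        float diffuse = max(dot(normalize(vNormal), normalize(lightDir)), 0.0);
        gl_FragColor = vec4(albedo * diffuse, 1.0);   // simple Lambert shading
    }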
If the idea of running a program for each and every pixel individually sounds inefficient, it would be, but the programming model here is actually SIMT. That is, you write the program as if you are working on a single pixel, but it is converted to actually work on a wide SIMD array, doing 32 (NVIDIA) or 64 (AMD) pixels at the same time. The cost of this is that all conditional branches are essentially converted into conditional operations, meaning that, as a first approximation, you always execute both sides of any if statement.
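For example (a sketch; bandsA/bandsB are just stand-ins for two different chunks of work):

    uniform float time;
    varying vec2 vUv;

    float bandsA(vec2 p) { return fract(p.x * 10.0 + time); }
    float bandsB(vec2 p) { return fract(p.y * 10.0 - time); }

    void main() {
        // a 32/64-wide group straddling vUv.x == 0.5 will effectively run BOTH
        // functions and mask out one result per pixel
        float v;
        if (vUv.x > 0.5) v = bandsA(vUv);
        else             v = bandsB(vUv);

        // the branch-free form many shaders use instead:
        // float v = mix(bandsB(vUv), bandsA(vUv), step(0.5, vUv.x));

        gl_FragColor = vec4(vec3(v), 1.0);
    }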
Thanks, that helped! The idea of massively parallel computation like that was kind of what I was leveraging; I was thinking, "I guess I treat this vec2 as a NumPy array of vectors?" But it's cool that the job gets batched behind the scenes.
// chatGPT , give me some fat fat
// chatGPT , make this bitch more fat
// chatGPT , add some scrolling lines behind the bitch
// chatGPT , change the colors bruh
// chatGPT , add nipples
Cool idea to overlay the real time rendering on a video of the actual sphere. Someone ought to capture a gaussian splatting scene of the sphere, that would be even better.
It's also a way for the public to try visualizations in a 3D space. If you're interested in trying out some shaders and seeing what other people have created, it's another vector to explore.
How are someone's Christmas lights a "visualization in 3D space"? You realize this is just a sphere model in OpenGL and doesn't affect anything in the real world, right?
By this definition, someone flipping their light switches on and off is a "visualization in 3D space".
And, without looking that up, "halation" presumably comes from this being an artifact of the chemical process in early photography involving silver halides!
I am learning to write shaders. I came across the Las Vegas Sphere thread on X and I love the instant feedback that Alexandre Devaux has created on his site. My attempt to create the Amiga BOING! resulted in a full sphere instead of being cut off at the base like all the other examples. Can someone clue me into why this has occurred and how to fix it? My code is very "hacky" and I only partially understand what I've done here. Any help would be appreciated.
Ah, thanks! I was trying to copy-paste some examples and they didn't quite work -- it looks like ShaderToy exposes a mainImage(out vec4, in vec2) function with a different signature than the simulation-Sphere site wants. For example there's no main() method in here [0] and I wasn't sure how to translate that to the sphere's expected inputs.
Yeah, it looks like there are a couple of differences. If you look at the top left, a few of the presets are ShaderToy examples.
Example 5 (Smiley face) shows some of the differences.
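The main one is the entry point. For pasting ShaderToy code into the site, a wrapper along these lines seems to work (a sketch; the iResolution value and the demo body are assumptions):

    uniform float time;
    varying vec2 vUv;

    #define iTime time
    const vec2 iResolution = vec2(1024.0, 1024.0);   // assumed virtual resolution

    // ShaderToy-style entry point: paste the ShaderToy body in here
    void mainImage(out vec4 fragColor, in vec2 fragCoord) {
        vec2 uv = fragCoord / iResolution;            // placeholder demo body
        fragColor = vec4(uv, 0.5 + 0.5 * sin(iTime), 1.0);
    }

    // adapter: the site calls main(), ShaderToy code expects mainImage()
    void main() {
        mainImage(gl_FragColor, vUv * iResolution);   // fragCoord is in pixels on ShaderToy
    }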
The pilot on my flight to Vegas said the cost to advertise (or do whatever, I assume) was $470,000/minute. I didn't verify the claim, but it isn't too hard to believe, and it's pretty crazy even if it's only remotely true.
Anyone on HN who wants to see GPT code is able to do it themselves. Using it to get attention from other people who are expecting to see your thoughts/experience/advice is not cool, any more than answering people's questions with cut-n-pasted search engine results would be.
I've made this request 8 or 10 times and it gets more positive than negative votes each time, so I think I'm on to something. I don't mind someone referencing ChatGPT, but using it to create whole comments is information pollution, unless it's unusually funny.
Wow, that's cool! I asked ChatGPT "The fragment coloring is currently greyscale, update the fragment coloring to create a rainbow effect." and it created this excellent animated rainbow effect, first try:
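Not the generated code itself, but a sketch of a typical way to get that kind of effect with the site's time and vUv:

    uniform float time;
    varying vec2 vUv;

    // common HSV-to-RGB helper, handy for a rainbow sweep
    vec3 hsv2rgb(vec3 c) {
        vec3 p = abs(fract(c.xxx + vec3(1.0, 2.0 / 3.0, 1.0 / 3.0)) * 6.0 - 3.0);
        return c.z * mix(vec3(1.0), clamp(p - 1.0, 0.0, 1.0), c.y);
    }

    void main() {
        float hue = fract(vUv.y + time * 0.1);   // hue varies with height and scrolls over time
        gl_FragColor = vec4(hsv2rgb(vec3(hue, 1.0, 1.0)), 1.0);
    }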