My Room in 3D (my-room-in-3d.vercel.app)
716 points by namukang on Sept 11, 2021 | 131 comments



Although libraries like Three.js (or software like Blender) give you so many tools and knobs to play around with, I cannot for the life of me make a scene look this great. The lighting, the composition, the textures, the consistent level of detail…

I would recommend everyone try it out for themselves to see what I’m talking about. No matter how much you try, there is always something too dark or too light. There's always something that looks like it was made from Minecraft blocks, and something so detailed that the two objects can't be from the same universe. Animations are either too stiff or move like jelly blobs.

How is this so perfect?! I just can’t comprehend how 3D artists’ brains work.


First off, it's art. You need to stop thinking like a programmer and think more like an artist. That means thinking about colors and lighting, tones and hues and highlights and shadows. If you're unfamiliar with these things, that's fine, you can find plenty of training material and courses online. It takes time, because it's a skill that requires training and practice and self-discovery.

Second, lighting is a careful balance, but in general, the simpler the rig, the better. Pull an IBL HDR map from somewhere, use a single directional light, and let the bounce lighting do the main work here. Outdoors looks gorgeous and that's a single directional light, and a ton of bounce. And that's the same thing providing the light in this room scene as well.

Third, learn the tricks and technologies. Learn about lightmapping, about shadow mapping, about the different shader tricks, about why real-time transparency is hard. Yes, libraries like Three.js can abstract over some of this, but knowing some fundamentals of the tech helps you focus on the things that will work and the things that won't. Simon Schreibt has some excellent artist-focused guides and tutorials on tech art.

The lighting here isn't realtime: just a bunch of baked lightmaps fading in and out. Not even SH probes!


It can be both art and programming! Here's a paper showing how programmers worked with artists to develop the classic TF2 half-Lambert shading [0]. That shader also uses an efficient method for representing diffuse environmental light with the low-order terms of a spherical harmonic basis, similar to techniques from quantum mechanics [1].

[0] https://steamcdn-a.akamaihd.net/apps/valve/2007/NPAR07_Illus...

[1] https://cseweb.ucsd.edu/~ravir/papers/envmap/envmap.pdf
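The half-Lambert remap itself is simple enough to sketch outside a shader. A minimal plain-JavaScript illustration (function names are mine, not the paper's):

```javascript
// Standard Lambert diffuse clamps N.L at zero, so everything facing away
// from the light goes flat black. Half Lambert instead remaps N.L from
// [-1, 1] into [0, 1] and squares it, keeping a soft gradient on the
// dark side while staying close to Lambert near full illumination.
function lambert(nDotL) {
  return Math.max(nDotL, 0);
}

function halfLambert(nDotL) {
  const wrapped = 0.5 * nDotL + 0.5; // remap [-1, 1] -> [0, 1]
  return wrapped * wrapped;          // squaring pulls the falloff back toward Lambert
}

console.log(lambert(0), halfLambert(0)); // 0 0.25 -- at grazing angles Lambert is already black
```

In a real shader this runs per-fragment on `dot(normal, lightDir)`; the point here is just the shape of the falloff curve.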


You read the paper wrong. They use a 6-face ambient cube instead of the SH9 term. I guess the SH9 term was somewhat expensive in 2007, but it's now pretty standard for representing ambient diffuse light, and has much better results than the ambient cube. For more on that, see this slide deck: https://steamcdn-a.akamaihd.net/apps/valve/2006/SIGGRAPH06_C...

But yes, understanding some physical basis for lighting is helpful even for non-photo-realistic rendering.


damn, I haven't played Team Fortress in so long lol. This is a cool thread, I'm going to have to Bookmark this so I can always come back to it for foundational inspiration and as a reference key for doing artwork.


I do agree with this. I just want to add that the "thinking like an artist" doesn't have to be as intimidating as it may sound.

I don't really do any visual art outside of graphics programming, I don't see myself as an artist, I'm very much not a visual thinker, and my deuteranomaly does make some aspects of my work harder at times.

Despite all that, I've found I can do a good job and get some really nice-looking results just by focusing on understanding the underlying equations and common tricks like those mentioned above.


Just a nit: the lighting outside is not a single directional source, unless you're on the Moon.

There is sun, which is slightly yellowish (or orange to red at sunset), and there is sky light which is obviously bluish. Sun is directional, sky light is not. Sun is a light source far away, but sky is a light source all around the scene.

The rest, well, can be done with a lot of bouncing, refraction, diffusion, subsurface scattering, etc.


The blue light of the sky is technically the result of Rayleigh scattering of the sun through the atmosphere, which can be emulated (and often is these days! [0]), but fair; it's often cheaper and easier for an artist to model them separately. I was going for a classic physics demonstration to point out how simple sources can often result in beauty.

[0] The classic is Naty Hoffman's 2002 work on atmospheric scattering http://developer.amd.com/wordpress/media/2012/10/GDC_02_Hoff... , but it has been developed over the years, e.g. Sebastian Hillaire's work on both Frostbite https://media.contentapi.ea.com/content/dam/eacom/frostbite/... and Unreal Engine https://blog.selfshadow.com/publications/s2020-shading-cours...
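For intuition, the strength of Rayleigh scattering falls off as the fourth power of wavelength, which a couple of lines can illustrate (the specific wavelengths below are illustrative picks for "blue" and "red", not from any of the linked papers):

```javascript
// Rayleigh scattering intensity scales as 1 / lambda^4, so shorter (blue)
// wavelengths scatter much more strongly than longer (red) ones.
function rayleighRatio(lambdaShort, lambdaLong) {
  return Math.pow(lambdaLong / lambdaShort, 4);
}

// Illustrative wavelengths in metres.
const blue = 450e-9;
const red = 650e-9;
console.log(rayleighRatio(blue, red).toFixed(2)); // "4.35" -- blue scatters ~4x more than red
```

That factor-of-four difference is the whole reason the sky reads as blue while direct sunlight reads as warm.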


While technically true, this is not how most 3D animation software models light — even if it (roughly) models Rayleigh scattering in solids, like opal or ground glass.

In all realistic rendering software, skylight and sun (or other celestial bodies, like the Moon) are modeled as separate sources of light, with different light characteristics.


> There is sun, which is slightly yellowish

The sunlight is white during the day; otherwise, anything white would appear yellow.


No, the white color of the sunlight is an illusion. It actually is yellowish, but our vision adjusts for that. We basically have a kind of adaptive white balance. The spectral intensity is mostly that of blackbody radiation, which peaks in the yellow part of the spectrum for our sun.


The sun is actually white. The yellow tint is due to Rayleigh Scattering in the atmosphere. When the sun is high in the sky there's less scattering and it appears whiter, when it's low there's more scattering and appears yellow/orange. Here's a better explanation: https://physics.stackexchange.com/questions/47694/why-is-the...


This is just wrong. The surface of the sun is around 5700K, and the emitted light follows the black body spectrum for that temperature. And that has a massive yellow peak. See https://en.m.wikipedia.org/wiki/File:EffectiveTemperature_30... for the data. This is measured outside the Earth's atmosphere. Rayleigh scattering does come into play, but it is way less important.

As I said, human vision has a kind of adaptive white balance that cancels out peaks like this to some extent. It also does that for artificial light sources with uneven spectra (neon light, blueish white LED lights...). [Side note: this is also the reason why cameras have adjustable white balance. A picture taken by a neutral sensor under some artificial lights is perfectly correct, but looks color tinted and wrong to the human eye afterwards, because the eye under the same conditions has a shifted white point.]

Why is this important? If you set up a light source in a renderer, you often get a pretty naive equal energy white as a default, which is simply not what you need for sunlight, even at noon under a clear sky.
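As a rough sanity check on the blackbody side of this, Wien's displacement law gives the wavelength at which a blackbody's per-wavelength radiance peaks. A quick sketch (numbers are standard physical constants, not renderer code):

```javascript
// Wien's displacement law: lambda_max = b / T, where b ~ 2.898e-3 m*K.
const WIEN_B = 2.898e-3; // m*K

function peakWavelengthNm(tempK) {
  return (WIEN_B / tempK) * 1e9; // metres -> nanometres
}

// For a ~5700 K surface, the radiance-per-wavelength peak sits near
// 508 nm, in the green-to-yellow region of the visible spectrum.
console.log(peakWavelengthNm(5700).toFixed(0)); // "508"
```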


You see how the brain does white balancing quite clearly when working with LED lights optimised for salad, for example. The light has very little green in it and looks nearly pink. After a few minutes it looks less pink and nearly normal (depending on the lamps). Then go outside in daylight and everything suddenly looks green, which takes a minute to go back to normal. The brain adds the green it perceives to be missing, and removes it again when it isn’t needed.


An even easier example is the typical home light, which is very peach-tinted (at 2500K) as seen from outside. But inside, the light feels entirely normal white after a very short time.


I don't think this is true. Direct sunlight can be many shades of white. It differs in the morning and the evening, and it is mostly yellow. The yellowness of the sunshine is a direct result of Rayleigh scattering and other atmospheric effects. The blue part of the spectrum that comes from the sun is scattered by the sky, making the sky blue. Taking away the blue makes the unscattered sunlight yellowish. In photos taken on the Moon, you see the true color of sunlight, which is pure white.


Of course there are different shades of white; what I am saying is that the color of the sun during the day is what most people would classify as "white" and not as "yellow" if the color were printed on a sheet of paper.

If the sun's color were actually yellow, people wouldn't be able to tell the difference between a white object and a yellow object under sunlight (they would both appear yellow). And clearly they can.

On the other hand, a white object doesn't get any "whiter" under another kind of light than it appears under direct sunlight; this again is because sunlight is white, and not yellow.

Of course there is some subjectivity here, but it is a misconception to call the sun light yellow. For example, this "bright sun" RGB [1] color, is clearly not the color of direct sun light. Hopefully, you will agree with this.

[1] https://www.colorcombos.com/colors/FCD440


Sunlight is wide-band (see "black body spectrum"). This is why a white object (that reflects all light equally) and a yellow object (which mostly reflects reds and greens) are discernible under such light, even under sunset light which is visibly yellow or orange. The sunset orange light still contains many other colors.

A yellow object and a white object would not be discernible under a very narrow-band yellow light, or under a mix of narrow-band red and green lights in the right proportions. This is how LEDs and lasers emit light, but it is totally unlike how hot objects (like the Sun) or fluorescent lamps (including "natural light" LED lamps) emit light.
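That broadband-vs-narrowband argument can be sketched with toy three-band spectra (the band values here are made up purely for illustration):

```javascript
// Toy three-band spectra: [blue, green, red]. Reflected light is the
// per-band product of the illuminant and the surface reflectance.
function reflect(illuminant, reflectance) {
  return illuminant.map((e, i) => e * reflectance[i]);
}

const whiteSurface  = [1, 1, 1]; // reflects everything
const yellowSurface = [0, 1, 1]; // absorbs blue, reflects green + red

// Broadband "white" illuminant: the surfaces differ in the blue band,
// so an observer can tell them apart.
const broadband = [1, 1, 1];
console.log(reflect(broadband, whiteSurface));  // [ 1, 1, 1 ]
console.log(reflect(broadband, yellowSurface)); // [ 0, 1, 1 ]

// Narrowband "yellow" illuminant carrying no blue at all: both surfaces
// now reflect identical light and become indistinguishable.
const narrowYellow = [0, 1, 1];
console.log(reflect(narrowYellow, whiteSurface));  // [ 0, 1, 1 ]
console.log(reflect(narrowYellow, yellowSurface)); // [ 0, 1, 1 ]
```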


Look, my point is simple: if you agree that snow and sugar are white, then the sunlight is also white.

It has to be because white was historically described as the color of the snow (or whatever is white) under the sunlight.

Now, if you want to adopt a more precise definition of white and other colors, based on (not so) recent scientific insights about the light, you can. Then the sunlight is not white, neither is snow or sugar. And is also certainly not yellow.

Let's be honest, sunlight is almost perfectly white, and kids use yellow to draw the sun because it wouldn't make sense to draw it with white.


The fundamental yellowness of sunlight is not Rayleigh scattering, but the sun's natural black body spectrum. See my reply to the sibling comment.


There is another catch. Sun's surface is about 6000K. A light with such color temperature would be considered bluish by most people if they looked directly at a lamp with such a spectrum. Rayleigh scattering lowers it to 5600-5800K at noon time, but still most humans would call a moderately bright lamp of this color "bluish".

The eye has a built-in white balance correction, and it moves the white point towards longer wavelengths as the brightness lowers. This is why a single 5600K lamp in a room makes bluish light, while 10 of the same lamps make it lit by bright white. Likewise, a single 2500K lamp in a room makes a calming sort-of-white light, but 10 such lamps make an unmistakably yellow light.

The sunlight outside at a cloudless summer noon is very bright (one usually even wants to wear sunglasses), so the sun looks white or yellowish, especially in contrast to the blue sky.


I agree it takes time! But would be amazing if the author(s) could make a tutorial / step by step on how they did it. I would def pay for that info / tutorial


Here’s a course by the creator of the post: https://threejs-journey.xyz/


When you look at the laptop on the desk, the editing process of the scene is shown there. Nice touch :-)


I noticed that there was the build process displayed on the laptop, then even a video of him, which made me think that he might have streamed the process. Indeed he did, here is the playlist:

https://www.youtube.com/watch?v=Hm7fdd8TD7Y&list=PL5nApUt6Z8...


In 20 parts x 2-3 hours each. I like that he starts with nothing.


I wish there were a way to pin this to the top of the discussion. It really adds a lot to this to have a link to the creation process.


Don't be daunted, it just takes time. You'll get faster at it with practice, but the first step is simply brute force, or at least it was for me. Take a picture of something simple, recreate it in 3D with exacting detail. When something doesn't match you drill down, find the optical phenomenon that you weren't modeling correctly, and fix it.

That's it. There's no shortcut to being good at this stuff, you just have to learn how the real world works and make sure that when you're simulating it in the computer you're using physically correct numbers for everything. Relentlessly comparing what you're doing to photographs is the only way to truly master the skill. It's a lot easier to take a step back and do stylized realism once you can nail the real thing.

That being said this is fantastically executed. I'm especially loving the ability to mix the baked GI live, it's a great effect for static scenes like this.

P.S. Ok I probably lied when I said there are no shortcuts. Here are a few:

- use a physically based (preferably unbiased) renderer

- use only energy conserving materials

- use light-probes to light your scene


The things that you list as shortcuts aren't actually shortcuts, but the best tools for their respective jobs.

To put it differently: this is what renderers were aspiring to from the start, but couldn't quite reach initially in real-world applications because of a severe lack of computing power. So, for the better part of three decades, computer graphics relied on ugly shortcuts that required laborious workarounds from skilled artists. With the right tools you just don't need a lot of them anymore.


True! I meant shortcut in a more sapiocentric way, i.e. "These are the top 3 things you can quickly google and implement for a big realism gain for low effort cost."


What are energy conserving materials?


https://google.github.io/filament/Filament.html#materialsyst...

This document is a good primer on a lot of the concepts that go into PBR. I have a few more open source/free resources on PBR and unbiased rendering as well and I will update this post if I can dig em up.


They don’t reflect more light than the amount of light that falls on them.
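A sketch of what that constraint looks like as a validation check. The material fields here are illustrative, not any particular engine's actual API:

```javascript
// An energy-conserving material reflects no more light than it receives:
// here, peak diffuse albedo plus specular reflectance must not exceed 1.
function isEnergyConserving(material) {
  const maxAlbedo = Math.max(...material.diffuse);
  return maxAlbedo + material.specular <= 1;
}

console.log(isEnergyConserving({ diffuse: [0.8, 0.8, 0.8], specular: 0.1 })); // true
console.log(isEnergyConserving({ diffuse: [0.9, 0.9, 0.9], specular: 0.5 })); // false -- radiates 40% extra
```

PBR material systems enforce a version of this automatically, which is a big part of why they look plausible under any lighting.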


as a 3d artist, every time I work on a new piece, there's a long painful time where I KNOW it could look good, and it looks like crap. and things will continue to look like crap, as I flounder around adding shapes here, forms there whatever... until something clicks, it's like being lost in the woods until the moment you see some glimmer of a direction to start going in.

honestly making art is almost always like that, and what makes it easier is not necessarily what makes you a better artist. A lot of people find "shortcuts", what you'd call a style: put certain lighting in, certain forms, and that makes it really easy to skip the middle part and just work with your established artistic vocabulary, but it's not what will push you to create better art...

I'm not sure what I said was helpful but I have completely gone through that process you mentioned over and over.


Yes, this description does capture the fundamentals of the process IMO.

Particularly the part about not being satisfied until you exactly execute what you had in mind, as opposed to funneling everything through layered techniques that lose fidelity.

The problem with this is that it involves reacquiring orientation over and over. Each time that is done successfully, it becomes easier to hold onto more details at the same time, and execute with more finesse, but it requires the interest in the subject to come from a very fundamental place in order to keep getting back on the horse with a centered mindset that retains focus on the bigger picture, and doesn't get distracted with shortcuts like you mentioned.


In reply, there's the famous quote by Ira Glass: https://www.goodreads.com/author/quotes/113989.Ira_Glass


Lighting is the main thing. Especially global illumination/ambient occlusion. Do a google search for "ambient occlusion" and you'll see a lot of scenes that are literally plain white but look amazing. Throw in some soft shadows and area light sources and things start to look really nice even with very plain materials.

If you look at this scene, I'm fairly certain all the lighting is prebaked in something like blender/maya etc.

There's also a lot of other little environment art things you can do. For instance, small bevels on sharp edges help things look more natural (look around your house and you'll notice even sharp corners aren't as sharp as you think). Adding roughness and specular lighting to materials helps a bit too, even things like wood or asphalt have some amount of reflectiveness at certain angles.


The lighting is completely prebaked in this scene. The shadow of the chair isn't animating. You can see the shadow of the armrest remain static on the table.

If you can, go for global illumination over ambient occlusion if you want a realistic look. Ambient occlusion is a pretty rough approximation of diffuse light from the surroundings; a proper global illumination rendering captures the same effect more accurately. You get a lot of subtle hues in the lighting which instantly make even flat, single-color surfaces more interesting and real.

You can also go overboard with your materials, adding texture maps for roughness and normals, which vary how the surface reacts to light. There are plenty of free materials out there that demonstrate what I mean. But good texturing is a topic all of its own, with its own tools and processes. If you start to add textures, my advice would be to keep them really, really subtle. The amount of variation you really need is a lot less than you would probably think.


A lot of it is the baked lighting (he mentions it takes about 7 hours to bake the lighting if he wants to change the scene! https://www.twitch.tv/videos/1144467679?t=00h18m03s).

I love doing 3D stuff for fun - but like you, everything looks like minecraft blocks from different worlds. Certainly not real.

But if you open Blender and put a couple of cubes in a scene and render it using Cycles (in the Scene tab change the Render Engine dropdown from the default Eevee) then the cubes looks way better: https://imgur.com/a/q9xvZeI

The ambient occlusion, and proper light bouncing and reflections make even the boring-est cubes look pretty good!


He recorded the entire process on Twitch. You're seeing the end result: the forest. When you're seeing how each tree is built one step at a time, it all seems a lot less magical.

It's just a lot of work. A lot more people (on average seem to) think it is.


Did you leave out a "than a lot more", or are you saying too many people think it is more work than it is?


In my experience people inexperienced with the process think 3D requires about 10x less work than it actually does.


That's common to all things. Can't you just make changes to the app? Can't you just port the app to X? Can't you just build me this bookshelf? Can't you just fix the dishwasher?

Can't you just... means they have no fucking clue.


It takes x^3 more work!


Yes I'd love some clarification, I kind of lost the true meaning of that last sentence.


repost from elsewhere in comments. A lot of that seems to be captured here: https://www.youtube.com/watch?v=Hm7fdd8TD7Y&list=PL5nApUt6Z8...


I've had the exact same problem when I was trying to design websites. It's the same thing, but 2D and simpler. I felt the result was fine but not quite right. Compared to "real" websites made by professionals, my design was clearly inferior, but I could not fathom why exactly.

Only later did I discover it was the font size a few points smaller, the padding one pixel larger, the color a little bit less contrasting, the 1px additional light border on top of a dark border, the shadow a bit more blurry, and a million subtle things you can get right and balanced only with experience.


The #1 best "Blender tip" (that also applies to computer graphics in general), is that by default it is horribly incorrect with its treatment of colour spaces and especially gamma/tonemapping.

Rather than me clumsily explaining with text how to get this right and have your artwork "pop" with very little effort, watch this video titled "The Secret Ingredient to Photorealism": https://www.youtube.com/watch?v=m9AT7H4GGrA

Combine that with physically correct materials and rendering, and you could literally "sketch" a handful of grey boxes with a single area light and have it nearly pop out of the screen with uncanny realism!

The #2 tip is that chances are you're probably working with a cheap LCD monitor. Even for $600-$1000 you can get a very decent monitor with DCI-P3 or AdobeRGB wide gamut support and more accurate colour. That helps to produce content that is consistent and correct. That, or invest in a calibrator and set up your operating system's colour management with the ICC profile of your panel. Without this, images you grab from Chrome or Firefox will shift in hue and/or brightness when you import them into image editing software. Test your browser with this site: https://cameratico.com/tools/web-browser-color-management-te...

The #3 tip is that you can make much better colour choices working in a perceptual space. The colour pickers in most software use trivial RGB 0..255 ranges mapped to a rectangle. This is garbage, criminally lazy, clownshoes programming. Yet, everybody does it, resulting in these "curtains of colour" controls: https://www.google.com/search?q=colors+picker+control&tbm=is...

You shouldn't be seeing those vertical streaks! The transitions should be smooth and even. An excellent space to use for picking colours or creating gradients is Oklab: https://bottosson.github.io/posts/oklab/

Notice how beautiful it looks compared to the other spaces, which are horribly uneven looking?

Even just these tips can make someone like me, a non-artist, produce beautiful images, web pages, or 3D models.

Art isn't magic, it's a technical skill and you can learn it!
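For the curious, the linear-sRGB-to-Oklab conversion is small enough to paste anywhere. This follows the matrices published in the Oklab post linked above; note it expects *linear* sRGB, not gamma-encoded values:

```javascript
// Linear sRGB -> Oklab, using the matrices from the Oklab reference post.
function linearSrgbToOklab(r, g, b) {
  // Linear RGB to an approximate cone (LMS) response.
  const l = 0.4122214708 * r + 0.5363325363 * g + 0.0514459929 * b;
  const m = 0.2119034982 * r + 0.6806995451 * g + 0.1073969566 * b;
  const s = 0.0883024619 * r + 0.2817188376 * g + 0.6299787005 * b;
  // Cube root models the eye's compressive nonlinearity.
  const l_ = Math.cbrt(l), m_ = Math.cbrt(m), s_ = Math.cbrt(s);
  return {
    L: 0.2104542553 * l_ + 0.7936177850 * m_ - 0.0040720468 * s_,
    a: 1.9779984951 * l_ - 2.4285922050 * m_ + 0.4505937099 * s_,
    b: 0.0259040371 * l_ + 0.7827717662 * m_ - 0.8086757660 * s_,
  };
}

// White maps to L near 1 with a and b near 0 (no hue, no chroma).
console.log(linearSrgbToOklab(1, 1, 1));
```

Interpolating or picking in (L, a, b) instead of raw RGB is what removes those ugly vertical streaks.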


For color picking, I have just released two new color spaces based on Oklab, trying to improve on HSV and HSL. You can try them and compare with other options here: https://bottosson.github.io/misc/colorpicker/


Looks great!

If I could request some features:

The layout might be better 4-wide (or dynamically reflowing) instead of 3-wide so that typical PC monitors can view all of the controls at once.

Also, I'd love the ability to use Oklab to pick ranges of equal distributions (dark to light), or typical symmetric colour pairings (opposites, or 'n' divisions of a circle, or whatever...)

In the past, I used an app that allowed me to do that with the Munsell colour system, and I found that it worked great. The only annoyance with that was that it was fixed to 10% increments, and had limited symmetry types.

However, despite that, I could pick a colour that was perceptually 2x as saturated (or bright), and then use it for a control that was 1/2 as big as an adjacent one. It made things look almost eerily balanced. I felt like it was the cheat code to artistic web layouts!


As a VFX freelancer: the tools are actually making this a lot simpler nowadays. 15 years ago it wasn't uncommon that instead of having physical correct(-looking) renderers one would have to fake even the light bounces by hand to make things look decent.

The issue with 3D tools is that in order to make real use of them you basically have to be a good photographer: you have to be extremely proficient with both natural and artificial lighting, you have to be very observant of the physical properties of materials, you have to model accurately and to scale, etc. These are all things the software can help you with, but just like the greatest piano won't make you a great pianist, a great 3D software won't make you produce great images.

Like any art form VFX and CGI depends on people who are willing to work on their results and learn from each other till they are perfect.


It's not perfect. Look at the shadow cast by the chair. The chair is moving, the shadow is not.


If you use a tool with global illumination, so it's not up to you to fake how the light bounces, but it's calculated for you, then you're much less likely to get the problem you describe. If the tech you want to play around with doesn't support GI, then at least you can use one that does to create a reference that you can follow along with on your target platform.


I found it a bit easier to start with voxels:

https://ephtracy.github.io/


Recently Unreal Engine 5 released a beta version with a new lighting feature called "Lumen." It makes lighting an order of magnitude easier if you can export your scene to, or create it in, Unreal.

For example: https://youtu.be/oT2DDu9vRJk


Don't use pure black. Make sure the ambient color is a dark blue or purple or something.


Here's the source to look at https://github.com/brunosimon/my-room-in-3d


I suggest watching the Youtube playlist below. It's insane how much work went into this. He paid attention to every detail.


Try poking https://blender.stackexchange.com/questions/167940/is-filmic....

Was recommended https://www.youtube.com/watch?v=m9AT7H4GGrA a little while back, it basically goes 'round in circles a bunch of times describing exactly what you're talking about, then recommends Filmic Blender. I kind of hit my TL;DR point at ~12min.

I have no idea if this information is relevant or insightful. Hopefully at some point I'll be able to play around with this stuff one day. (This machine hits 100°C if I try and play *video*... HP thermal design FTW. Not even going to try launching blender, no need to prove it'll melt everything lol)


> I cannot for the life of me make a scene look so great

You mean that not just anyone can be a 3D artist and make a Pixar movie? Say it ain't so!! /s

However, it is kind of insulting that someone would think they could just walk up to a block of stone with a hammer and chisel and think they will create the David. Now imagine it is the fisher price version of that hammer and chisel.

Three.js and Blender might be capable of certain things, and calling them Fisher-Price is not fair either, but there are definite limitations.

>How is this so perfect?! I just can’t comprehend how do 3d artist’s brains work.

Honestly, this looks closer to game engine rendering than something from a photo realistic engine.


His portfolio home page is incredible too: https://bruno-simon.com/

It's the website shown on the monitor in the room.


It actually works on both mobile and desktop. Mind definitely blown


It didn't work at all for me in latest Firefox, even with uBlock disabled. But it worked on a Chromium-based browser.


Didn’t work for me on iOS either.


I'm on latest Firefox, with uBlock enabled.

Worked like a charm for me. I'm on Linux.


Arch Linux with Firefox 92.0 and uBlock activated works sometimes.

However, it seems somewhat brittle and sometimes the initial loading doesn't complete. I am not 100% certain, but it seems to help to switch to another tab while the page is loading. Feels a bit like a race condition while loading the engine which doesn't occur when the tab runs with a lower priority.

After pressing the Start button, everything works fine.


It works in firefox mobile. Ublock enabled


Worked great for me on Firefox 92 with Enhanced Tracking Protection and uBlock Origin.


Man that portfolio website is amazing.

Makes me want to check out each of his individual projects.


Wow, I've missed the times when websites were made to impress people.


Thanks for sharing that, the website is really cool!


If you’re curious about the process, you can watch all the steps on Bruno’s Twitch: https://twitch.tv/bruno_simon_dev



It's certainly a good time to become a 3D graphics developer, or at least be proficient at it. WebGL simply works. The internet is fast enough to ship a lot of assets. Even phones can render tons of graphical elements. Mix in WASM and it's pretty much golden. The world-wide-web is the true VR.


No, it doesn't simply work, unless one is happy targeting what is basically 2012 hardware, or falling back to software rendering if the browser happens to blacklist the client's system for whatever reason.

Just like WASM is pretty much stuck in MVP 1.0, if one cares about support across most browsers.

And WebGPU won't be much better, as it might finally land in 2022 as MVP 1.0, take again another 10 years for wide adoption, and its shading language is a crazy mix of C++, Rust and HLSL as a compromise among all involved parties.

No wonder that streaming 3D as video, while using headless Vulkan and DirectX 12 Ultimate cards, seems to actually be the future of 3D on the Web.


I tried WebGL about a year ago, and it wasn't ready yet. It spun up CPU quite a bit, and was a different experience in terms of performance across browsers. Has this changed in the last year?


I think just like all things on the world wide web, a majority of it will be implemented sub-par, perhaps even terribly. It's not so much WebGL that is the bottleneck as all the experimental implementations of it (guilty as charged here).

Those people like Mr Simon here who take the time optimize every last bit can indeed make it purr.

Everything on the web starts as a framework or some proprietary bundle of some kind, and if it proves its merit, it may just get built into the W3C standards themselves.

We currently have a low-level tech (WebGL) in W3C, and libraries like three.js are providing the get-you-by to elevate it to average code legibility, but I think we are still a long way off from having 3D be a javascript-native ability for the average developer in a performant implementation, at which point we could really expect this www-metaverse experience getting hyped every which where to blow the lid off this two-dimensional web we're currently stuck in.


What do you mean you 'tried' WebGL? It's just OpenGL in the browser. As mentioned, it just works, the same way OpenGL/Vulkan/DirectX just work.

Now, various WebGL frameworks (Three.js, Babylon.js, etc...) are more performant or easier to use or whatever. Or you can compile native 3D platforms to wasm. Which did you try?


OpenGL/Vulkan/DirectX just work, WebGL depends on the whims of the browser and how it evaluates the client machine, eventually blacklisting it, or replacing calls with software emulation instead.

OpenGL/Vulkan/DirectX can use 2021 hardware capabilities; WebGL is stuck on a GL ES 3.0 subset, basically 2012 hardware.

Cool if you want to do iPhone 6 style graphics, but that is about it.


OpenGL ES 3.0 is derived from both OpenGL 3.3 and 4.2 while getting rid of some legacy stuff so it's a 'subset', but not really. And considering the range of devices browsers are on, it's not bad. Very few devices other than consoles and gaming rigs actually make use of bleeding edge graphics APIs.


Yeah, if we leave out compute and geometry shaders, mesh shaders, ray tracing hardware.

Unreal Citadel was running in Flash, back in 2011.

What Unreal based games are making the headlines in WebGL 2.0?


I see your point. I used a WebGL framework (Three.js), not compilation to Wasm. In the case of the latter, what is the best way to start?


Yeah Three.js is slow.

Godot and Unity can both compile to wasm so I'd try out one of those.


I sometimes use SweetHome3D to visualize spaces. It's a little more accessible, being point-and-click CAD, and it takes arbitrary objects in OBJ format (and others). You won't be able to get quite to this level, but it can render surprisingly well.

http://www.sweethome3d.com/


I recently had to move, and I designed my new place in sweethome3d before the move in order to know where the furniture would go immediately when I arrive. The result was quite good and you can see it here exported to html/js (though it's a 15MB download) https://nessuent.xyz/schlemann/


I've used SweetHome3D to plan various rooms for renovation - it's very intuitive and surprisingly powerful. Definitely better than trying to do building design with a "real" CAD package. The built in library is pretty comprehensive (and has nice scaling features, so even if a bookshelf model doesn't match yours, you can scale it easily to match), and importing custom 3D CAD designs is straightforward.


More fascinating is his website. https://bruno-simon.com/


His course looks fantastic!


Same, but with photos... (refresh the page twice)

1) Graffiti: https://free-visit.net/fr/demo01

2) My kitchen: https://free-visit.net/fr/demo01/demo02


I'd like to know more about free-visit.net's "magical" 3D editor... sounds interesting, but the links are coming up 404.



Nice teaser. Where's the software?!


Not yet public. I sell services around the software: I build the places in 3D myself.


If you're looking to learn how to make this kind of scene, I highly recommend Bruno's Three.js Journey course.

Disclaimer: I had the pleasure of having Bruno as an IRL teacher for some time ;)


Apart from Blender, what tools/libraries should I learn to use to do this?


Well, Blender itself is a whole world. But you can export to Three.js, and those are the only two tools you'll need at the very least.
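As a rough sketch of that pipeline: Blender's default exporter writes glTF, and Three.js ships a loader for it. Assuming you've exported to a file called `scene.glb` (a placeholder name), the browser side can be as small as this:

```javascript
import * as THREE from "three";
import { GLTFLoader } from "three/examples/jsm/loaders/GLTFLoader.js";

const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(
  45, window.innerWidth / window.innerHeight, 0.1, 100);
camera.position.set(4, 3, 6);

const renderer = new THREE.WebGLRenderer({ antialias: true });
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);

// "scene.glb" stands in for your own Blender export.
new GLTFLoader().load("scene.glb", (gltf) => {
  scene.add(gltf.scene);
});

// A single simple light; the baked materials do most of the work.
scene.add(new THREE.HemisphereLight(0xffffff, 0x444444, 1));

renderer.setAnimationLoop(() => renderer.render(scene, camera));
```

The hard part is everything that happens before the export: modeling, materials, and (as the top comments note) lighting.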


I'd add that for a diorama like this you could probably get by using Sketchfab instead of Three.js. When the focus is on the art it makes it much easier to get assets imported, lighting set up and working right, camera controls, etc.


You mainly need to know some javascript library for importing the models that were exported from Blender. Three.js is, afaik, the most popular. I've used it myself because it was the first thing I found.

If you want to use TypeScript, and you have time on your hands, maybe look at Babylon.js and compare it to Three.js.


What's your opinion on Three.js versus Babylon.js? What's the advantage of one over the other?


I spent two years with Three.js, then the last two with Babylon.js.

Three.js has a huge community; learning 3D programming by learning Three.js through the community is very feasible, and there are lots of nice examples around.

Babylon.js is maintained by a team of Microsoft employees. It is a cohesive piece of software with tons of integrated utilities (controls, GUI, good debugging tools, an excellent jsfiddle equivalent called the Playground, etc.). It strikes the perfect balance between a rendering engine and a game engine. It also has a well-engineered TypeScript codebase that, in my humble experience, has been more stable than Three. The docs are great, the community is nice and super active, and contributing is easy, but you have far fewer online examples than with Three.
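For flavor, a minimal Babylon.js scene (the kind of snippet the Playground starts you with) looks roughly like this sketch; it assumes a `<canvas id="renderCanvas">` already in the page:

```javascript
// Minimal Babylon.js scene: engine, scene, camera, light, one mesh.
const canvas = document.getElementById("renderCanvas");
const engine = new BABYLON.Engine(canvas, true);

const scene = new BABYLON.Scene(engine);

// Orbit camera around the origin; attachControl wires up mouse/touch.
const camera = new BABYLON.ArcRotateCamera(
  "camera", -Math.PI / 2, Math.PI / 3, 10, BABYLON.Vector3.Zero(), scene);
camera.attachControl(canvas, true);

const light = new BABYLON.HemisphericLight(
  "light", new BABYLON.Vector3(0, 1, 0), scene);

const sphere = BABYLON.MeshBuilder.CreateSphere(
  "sphere", { diameter: 2 }, scene);

engine.runRenderLoop(() => scene.render());
```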

So my advice for a 3D novice would be to play with Three.js and see where it goes. If you are in it for art and making things look good, the amount of examples with Three will also help a lot.

Now if you value engineering, and you already have even a very modest experience with 3D programming I really recommend Babylon.js

Also, you will see A-Frame, react-three-fiber, or react-babylonjs thrown around: don't bother if you don't already know Three or Babylon. You will outgrow them and still have to learn Three/Babylon "the correct way". I think most people decent with Three or Babylon can build their own app-specific abstraction with less overhead anyway.


react-three-fiber is just a way of writing Three.js code declaratively. It doesn't hide anything about Three.js, so there's nothing to outgrow; you can always drop down to plain Three.js.
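To make the distinction concrete, here's a sketch of the same box in both styles (assuming `@react-three/fiber` is installed; every lowercase JSX tag maps 1:1 to a Three.js class):

```javascript
// Plain Three.js (imperative):
//   const mesh = new THREE.Mesh(
//     new THREE.BoxGeometry(1, 1, 1),
//     new THREE.MeshStandardMaterial({ color: "orange" })
//   );
//   scene.add(mesh);

// react-three-fiber (declarative): the same objects, expressed as JSX.
import { Canvas } from "@react-three/fiber";

function Box() {
  return (
    <mesh>
      <boxGeometry args={[1, 1, 1]} />
      <meshStandardMaterial color="orange" />
    </mesh>
  );
}

export default function App() {
  return (
    <Canvas>
      <ambientLight />
      <Box />
    </Canvas>
  );
}
```

Under the hood the `<mesh>` element is a real `THREE.Mesh`, which is why you can always escape back to imperative Three.js via refs.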


I don’t have a strong opinion because I’ve never used Babylon.

However, I’ll echo the sibling comment here. I used Three.js and had some problems exporting it into my TypeScript project (using the DefinitelyTyped repo for type definitions). It eventually worked with a dumb js file that I copied over from Three and edited myself to make the export work in my project. This was in mid-2019.

Anyway, after getting that all working and moving on, I realized I should have at least tried Babylon because the code looked far more suitable for a TypeScript project. I never got around to porting over because it wasn’t crucial for my project, and Three was already working.

Based on that experience and the fact that I prefer ts, if I work on another project with 3D rendering in the browser, I’ll try Babylon.


You really should. I use Babylon.js in my 3D exports (https://free-visit.net). It's very easy to program. I was amazed at the power of this 3D API: with a few lines you can do incredible stuff.


This reminds me of trying to make a game and realizing that game dev is mostly not about programming. The most impressive part of this scene is how beautiful the assets are and how well the lighting has been configured.

If I worked with Three.js you’d get some spheres and a cube!



Back in 2004, I made My Room in 3D using the not-quite-new hotness that was VRML.

Can't find it anymore, but I'm glad we're going full circle with this trend :)

And I just got my Lightfield Lume Pad today for stuff like this.


Are those PHP elephants on the bookshelf? :D


I love this, and it reminds me of making a Half-Life level of my own room around 20 years ago. So much work to get the little details right, but it felt so strangely satisfying to see an eerie digital simulacrum of it take shape.


Looks really good. I was thinking about doing something similar for my house, but using my 3d engine and a first person camera, so people could walk around. What is the polycount for this scene? Looks really smooth


I recognize a fuckton of PHP elephants and an Ikea couch (or a 100% lookalike).


PHP elephants FTW!!! :)


my first thought was Ikea too haha. Probably designed in software, built by robots, and then re-modeled by a human back into software.


Does this export to VRML? I am joking of course, but it would be cool to have an easy way for people to start building 3d models using an open web technology without coding. Any ideas?


Does anyone know what the GUI library on the right-hand side is? I've seen it on multiple websites but the class names don't reveal anything.



Not as elaborate, but also created in Three.js:

https://3d.delavega.us


I have that same chair. I can never get it to stay in the highest position though. Do you have that problem?


It could be that you're putting more weight on the chair piston than he is....


It should have had a second camera rendering the room onto the computer screen, the way the room appears in the livestream.


Is there a way to zoom in? The keyboard shortcuts don't work on the site.


That’s a lot of keyboards!


So the dog and Octopus have beds but the sapiens doesn't!


Really admirable, that is for sure. I am excited.


Just one question: where does he sleep?


Pretty cool! But where do you sleep?


Your TV is as big as your couch.


I like your Monstera plant.


zoom in on the coffee mug


Very cool! Could use a loading animation


Fails in Safari.


very cool



