Rendering large terrains (pheelicks.com)
175 points by pheelicks on March 10, 2014 | 52 comments



CLOD is the underlying mesh/LOD technology used in Frostbite (BF3/4). There is a formal paper on it [1]. My coworker is currently writing a toy CLOD renderer in C++ and the technology speaks for itself - he's rendering 8K x 8K of height-map data at some 200 FPS on a laptop NVidia chip (he will be open-sourcing it in a few months' time, so watch this space).

Frostbite goes a bit further (they stream in the height-map, etc.) but the core method is the same. His reaction to me sharing this was quite funny:

> Cool! But… Javascript??! It’s like writing the bible (or your religious equivalent) in glitter and kids crayons… Loses something.

You just can't get oldschool developers away from their beloved languages :).

[1]: http://dice.se/wp-content/uploads/GDC12_Terrain_in_Battlefie...


It's 2014 and Javascript is awesome.

If I ever do anything 3D, it will probably be WebGL. Some industry veterans have reservations about it, but so long as it doesn't look like poop smeared origami, it's OK.


>2014 and javascript is awesome

[1, 10, 5, 3].sort()

[1, 10, 3, 5]

Then there is this.

null == false //false

!null //true


https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...

> The default sort order is lexicographic (not numeric).

Works as documented.
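To get numeric order you pass a comparator - a quick sketch:

  // Array.prototype.sort with an explicit comparator sorts numerically:
  [1, 10, 5, 3].sort(function (a, b) { return a - b; });
  // -> [1, 3, 5, 10]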


Ok, pick any other embarrassing design flaw: http://wtfjs.com/


Documented boneheaded decisions are STILL boneheaded.


Sorry, by 'javascript is awesome', I meant 'javascript performance is awesome'.


"You just can't get oldschool developers away from their beloved languages :)." - 1995 is new-school now?


1995 will be 20 years ago next year.


If you look carefully you will see the very tops of the mountains "grow and sharpen" or "shrink and smoothen" when they pass a certain distance. I saw the same behavior in an old game, Starsiege: Tribes (from 1998). Now I know why it happens.


This could be caused by two issues:

Popping: it "naturally" happens with all dynamic LOD systems (back in '98 it would have been ROAM, Real-time Optimally Adapting Meshes). It happens when something changes from one detail level to another (and hence more detail "pops" into view). If you spend more time on your renderer you will generally morph those vertices from the lower-detail position to the higher-detail one over a short interval, so the change is spread out over time instead of happening in a single frame - and humans are bad at noticing gradual changes like that.

Incorrect Heuristics: a heuristic is used to determine how far away from the camera you need to be in order to not see the difference between four triangles and two (this has a lot to do with Nyquist-associated theorems) because "the difference is less than a pixel." If you get it wrong you could present too little detail for a pixel and change the apparent shape of the terrain (obviously intentionally presenting too little detail increases framerate).
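A rough sketch of what such a heuristic can look like (names and constants are mine, not from the demo): project the chunk's worst-case geometric error into pixels and only refine when it would actually be visible.

  // Pixels covered by a world-space error `geometricError` seen at `distance`
  // through a vertical field of view of `fovY` radians.
  function screenSpaceError(geometricError, distance, viewportHeight, fovY) {
    return (geometricError * viewportHeight) / (2 * distance * Math.tan(fovY / 2));
  }

  // Refine a chunk only when its error would cover more than ~1 pixel on a 1080p viewport.
  function shouldRefine(chunk, distanceToCamera) {
    return screenSpaceError(chunk.geometricError, distanceToCamera, 1080, Math.PI / 3) > 1;
  }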


This reminds me of: http://www.zephyrosanemos.com/ - another very impressive WebGL terrain demo. One thing I particularly like about zephyrosanemos is the real-time rendering of the procedural terrain generation.

I'd love to hear a comparison of the two by someone knowledgeable about the subject.


Very nice! An interesting approach!

Did you also consider using a more advanced geometry representation for the terrain than heightfields, such as triangulated irregular networks? That would allow you to choose an error threshold for each LOD and seamlessly join neighboring patches together without the need to use vertex shaders for it.

I did my own "Google Earth"-alike using a modified BDAM algorithm a couple of years ago. It required Hadoop preprocessing of detailed input data (resolution in meters), parallel terrain simplification using quadric error metrics and edge collapses and produced a set of layers at different LODs.

The cool thing about it was that you could choose the desired screen-space error and fetch only those LODs that fit - usually these were far smaller than regular heightfields of the same resolution/error. You could also incorporate roads, rivers, buildings, bridges and tunnels straight into the terrain geometry, as well as different textures for each sub-part of it.

The tessellation was a bit funky (BDAM uses triangular patches instead of tiles); however, there are also variants that use rectangular tiles. The vertex shader was then used to simulate 64-bit precision, giving uniform rendering detail in centimeters across the whole Solar system. A geometry shader could add more detail on top, and overall it was very snappy online, as the size of the downloaded data was smaller than with regular meshes.
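The 64-bit emulation basically comes down to splitting each coordinate into two 32-bit floats on the CPU and letting the large parts cancel in the shader - a minimal sketch of the split (my own illustration, not code from that project):

  // Split a JS double into a 32-bit float plus a low-order remainder.
  function splitDouble(value) {
    var high = Math.fround(value);        // what fits into a 32-bit float
    var low = Math.fround(value - high);  // the remainder that would otherwise be lost
    return [high, low];
  }

  // e.g. a planet-scale coordinate in meters
  var parts = splitDouble(6371000.123); // both halves get uploaded to the GPU
  // In the vertex shader you compute (high - cameraHigh) + (low - cameraLow),
  // so the huge magnitudes cancel before precision runs out.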


I was keen on keeping as much of the complexity as possible on the GPU, so I wanted to avoid having to update the geometry. Not to say I'm against exploring different approaches. Did your "Google Earth"-alike ever see the light of day?


Completely understood; it's better to focus on the interesting 3D stuff, and easier to do it this way in JavaScript than to handle tile caching using WebWorkers/WebSockets and local storage in a language that doesn't support proper multithreading.

Yes, the project was completed and worked fine, though it ended up buried deep within my former employer's R&D lab and I have no idea whether they are using parts of it anywhere. They got acquired and another office aggressively took over, getting rid of internal competing technologies. I also had prior experience adding dynamic 3D cities and vector maps to NASA's World Wind, so writing my own globe from scratch was a joy ;-)

I am thinking about doing similar apps in WebGL and Qt and writing some tutorials on how to do it as I progress, so perhaps we can inspire each other then. First I need to finish some tutorials for advanced 2D geometry and Bezier curves though (Illustrator-class algorithms with JavaScript + HTML5 live examples).

Thanks for writing this inspiring piece! :)


Ah, projects in the R&D lab. I can sympathize with projects being "put on the shelf".

RE 3D world, I'm already working on something similar. Shoot me an email (see HN profile)?


There are some great open source WebGL terrain renderers out there, on a planetary scale, for example:

http://doarama.com/view/2171

This is built on the Cesium open source virtual globe http://cesiumjs.org/


The main reason for LODing is to increase the draw distance. The demo seems to have implemented a LOD system, and then fogged out the draw distance anyway ... which is odd.


No matter how far your draw distance is, you totally DO want to "fog anyhow" in any event: you want distant scenery to smoothly fade out rather than abruptly pop out (regardless of your actual far-plane distance), plus fogging is a primary spatial-distance cue to the viewer in 3D, and is cheap/free nowadays.

Even the real world "fogs out" over distance, depending on atmospheric scattering, haze, rain/storm or actual fog conditions. So fogging out smoothly, while beneficial as per the above, also looks beautifully "realistic" as an added benefit, or at least provides a great realism/cost ratio.
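And in code it costs almost nothing - a rough sketch of exponential fog (not the demo's actual shader):

  // Blend a fragment colour toward the fog colour based on distance from the camera.
  function applyFog(color, fogColor, distance, density) {
    var f = Math.exp(-density * distance); // 1 at the camera, approaches 0 far away
    return [
      color[0] * f + fogColor[0] * (1 - f),
      color[1] * f + fogColor[1] * (1 - f),
      color[2] * f + fogColor[2] * (1 - f)
    ];
  }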


This has always impacted the realism of flight simulators, in my mind - for performance reasons, you're always flying around in some pea-soup-like fog. At altitude, you can easily see hundreds of miles in clear weather, but I haven't seen any flight simulator that comes even close to being able to render out to those distances.


I believe there are two problems in flight simulators:

- insufficient precision of OpenGL 32-bit floats in Earth-scale computations. If you want to render the whole Earth, you'll find that at ground level your precision is only 16m, resulting in jittering as you move. This was usually solved by "zoning" and local coordinate systems, hence it was easier to deal with a smaller set of ground "tiles" and haze out the distant ones. Having said all that, nowadays you can use vertex shaders to simulate 64-bit (or rather 56-bit) precision and get around 1cm resolution at any distance

- the need to have various LODs that include the curvature of the Earth with significantly increased error tolerances, which in turn increases memory consumption. Again, this should nowadays no longer be an issue, given that entry-level GPUs have 1GB of RAM and geometry shaders


> due to performance reasons, you're always flying around in some pea-soup like fog

In fairness, this is also the case of 90+% of real-world flights I partake in..


Yes, I believe you make a valid point. However, the point of the tech demo was to show off the LODing, and the fog distance in my opinion was way too close, thus obscuring the work the author had done.


Cool demo and writeup. Why is it so sparkly though?


There's a specular light positioned above the terrain, which adds the sparkle (light3 in https://github.com/felixpalmer/lod-terrain/blob/master/js/sh...). If you want to play around with changing the light etc, you can use Mozilla's awesome in-browser GLSL editor (currently in FF beta): https://hacks.mozilla.org/2013/11/live-editing-webgl-shaders...


Probably aliasing due to undersampling.

Terrain rendering is probably one of the things where GPU-based raymarching really is an option nowadays.

http://iquilezles.org/www/articles/terrainmarching/terrainma...
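The core of it is a very short loop - a sketch of the fixed-step version (my own names; real implementations refine the hit and increase the step with distance):

  // March along the ray and return the first distance at which it dips below the heightfield.
  function raymarchTerrain(origin, dir, heightAt, maxDist, steps) {
    var dt = maxDist / steps;
    for (var t = dt; t < maxDist; t += dt) {
      var x = origin[0] + dir[0] * t;
      var y = origin[1] + dir[1] * t;
      var z = origin[2] + dir[2] * t;
      if (y < heightAt(x, z)) {
        return t; // hit somewhere within the last step
      }
    }
    return -1; // no intersection within maxDist
  }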


Probably is - I was astonished when I first saw this raymarched procedural terrain at shadertoy: https://www.shadertoy.com/view/4slGD4


::rant:: Crashed Firefox (latest version); tried Chrome, it loads forever and then doesn't work. Who is this made for, if it doesn't work on my i5 with a decent GPU?


Mac users? :) Sorry!

Runs OK (well, 12fps!) here in Firefox on an iMac. Just black on my CentOS laptop with Intel graphics, I admit, but loads of GL things don't run on that.


This has been a transformational technique in graphics and mapping and I think it's worthwhile to recognize where the technique came from. As far as I can tell, the origin of the technique described here can be traced back to SGI prior to Keyhole, as published in 2002. [1] This "Clip Map" technique is a key enabler of applications like Google Earth because of how efficiently it pages data.

Recall that Google Earth was the product of the acquisition of a company called Keyhole, named after the spy satellite - a reference to the recently declassified satellite imagery they were using. It's also interesting to note that Al Gore played an important role in declassifying this data. [2]

But to make use of all of this imagery a new, efficient algorithm was needed for loading and rendering the data. This is what the Clip Map paper describes, and it enabled the iconic zoom from space to the surface of the Earth with real-time rendering of satellite imagery.

According to Michael T. Jones. "Keyhole started by accident. The earliest possible origin of Keyhole actually lies with me. I worked with a company called Silicon Graphics (SGI). One of the companies I worked with had created a program to view satellite imagery where you could zoom in and see the imagery in great detail. It was used in the Bosnia peace talks to draw the border." [3]

"Before its acquisition by Google, Michael was CTO of Keyhole Corporation, the company that developed the technology used today in Google Earth. He was also CEO of Intrinsic Graphics, and earlier, was Director of Advanced Graphics at Silicon Graphics." [4]

It is interesting to note that a German company is now suing Google over Google Earth and possibly this very technique. [5] Are ART+COM and Terravision the company and software Michael Jones was speaking of?

The real power of this technique is that it allows you to pre-page the data in tiles that can be easily loaded over the web and are sized specifically to fit in graphics memory. It optimizes the IO this way and also off-loads the mesh creation process from the CPU to the GPU. In 2004, Microsoft Research published an expansion of the Clip Map technique that does just that. [6]
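If I remember correctly, the incremental update in [6] comes down to toroidal addressing: each clip level is a fixed-size texture that wraps around, so as the viewer moves only the newly exposed rows/columns of height data need uploading. A tiny sketch (my own names):

  // Map a world-space texel coordinate into a fixed-size, wrapping clip level.
  function toroidalTexel(worldTexel, clipSize) {
    return ((worldTexel % clipSize) + clipSize) % clipSize;
  }

  toroidalTexel(1000, 255); // -> 235: new data reuses the slot vacated by data that scrolled out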

A basic demo and C++ source are available [7], and I have also found that SpiderGL uses this technique [8].

One thing I have not found yet is a good discussion of implementing collision detection in conjunction with this technique; if anyone has found any resources on this I would be grateful.

[1]: http://www.cs.virginia.edu/~gfx/courses/2002/BigData/papers/...

[2]: http://www.realityprime.com/blog/2006/07/notes-on-the-origin...

[3]: http://geospatialworld.net/FirstPerson/ArticleView.aspx?aid=...

[4]: http://www.uoc.edu/portal/en/sala-de-premsa/actualitat/notic...

[5]: http://www.eweek.com/cloud/google-sued-for-alleged-google-ea...

[6]: http://research.microsoft.com/en-us/um/people/hoppe/gpugcm.p...

[7]: http://filougk.blogspot.com/2007/03/gpu-geometry-clipmaps-so...

[8]: http://spidergl.org/example.php?id=8


I agree with the nostalgia comments in this thread :)

Though not published in a journal, I developed a near-identical concept and implementation as a senior high-school project from '97 to '98.

This was after I realized the difficulty of implementing the more general triangle-subdivision strategy for terrain LOD, driven by a computational-cost/visual-benefit algorithm.


Correction: Tanner et al. (link 1) was published in '98.


Thank you for such detailed and well-written material, pheelicks! I had been looking for this for a long time. Bookmarked.


Glad you like it! I'm planning on doing a follow-up, where I look at doing more detailed things with the fragment shader, to give the terrain a different look. E.g. using some textures for the ground, like grass, rock, snow, etc.


Ah game programming and terrains, the nostalgia :)


I did some stuff with terrain LOD using tessellation, but that is a DX11 feature and not supported in WebGL yet. Basically I rendered Bezier patches that were turned into triangles on the GPU, and then used screen-space displacement mapping for the finer details. The Bezier patches were designed to be seamless but still add/remove triangles as the distance grew. Unfortunately I no longer have the code, but here is an image:

  http://www.gavanw.com/uploads/9/5/4/0/9540564/8069858_orig.png
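The per-vertex work is just a bicubic Bezier evaluation - a sketch from memory (not the original code):

  // Cubic Bernstein basis: B_i^3(t) = C(3,i) * t^i * (1-t)^(3-i)
  function bernstein3(i, t) {
    return [1, 3, 3, 1][i] * Math.pow(t, i) * Math.pow(1 - t, 3 - i);
  }

  // Evaluate one point on a bicubic Bezier patch from a 4x4 grid of [x, y, z] control points.
  function evalBezierPatch(ctrl, u, v) {
    var p = [0, 0, 0];
    for (var i = 0; i < 4; i++) {
      for (var j = 0; j < 4; j++) {
        var w = bernstein3(i, u) * bernstein3(j, v);
        p[0] += w * ctrl[i][j][0];
        p[1] += w * ctrl[i][j][1];
        p[2] += w * ctrl[i][j][2];
      }
    }
    return p; // displacement mapping then adds the fine detail on top
  }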


Is this the same algorithm used in games like Dying Light? This new breed of games allows free movement from rooftops to the ground with very realistic graphics.

http://www.youtube.com/watch?feature=player_detailpage&v=J2T...


Looks superficially similar to the WebGL terrain 3-parter at http://www.gamasutra.com/blogs/JasmineKent/20130904/199521/ - are there differences?


This is a common technique so no surprise they are similar. However, there seem to be slight differences in the way the chunk edges are made seamless.

In Trigger Rally, additional vertices are added to the lower-LOD side of the seam to avoid T-junctions in the mesh topology.

In the OP demo, the vertices on the higher-LOD side of the seam are morphed so that they lie flush with the lower-LOD side of the seam. This should also help reduce "snapping" when moving to higher LOD levels.


If the inner level's cells are 1x1 and the outer level's are 5x5, and the inner level moves by 1 unit, then the outer level would have to move by 1/5 of one of its 5x5 cells (i.e. 1 unit). Thus the outer level will not move by its proper grid-snapping amount.

Or am I missing something?


The grid snap happens after the movement. You are correct that the higher-level grid may not move. To fix this, the higher-resolution tiles (1x1 in your example) are morphed near their edges so that gaps do not appear. I cover this in the blog post; see the 'Morphing between regions' section.


I still don't think I understand. If the 1x1 level moves and didn't overlap the 2x2 outer ring, and the 2x2 outer ring doesn't move (the camera hasn't triggered a 2x2 move yet, just the 1x1), then there is now a gap in the direction opposite the 1x1 move. That's why Trigger Rally uses overlapping geometries: a 1x1 level can move, and in the direction of the move the overlapping geometries double up, while in the opposite direction they no longer overlap at all (it also uses morphing so that the overlapping geometries have vertices in exactly the same spots).


Imagine instead that each ring is joined to the next by a set of springs, which can be compressed down to zero and extended to the difference in grid snapping between the two rings. As the rings snap to different grids, the springs extend and contract to fill the gaps. Now place some vertices on these springs and you have a terrain mesh where the majority of vertices are snapped to locations depending on the LOD, except those joining the rings, which are positioned such that the transitions between the layers are seamless.
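Or in code, roughly (a sketch of the idea, not the actual shader):

  // Near the boundary with the coarser ring, blend a vertex from its own grid
  // snapping toward the coarser ring's grid snapping - the "spring" extending.
  function morphVertex(pos, fineSpacing, distanceToRingEdge, morphRange) {
    var k = Math.min(Math.max((morphRange - distanceToRingEdge) / morphRange, 0), 1);
    var snapFine = Math.round(pos / fineSpacing) * fineSpacing;
    var snapCoarse = Math.round(pos / (2 * fineSpacing)) * (2 * fineSpacing);
    return snapFine * (1 - k) + snapCoarse * k; // k = 0 deep inside the ring, 1 at its edge
  }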


You are a genius :)


Writing terrain renderers like this pretty much consumed my junior year of high school. This is the stuff that made me a programmer. I'm glad to see it get some attention here on HN.


Waiting for the time someone comes across VRML ;)


Fantastic, I have been looking for a good example in modern OpenGL for a while. Thank you!


Hate to be a dissenter, but none of the pics in the linked article look good to me. ???


Is there a name for this approach? The recursive-tile-morph-factor shader?


This is amazing


Three years ago, someone had the same idea:

http://sea-of-memes.com/LetsCode28/LetsCode28.html

Edit: just for some additional information, the idea is of course a lot older


This is far older than 3 years; I don't think the OP claims he invented it.



