Games company claims their graphics are 100,000x better (ausgamers.com)
385 points by trog on Aug 2, 2011 | 139 comments



I think what they're doing is great, but I see two problems with their presentation. First, computer rendering techniques are extremely well understood and well researched. We've picked the low hanging fruit, much of the high hanging fruit, and everything in between. There is no "groundbreaking new technology" to be invented. They're converting polygons into voxels (although each voxel is probably a sphere for cheaper computation), and using software ray-tracing to render in real time. Since ray-tracing is trivially parallelizable, the multicore technology is just about there now. A 12-core machine will give just about 20FPS. The reason why they can get away with an incredible amount of detail is that ray-tracing diffuse objects is fairly independent of the number of visible polygons in the scene.
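To make "trivially parallelizable" concrete, here is a minimal software sketch (not their code; castRay is a hypothetical stand-in for the actual trace into whatever acceleration structure holds the scene). Every pixel's primary ray is independent, so the framebuffer can simply be split into interleaved rows, one band per core, with no locking:

    // Minimal sketch of multicore primary-ray casting (C++11).
    #include <cstddef>
    #include <cstdint>
    #include <thread>
    #include <vector>

    // Hypothetical per-pixel trace; stubbed here so the sketch compiles and runs.
    uint32_t castRay(int x, int y) {
        return (uint32_t(x) << 16) | uint32_t(y & 0xFFFF);  // placeholder "color"
    }

    void renderFrame(std::vector<uint32_t>& framebuffer, int width, int height) {
        unsigned cores = std::thread::hardware_concurrency();
        if (cores == 0) cores = 1;
        std::vector<std::thread> workers;
        for (unsigned c = 0; c < cores; ++c) {
            workers.emplace_back([&, c] {
                // Each worker owns a disjoint set of rows, so no two threads
                // ever write the same pixel and no synchronization is needed.
                for (int y = int(c); y < height; y += int(cores))
                    for (int x = 0; x < width; ++x)
                        framebuffer[std::size_t(y) * width + x] = castRay(x, y);
            });
        }
        for (auto& w : workers) w.join();
    }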

The second problem is that a 10^4x improvement in level of detail does not mean 10^4x more aesthetically pleasing (or in fact, more aesthetically pleasing at all). Ray tracing gets very expensive the moment you start adding multiple lights, specular materials, partially translucent materials, etc. It is very, very difficult to do that in real time even with standard geometry, let alone with 10^4x more polygons. This is why their level doesn't look nearly as good as modern games despite the higher polygon count (compare it to the Unreal demo: http://www.youtube.com/watch?v=ttx959sUORY). They only use diffuse lighting and few lights. In terms of aesthetic appeal of a rendered image, lighting and textures are everything.

Furthermore, one of the biggest impacts on how aesthetically pleasing a rendered image looks comes from global illumination. That's also something that's extremely difficult to do in real time with raytracing, but it is possible on GPU hardware with tricks. The trouble is, these tricks look much better than raw polygons.

Again, I love what they're doing. Real-time ray-tracing is without a doubt the future of graphics, but it would be nice if they were a little less sensational about the technology, and more open about the limitations and open issues.


This video is not that fresh. I already saw it last year, and the way this guy speaks hasn't changed since then (he speaks like a salesman) ;)

Their technique is based on point cloud rendering. My supervisor already proposed in 2004 how this can be done on a standard PC. See his PhD thesis [1].

This technique is usable for static objects as well as for dynamic objects; however, for dynamic scenes additional acceleration structures are required (besides the octrees) so that they can change dynamically with the deformation of the object. To clarify on several comments made here:

- this is a rasterization technique

- they use acceleration data structures like octrees, kd-trees, BVHs, ... (exactly which, they don't say)

- the graphical "aesthetics" actually depend only on the artists and not on the technology itself, so that's not a good point

[1] M. Wand: Point-Based Multi-Resolution Rendering. PhD Thesis, Wilhelm Schickard Institute for Computer Science, Graphical-Interactive Systems (WSI/GRIS), University of Tübingen, 2004.


> I already saw it last year, and the way this guy speaks hasn't changed since then (he speaks like a salesman) ;)

+1. I'd actually trust the company more if they had someone who spoke in measured tones and laid off the hyperbole.


But did your supervisor ever ship a product that did it?


Yes. We use his original software, XGRT, every day in our research group. All our research about point clouds and geometry is implemented as modules in this software. The software is open source and supports point clouds of really huge sizes, e.g. 80GB datasets, dynamically streamed from the disk.


And there are no moving objects in this demo. Voxel animations are much harder than poly animations.

Then there is memory. The elephant looks great, no doubt about that, but I think you will need a lot of space for it. On a PC this might work, but I'm not sure it can be used on consoles.


That and I don't know of anyone (outside of using them as particles) that has successfully created collision models with voxels in real time. The collision calculations of just a few of their scanned rocks and a ground plane would be plenty complex - a whole scene perhaps even a little insane given today's specs.

They might have to resort to polygonal collision models in the same way that polygonal games end up using low-poly collision models (with the same pitfalls such as moonwalking, blocked projectiles or mystery-bouncing).



Sorry for the belated reply. Yeah, I saw Atomontage and it's quite interesting. Despite the realism of the track marks left and suspension movement, the truck still seems to exhibit some unnatural "floating" feeling. These are the very pitfalls of dealing with lower-res poly collision that I was speaking about. Kind of an uncanny barrier for movement.

I do feel that this author is a bit further along at something ship-able than the Euclideon folks are though.


As far as I can tell, they're not doing any voxel-voxel collisions. Mesh-voxel collisions sound much more tractable.


Yeah I think so. A tree-lookup can make things easier, but then you have to estimate the bounce direction.

But overall it's a great demo. And you see more and more games using voxels for surroundings. So of course you could combine this technique with polys.


The more I think about it, the more I realize that of the three pure voxel problems (rendering, animation and collision), rendering is probably the low-hanging fruit - though I'm not sure if creating a hybrid poly/voxel engine is any harder or easier.

Here's to at least getting a hyper-real Myst someday :)


Collision need not be at the same resolution as the rendering, which should help. But animation is probably limited by how much of the scene changes. If it's a small percentage of the environment, removing the old objects and adding the new ones should be reasonably fast. Or if it's a small number of objects you could always keep them in a separate tree and translate a ray in the main scene with one for the moving object.
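One cheap way to picture that "separate tree for the moving object" idea: leave the big static octree alone, give each dynamic object its own small tree built in local space, and at trace time transform the ray into that object's frame instead of rebuilding anything. A rough sketch, with hypothetical names and the tree internals stubbed out:

    #include <limits>
    #include <vector>

    struct Ray { float origin[3]; float dir[3]; };
    struct Hit { float t = std::numeric_limits<float>::infinity(); bool valid = false; };

    struct Octree {
        // Stub: a real implementation walks the tree; here it just reports "no hit".
        Hit intersect(const Ray&) const { return {}; }
    };

    struct DynamicObject {
        Octree localTree;   // built once, in the object's local space
        // Stub: a real implementation applies the inverse of the object's transform.
        Ray toLocal(const Ray& worldRay) const { return worldRay; }
    };

    Hit trace(const Ray& ray, const Octree& world, const std::vector<DynamicObject>& movers) {
        Hit best = world.intersect(ray);                         // static scene first
        for (const auto& obj : movers) {
            Hit h = obj.localTree.intersect(obj.toLocal(ray));   // same ray, object's frame
            if (h.valid && h.t < best.t) best = h;               // keep the nearest hit
        }
        return best;
    }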


Agreed, not very impressed.

Realistic movement is a more important factor than detail; you can have 8 cubes move fluidly enough to immerse in. 1,000,000,000,000 polygons seems unimportant when you have pixel shaders and normal mapping.

Maybe they should have made Duke Nukem Forever with this engine, as it seems like it's from a similar era.

The new Frostbite 2 engine is a million times more amazing because of the dynamic lighting and radiosity, plus the amazing fluidity of the animations.


They are using point clouds, not voxels. Animation should be easier without the grid restrictions of voxels, though it really depends on their acceleration structure (which they're not saying anything about, in addition to the specs of that machine they're running on which are undoubtedly beastly).



This is somewhat the state-of-the-art in realtime ray-tracing (actually path-tracing): http://www.youtube.com/watch?v=qntMz2QPA_E using the Brigade renderer. It's using both the GPU and CPU.


That game looks like it has been filmed with a cheap digital camera with the ISO setting too high.


That's the neat thing about path-tracing: you can render in a very short time but get more noise. More powerful hardware or more time gives a noise-free, unbiased image.
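Roughly what "more time gives a noise-free image" looks like in code: the renderer keeps adding one random sample per pixel per frame and displays the running mean, so the noise shrinks as samples accumulate while the estimate stays unbiased. A tiny sketch, not Brigade's actual code:

    #include <cstddef>
    #include <vector>

    struct Accumulator {
        std::vector<float> sum;   // running sum of radiance samples per pixel
        int samples = 0;

        explicit Accumulator(std::size_t pixelCount) : sum(pixelCount, 0.0f) {}

        // 'radiance' holds one fresh random-walk sample per pixel for this frame.
        void addFrame(const std::vector<float>& radiance) {
            for (std::size_t i = 0; i < sum.size(); ++i) sum[i] += radiance[i];
            ++samples;
        }

        // Displayed value: the mean of all samples so far (noise ~ 1/sqrt(samples)).
        float value(std::size_t pixel) const { return sum[pixel] / samples; }
    };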


> They're converting polygons into voxels

Are you sure about that? He mentions atoms, but then the video also mentions that they're using procedurally generated graphics, which is something entirely different. Also, voxels would not work on the scale they are demonstrating, you'd need way too much memory.


I thought he said something like "you don't see polygon counts this high unless they are procedurally generated".


Actually they say they're doing 20-30fps on 1 core.


In terms of aesthetic appeal of a rendered image, lighting and textures are everything.

A thousand times this!


John Carmack's response:

"Re Euclideon, no chance of a game on current gen systems, but maybe several years from now. Production issues will be challenging."

https://twitter.com/#!/ID_AA_Carmack/statuses/98127398683422...


I trust Carmack's response over pretty much anyone's in matters of rendering engines. The guy is a god of coding them.


I saw a comment on Reddit about this 6-7 hours ago suggesting that Carmack had a horse in this race and that we might do best to presume the very slightest of bias in his response(s).

That said, I also view his perspective in this field as tending towards unquestionable. He's always seemed to have had a pretty sparkling reputation as well as the obvious talent.


The tech basically has to be a variant of sparse voxel octrees. Carmack mentioned it in an interview years ago. It's reasonable to assume he's built his own prototype, as he does with pretty much anything else.
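For anyone curious what "sparse voxel octree" actually means in memory, here is the rough shape of a node plus a point lookup - a sketch only; real engines (like the NVIDIA ESVO work linked elsewhere in this thread) pack this far more aggressively:

    #include <cstdint>
    #include <vector>

    struct SvoNode {
        uint8_t  childMask;   // bit i set => octant i exists; empty space stores nothing
        uint32_t firstChild;  // index of this node's first stored child in a flat array
        uint32_t color;       // leaf payload; real engines also store normals/material
    };

    // Descend from the root to the voxel containing point (x, y, z) in [0, 1)^3.
    uint32_t lookup(const std::vector<SvoNode>& nodes, float x, float y, float z, int maxDepth) {
        uint32_t idx = 0;  // root
        for (int depth = 0; depth < maxDepth; ++depth) {
            const SvoNode& n = nodes[idx];
            if (n.childMask == 0) return n.color;           // reached a leaf
            int octant = int(x >= 0.5f) | (int(y >= 0.5f) << 1) | (int(z >= 0.5f) << 2);
            if (!(n.childMask & (1 << octant))) return 0;   // empty octant: nothing here
            // Offset of this child among the children that actually exist.
            int offset = __builtin_popcount(n.childMask & ((1 << octant) - 1));
            idx = n.firstChild + uint32_t(offset);
            // Rescale coordinates into the chosen child's unit cube.
            x = (x >= 0.5f) ? 2 * x - 1 : 2 * x;
            y = (y >= 0.5f) ? 2 * y - 1 : 2 * y;
            z = (z >= 0.5f) ? 2 * z - 1 : 2 * z;
        }
        return nodes[idx].color;
    }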


gods have been known to be blindsided by revolutionaries


He, and his company, have done research into the field of SVOs to complement megatexture.

I don't know enough to say if it's the same approach, but it's certainly the same field, so you'd expect him to know a fair bit about it.

http://www.youtube.com/watch?v=VpEpAFGplnI


His latest tweet on it is interesting:

> @Foggen insufficient information to say if it is tracing or splatting.

They state their method is based on a well-known technique used in engineering and medical visualization, and splatting seems to be used there. I'm still not 100% clear on what splatting is, but there's a bunch of papers applying that term to voxels, e.g. http://graphics.cs.cmu.edu/projects/objewa/


Splatting is when you draw with a point cloud but fill in between the points to make a solid surface during rendering. It's only necessary if you get close enough to your point cloud that the points don't map to adjacent pixels.
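In other words, something like this per point: project it, then fill a small disc whose pixel radius is the world-space spacing between points projected to the screen, so neighboring splats overlap into a closed surface when the camera gets close. A sketch, not any particular engine's implementation:

    #include <algorithm>
    #include <cmath>
    #include <cstdint>
    #include <vector>

    struct Point3 { float x, y, z; uint32_t color; };

    void splat(std::vector<uint32_t>& fb, std::vector<float>& depth,
               int w, int h, const Point3& p, float focal, float pointSpacing) {
        if (p.z <= 0.0f) return;                        // behind the camera
        int sx = int(w / 2 + focal * p.x / p.z);        // simple perspective projection
        int sy = int(h / 2 + focal * p.y / p.z);
        int r = std::max(1, int(std::ceil(focal * pointSpacing / p.z)));  // splat radius in pixels
        for (int dy = -r; dy <= r; ++dy)
            for (int dx = -r; dx <= r; ++dx) {
                int x = sx + dx, y = sy + dy;
                if (x < 0 || x >= w || y < 0 || y >= h) continue;
                if (dx * dx + dy * dy > r * r) continue;                  // keep the splat round
                size_t i = size_t(y) * w + x;
                if (p.z < depth[i]) { depth[i] = p.z; fb[i] = p.color; }  // depth test
            }
    }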


I remember reading years ago that id was considering using voxel graphics for Quake 2 and Carmack had even started to work on it, but they abandoned the idea seeing that the industry was heading into the "more polygons!" thing (the context: it was a time when 3dfx released the Voodoo2, nVidia was starting to gain popularity (Riva TNT2) and the first Unreal was on the horizon).


By "production issues" I would think "pipeline" first of all. It doesn't matter if it's possible to convert a high-res mesh to an optimized voxel format, if the result takes hours of compilation. That is too long an iteration time for a single asset.


Not if it's a scalable thing, i.e. you can get a "decent" conversion in a short time, and only do the full-scale conversions for milestone builds.


I'll say the same thing now as I said last time these guys released a video: I'll believe it when I see them make a single blade of grass move, or when they place a single dynamic light source and cast a single dynamic shadow. Until then, this technology is awesome, but more or less useless.


Even if it's absolutely impossible to animate this technology I don't think it would be "useless". I don't see why it wouldn't be possible to have the static elements (buildings, tree trunks, ground, debris, etc etc) of a level rendered using this tech, and polygons used for everything else. It used to be this way in the bad old days (Doom etc) - Polygons for level structure and sprites for animation. This would at least free up the "polygon budget" to improve the features which couldn't be voxel-based.

If anything you could create a pretty interesting art style with hyper-realistic backgrounds and cel-shaded characters, or similar. Even as a replacement for pre-rendered background scenery or out-of-level elements it would be useful.

And (thinking of the Game of Thrones titles story) whatever mojo they've got must surely be able to be applied to other industries, eg. CGI - if Weta or Pixar could improve the poly count of their static backdrops without just buying more rendering farms, that's gotta count for something.


"polygons for everything else" would mean you integrate two completely different rendering methods in one pipeline. I won't say that's impossible, but it sure sounds non-trivial.


Would this not be perfect for films where you are rendering per frame?


Yes, ray tracing is basically the technique that Hollywood use today.


What is somewhat disappointing is that they don't tell if they have managed to solve these issues or not, as they were clearly commented on one year ago. Another major issue is the RAM-usage, as point cloud data will require a lot of memory.

Their claim of being 100,000 times better than current technology is also "frustrating", as it is repeatedly mentioned as if you had for some strange reason forgotten it in the last 30 seconds. It is also a lie if they cannot use this in a game: I suspect it's not impossible to increase the level of polygons to 10-100 times the current amount - if not even more - if there are no animations or dynamic light sources around.


It seems they're getting around the RAM issue by procedurally generating their objects. I'm cautiously optimistic, but I want to see it in something approaching a consumer game before I'll believe it. Carmack seems to think the next-gen consoles might be able to handle it, so if they can push out their SDK within a year, it might just power the next Crysis. Assuming, that is, the technology is actually useful.


Dynamic shadows work in exactly the same way as they do for standard triangle rasterization.

Each light has a shadow map (which can be thought of as "the depth of the scene, from the point of view of the light").

During the final rasterization pass, the shadow map is sampled. If the sampled depth is less than or equal to the current fragment's depth, then the fragment is in light; otherwise it's in shadow.

So, nothing has fundamentally changed here. When a particle (voxel) is rasterized, it outputs a depth, just like a triangle outputs a depth.

tl;dr: Dynamic shadows work fine.
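The per-fragment test described above, written out in plain C++ rather than shader code (lightViewProject is a hypothetical transform into the light's screen space, stubbed here so the sketch stands alone):

    #include <cstddef>
    #include <vector>

    struct Vec3 { float x, y, z; };

    // Stub: pretend the light looks straight down -z with an orthographic projection
    // mapping x and y into [0, 1]. A real engine would apply the light's view/projection matrix.
    Vec3 lightViewProject(const Vec3& p) { return { (p.x + 1) * 0.5f, (p.y + 1) * 0.5f, p.z }; }

    bool inShadow(const Vec3& worldPos,
                  const std::vector<float>& lightDepthMap, int mapSize,
                  float bias = 1e-3f) {
        Vec3 p = lightViewProject(worldPos);
        int x = int(p.x * mapSize), y = int(p.y * mapSize);
        if (x < 0 || x >= mapSize || y < 0 || y >= mapSize) return false;  // outside the map
        float nearestToLight = lightDepthMap[std::size_t(y) * mapSize + x];
        // If something closer to the light already covered this texel, this point is shadowed.
        return p.z - bias > nearestToLight;
    }

Whether lightDepthMap was filled by rasterizing triangles or splatting voxels makes no difference to this test.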


"tl;dr: Dynamic shadows work fine."

So... where were they in the video?

There's a difference between "some voxel engines can have dynamic lighting" and "dynamic lighting works with this particular engine".

It's not like this is some obscure or useless feature that nobody has ever heard of. In fact for all the "detail" one can't help but notice the number of other things missing from the video.


I didn't mean to imply Euclideon had dynamic shadows.

I meant to imply that Euclideon are lazy and haven't implemented dynamic shadows yet.

There's no fundamental reason why dynamic shadows wouldn't work in voxel engines. But whether you believe me or not is of course up to you.


Near the end they claim to have added more detailed shadows to their engine just after making this demo. I'm not sure if they meant 'dynamic shadows' by that.


I think they meant diffuse shadows; the small shadow demo they gave didn't appear to be dynamic.


The "improved shadow" demo they showed was clearly ambient occlusion. It's impossible to say whether their particular implementation supports dynamic shadows but as a rasterization technique many of the same techniques used in games ought to work fine here (pre-baked lightmaps, various kinds of shadow maps, screen-space ambient occlusion, and even the fanciest new real-time GI stuff coming from SIGGRAPH).


Obviously not every 3D rendering technique in existence would be included in a short demo video.


For something as obvious as gaming (where in the video other games have people running around while these guys are hovering around a static environment), I expect something that at least looks like a game; hell, I'd go for a simple animation such as the top comment's blade of grass.


From what I understand, the voxels more or less dynamically adjust their resolution according to their distance from the modelview origin (where the player is). If the engine generated dynamic shadow maps per light source, wouldn't the voxels need to be high resolution with respect to each light source, so that the shadow map isn't a bunch of huge blocks? I imagine that alone would be a pretty big performance hit.


The dynamic adjustment is traversing the voxel representation when you render the view. This kind of per-pixel shadow map requires rerendering the scene for each light source - but in more or less the exact same way that the player view is rendered. Then some work rectifying that information. So it would be a big performance hit, but not obviously any worse than normal rendering. Although did someone say it was only running at 20fps as it was?


20 fps in software. They're still finishing hardware rendering.


Because it's probably incredibly difficult to do with the added memory and bandwidth restrictions.


Also if they actually make a scene with different things in it. Having millions of objects in your scene is not that hard if they are all copies of the same object. The old demonstration showed a bunch of repeated copies of a single object. This one showed a lot of copies as well, although not as obvious.


That's like saying Megatexture isn't impressive since most of the texture detail is copied over and over.


It is hard in that you still have to draw every poly in every copy of the object. Instancing allows you to reuse some per-vertex calculations but it doesn't help at all with fragments, which is where most of the power is typically used.


Yeah, it "only" solves the memory use problem.


Would you say the same thing about polygon models? That is: if you have a scene with a million[0] copies of the same polygon-based elephant model, would it run as smoothly as this demo?

[0] not to be taken literally


It's all fun and games until you introduce animation... oh wait, that is fun and games... I just confused myself.


Useless huh? Static geometry and lighting was the state of the art for a while, starting with Quake. It certainly puts some constraints on game mechanics but it's significantly better than useless.


Quake actually had "dynamic lightmaps" - that is flickering lights and such by alternating between two (or more?) lightmaps for the same geometry. It also had moving level geometry: elevators and all kinds of traps.

Granted, both of these features are somewhat hacked into the static BSP level structure, but this only shows that it's "necessary" to have dynamic lighting and moving stuff to offer a good experience.


The characters weren't static though. That's what he's alluding to.


Yeah, and the characters had to use a completely separate renderer with much simpler lighting. Likewise, the early voxel engines will probably use polys for dynamic stuff. Someone else mentioned an engine called Atomontage that does exactly that.

Sometimes you have to take a step backward before you leap forward.


Cyril Crassin's stuff is also interesting and it's here, now, running dynamically on GPUs. See: http://www.youtube.com/watch?v=fAsg_xNzhcQ

See his papers page: http://artis.imag.fr/Membres/Cyril.Crassin/

I am curious what methods Euclideon are using, however.


As someone who has worked with GPUs and software renderers for over a decade:

I am pretty sure that their tech depends on a few types of repeatable data, which they are able to cache effectively based on rotation -- in other words, they have come up with an efficient way of querying the front-facing voxels in a large set of data based on the resulting view matrix. Where this falls flat is if the data is not procedural, or it is not diverse - as you can see in the video, there is a bunch of the same data copied over and over. However you compress it, such detail is not free and I am guessing there is a lot of data that depends on a good deal of memory/storage to work properly.

I am not so worried about animation or dynamic lights or textures as everyone else is. If they can render it to a buffer and get the normals/depth/UV coordinates, the rest of the rendering can be done in screen-space, including SSAO, deferred lighting, and similar rasterization tricks. Animation can also be rendered on top of the scene, and intersected with the former depth buffer. The only thing I am worried about is the size of the data set and ability to create more diverse landscapes.
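To make the screen-space point concrete: once the point/voxel pass has written per-pixel position and normal buffers (a G-buffer), an ordinary deferred lighting pass can run on top of it without caring how the geometry was produced. A hypothetical, stripped-down version (one point light, Lambert only, no shadows or SSAO):

    #include <algorithm>
    #include <cmath>
    #include <cstddef>
    #include <vector>

    struct Vec3 { float x, y, z; };
    static Vec3  sub(Vec3 a, Vec3 b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
    static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
    static Vec3  normalize(Vec3 v)   { float l = std::sqrt(dot(v, v)); return { v.x / l, v.y / l, v.z / l }; }

    void deferredLightingPass(const std::vector<Vec3>& gPosition,  // world position per pixel
                              const std::vector<Vec3>& gNormal,    // surface normal per pixel
                              std::vector<float>& lit,             // output light intensity per pixel
                              Vec3 lightPos, float lightIntensity) {
        for (std::size_t i = 0; i < gPosition.size(); ++i) {
            Vec3  toLight = sub(lightPos, gPosition[i]);
            float dist2   = dot(toLight, toLight);
            float lambert = std::max(0.0f, dot(gNormal[i], normalize(toLight)));
            lit[i] = lightIntensity * lambert / dist2;   // diffuse falloff only
        }
    }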


Also, for those interested in seeing a LIVE demo of very similar technology, there are many examples, but here is one:

http://voxels.blogspot.com

It is not likely to be the exact same technique, but I am guessing they are using similar methods.


It's no accident that there is that much repetition in the models. It's also no accident that they are all nicely tiled in power-of-two axis-aligned bounding boxes. Clearly these things take up enormous amounts of memory and need to be in some big octree-like hierarchy - so while they can instantiate these pretty impressive leaf nodes, they can't do things like have them on uneven ground.

So much work left to do.


I'm sorry, but as much as I respect people both on Reddit and Hacker News, I wonder where all the enthusiasm comes from, when:

* demos show nothing new from a technological perspective

* the presenter sounds like a door-to-door salesperson

* as it seems to me, the only purpose of the demo is to raise hype, and somehow (I still don't understand how) they succeeded

Euclideon got financed by the Australian government.

I really hope the board took a critical approach and relied on at least /some/ technical expertise to grant these people A$ 2m. If they made this decision based just on a demo - I'm moving to Australia at once, where I will invent a technology you have never seen in your whole life before. Ever.


The startup grant program was this one: http://www.commercialisationaustralia.gov.au/WhatWeOffer/Ear...

Terms in that program are that you have to match the government funding 1:1, so they must have (or raise within 2 years) $2mil to put against the government's. They also have to pay it back "on success" (5% of revenue once that reaches 100k total) and are monitored closely for "fast failure" (they are expected to succeed and repay within 2 years; they have to repay even if they fail after 5 years). Euclideon claims to have had a 2010 funding round, so maybe that's how they got into this program. It also says this program is not to be used to "Prove to the applicant that a certain technological problem can be overcome (R&D projects)", so they must have shown it as a viable product that just needs to be packaged up for sale.

What strikes me most is anything under a Commonwealth funding agreement has to have the words "Funded by Australian Government through the XYZ Program. An Australian Government Initiative" in all their promotional material. Yet the shining star of Commercialisation Australia's portfolio forgot. Ouch.


The last time we talked about this (http://news.ycombinator.com/item?id=1179970), the consensus was that it was a lot of snake oil and not useful for most applications.


Every time this comes up I like to point people to the Atomontage Engine[1], which takes (what I think is) a more pragmatic approach, combining voxel and polygon graphics. Voxels are used where appropriate (eg. landscape, destructible buildings) and polygons can be used for dynamic objects.

[1] http://www.youtube.com/watch?v=1sfWYUgxGBE


It would be nice if that demo was more exciting. Also, the detail on the terrain (in the first minute) leaves much to be desired.

But it's a much higher quality demo than Euclideon IMO, from a technical standpoint.


I think the videos on the Atomontage channel show something I can actually see being a game, for example it shows destructible environments and terrain deformation, editing tools, and clever use of simulated DoF and a gameplay perspective that plays to the strengths and hides some of the weaknesses of the tech.

I could see it being great for a hybrid-RTS game like Ground Control, or some kind of god-game.


Yeah, this looks awesome, I hope he gets funding. I am listed on the donor page, yay ^^


Anytime there's too much bragging--or any bragging at all--before the actual product is finished, my bogus filter lights up. And it's really hard to turn it off later.


I thought it was interesting that they predicted some sort of forthcoming schism between 'real' scene objects and 'artificial' ones. At first I thought he was talking about the characters and scenes residing on one side of the uncanny valley or the other, which I think is a valid thought. But he wasn't talking about that at all and posited instead that objects will either be scanned in from the real world and placed in game, or be assets created by artists. I don't think this will be the case except in games that strive for realism. It will be more like Photoshop, where real scanned-in assets still require artists to perfect and stylize them for your game. Until high-resolution, tactile VR arrives, you're still running into 'Ceci n'est pas une pipe'.


I thought replicated and fabricated were more appropriate terms.


IMHO polygon count is not nearly as important as texture, lighting (and therefore also shadows) and animation quality.

Their polygon count is impressive and the object detail looks awesome, but, as other people commented here, I wonder how well it will hold up when dynamic objects, animation and dynamic lighting are added.


General consensus last time this rolled around, which I see no reason to change, is that once you add "dynamic objects [or] animation", it completely stops working, so... it probably won't hold up at all.

Most obvious comparison comes in right at the 4:00 mark, where you can see the "polygonal" grass waving gently in the wind, then at 4:22 their "unlimited detail" grass appears to be carved out of some sort of grass-colored rock. Also observe how at the end they are excited about their improvement in "the lighting", by which they mean, the lighting that you will get.


I can understand that adding dynamic objects/shadows is difficult and their videos do not show them being capable of doing that yet. However, why can't 90% of the objects be rendered in the new static way like they do (statues, buildings, tree trunks) and dynamic objects be added on top of it using whatever method game devs use right now? I don't really care if the cactus is moving or not but I sure would like to see it in much higher detail.

Why can't we take the good from both and get better results?


They can be, and that's likely how it will work. Triangle animated characters + voxel static world.


This hybrid rendering approach has already been used in a video game from a major publisher - in 1999.

http://en.wikipedia.org/wiki/Outcast_(video_game)


Then why is everyone here complaining about the lack of dynamic objects?


Because they describe the tech as if it was the answer to everything. It can do specific things very well and other things not at all. If they toned down the hype and were clear about the limitations, they would get nothing but love.


Because Euclideon's demos haven't featured any dynamic objects.

IMO they're doing the graphics industry a disservice by putting out demos of a half-finished technology in their rush to "be the first". The technology can work, but Euclideon's showmanship leaves much to be desired.


This has been "announced" since 2010 all the while being only "a few months" from release. So far nothing has materialized. Vaporware. Also note this is nothing but voxels plus an advanced search algorithm, for resource conservation.


Vaporware is exactly what this is, note also that they also push investment opportunities in their 'product'.


These guys are doing something similar and are demonstrating more dynamic lighting and destruction tech: http://www.atomontage.com


The most unrealistic things in video games for me are faces. While increasing polygon counts and such will certainly help, I can't help but notice that faces will never cross the uncanny valley unless they can do something about the lighting.

Check out:

http://graphics.ucsd.edu/~henrik/images/subsurf.html

So, unless they can do all this AND ray trace it at the same time, it really won't make my game experience 100,000 times better.


Faces do get better, and I'd be pretty content with something like Mass Effect 1 (not exactly state of the art anymore), unless we're going for very emotional scenes in close-ups -- where most real-life actors fail, too.

One problem where in my opinion the advances aren't exactly exponential is body language. If I'm looking at the NPC talking to me, it's not just his facial motions, teeth etc. that I'm aware of, it's also the movements of his body - shoulders, arms, etc. This is still pretty bad. Most of the time they're just flailing around in a pretty uncoordinated manner. It gets better for highly "scripted" scenes, but the usual "shrug / fist pump / scratch yourself" animations are bad. They've gotten pretty good at the purely physical parts, i.e. what muscles and body parts have to move if something connected to it moves (skeletal and muscular animation), but there needs to be a better "body language AI".


These days, I don't think rendering technology is what's holding us back from having realistic faces. I think the art and animation is just really hard to get right.


Surely a lot of that is due, not to lighting/shadows/rendering, but to the incredible subtlety required of the animation?

HL2 has some pretty amazingly convincing faces, and that's obviously not because it has the best rendering engine.


I wouldn't go as far as "amazingly convincing". I think they look better after you've been staring at the HL2 world for a while, rather than the real world.


Nah, I had that impression from the very first scene the game loads, which is the creepy guy talking to you.

(I don't mean convincing in a "mistake-for-real" way, but in a "lets-me-suspend-disbelief" way.)


I think that's one of the massive challenges in gaming. I played Minecraft a bit (before realising I was tending a lovely in-game garden while there were weeds outside my real-life house) and at no point was my experience impacted negatively by the fact that the animals were dorky boxes with legs. Photo-realistic isn't everything.


What do you think of the LA Noire faces?


So much better than before...but still not quite where I would like them to be. I feel like a game like that made huge advances by using talented actors and scanning in the faces. They overcame a lot of the uncanny valley that was there before, but the shading still makes the whole scene seem unbelievable.

Still though, the artists and actors did a phenomenal job with this game.


I love how they mock other game dev companies for using skyboxes (they call it cardboard cut-out buildings in the distance @04m52) and then proceed to do exactly the same thing in their demo. Hell, they even managed to do it wrong (their sky texture isn't stretched & compressed appropriately to hide the fact that it's a cube @2m27).

Too many bold claims & deceptions in that vid. Colour me skeptical.


Polygon engines work nice with physics and animation. I'm trying to figure out how they could have a dynamic world based on particles.


make sure they're also waves?


Very good solution to needing a polygon that can interfere with itself.


To all those talking about physics, interaction, lighting and shadows, what do they think of the Atomontage engine using similar technology: http://www.youtube.com/watch?v=_CCZIBDt1uM&feature=BFa&#...


Regarding the technology used: in this video: http://www.youtube.com/watch?v=JWujsO2V2IA you can see lots of artifacts and also talk of point cloud data, so it is clearly not raytracing but rather point data rendering. All the repetition seen is because of the memory constraints. The point data is probably preprocessed and compressed in numerous ways, which makes it very difficult to do animations. But as others have mentioned, even as a last resort they should be able to just use this technology to render terrain/background and then use polygons for moving/animated objects. This would probably also utilize current technology better, as the polygon pipeline would not just sit there unused.


Still no dynamic objects in these videos. A limitation they're not discussing and working on?


I think these guys are on to it too. W/o govt. funding and "orders of magnitude better" marketing babble: http://www.youtube.com/watch?v=1sfWYUgxGBE


If this technology is legit without problems with animation and lighting etc, what will it mean for the costs of game content creation? You still need artists to create the models, textures etc, and higher detail generally takes much longer with diminishing returns. I also find that environments with highly detailed objects also need far more objects to be convincing which compounds the issue.

In short, the tech sounds great, but will it result in better games or just more expensive games (and in turn, fewer games with more innovative but risky design)?


From what I can gather, 3D game artists tend to start with high-res models and then reduce the poly count, so it may not be THAT much of a problem. I think it's because a lot of the modeling takes place in more organic editors now.


I think a good game and a game with good graphics are two unrelated things.

What would my mom say? This game looks a bit better than this other game? Or this game looks 100,000x better than this other one :)


Graphics don't improve linearly with the number of pixels or polygons added. I'd say the improvement is closer to logarithmic. E.g. 500 polygons/second is a step better than 50, 5000 is two steps better than 50, etc.

Improving the rate by 100,000 times is only a five-step improvement, and that's if they haven't surpassed the limits of the human eye. I much prefer 72 frames per second to 36, but giving me a thousand frames per second is a waste since my optic nerve can't process most of them.
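Worked out in those step units (each factor of ten is one step):

    \text{steps} = \log_{10}\!\left(\tfrac{5{,}000{,}000}{50}\right) = \log_{10}(100{,}000) = 5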


Minecraft creator does the math and says: http://notch.tumblr.com/post/8386977075/its-a-scam


20 FPS in software is pretty impressive, I wonder (if this is real and if it takes off) how long it will take to see hardware voxel acceleration.


They're here already. NVIDIA Research paper describing one approach: http://www.nvidia.com/docs/IO/88889/laine2010i3d_paper.pdf

Open source implementation: http://code.google.com/p/efficient-sparse-voxel-octrees/


It's only impressive if you know how powerful (or not) the cluster computers powering the software rendering are ;)


I think their point was if the same amount of research and money went into improving their technology, then future games would be amazing.


I'm hoping they show a nicer looking demo (instead of 'programmer art') to get a better idea of how much better it would be.


Graphics are only a very small part of the advantage a truly volumetric world could present. A game that captured wind currents, scent, the EM spectrum... these are just some of the attributes air normally carries that most games do not model but that a volumetric system might be able to capture.


It's kind of interesting until you realize that all their examples are static - there is no animation anywhere, and there is a very good reason for that.

Yes, they might implement some kind of a hybrid approach but then they will lose all the things they hype about.


I remember when the first video came out, they said their technology worked "like Google" to find the appropriate pixels to display on the screen. No further description of the technology. Seems too vague to be true.


I think it is true, but it is hype. Assuming it is based on voxels, then the earliest game engine to have used the technology (to my knowledge) is probably Blade Runner circa the 1990s. The hardware at that time, of course, dictated the quality of the tech. By extrapolation I would say this is very much possible but may not be such a huge deal as it is made out to be. The results, however, could be amazing if widely adopted by the industry.


I'm in Brisbane. Mmm I'm nearly tempted to apply for a job there.


I'm not sure if you can ask for a better job than doing computer gaming research on the government's dime.


They haven't progressed at all since 2008, so I'm not really sure what they're actually doing.


Oh, is this for real? I came across the video somewhere earlier today, but stopped watching after I got to "we give computer graphics unlimited power".


The big problem that I see is that they don't show any animation. A camera floating through a static scene can only get you so far in video games.


Honestly - it's quite ugly - it looks like a game from 8 years ago. I'm not sure what they're drinking.

For sure their graphics are not 100,000x better.


I just want to say that this is probably the best company name I've seen yet.


So when they say atoms do they mean voxels or is it something else?


I don't know much about current 3D graphics technology, but the fact that they're trying to disrupt it is really cool. They remind me of a crazy inventor, let's hope they got something =)


But will the games be 100,000x more fun?


Color me skeptical. However, I'd like to pose a question to HN:

If the claims from this video are legit, is this level of innovation deserving of a patent? (Yes, a software patent.)


Maybe, you'd have to be an expert in rendering engines to know. Too bad none of those work for the patent office or are ever likely to.


[Citation needed.]


no.


Poll: Do you think this will result in real games with 100,000x better graphics? http://www.wepolls.com/p/1653402/Is-Euclideans-graphics-tech...


If this takes off, the whole graphics industry is going to get so much better. I am talking about everything from movies to games to anything else you can think of. I mean, think if FIFA started to use this in their games: you could make it so you see a shoelace move or individual hair strands moving.

I hope they get this out to market sooner rather than later.


And seeing the details of a shoelace makes a game of soccer how much more fun?

Fuck this shit. Back to Minecraft.


Even if polygons weren't the limiting factor on the shoe laces or hair, the cost of computing the physics will limit how detailed things get. Need more cores...


Not really. Physics is computed against low-resolution polygon representations.

You don't want to compute physics against your art representation, because it's typically over-described.


Of course. My point is that polygons aren't the limiting factor today, so having unlimited polygons wouldn't "fix" that. The GPU typically determines polygon budget, with the CPU determining physics and AI.

Of course, you can dump physics onto the GPU, but that will cost you polygons. I guess that could, in theory (if you really had unlimited atoms), give you more cycles for physics.





