Epic's Sweeney on graphics tech: "the limit really is in sight" (eurogamer.net)
11 points by johnr8201 on Feb 12, 2012 | 19 comments



Do we even know how to render reality? If we can do it at 1/2000th of real time then somebody should have made a prerendered animation that looks real by now, but I've never seen one. And if we don't know how to do it, how do we know what GPU we will need?


We know the rough bounds of the technical parameters of a GPU good enough to fool a human. We don't yet know, for a large variety of cases, how to generate artwork that good in any better way than taking a snapshot of something real.

That being said:

> somebody should have made a prerendered animation that looks real by now, but I've never seen one

How do you know? A friend of mine works on visual effects, and his show-reel has quite a few shots where, if you didn't have before and after shots side by side, you wouldn't know that a computer had been involved at all.


> How do you know? A friend of mine works on visual effects

Any chance there's a link to it online? I'd love to see an example :)

Anyhow, the problem for me is less with still renderings, and more with natural motion, like a human walking or talking. Also, I'd love to see a realistic splash as something falls into the water.


I can't find the effects reel for that modern CSI+disasters show I was looking for... You'll have to settle for this: http://youtu.be/aFHKwaW4Um8


Thanks. I enjoyed watching that.


It looks like we're slowly getting there. See the Luminous engine, for example:

http://www.digitaltrends.com/gaming/square-enixs-new-gaming-...


Thanks for the video link. While it's definitely an improvement over what we currently have, it's an empty environment with literally three patterns and one object. Color me unimpressed.


Some film CGI (which renders much less than realtime) is getting pretty close to realism, to the extent that for certain snippets you can probably fool a large proportion of people, especially people who aren't computer graphics experts, and therefore don't know what specifically to look for. Most work seems to be focusing on very specific tough cases, like cloth animation. It'd be interesting to see an actual survey of what people can tell apart, though.


With film CGI one also has to take into account that it's highly polished by humans. To be able to reproduce that quality in an automated fashion will likely be much more taxing.


I've definitely found diminishing returns as a player, though that's a somewhat different, more gameplay-oriented question. When it comes to graphics I'm personally more interested in gameplay-related features than in strict graphical fidelity; things like destructible terrain, for example. I'd even take a decrease in strict fidelity of the graphics for an increase in gameplay interactivity (Minecraft being an extreme example of that tradeoff). This seems to require deeper modeling of things other than strictly surface meshes and lighting: the interior of objects, their physics, etc.
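To make the "interior of objects" point concrete, here's a minimal sketch (not from any particular engine; the grid size and carve routine are invented for illustration) of voxel-style destructible terrain, where removing material is just clearing cells rather than re-authoring a surface mesh:

```python
import numpy as np

# Illustrative voxel "terrain": material is a boolean grid, so the interior is
# actually modeled and destruction is just clearing cells. Numbers are invented.
SIZE = 64
solid = np.ones((SIZE, SIZE, SIZE), dtype=bool)

def carve_sphere(center, radius):
    """Remove all material within `radius` of `center` (e.g. an explosion)."""
    x, y, z = np.indices(solid.shape)
    cx, cy, cz = center
    dist2 = (x - cx) ** 2 + (y - cy) ** 2 + (z - cz) ** 2
    solid[dist2 <= radius ** 2] = False

carve_sphere(center=(32, 32, 32), radius=10)
print("solid voxels remaining:", int(solid.sum()))
```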


"And we apparently don't notice frame-rates above 72 every second."

I'm curious what conditions he's referring to with those 72 frames. Is motion blur built in, for instance, or are they static snapshots? 72 just seems a bit low. http://www.100fps.com/how_many_frames_can_humans_see.htm


You'll never get an absolute number for this as humans don't see in 'frames'.

I would assume his numbers would include motion blur as that effect is not only important for 'smoothness' but also for giving a realistic feeling of movement and rotation when translated to a stationary display. If the 'player' spins around quickly you've got to blur it regardless of framerate or it just doesn't look right.
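For what it's worth, one common way renderers approximate that blur is simply to average several sub-frame renders across the shutter interval (an accumulation-buffer style approach). A rough sketch, with `render_frame` standing in for whatever the engine actually draws:

```python
import numpy as np

def render_frame(t):
    """Stand-in renderer: a tiny image with one bright column that moves over time."""
    img = np.zeros((4, 4))
    img[:, int(t * 300) % 4] = 1.0
    return img

def motion_blurred_frame(t, shutter=1.0 / 60, samples=8):
    """Accumulation-buffer style blur: average renders spread across the shutter time."""
    times = np.linspace(t, t + shutter, samples)
    return np.mean([render_frame(s) for s in times], axis=0)

print(motion_blurred_frame(t=0.0))   # the moving column smears across the image
```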


I was recently egged into digging up hard numbers on this subject. Here's my post: http://www.reddit.com/r/gaming/comments/p4lrj/paradox_intera...

Short version: min to simulate motion: ~10Hz. Min to stop noticing flicker: ~60Hz. Diminishing returns on immersion: ~72Hz. Limit on ability to flick your eyes and still see temporal aliasing: ~2000Hz.


Lols... it would be pretty funny if he got 72fps from a test on a 60Hz LCD (== 60fps max visible hard cutoff).

btw, quickhack test:

- set CRT to X Hz refresh rate

- bright white screen

- wave a pencil quickly in front of it

The X refresh rate at which you stop seeing a trail of 'multiple pencils' is just beyond your fps sensitivity threshold... yes?
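For a back-of-the-envelope feel of the spacing involved (the hand speed here is an invented, illustrative number): on a strobing display the pencil is only lit for an instant each refresh, so at X Hz the copies are spaced by (pencil speed)/X.

```python
# Back-of-the-envelope: ghost-image spacing for a pencil waved across a strobing
# (CRT-like) display. The 2 m/s hand speed is an invented, illustrative number.
pencil_speed_m_s = 2.0

for refresh_hz in (60, 72, 120, 240, 1000):
    spacing_mm = pencil_speed_m_s / refresh_hz * 1000
    print(f"{refresh_hz:5d} Hz -> copies spaced ~{spacing_mm:5.1f} mm apart")
```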


Graphics in today's games are poor in terms of realism. Content is simply made of polygonal silhouettes, with no modeling of anything internal; maybe at the top of the line there is some hackish muscle/bone animation system and artist-tweaked special-case shaders to account for subsurface scattering in skin, but nothing generic. In order to get realistic content we are going to have to switch to procedural modeling; there's no way to generate that much information manually with the pedestrian tools we have.

As for rendering, there is no global illumination other than crappy techniques like precomputed light-maps or ambient occlusion, maybe one-bounce image-space GI (I don't know if this has been used in a game). Even the materials use simplistic BRDFs. There is a lot of room for improvement and a wide gap to close between realistic and the current generation, but in 20 years there's no doubt that it will be done.
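For a sense of scale, the kind of "simplistic BRDF" most games of that era shipped (Lambert diffuse plus a Blinn-Phong specular lobe) is only a few lines; the constants below are made up for illustration and the vectors are normalized inside:

```python
import numpy as np

def normalize(v):
    v = np.asarray(v, dtype=float)
    return v / np.linalg.norm(v)

def lambert_blinn_phong(n, l, v, albedo=0.8, specular=0.04, shininess=32.0):
    """Classic 'simplistic' shading: Lambert diffuse + Blinn-Phong specular lobe.
    n = surface normal, l = direction to the light, v = direction to the viewer."""
    n, l, v = normalize(n), normalize(l), normalize(v)
    diffuse = albedo * max(np.dot(n, l), 0.0)
    h = normalize(l + v)                      # half-vector
    spec = specular * max(np.dot(n, h), 0.0) ** shininess
    return diffuse + spec

print(lambert_blinn_phong(n=[0, 0, 1], l=[0, 0.5, 1], v=[0, 0, 1]))
```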


I remember 15 years ago when plenty of graphics programmers thought the industry was within reach of all the polygons you'd ever need on the screen at one time. These are always a bit like the '640k should be enough memory for anyone' declarations.

When Quake1 models were a couple hundred polygons, it was sometimes predicted games would never need more than 5,000 to 10,000 poly models. There was a PC game (name escapes me, had a cherub as the main character) from the late 1990s that talked up using "real time polygon tessellation" to deliver models with thousands of polygons on the screen, derived from source models built in Maya that had tens of thousands of polygons. According to the developers, it was almost photo realistic! Such is the braggadocio in the gaming industry.

Now we're up to 15,000 to 50,000 polygons in-game, depending on the title, and that's not nearly enough.


The game you're referring to is Messiah: http://en.wikipedia.org/wiki/Messiah_(video_game)

The tech you're referring to was meant to increase or decrease polygon counts based on the power of the user's machine. The big benefit was that when you got a new graphics card, the models would scale up to a higher polygon count to match. Not sure about the photo-realistic part, but I do remember them talking about how this tech was going to be the best-looking because of the increased polygon count.
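That scaling idea is basically what level-of-detail systems still do: pick a polygon budget from measured performance. A toy sketch of the logic (the budgets, the 16.7 ms target, and the "frame time scales linearly with triangles" assumption are all invented for illustration):

```python
# Toy sketch: pick a per-model triangle budget from measured frame time, so a
# faster machine automatically gets higher-detail models.
LOD_BUDGETS = [2_000, 5_000, 10_000, 25_000, 50_000]   # triangles per model
TARGET_FRAME_MS = 16.7                                  # roughly 60 fps

def pick_budget(measured_ms, current_budget):
    """Assume frame time scales roughly linearly with triangle count and pick
    the largest budget that still fits the frame-time target."""
    ms_per_triangle = measured_ms / current_budget
    affordable = TARGET_FRAME_MS / ms_per_triangle
    for budget in reversed(LOD_BUDGETS):
        if budget <= affordable:
            return budget
    return LOD_BUDGETS[0]

# A machine rendering 10k-triangle models in 5 ms can afford ~33k, so step up:
print(pick_budget(measured_ms=5.0, current_budget=10_000))   # -> 25000
```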

Considering the jump graphics hardware has taken over the last decade, I don't find Tim Sweeney's prediction of photo-realism within our lifetimes to be an impossibility. Barring some kind of physical limitation with the hardware, I think his assumption isn't that crazy.


Yep, that's exactly right. I remember at the time Dave Perry seemed to be everywhere 24/7 pumping Messiah as something that would change gaming forever. Back then, developer fights were common, with tech arguments between id Software (Paul Steed instigating), Epic, Shiny, whomever.

In the end, it didn't sell particularly well, it cost a lot to make, and it had a lot of performance problems.

Have to wonder who thought a game about a cherub named "Bob" would sell like crazy.


The game you're thinking of is Messiah by Shiny Entertainment. I vaguely remember that one too :) This was released just before hardware T&L cards were on the market.



