
If the data can stay on the GPU, then it's likely a win. GPUs have 8 GB or more now, so it depends on how much polygon data you have.



Not necessarily: even if the data is already on the GPU and so avoids the PCI-E transfer penalty, GPUs still have cache hierarchies, and those have latencies too. They can be worse than on CPUs, because GPU branch predictors and pre-fetchers are still fairly primitive compared to what CPUs are capable of. That means access patterns on a GPU can matter quite a bit: you end up having to tune block size per GPU type, and code that works very well on one GPU doesn't work as well on another.

That said, point-in-polygon is a fairly simple algorithm, and if each polygon is mostly under 40 vertices, I suspect a GPU would be faster. For more complex algorithms, and for polygons with many more vertices, I suspect GPUs won't do as well.
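To make "fairly simple" concrete, here's a minimal sketch of the standard ray-casting (even-odd) point-in-polygon test. This is my assumption about the algorithm being discussed, not code from the thread; on a GPU you'd run this inner loop once per thread, one point per thread, which is why small polygons (< 40 vertices) suit it so well.

```python
def point_in_polygon(px, py, polygon):
    """Return True if (px, py) lies inside `polygon`, a list of (x, y) vertices.

    Casts a horizontal ray from the point to +infinity and counts edge
    crossings: an odd count means the point is inside (even-odd rule).
    """
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Only edges that straddle the ray's y-coordinate can cross it.
        if (y1 > py) != (y2 > py):
            # x-coordinate where this edge crosses the horizontal ray
            x_cross = x1 + (py - y1) * (x2 - x1) / (y2 - y1)
            if px < x_cross:
                inside = not inside
    return inside

# A unit square: (0.5, 0.5) is inside, (1.5, 0.5) is outside.
square = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
```

Note the loop is branch-light and data-parallel across points, which is exactly the shape of workload where GPU throughput wins before divergence becomes an issue.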

In terms of raw theoretical FP processing power, GPUs look good, but they fall off when you start doing more complex things with them, i.e. when branching happens a lot, as in path tracing. E.g. a dual 3.5 GHz quad-core Xeon setup (~£950 per CPU) is as fast at path tracing as a single NVIDIA K6000 costing ~£4100.


Here is a production-quality path tracer that, for most users, runs noticeably faster than competing CPU-based renderers: https://www.redshift3d.com/

Pragmatically, it produces results of similar quality quicker than CPU-based competitors.

It is really taking the high end rendering world by storm this year.


Erm??...

That's a biased renderer that uses all sorts of caching and approximations that no CPU-based renderer supports (VRay comes closest, with its ability to configure primary and secondary rays with different irradiance-cache methods). And since Redshift doesn't support CPU rendering, it's hardly a comparison worth talking about: you'd be comparing different algorithms. The pure brute-force numbers I've seen for it, without any caching, don't look any better than the other top CPU renderers doing brute-force MC integration.

Also, a quibble, but I guess by "high-end rendering world" you mean archviz (where VRay and 3DSMax are dominant) and a few small VFX studios who happen to be running Windows?


You're selling it a little short: pretty much everyone not doing feature films, like game cinematics, commercials and product viz.


"Pretty much everyone"

Really?

I know companies like Blur are trialling it, but they're still using VRay. I know The Mill have done stuff with it, but they're still using Arnold too.


I mis-read; I thought you were saying not many studios are using Windows and VRay. Agree on Redshift.



