A very interesting feature of the Ivy Bridge architecture is a new digital random number generator code-named Bull Mountain, which uses teetering as an integral part of its workings. [1]
While consumer-oriented reviews rightfully ignore such things, I reckon that the HN crowd would be pleased to know more about such a nifty hardware solution.
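For the software side of it: as far as I can tell from Intel's docs, this DRNG is exposed through the new RDRAND instruction, so pulling bits out of it is pretty trivial. A rough sketch in C (assuming GCC/Clang with immintrin.h, built with -mrdrnd, running on a part that actually reports RDRAND support):

    /* Sketch: read one 32-bit value from the on-die DRNG via RDRAND.
       RDRAND can transiently report "no data" (carry flag clear),
       so the usual advice is to retry a few times. */
    #include <stdio.h>
    #include <immintrin.h>

    int main(void)
    {
        unsigned int value;
        for (int tries = 0; tries < 10; tries++) {
            if (_rdrand32_step(&value)) {   /* returns 1 on success */
                printf("hardware random value: 0x%08x\n", value);
                return 0;
            }
        }
        fprintf(stderr, "DRNG did not return a value\n");
        return 1;
    }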
Teetering must be the marketing term they chose... the phenomenon in play has been known as metastability for as long as I've been a digital designer.
And generally speaking, it's something you want to avoid in your designs, not only because of the unpredictability of the final output (whether it resolves to a 1 or 0), but also because of the uncertainty in how long it will take to resolve.
But it seems perfect for generating randomness. Neat idea. The real trick here is that it's only theoretically unpredictable. In real life, due to "imperfections" in fabrication, the inverters will be biased to resolve one way more frequently. So you might end up with a 60/40 split, for example.
I see they do address this in the article, and have added some "conditioning" to eliminate the bias... I bet it's described in a patent somewhere.
The simplest approach I know of is to extract random bits in pairs until the two bits of a pair differ, and then use the second one. 50/50 split, regardless of any (constant) bias in the generator.
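That's the classic von Neumann extractor. A toy sketch in C, with a made-up biased_bit() standing in for the raw hardware source (the names here are just for illustration):

    #include <stdio.h>
    #include <stdlib.h>

    /* Hypothetical biased source: 1 about 60% of the time, standing in
       for a raw, unconditioned hardware bit. */
    static int biased_bit(void)
    {
        return (rand() % 100) < 60;
    }

    /* Von Neumann debiasing: draw bits in non-overlapping pairs,
       discard 00 and 11, keep the second bit of 01 or 10. For
       independent bits with any constant bias, P(01) == P(10),
       so the kept bit is 1 exactly half the time. */
    static int unbiased_bit(void)
    {
        for (;;) {
            int a = biased_bit();
            int b = biased_bit();
            if (a != b)
                return b;
        }
    }

    int main(void)
    {
        long ones = 0, n = 1000000;
        for (long i = 0; i < n; i++)
            ones += unbiased_bit();
        printf("fraction of ones: %.4f\n", (double)ones / n);  /* ~0.50 */
        return 0;
    }

The catch is throughput: a 60/40 source only keeps 2*0.6*0.4 = 48% of its pairs, so roughly a quarter of the raw bits survive. And it only fixes constant bias, not correlation between successive bits, which is presumably part of why the real hardware adds the "conditioning" stage mentioned above.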
I guess I don't really understand why this is impressive - the GPU is still not good enough to handle modern gaming and apparently has some issues with media encoding as well (although I will admit that that section confused me somewhat), and it sounds like the CPU is just a modest upgrade in efficiency.
I'm not saying it's bad; it looks like solid progress from Intel, as we all continue to expect, but I don't get why this is considered 'tick+' when it seems less exciting to me than Nehalem.
Obviously it's not meant to be a high end gaming chip. Sandy Bridge was pretty excellent in The Sims 3 and Guild Wars, amongst others that I've tried. If Ivy Bridge is as much better as the graphs show, there's a whole new category of games open to it.
If you have a discrete video card already, this won't replace that and isn't meant to. If you are looking to buy a laptop and want to play The Sims while you're on the airplane, that's what this is meant for.
Nehalem was a "tock", as was Sandy Bridge. Both rolled out lots of new features (integrated DRAM controller, QPI, hyperthreading, AVX, uop cache, etc...). So of course they were more "exciting" to a software person.
Ivy Bridge is a "tick" (OK, "tick+"), which means that it's fundamentally a die shrink of Sandy Bridge. It's rolling out lots of new stuff too, but it all has to do with how the 22nm Tri-Gate transistors are produced; the logic implemented is mostly the same. Compare this to Clarkdale/Arrandale, not Nehalem.
>I guess I don't really understand why this is impressive - the GPU is still not good enough to handle modern gaming
Because the huge majority of the population does not care about "modern gaming", but they do care about lower costs and improved battery life with an integrated GPU, and would like to see it offer better performance for apps where it's needed?
You're right that the majority of the population doesn't care about playing Battlefield 3, as an example. But a very large percentage of the populace is insanely interested in casual gaming; see FarmVille.
The value that gets skipped by mainstream "tech sites" is that, yeah, it's kinda crap at playing Battlefield 3, but what it does do is create a baseline of capabilities that you can expect out of an average device. In other words, it opens the doors for mainstream adoption of technologies that weren't there before, ultimately leading to a new performance baseline for those putting out casual games.
I guess what I'm saying is that the next FarmVille could now have decent 3D graphics, whereas that was out of the question prior to the mainstreaming of decent GPUs. Intel plays a key role in that process.
>I guess what I'm saying is that the next FarmVille could now have decent 3D graphics, whereas that was out of the question prior to the mainstreaming of decent GPUs. Intel plays a key role in that process.
Well, you have a point, but I think that Intel's GPU capabilities are far better than the needs of a casual 3D game. I would expect to first see some casual 3D games that utilize at least that power before I would think that more power is needed.
Also, the biggest problem is that MS doesn't support WebGL in IE, so no 3D casual games, except if you model them with a 3D engine built on Canvas (which would mean it would use only the 2D acceleration features of the GPU that the canvas uses).
One of the challenges of GPU performance is getting the connection between the CPU and Memory correct. That is why for years and years you've had the 'special' video card slot which was optimized for that, and then the general purpose slots that didn't have the kinds of demands that video offered. So if I get back space for a 'regular' slot and can put the GPU inside, that is a win for me, even though I predominantly use them in servers which brings me to ...
CUDA, or more accurately, running general purpose compute tasks on shader engines. Having the GPU there, even if I am not using it as a GPU, can provide some impressive benefits if I can program it to operate on in memory data in a fast parallel way. So in this regard it stops being a GPU and starts being a kind of funky but powerful co-processor.
And of course the third thing is that keeping the power draw sane is important. It's a small chip and pulling a lot of heat out of it is very difficult. Additional cores, additional heat, higher operating temps (or more exotic cooling systems).
One of the things that would be interesting (but won't happen on an Intel part any time soon) is if there were a bridge port so you could run dual-CPU and the GPUs could run in a crossfire/SLI mode where they each rendered every other line or something along those lines. Since I use dual cpu motherboards that would give me additional compute options that I don't have at the moment, but it won't happen at Intel because it would disrupt the market positioning of Ivy Bridge.
Ouch, please don't do this. You're mixing stuff up badly. The "special" video card slot throughout history has simply been a higher bandwidth interconnect relative to whatever else was there (well, in the AGP days it also had a primitive IOMMU for doing DMA into userspace). The integrated graphics on Intel parts, obviously, have the important distinction of sharing the memory controllers (and, I believe, the L3 cache?) with CPU work. That does hurt things under some loads, though many games are actually quite CPU-light these days.
"CUDA" is an NVIDIA trademark for their compute platform; it's not a special kind of technology. In fact, Intel's i915-derived GPU works very similarly, and Intel's AVX instructions work along basically identical lines. They're all wide SIMD implementations of a bunch of parallel tasks (/threads/lanes/warps; the industry terminology is a mess); see the sketch at the end of this comment.
Honestly, I think rwmj is right: for a consumer gaming rig, it would be preferable to have an 8 core part without integrated graphics. The reasons that isn't going to happen aren't technical though: first, Intel needs to ship different binnings of these same parts into segments (Ultrabooks, tablets) where the graphics are required and where price is more constrained -- basically, they have to make these in large quantities anyway. Second (and relatedly) they already do make "lots of cores w/o graphics" variants, but they sell them into the low volume server market where margins are much higher. Selling an 8 core consumer chip would cannibalize Xeon sales.
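To put the "wide SIMD" point in concrete terms, here's a toy sketch with AVX intrinsics (assumes a compiler with immintrin.h, built with -mavx): one instruction operates on eight float lanes at a time, which is the same many-lanes-per-instruction model that GPU shader cores scale up much wider.

    #include <stdio.h>
    #include <immintrin.h>

    int main(void)
    {
        /* Eight independent "lanes" of work, handled by single AVX instructions. */
        float a[8] = {1, 2, 3, 4, 5, 6, 7, 8};
        float b[8] = {10, 20, 30, 40, 50, 60, 70, 80};
        float out[8];

        __m256 va = _mm256_loadu_ps(a);                        /* load 8 floats    */
        __m256 vb = _mm256_loadu_ps(b);
        __m256 vc = _mm256_mul_ps(_mm256_add_ps(va, vb), va);  /* (a+b)*a per lane */
        _mm256_storeu_ps(out, vc);                             /* store 8 results  */

        for (int i = 0; i < 8; i++)
            printf("%g ", out[i]);
        printf("\n");
        return 0;
    }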
You: "The "special" video card slot throughout history has simply been a higher bandwidth interconnect relative to whatever else was there..."
Me: "One of the challenges of GPU performance is getting the connection between the CPU and Memory correct. "
Yes, it's the GPU <-> (CPU/Memory) interconnect. It has, since the introduction of PCI, been a 'different' slot than other peripherals. So I don't see how we're confused.
You: ""CUDA" is an NVIDIA trademark for their compute platform, it's not a special kind of technology. ... the Industry terminology is a mess)."
Me: "CUDA, or more accurately, running general purpose compute tasks on shader engines."
The industry terminology is a mess; however, most readers recognize the name 'CUDA', nVidia's implementation of a shader language, as that technology. Further, it was the introduction of using shaders as vector units which led to folks implementing things like the 'PS2 supercomputers'. (And to be precise, no, you cannot use nVidia's tools on PS2s.)
You: "Honestly, I think rwmj is right: for a consumer gaming rig, it would be preferable to have an 8 core part without integrated graphics."
Which is great; what I would love to see, then, is how you reason to that opinion. What is it about the 4 additional cores that would improve the 'consumer gaming rig'? How are you measuring 'good' vs 'not as good'? Cost? Triangles per second per dollar? Developer support?
"Second (and relatedly) they already do make "lots of cores w/o graphics" variants, but they sell them into the low volume server market where margins are much higher. Selling an 8 core consumer chip would cannibalize Xeon sales."
I don't think anyone has argued that Intel couldn't make an 8 core processor out of this technology; heck, the feature size is small enough they could probably do 12 or 16 cores and still get decent yield. But there are other system issues associated with that, most notably cache behavior and size.
But the specific question on the table was that Intel has made a part which they expect people to put into laptop machines and maybe even tablets. They chose to make it a four core machine with an integrated GPU; are you arguing that the consumer experience on those laptops would be improved if Intel required an external GPU? If so, I'd be interested in your take on that as well: first, the way you evaluate value, and then the case for an external GPU that maximizes that value.
That's already available; it's called Sandy Bridge-E.
I was thinking the opposite; for games it would probably be better to have two cores but more GPU. The upcoming Ivy Bridge vs. Trinity comparison will be interesting.
Throwing more cores at the problem isn't really terribly efficient, though. You don't see as much of a return as you might expect, especially when most things aren't properly multithreaded.
Oh sure, I understand it's not in Intel's interests at all. But it would sure make my builds faster if I could default to 'make -j16' (hyperthreads). I guess I'm not the target audience here.
The graphics in IVB are not just on the die, but are part of the internal memory ring and thus the shared L3 cache. So CPU cores are able to directly send and receive shared memory from the GPU, and the GPU is able to use the same last-level cache (LLC) as the CPU cores.
This is why you see the simple fill-rate benchmarks of IVB blow away SNB and discrete GPUs.
Agreed this report comes across a bit underwhelming, but the integrated GPU boost is really what I've been waiting for. And it's more power efficient to boot.
I don't play games, but I do have a crazy-big monitor and want a bit more oomph to drive the display before I get a notebook without a dedicated graphics card. Still waiting on the dual core ...
At work I'm on i5 and i7 stuff and never use all that (I'm one of the few here on HN who isn't a programmer). At home, I was going to put in a 2600K at the last build, but there was that whole chipset recall problem and I needed something quickly, so I went back a generation and got a Core 2 Quad with 8 GB of RAM. My usage pattern is really office applications and a few specialized plot applications (Igor). After I added the SSD and a newish video card, the CPU is just not the problem. Frankly, there is no problem, so there's no incentive to upgrade.
When I started using computers heavily in the '90s, you were lucky to get a year out of a machine before it was just too slow to do the tasks you needed. For me, those days died when the P4 came out. Unless you're super CPU-bound, like when compiling, you just don't need it for basic office stuff.
"Quick Sync" is a marketing term for the hardware video codec on the die, and a software suite built around it. The Intel software is windows-only, but I'm pretty sure the codec is documented. At least it has a section in the big SNB graphics docs dump they did a while back. But AFAIK there are no free drivers written to it.
[1] http://spectrum.ieee.org/computing/hardware/behind-intels-ne...