Since the article and the linked nVidia marketing page both fail to adequately explain what Optimus is, I'm going to attempt to do so.
nVidia's GPU technology focuses on maximum performance. Their GPUs are power-hungry, are capable of rendering billions of triangles per second, and have an interesting programming interface to support operations such as custom shading. (Within the last few years nVidia and AMD have improved support for use of their GPUs by computation-intensive non-graphical applications. For example, people use GPUs for Bitcoin mining, for the good and simple reason that their computational throughput on SHA-256 hashes blows CPUs out of the water [1].)
Intel's GPU technology focuses on low power and low cost; as long as their GPUs can run the Windows Aero GUI and are capable of playing streaming video, it seems Intel doesn't feel a need to push their performance. The combination of low power and low cost means Intel GPUs are ubiquitous in non-gaming laptops.
nVidia has observed that Intel GPUs have become cheap enough that it's viable to put both a weak, low-power Intel GPU and a strong, high-power nVidia GPU in the same laptop. nVidia calls this combination Optimus.
Under Optimus, the Intel GPU is used by default for graphically light usage, meaning Web browsing, spreadsheets, homework, taxes, programming (other than 3D applications), watching video, etc. -- since it's low-power, you get good battery life.
When it's time to play games, of course, the driver can switch on the nVidia GPU -- hopefully near a power outlet.
I think Optimus technology has been available for over two years. However, nVidia's Linux drivers do not yet support it, leading to much gnashing of teeth and to the third-party Bumblebee solution, which I discuss in another comment [2].
You basically missed the entire point of it or why it's interesting.
Current Intel CPUs have a very small GPU built onto the die of the CPU. By buying the CPU, you're paying for an Intel GPU anyway.
At the same time, unlike CPU performance, GPU performance really does scale with die area; if you want more graphics performance, get a bigger chip with more ALUs/texture units/etc, and because graphics is so parallel everything will just go faster. However, larger chips mean larger amounts of leakage when the chip is powered but idle, which means that battery life can be significantly worse.
What Optimus does is allow the Intel GPU to be connected to the display hardware and be used most of the time (e.g., when you're looking at email or whatever) when high performance isn't called for. At that point, the NVIDIA GPU can be turned off completely, meaning no leakage and no battery life degradation. If you want high performance, the NVIDIA GPU is enabled on the fly, rendering is done on the NVIDIA GPU, and the final result is sent in some way to the Intel GPU, which can then use its actual display connections to put something on the LCD.
Prior to Optimus, there was generally a mux that would switch which GPU was outputting to the screen. This was messy: everything had to be done on one GPU or the other, switching was noticeably heavyweight and occasionally required reboots, and the mux added hardware complexity.
The biggest issue with Optimus on Linux is that the infrastructure for the actual sharing of the rendered output didn't really exist until dmabuf appeared -- you need two drivers to be able to safely share a piece of pinned system memory such that they can both DMA to/from that memory and be protected from any sort of bad behavior from each other. (I also think it's impossible to have two different drivers sharing the same X screen, which is why Bumblebee works the way it does.)
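For the curious, the userspace side of that PRIME/dmabuf plumbing with the open-source drivers looks roughly like this (a sketch only; provider names and exact output differ from machine to machine):

$ xrandr --listproviders                          # list the GPUs the X server knows about
$ xrandr --setprovideroffloadsink nouveau Intel   # let nouveau render and hand frames to the Intel GPU
$ DRI_PRIME=1 glxinfo | grep "OpenGL renderer"    # run a client on the offload GPU and check which one it got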
> Current Intel CPUs have a very small GPU built on to the die of the CPU. By buying the CPU, you're paying for an Intel GPU anyway.
I was not aware of this fact.
> the Intel GPU to be connected to the display hardware and be used most of the time...the NVIDIA GPU is enabled on the fly...
I did mention these aspects of Optimus.
> rendering is done on the NVIDIA GPU, and the final result is sent in some way to the Intel GPU, which can then use its actual display connections to put something on the LCD.
I guess I missed the point that the architecture is like this:
nVidia <-> Intel <-> Display
instead of like this:
Display <-> nVidia
Display <-> Intel
I was a little fuzzy on this point myself, so I appreciate the clarification!
> the infrastructure for the actual sharing of the rendered output didn't really exist until dmabuf appeared
I'd certainly believe that the current approach to the nVidia driver was enabled by dmabuf. But Bumblebee shows it's possible to use Optimus on Linux without that particular kernel feature.
The lack of any sort of physical connection to the NVIDIA GPU's display outputs is the fundamental feature of Optimus. Switchable graphics existed for years before Optimus was introduced (and was usually usable under Linux without issue), but it was largely a niche feature because of the usability drawbacks.
> That NVIDIA is now able to implement Optimus support in its proprietary driver is primarily down to...the PRIME infrastructure in the Linux kernel and X Server...
This is simply false, since adequate open-source support compatible with the current Ubuntu kernel has been available in the form of Bumblebee for over a year.
I have had an Optimus laptop for a while now, and since I got it I have been running Linux Mint, with the Bumblebee PPA [1]. With Bumblebee, applications use the Intel GPU if you run them normally:
$ wine
If you want that application to use the nVidia GPU instead, you can pass it as an argument to Bumblebee's "optirun" command:
$ optirun wine
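A quick way to check which GPU a given invocation actually ends up on is to compare the renderer strings (this assumes glxinfo from the mesa-utils package; the exact strings will differ per machine):

$ glxinfo | grep "OpenGL renderer"            # should name the Intel GPU
$ optirun glxinfo | grep "OpenGL renderer"    # should name the nVidia GPU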
I will grant that Bumblebee's approach is very hacky, and the DMA-based approach used by the new proprietary driver is likely to lead to higher performance and/or less CPU usage.
But I have found Bumblebee to be very stable, and more importantly, it's been available since I got the laptop, unlike the proprietary Optimus support -- which is still vaporware.
> if you live on the command line it's not a major issue
I didn't intend to call the method of running things (prefixing the invocation with another command) "hacky," although GUI-only users may well see Bumblebee that way until frontends / desktop support is developed.
I was referring more to Bumblebee's internal architecture: running multiple X servers and ferrying data between them.
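Roughly speaking, and glossing over details that vary between Bumblebee versions, what optirun does behind the scenes looks something like this (the paths and display number here are the usual defaults, not guarantees):

$ X :8 -config /etc/bumblebee/xorg.conf.nvidia &   # a second X server bound to the nVidia GPU
$ vglrun -d :8 wine                                # render on display :8, ferry frames back via VirtualGL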
And neither of these notes about the "hackiness" is a criticism of the Bumblebee developers: they've done great work making the software as reliable as I've personally found it to be, and getting it out there as quickly after the release of Optimus as they did, with (AFAIK) zero support from nVidia.
Even if the internals are hacky, as an end-user I have no complaints; unless you're messing with the config file, or upgrading from a very early version of Bumblebee, its complicated design is well-hidden from the end-user.
Doing front-ends and desktop UI integration is somewhat out of scope for Bumblebee as a project. It would be nice to have, I suppose, but I personally don't need it (I always use the command line and can figure out how to manually make launch icons if I need to), and it doesn't require the same intense knowledge of the guts of the Linux video subsystem that writing Bumblebee itself did. So I feel it would be much more logical to have those things be separate projects, since they require a different developer skillset and don't have a dependency on the internal details of Bumblebee.
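For anyone who does want an icon, a hand-rolled launcher is just a .desktop file whose Exec line is prefixed with optirun; something like the following, dropped into ~/.local/share/applications/ (the name and game path here are made up):

[Desktop Entry]
Type=Application
Name=Some Game (nVidia GPU)
Exec=optirun wine /path/to/game.exe
Terminal=false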
I don't care any more about NVidia blobs. My old NVidia-based laptop died because of the NVidia chip. My new one is Intel-based with open-source drivers, and I'm very happy.
My desktop PC still has an NVidia graphics card, and I'm looking for a replacement. But I can't find any Intel-based graphics card on the market.
The fact that they're now supporting it, while a few months ago they were saying they would never support it, speaks to a real failing in marketing and messaging. I honestly think nVidia would be better off letting their engineering team do the talking, at least to the Linux community.
That's very nice, but speaking for myself, I will never again voluntarily own an Nvidia graphics adaptor. I've seen too many of them overheat and fail in otherwise normal circumstances, resulting in the involuntary abandonment of a laptop before its time.
At one point I had three Nvidia-equipped laptops stacked in my closet, all essentially bricked, each eventually replaced by a laptop equipped with an ATI/AMD adaptor.
I initially had nothing against Nvidia, and Dell seemed to prefer them in its offerings, but experience has forced a change in my outlook.
Meanwhile, ATI are unable to write drivers. I've actually lost work due to that; and it's not limited to parts from a particular early lead-free process (IIRC the cause of those nVidia problems).
> This could be more to do with cooling design of the laptop itself rather that the nvidia chip.
Yes, perhaps, but:
1. Nvidia would need to unambiguously specify extraordinary cooling requirements, to avoid difficulties in the field. Apparently they didn't do this.
2. The graphics adaptors of others, in the same laptops, had no similar problems.
Someone may argue that ... oh, wait, you do make this argument:
> High performance graphics chips are going to get hot.
Yes, but if they reliably melt down, that fact negates their impressive specifications.
I'm imagining an advertising campaign in a parallel universe where everyone has to tell the truth -- "Nvidia -- the hottest graphics processors in existence!" Well, yes, but ...
Thermal output management is not new. If the chip overheats, it can always clock itself down or power down some cores. GPUs, being very parallel, should make it even easier than it is with CPUs.
Not being able to do this transparently should be considered an important design flaw.
Sure, but when your chip tends to overheat, you should design mechanisms to reduce the thermal output when the need arises. CPUs have similar mechanisms integrated in them since the early 2000s (remember the videos of AMD CPUs melting down seconds after the removal of a heatsink?).
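For what it's worth, you can at least watch the temperature and clocks yourself on Linux, assuming the proprietary driver's nvidia-smi tool is installed (older mobile chips don't always report every field):

$ nvidia-smi -q -d TEMPERATURE,CLOCK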
Unless you absolutely need high performance GPU/vector acceleration, I'd suggest going with Intel GPUs. I wouldn't, at this time, support companies that, to put it mildly, can't cooperate with the rest of the Linux ecosystem and that try to work against it when possible.
They make powerful GPUs, but, unless they can reliably perform their functions with the software I want to run, their products are worthless.
I think the problem may be the laptop brand. I've had a couple of Dell laptops and they both got alarmingly hot to the touch. I've now got a ThinkPad W520 (quad-core i7, Nvidia Quadro chip) and I can sit with it on my lap comfortably. I don't know how Lenovo do it, but their thermal management is amazing.
[1] https://en.bitcoin.it/wiki/Mining_hardware_comparison
[2] http://news.ycombinator.com/item?id=4471411