This isn't my area, but they appear to be saying that by using OpenGL instead of Microsoft's equivalent, they not only get Linux support and a stepping stone to mobile, they can also use more up-to-date features that Microsoft doesn't support on XP (highlighting China as a place where modern machines still run the old operating system), and they get a 20% performance improvement even when going through a translation layer.
Did you read the post you linked? It talks mostly about legacy problems. OpenGL 3.0+ is fine. And by the way, the emphasis should be on cross-platform support.
Well, the post claims that 3.0 was a failure, but I think that was a case of speaking too soon. My limited understanding is that yes, since then OpenGL has had much better shepherds.
The failure it is talking about was codenamed "Longs Peak", a project that was supposed to become OpenGL 3.0. It was meant to be a redesign from scratch but was eventually abandoned, and OpenGL 3.0 ended up being based on the old GL 2.0 API (with some parts deprecated and some new things added).
Wolfire Games, of Overgrowth fame (they were also the original organizers of the "Humble Indie Bundle"), have another writeup comparing OpenGL to DirectX. Their argument centers mostly on the market ("Linux users are important") and it's a bit dated (2010), but it's still a great read.
Both DX and OpenGL work with a hardware abstraction that no longer exists. As a result, API calls cannot be trivially converted into hardware commands. Most importantly, there are multiple paths the translation can follow, and each will produce very different results from a performance point of view. There is a reason modern video card drivers are hundreds of megabytes in size: they contain app-specific optimized code for hundreds of games and applications.
With the above said, a 20% difference is solidly within the range of what you see between driver versions.
I tend to believe everything except the performance part.
I'd be curious to see a less interested party doing the analysis. I'd like to get Carmack's or Sweeney's take on this. This feels like MS claiming that IE gets better perf than Chrome.
Actually, IE has comparable performance to Chrome (at least in WebGL), but they tuned it by silently dropping frames. ;p
So yes, I'm pretty sure M$ technologies are written really well and could not be trivially optimized by as much as 20%. :)
I'm actually surprised that there isn't more support for a native Linux version of Direct3D. It is by far the better of the two APIs, both in terms of ease of use and in how closely it conforms to the model used by the video hardware itself. Since the future of remote Linux desktops is going to be based on Microsoft's RDP rather than X, it makes sense for the open source community to adopt a better Microsoft technology when it comes along.
OpenGL is a dog's breakfast of an API. It's everything an API shouldn't be -- stateful, verbose, and committee-driven. Just take a look at Valve's .PDF.
An OpenGL programmer spends a large part of his/her time trying to maintain a mental model of the underlying driver's state and behavior. And God(s) help you if you step off the path used by popular applications and game engines. You can expect combinatorial bugs out the wazoo, again because the API and driver layers are so stateful.
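To make the statefulness complaint concrete, here is a minimal sketch (the texture names and the split into "modules" are purely illustrative): in OpenGL, parameters are applied to whatever object happens to be bound, so two pieces of code can silently trample each other.

    #include <GL/gl.h>

    /* Module A configures "its" texture... */
    void module_a_setup(GLuint texA)
    {
        glBindTexture(GL_TEXTURE_2D, texA);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        /* ...and never unbinds; GL_TEXTURE_2D now points at texA. */
    }

    /* Module B later tweaks filtering, believing its own texture is bound. */
    void module_b_tweak(void)
    {
        /* No glBindTexture here: this silently edits whatever texture is
           currently bound -- possibly module A's. */
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    }

Tracking that kind of hidden coupling across a large engine is exactly the "mental model of the driver's state" problem.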
I agree with the grandparent -- it's well past time that a D3D-like API was adopted by other players. I've used OpenGL since the Quake 1 days and I've never understood why so many people sing its praises. The days when D3D was an inferior API are more than a decade behind us.
That's bullshit. In practice different GPU parts support different subsets of OpenGL's shader model, leading to broken graphics depending on the GPU. This is a huge problem in, for example, Android games, where you have to test on just about every device to see what works and what doesn't.
OpenGL is cross-platform because it's legacy. It is primarily of interest to outdated CAD software vendors, because that's who's driving the API specs, hence why it took so long for the fixed-function pipeline to finally be dropped. Modern CAD vendors, such as Autodesk, render through a Direct3D pipeline.
The Android issue is entirely caused by shitty drivers. Pretty much all mobile hardware after the original iPhone fully supports OpenGL ES 2.0 shaders which haven't changed since inception.
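For what it's worth, the usual defensive move on Android is to query limits and extensions at startup instead of assuming a common feature set. A rough sketch for ES 2.0 (the thresholds and the extension checked here are just examples, not anything from the Valve talk):

    #include <GLES2/gl2.h>
    #include <string.h>

    /* Returns nonzero if the device looks capable enough for our shaders. */
    int check_gles2_caps(void)
    {
        GLint max_frag_uniforms = 0, max_varyings = 0;
        glGetIntegerv(GL_MAX_FRAGMENT_UNIFORM_VECTORS, &max_frag_uniforms);
        glGetIntegerv(GL_MAX_VARYING_VECTORS, &max_varyings);

        const char *ext = (const char *)glGetString(GL_EXTENSIONS);
        int has_depth_tex = ext && strstr(ext, "GL_OES_depth_texture") != NULL;

        return max_frag_uniforms >= 64 && max_varyings >= 8 && has_depth_tex;
    }

It doesn't fix broken drivers, but it at least lets you fall back gracefully instead of rendering garbage.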
Being verbose and committee-driven need not condemn a thing; Ada, the programming language, is both, and it is leaps and bounds better than its nearest competitor (C++).
But programming OpenGL still feels like setting up a Rube Goldberg machine for its initial run.
Wouldn't providing a solid Linux version of Direct3D rely on cooperation from Microsoft?
I know that Wine implements some forms of compatibility, but it's not perfect.
Surely the risk would be that you could build DX11 for Linux but then DX12 moves all the goalposts again. Not to mention that there might be legal issues.
I wasn't aware that there was movement for RDP on Linux though.
> "This doesn't mean that remote rendering won't be possible with Wayland, it just means that you will have to put a remote rendering server on top of Wayland. One such server could be the X.org server, but other options include an RDP server, a VNC server or somebody could even invent their own new remote rendering model."
So it looks like it could be an option in Wayland, but not the only one.
Maybe not the only one, but an RDP remoting compositor was just checked into core and the Wayland devs are evangelizing moving to Wayland+RDP over X or VNC, so the rest of the community will likely follow suit.
Personal rant: Never mind that X11 does the same sorts of pixmap caching that RDP does to get its speed. Never mind that braindead toolkit developers render into offscreen local rectangles, and then transmit the entire rectangles as bitmaps into X, thus using none of this capability, which is why X is observed to be "slow" and "chatty". The conventional wisdom is that RDP is faster than X, so that's what we'll be using.
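To make the rant concrete: X has server-side pixmaps, so a client can upload an image once and blit it cheaply forever after, whereas the pattern being criticized re-sends the pixels on every update. A rough Xlib sketch (setup and error handling omitted; the helper names are mine):

    #include <X11/Xlib.h>

    /* The "chatty" pattern: render into client memory, then ship the whole
       rectangle over the wire on every update. */
    void push_bitmap_every_time(Display *dpy, Window win, GC gc, XImage *img)
    {
        XPutImage(dpy, win, gc, img, 0, 0, 0, 0, img->width, img->height);
    }

    /* The capability X already offers: upload once into a server-side Pixmap... */
    Pixmap cache_on_server(Display *dpy, Window win, GC gc, XImage *img,
                           unsigned int depth)
    {
        Pixmap pm = XCreatePixmap(dpy, win, img->width, img->height, depth);
        XPutImage(dpy, pm, gc, img, 0, 0, 0, 0, img->width, img->height);
        return pm;
    }

    /* ...after which every later update is a tiny XCopyArea request. */
    void blit_cached(Display *dpy, Window win, GC gc, Pixmap pm,
                     unsigned int w, unsigned int h)
    {
        XCopyArea(dpy, pm, win, gc, 0, 0, w, h, 0, 0);
    }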
Meanwhile, Direct3D is quite a bit better than OpenGL, on a number of fronts.
Hey, would you mind elaborating? I always wondered why RDP was miles faster than X when working remotely. Mind you, whatever the Teamviewer guys are doing is even more amazing.
From what I understand, RDP is fast because it can send vectors as well as just bitmaps. So if you want to display a button on the screen you can simply send the vectors for the 4 corners over the wire and say "this is a button, render it as such". RDP on Windows is implemented as a special video driver which can hook things like Win32 API calls to help with this process.
Compare that to something like VNC, which IIRC simply sends the screen data over as a raw bitmap and must rely on (potentially lossy) compression to get reasonable speed. This, of course, has the advantage that the client software needs to be much less intelligent to understand it, and it means much less deep OS integration is required.
This is why VNC clients and servers are available and can work easily between pretty much any platform.
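To illustrate the bandwidth gap being described, here's a hypothetical sketch; these struct layouts are made up for illustration and are not the actual RDP or VNC wire formats:

    #include <stdint.h>

    /* RDP-style "drawing order": describe what to draw. */
    struct draw_button_order {
        uint16_t x, y, width, height;   /* geometry of the button */
        uint16_t style;                 /* "render it as such": theme/state id */
        /* ~10 bytes, no matter how large the button appears on screen */
    };

    /* VNC-style raw update: ship the pixels. */
    struct raw_rect_update {
        uint16_t x, y, width, height;
        /* followed by width * height * 4 bytes of RGBA pixel data --
           a 200x50 button is ~40 KB before compression */
    };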
The GP claims that X11 is capable of doing the same optimisations that RDP can do but this is not utilised well because the windowing toolkits such as GTK render their output to some other area of memory and X only gets to see the bitmapped result.
Not sure how TeamViewer works, but I know some of these paid services use intermediaries to transmit data over faster internet links to reduce latency; of course this costs them money, so they have to charge you.
I strongly agree with your rant. The current idea that X is slow is caused by toolkit developers working around X instead of trying to use X correctly and contributing to the project where X is lacking.
If the toolkit developers continue to render to bitmaps, remoting on Linux will never be as fast as RDP is for Windows.
OGL 4 still has too much state machine diddling and extension wrangling. In order to be fixed, it would have to drop a lot of state machine calls and replace them with a comprehensive object model to represent textures, shaders, geometry buffers, etc.; and there would have to be a comprehensive profile of current GPU functionality in the core API with a much faster release cycle.
In other words, it would have to become a lot more like Direct3D.
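As a rough illustration of the difference: with classic OpenGL you bind an object and then edit it through global state, whereas an object-model API passes the handle explicitly to every call, the way D3D does. The EXT_direct_state_access extension already offers entry points in that style (sketch below; in real code the EXT entry point has to be loaded through the extension mechanism):

    #include <GL/gl.h>
    #include <GL/glext.h>

    /* Classic bind-to-edit: the texture is modified via the state machine. */
    void classic_setup(GLuint tex)
    {
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    }

    /* Object-style: the handle appears in the call itself, no bind needed. */
    void dsa_setup(GLuint tex)
    {
        glTextureParameteriEXT(tex, GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    }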
"Fixing" OpenGL to bring it into the current era of graphics development usually means making it look more like Direct3D anyway, so why not just go whole hog and adopt Direct3D?
There's a preliminary free implementation of Direct3D on top of Gallium by Luca Barbieri. Getting the community behind that would be a useful first step towards a native Direct3D rendering stack that could then be ported for use e.g. in iOS apps.
And today I learned about the existence of cgdb (a curses frontend for GDB, http://cgdb.github.com/ ). It seems very nice, and I agree with what is said in one slide of the Valve talk:
cgdb looks promising. I really, really liked Turbo Debugger in the '90s. No debugger has offered me the same experience since. It was fast, and the text-based windowed interface let you efficiently jump to assembly as well as to the source, view registers, inspect editable hex dumps of memory, view watchpoints, etc.
gdb is super-powerful but was a bit of a setback interface-wise, because everything has to be done on a plain command line. And the GUI-based wrappers around it don't appeal to me much (I don't know why; the ones I tried felt somewhat clunky).
Eclipse on Linux has great C programming support including interactive debugging using the CDT (C development toolkit) plugin. I used it to interactively debug operating system code running in a virtual machine on my Linux box recently and it worked great.
There's also gdbtui, which is built in (gdb -tui, or "layout next" within a gdb session) and not too bad for viewing the code and typing commands at the same time.
If you can crash X like that, it's obviously an X bug. Imagine if I could crash your HTTP server remotely.
Seriously, DDD is pretty stable. People don't like it because it doesn't use Gtk+ or Qt. And the people who use it even though it looks ugly aren't the kind of people who would talk about it; they just use it.
Which is funny, because I long for plain gdb on Windows. Not that gdb is so great, but on Windows, if you want something command-oriented and thus scriptable, you're stuck with WinDbg, which is not as smart as gdb and has much worse syntax.
It's not directly related to games, but I was very surprised to see their graph of operating system usage. In the last few months (which, admittedly, are not at all typical), their Linux usage increased by 1% every one to two months. At that rate, it would pass OS X usage by April. It would be really cool if these numbers made other companies, including non-game ones, take Linux development more seriously.
The real question is: can we expect a FOSS release of old (or new) Valve code any time soon? It seems natural given their modder-friendly nature and their shift towards GNU/Linux. Plus, it's about time someone else followed Carmack's example.
Their approach has generally been to release their tools and SDK so others can build with them. I don't think they have much interest in releasing the code, but they have always wanted (and benefited from) the community taking their tools and engine and building cool stuff.
I think they should have an interest in it. The toolchain they publish is notoriously out of date and suffers from poor documentation. Why not let the community work on it? There seems to be demand for it.
We're a pretty Windows-centric shop; I often see great effort put into doing OS X/Linux development via convoluted toolchains just to be able to do the brunt of the work on Windows in Visual Studio.
A similar talk was given at GTC (the GPU Technology Conference): S3418, "Porting Source to Linux: Valve's Lessons Learned". The video is available to attendees right now; videos should be available to the general public 30 days after the conference ended. This is the website: http://www.gputechconf.com/page/home.html
Perhaps there is a lot of software written for XP, many computers can't run Vista or 7, and there is little incentive to upgrade anyway? Those would be my guesses.
That's a very interesting result.