I've been reading some of the C++ hate that's been on HN of late. I've been successfully convinced (not that it was hard) that C++ is a really terrible language. But, if that's the case, then what else are commercial game developers to use? Is pure C sufficient? What about Haskell, or is that too slow?
"C++ sucks" because it is widely used, its socio-techical landscape is well explored, including the "sucks" segment. In other words, most other languages suck as much as C++, they're just not that widely used, or have not been around as much.
If you're writing your own game project you can use a sane, minimalistic and syntactically pleasing subset of C++ and gain all the pedal-to-the-metal advantages. Don't discard C++ because other people say it is terrible. Other people are usually wrong. Make up your own mind.
For example, the Keyspace code is organized into classes; we use inheritance and polymorphism; we use templates in a few select cases; but we completely avoid exceptions, operator overloading (actually operators) and the STL altogether (we implement our own containers). It doesn't matter what others think about this specific subset - it works for us, we work well in this formalism, make few mistakes and produce good code that runs fast.
Very good points. Since C++ is a good match for the way people are taught to think about programming, it's obvious that anything good programmers can't easily and consistently get right is the fault of the language. With Haskell, you can't rule out unfamiliarity as the cause of any difficulties until you're already heavily invested in it. Most people really have no basis for challenging the statement that Haskell is perfectly suited for game programming in the hands of a competent programmer.
Pragmatically, for the OP, I think it's safe to say that C++ and Haskell are both very challenging languages that take a long time to master. The advantage of C++ is that there's ample evidence of its suitability for various domains. The advantage of Haskell is that you'll learn a different way of programming. The decisive factor may be whether it's your own money on the line.
I second that. Having spent almost a decade writing commercial C++ code, I recognize everything horrific about the language as documented in the FQA.
The thing is, it doesn't matter. The fact that it's the leading language in your domain is much more important than its abstract quality. Availability of libraries, co-workers who understand it, examples of how to solve common domain problems in the language, these all outweigh the pitfalls and ugliness of the syntax.
Don't be seduced by the language wars. It's like spending days optimising a function that only takes 10% of your run time. Unless you're interested in the learning exercise (which I'm not knocking if you have time) pick one of the dominant languages in your domain, and spend your time worrying about everything else.
Why is C++ suddenly "fine" when you have no other options? There could easily be a language with all the close-to-the-metal advantages and none of the baroque, redundant complexity of C++; why must we be content to use the same language as everyone else in the industry, rather than scratching this itch and building a newer, more productive one?
"Newer" isn't necessarily better or more productive. Especially given the monumental task of having to write a game without the benefit of any of the libraries developed over the past few decades.
Perhaps the reason there are no other viable options is because C++ is pretty well suited to the task?
Why would you have to lose the libraries? .so files are .so files; it doesn't matter what language they were written in, as long as they expose an API that can be cleanly wrapped by your own language.
... and a standard ABI, otherwise you will never be able to link. C++ did not have standard name-mangling conventions until recently, which forced you to use the very same compiler to compile both your libraries and your own code. This is still unfortunately very much a concern with all these legacy systems out there. Your only solution is to expose your API as a C library or distribute your source code.
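The workaround mentioned above, exposing the API as a C library, might be sketched like this (the `engine_*` names are made up for illustration): the C++ type stays hidden behind an opaque handle, and `extern "C"` suppresses name mangling so any compiler, or any language with a C FFI, can link against it.

```cpp
// Hypothetical sketch: wrapping a C++ class behind a plain C API so
// callers don't have to share your compiler's name-mangling scheme.
namespace {
struct Engine {
    int frames;
    Engine() : frames(0) {}
    void tick() { ++frames; }
};
}

extern "C" {
    // Opaque handle: callers never see the C++ type.
    typedef void* engine_t;

    engine_t engine_create()            { return new Engine(); }
    void     engine_tick(engine_t e)    { static_cast<Engine*>(e)->tick(); }
    int      engine_frames(engine_t e)  { return static_cast<Engine*>(e)->frames; }
    void     engine_destroy(engine_t e) { delete static_cast<Engine*>(e); }
}
```

The cost is that everything crossing the boundary has to be expressible in C terms (handles, plain functions, POD data), which is exactly why it feels like a workaround rather than a solution.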
> Why is C++ suddenly "fine" when you have no other options? There could easily be a language with all the close-to-the-metal advantages and none of the baroque, redundant complexity of C++
A lot of that "baroque, redundant complexity" comes from standards committee's refusal to make things easier on programmers at the cost of any smidgen of performance.
And for certain tasks, like the highly competitive video game industry, uncompromising performance is exactly what the doctor ordered. You can always hire smarter programmers if you have to. On the other hand, it's hard to sell a game that runs or looks like crap at 60 fps.
The complexity is "baroque" because, in modern times, we've invented ways around it. Type inference would kill half of the pain involved in C++ without sacrificing performance. Cleaning up the syntax enough that definitions could be found context-free would eliminate the need for declarations, and in most cases .h/hpp files.
The complexity is redundant because they included features of C that they should have deprecated in favor of the C++ equivalents. String literals should be (const) basic_strings, and there should be no way to access stdio's horribly-easy-to-buffer-overflow machinations now that there's iostream. For that matter, boost should be adopted at a much more formal level: the default syntax for pointer variable allocation should allocate smart pointers, and one should have to go out of their way to get a dumb one. None of these things sacrifices performance; they're just artefacts of the fact that C++ wants to pretend it's still 1970 and that it's being used for systems programming in a mélange with C.
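The "smart by default" idea above can be sketched with the reference-counted pointers the comment alludes to (boost::shared_ptr at the time, now std::shared_ptr in C++11; the `Texture` type here is made up): ownership is explicit and deallocation is automatic, with no syntax heavier than a raw pointer.

```cpp
// Sketch of smart-pointer-by-default style: the object is freed when
// the last owner goes away, with no manual delete anywhere.
#include <memory>

struct Texture { int id; };

std::shared_ptr<Texture> load_texture(int id) {
    // Reference-counted allocation; copies of the pointer share ownership.
    return std::make_shared<Texture>(Texture{id});
}
```

Whether the count should be atomic (and thus cost something) is exactly the kind of trade-off the committee tends to resolve in favor of performance, which is why the dumb pointer stayed the default.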
Type inference is needed in C++ (and is already part of the next standard; you can enable the auto keyword with GNU GCC already). But it's needed for entirely different purposes than what scripting languages use it for: it's a feature for writing better templates.
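A minimal sketch of that use of `auto` (C++0x/C++11 syntax; the function name is made up): inside a template the concrete iterator type depends on the template parameter, so letting the compiler deduce it at compile time saves spelling out `typename Map::const_iterator`.

```cpp
// Sketch: compile-time type deduction inside a template, where the
// deduced type varies with the template parameter.
#include <map>
#include <string>

template <typename Map>
int sum_values(const Map& m) {
    int total = 0;
    // auto deduces Map::const_iterator at compile time; no run-time cost.
    for (auto it = m.begin(); it != m.end(); ++it)
        total += it->second;
    return total;
}
```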
It's confusing because there are two types of type-inference (run-time and compile-time). You do lose performance with run-time type-inference, which is what most interpreted languages use. C++ already has compile-time type-inference in the form of templates. They are...complex. But they're also faster than anything similar. There's a reason that C++'s sort is usually faster than C's qsort for complex types.
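The sort-versus-qsort point can be sketched like this (function and functor names are made up): `std::sort` takes the comparator as a template parameter, so the compiler can inline it at the call site, while `qsort` must call through a function pointer for every single comparison.

```cpp
// Sketch: a comparator passed as a template parameter, which the
// compiler can inline, unlike qsort's function-pointer callback.
#include <algorithm>
#include <string>
#include <vector>

struct ByLength {
    bool operator()(const std::string& a, const std::string& b) const {
        return a.size() < b.size();   // inlined into the sort loop
    }
};

std::vector<std::string> sort_by_length(std::vector<std::string> v) {
    std::sort(v.begin(), v.end(), ByLength());
    return v;
}
```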
C++ doesn't have run-time type-inference and probably won't. The performance loss is too big to build it into the language everywhere, and unless you do that it's pretty pointless. You can tack it on with boost::any or RTTI, for example, but you'll generally find that it's not the correct decision from an engineering standpoint. Scripting languages have run-time type-inference built in everywhere -- but they do it at the cost of performance.
Now, as far as the problems of C++'s dependence on C...well, you're right. It's a feature of C++'s history. If someone could come along and invest the millions of man hours required in making a performance-critical high-level language without the C-baggage (and then market it!), it would be a great thing. But that's a pipe dream. The best we're going to do on that front is Java.
If someone could come along and invest the millions of man hours required in making a performance-critical high-level language without the C-baggage
As long as it also had C syntax, and something close to the C memory model, and was recognizably object oriented and/or functional. And came with a lot of libraries. And 3D graphics engines. And physics engines.
Unless some independent game developer does something in a new language that other developers can't easily duplicate with their current ecosystem (not that it couldn't be done in C++), and it catches on, there's no incentive to do anything other than continue to evolve things in the most backwards compatible way. There are always more C++ programmers coming off the assembly lines who want nothing more than to work on games.
According to the very Wikipedia article you cited, there is no such thing as "runtime type inference". What you described as such is a way of implementing dynamic type checking: checking at runtime that the types of the various parts of an expression actually match. You should actually read your sources.
So, "type checking" isn't always performed at compile time.
Type checking is a little different, and refers to some safety measures done at compile time that theoretically guard against certain types of errors. Scripting languages don't do them, and don't miss out on much programming correctness as far as I can tell.
Full run time type inference in C++ is about having a container of something -- let's say void pointers that just point to memory locations, and figuring out what sort of thing they're pointing at. Or similarly, you have a base-class pointer and want to figure out which derived version of the object is being pointed at.
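Whatever one calls it, the mechanism being described is RTTI via dynamic_cast; a minimal sketch with made-up types:

```cpp
// Sketch: recovering the dynamic type behind a base-class pointer
// at run time (RTTI / dynamic_cast).
struct Entity { virtual ~Entity() {} };
struct Player  : Entity { int score = 0; };
struct Monster : Entity { int hp = 10; };

// Returns the monster's hp, or -1 if the entity isn't a Monster.
int monster_hp(Entity* e) {
    if (Monster* m = dynamic_cast<Monster*>(e))  // run-time type check
        return m->hp;
    return -1;
}
```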
I replied to you above, but I'll reply to your specific example here. Your example is really not how PL people talk about type inference. However, you're right that you wouldn't call this example type-checking either... it would probably be a fuzzier term such as "reflection" or "introspection". But certainly not type inference. Do you understand what the wikipedia article you linked is about? Because it's NOT about introspecting the type of a contained object within a container. It's about statically making proofs of the type of a variable based on its usage.
A quick read of this page and its top link strongly suggest that runtime type inference is an optimization designed to speed up the interpretation of dynamically typed programs. It sounds very useful for JIT compilation (to make appropriate code specializations). It's also likely too complex to be implemented in simple interpreters (like Lua's).
Note that this term isn't very widely used: we are already in second position at your link.
So, unlike ML-style compile-time type inference, runtime type inference is implementation specific. Your earlier statement "Scripting languages have run-time type-inference built in everywhere" is actually ill-typed. However, if you had said "checking" instead of "inference", your sentence would have been correct.
Hence my "Err, by "runtime type inference", you actually mean runtime type checking, right?"
Type checking is what you say when the compiler makes sure that you've used all your types correctly, and spits out an error when you don't. It's there to prevent you from trying to assign an int to a string, for example.
"Runtime type checking" is the same thing at runtime. Errors and exceptions get thrown, your program stops. So no, that's not what we're talking about.
There is such a thing, but it has a different meaning than how I believe you used it.
You said "scripting languages have run-time type inference built in everywhere". I don't think that's true for most scripting languages. Inference refers to determining facts which were not explicitly provided, and usually implies a statically typed programming language where inference is done at compilation time. "Runtime type inference" might be used for, e.g., Python's Psyco project, where the just-in-time compiler infers properties about a particular variable (such that it's always an integer) and can therefore compile out boxing, unboxing, runtime type checks and so on.
Speaking of runtime type-checking, I inferred that that's what you meant when you said "runtime type inference". Type-checking IS something that happens at runtime almost everywhere in dynamic languages. It is also more closely associated with RTTI -- a dynamic_cast in C++ would not be termed type inference (the previous and new types are known), but a type check does occur at runtime.
That's kind of the point, isn't it? I use the languages that I am most familiar with, because I am far more effective with them than I am with whatever this year's new awesome language is.
If I had been doing Ruby from the beginning, then I should continue doing Ruby, and not use C++ just because someone else says so. Likewise, if I've been doing C++ for as long, it doesn't make sense for me to switch to Ruby just because someone else doesn't like C++.
This stuff is so much nonsense. I think I will start using programming language evangelism as a reliable marker of a less experienced programmer.
Unwillingness or inability to shift between development tools where appropriate is also something I've noticed characterises less experienced programmers (often evangelism can be driven by this).
Using your example languages, I can think of situations where using C++ rather than Ruby would be idiotic, and likewise situations where the reverse is true.
Because, from what I've seen, a lot of people can be very excellent programmers in one language, but be unable to learn to program in another.
Actually, I don't think it's really "less experienced programmers". Several of my computer science peers (I'm 23) -- inexperienced as professionals, but very smart -- will try the same problem in four or five languages, just to learn the language, and have no fears about jumping into a new language to start a new project. On the other hand, I know some excellent older programmers (late 40s) who know C++ super well, are willing to apply jaw-droppingly complex template metaprogramming, but who only reluctantly use Python and won't touch Ruby or Scala. It's probably more of a cultural thing.
I don't think it's cultural, or even generational; it's just the byproduct of working in a domain for a long time.
Inexperience doesn't mean a person isn't smart, or that they aren't good or that they aren't clever. All it means is that they haven't developed a significant body of familiarity with something.
So, your peers are learning several languages -- maybe superficially, maybe not -- but they aren't yet developing the familiarity with one or a couple of languages that allows them to feel comfortable mentally solving any given problem in the language they're most familiar with. Thus, you have "Ruby" problems, "Python" problems, "C++" problems, and so on.
Those older'n-dirt programmers on the other hand have already solved a huge number of problems in C++. That doesn't mean they're better programmers, but it does mean they're more experienced. So, if you ask them if they could write X or Y or Z, they'll say sure -- and in their head, they're probably already gathering the familiar pieces that they would need to solve it.
Or, to put it another way: if you went to pg and asked him to write software to run a forum like this, he would probably choose to do it in Lisp, and that would be a good choice for him. If you asked me, I would do it in PHP. If you asked DHH, he would do it in Ruby (on Rails).
None of those are wrong.
The only wrong choices would be me coding in Lisp.
If you let the language trolls on proggit and HN convince you that you must avoid C++ -- particularly whatever 'subset' of C++ scratches your itch with minimal complexity -- you've made a classic pointy haired boss mistake: letting the whims of the crowd make a technical decision for your project.
Don't buy into the nonsense. C++ is a fine language. People have been using it for decades to do real work, and they probably still will be in another two decades, long after Ruby and Haskell and Blub have been discarded as 'archaic' by the next generation of 20-year-old language snobs.
Personally, I would like to see a commercial game developer use FORTH.
from: http://www.economicexpert.com/a/Forth:programming:language.h...
"Forth became very popular in the 1980s because it was well suited to the small microcomputers of that time: very efficient in its use of memory and easily implemented on a new machine. At least one home computer, the British Jupiter ACE, had Forth in its ROM-resident OS. The language is still used in many small computerized devices (called embedded systems) today for three main reasons: efficient memory use, shortened development time, and fast execution speed.
Forth is also infamous as being one of the first and simplest extensible languages. That is, programmers can easily extend the language with new commands appropriate to the primary programming problem in the particular application area. Unfortunately, extensibility also helps poor programmers to write incomprehensible code. The language never achieved wide commercial use, perhaps because it acquired a reputation as a "write-only" language after several companies had product failures caused when a crucial programmer left. In addition, the ease of implementing Forth on a given processor meant that the barrier to self-development of a Forth system was quite low, so that commercial suppliers were, in effect, competing head-to-head with hobbyists, many of whom supported the idea that software should be free."
You're right, C++ pretty much has no serious alternatives for games. And not just games, anything that involves computer graphics, physical simulation, computer vision or image processing.
Python is excellent if some good soul has already written a library that does exactly what you need (in C or C++, obviously). The combination of numpy+PyOpenGL+PyCUDA is great for certain kinds of research projects (I have written a volume renderer in it) but probably far from commercial usability.
Haskell has great potential, but the community is too small and too academic to produce the necessary libraries and tools (which is a lot of work with minimal scientific content). The key problem is the absence of an industrial-strength array facility - there are many kinds of arrays in Haskell, all somewhat clunky and incomplete.
D might be a contender, but last time I looked the tools seemed very basic and it did not even support 64-bit systems.
I absolutely agree!
I hate C++ too, but as you said, there is no real alternative to it. Mixing C and Python is a good idea, but real-world applications in my area (computer vision) use C++ widely...
yay! Python + PyOpenCL + ... is on my shortlist for the next project.
D is a dream to program in. The templates, especially, are so much nicer than C++ that I hate coding in C++ for a few days after doing D. OTOH, it's the same thing wrt runtime. Why spend a few days getting other people's crappy wrapper code working, FOR EACH LIBRARY, when you already know how to use, debug, and optimize those libraries for C/C++. It wastes a lot of time!
His point about FP appears to get no traction with the video game developers around me.
I heard some iPhone app developers are trying to use Haskell to create games, though.
They are. Ryan Trinkle (one of the developers behind this) gave a talk about why they are using Haskell (echoing Tim Sweeney's talk, but adding his own perspective) at a meeting of BostonHaskell. Among other things, they've made a custom iPhone cross-compiler port of GHC that is quite impressive (in the waltzing bear sense).
Since their basic approach is applicable to other platforms, it will be interesting to see if GHC ends up with clean, extensible support for mobile cross-compilation in the future.
I have worked on major console games and written games in both C and C++. I also maintain an open source raytracer which I've implemented in C, C++, D, and python. So yes, I speak from experience.
EVE Online is a great example of a game for which almost all game logic is in Python (Stackless). The "engine" is in C++, I think (not sure on that one). So I would adopt their model but use C instead of C++.
I saw a game that used Python (I think, maybe it was Ruby) and then dropped down to C for the high-performance bits and it was still slow as shit because of the Python code.
Lua is often used as an embedded scripting language within level editors, for example to keep track of the progress made on a quest or to trigger an event when the character reaches a certain point. Lua isn't used to code the game engine itself (in the vast majority of cases; there may be some obscure examples of game engines written in Lua).
Right - The stuff that needs high performance or low-level access (e.g. graphics code) is written in C or C++, but most of the "soft layer" is scripted in Lua.
"LÖVE is a cross-platform, 2D game engine. The latest version of the engine is 0.5.0 released on September 9th, 2008.[1] It uses the SDL library, OpenGL hardware acceleration and the Lua programming language to program games.[2] It is licensed under the Zlib license. "
So we can conclude then that games have many different facets to them and they can be loosely coupled enough that you can implement them using different technologies. The interface could be done in Lua, since it doesn't require intense computations when compared to graphics and physics engines.
Lua is pretty popular because it is a simple scripting language: the interpreter is small, written in portable C, and easily embeddable, and you can call C functions from it or precompile scripts to bytecode. All of this makes for pretty rapid development without sacrificing speed for the things it is suited to.
I'm not sure how ready it is now, but Haskell's nested data parallelism might be worth looking into as a forward strategy for things like physics and in-CPU graphics engines, since it should make very effective use of many-core processors and is expected to later (transparently) gain the ability to distribute work onto the GPU. This is if you want to do something more fancy than OpenGL, such as ray-trace or use splines.
Haskell's extreme facility with small light threads and its implicit parallel "strategies" could also make updating a rule based game board an almost mathematical, rather than detail-grovelling exercise.
You will not be able to get monkeys to program in Haskell. Whether this is a problem depends on your plans.
Well, for one, it's not particularly fast, but that's not a concern for all games (it's not slow either, but it's quite hard to optimize). The real deal is that it's a completely functional language with not much support, as far as games go. You won't find any graphics libraries, so you'll have to make them from scratch. Unless you have been programming in functional languages for a long time, this will take an absurd amount of time, and you probably won't be able to do it at all.
Haskell is great for math-oriented programs, but I wouldn't recommend it for much else, in my opinion.
http://hackage.haskell.org/ is supposed to be a good source for libraries, if you are interested, but I can't get it to load.
As a professional Haskell programmer, I wouldn't say Haskell is harder to optimize than C++, I'd say it is different to optimize. Good Haskell programmers start by getting the high-level concepts and algorithms right, rather than starting with (relatively) low-level, performance-focused code. And when they optimize, the things they're optimizing for are different: space leaks, excessive (and too little) laziness, appropriate data-structure sharing (which can be tricky when using laziness to recursively compute a data structure) and so on.
After that, if necessary, they can look at lower-level details (escaping to a high-performance library in another language, skipping things like array bounds check, other low-level tricks, etc.). However, one of the powerful advantages of Haskell is that this sort of low-level, error-prone, ugly code can be isolated behind clean, predictable, functional interfaces. A great example of this is Haskell's ByteString library, where the low-level details are complex enough that I don't necessarily understand all of them, but the external interface is one even a beginning Haskell programmer can effectively and efficiently use.
I've used Haskell's OpenGL libraries some, and definitely recommend them over C++. OpenGL's callbacks feel much more natural in a functional language and the monadic do notation works very well at managing nested matrix transformations. Also, having an interpretive shell to play around with the types of functions is very useful.
And if you use functional languages you can write all that math yourself - if you use C++ you can use all the C libs that have been debugged and tested for 30 years.
Frag was made in 2005 and it looks worse than Quake 3, which was made 6 years earlier (admittedly by a big studio instead of just one person, but the point still stands). It is possible to make games in Haskell, but it's closer to a theoretical exercise than it is to an approach to professional game-making.
admittedly by a big studio instead of just one person, but the point still stands
The point doesn't really stand, no. A big studio making a game over a long time period will obviously do a much better job than an undergraduate writing his thesis (yes, Frag was an undergrad thesis project).
To be fair, I think there are legitimate difficulties making games in Haskell, primarily related to the predictability of performance. GHC is a complicated and rather ingenious compiler incorporating many optimizations. While this is useful, it also means that performance (especially the responsiveness of the GC) can be unpredictable. Predictability of performance is quite important for games. In fact, this could be a good reason to try writing games in Ocaml.
The more probable reason is simply that the set of people working on big-budget games with the expertise to do in Haskell what they've learned to do in C++ is small.
Haskell, libSDL, and OS X are a match made in hell. I recently tried this, and all sorts of dylib problems on my system led me to believe it's just not practical.
Are there other 3D/multimedia frameworks that play better w/Haskell? Or is it just raw OpenGL?
Most of the performance of a game stems from finding a sweet spot combination of data structures and data relationships so that data is processed at real-time rates, is accessible everywhere, and can be immediately applicable to the algorithms you plan to use.
If you design relationships-first, the way you would design a database, you can figure out a data layout that doesn't involve any high-level semantics to speak of, just classic data structures from CS: lists, arrays, graphs and trees to order things, records and hashes to name them. Go through each type of value and cross-index it across all the structures it needs to be accessed by.
Then write intersectable queries into the structures to express a complex gameplay question like "Find all the enemy actors that are near mission objective X" as the composable "Find positions between boundaries A,B,C,D, that belong to actors of the enemy type, where A,B,C,D are some distance relative to the position of the objective entity with the name of X." Since you aren't writing a generic all-purpose database, this isn't a hard problem. It's just time-consuming to nail down the data model that captures all of it.
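A minimal sketch of that kind of intersectable query, with made-up types and names (a real engine would index by spatial structure rather than scan, but the composition of predicates is the same):

```cpp
// Sketch of "find all enemy actors near objective X" as an
// intersection of two predicates over a flat actor table:
// a team filter and a squared-distance bound.
#include <string>
#include <vector>

struct Actor     { std::string team; float x, y; };
struct Objective { std::string name; float x, y; };

std::vector<const Actor*> enemies_near(const std::vector<Actor>& actors,
                                       const Objective& obj, float radius) {
    std::vector<const Actor*> out;
    float r2 = radius * radius;
    for (size_t i = 0; i < actors.size(); ++i) {
        const Actor& a = actors[i];
        float dx = a.x - obj.x, dy = a.y - obj.y;
        if (a.team == "enemy" && dx * dx + dy * dy <= r2)
            out.push_back(&a);
    }
    return out;
}
```

Note that the results are just pointers into the actor table, which is the "data mostly ends up being pointers" observation below.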
I think you will agree that a fancy language isn't necessary to implement such an approach to game programming. Done that way, data mostly ends up being pointers.
If your game is processing-light (which with today's desktop hardware primarily means avoiding 3D computation), you don't need a fast language: I'm most familiar with doing game code in Python and Flash, and they can do just fine on modestly sized datasets too.
The D programming language [http://www.digitalmars.com/d/] was designed primarily to be a better C++. It has C-style syntax, a vaguely similar OO model to Java, and compiles directly to machine code, with performance as priority (it even allows you to drop to assembly if desired). The lack of C backwards compatibility allowed them to clean up a lot of the uglier areas of C++.
I've been contemplating this, too. C++ was my main language for a long time (~1999-2007) and about 30% of my consulting work is still in C++ (game development). I've found that while C++ code can be very clean and concise, it usually requires a lot of behind the scenes scaffolding, and it's very easy to get wrong even if you know the damn thing inside out. Programming in more expressive languages has certainly improved the quality of my C++ code, but there's only so much you can do.
My use of other languages has therefore gradually increased - I currently use Clojure where I can, but I'm pretty flexible (PHP being the only language on my blacklist so far).
For projects on which I'm a lone programmer, I'm free to use whatever language I want in theory; sometimes the customer has a preference, usually not, so the main restrictions are technical and legal:
I'd like to use Clojure a lot more, but for game dev, it's problematic. Performance is the least of my worries, as I can drop to Java or even JNI for the rough bits. It's also a fantastic choice for server-side programming on multiplayer games. However, the JVM isn't allowed on the iPhone, say, and unrealistic or disallowed on consoles. (the Nintendo DS has 4MB of RAM, for example; licensing/porting is an issue) For pure PC/Mac game dev it's probably fine, although if by any chance you want to port later, good luck.
There was a submission on HN a few months back about using Gambit Scheme for iPhone and Mac programming. Googling easily retrieves some useful information on this, but the general idea is that it compiles to C, and you can actually write Scheme functions with inline C/C++/Objective-C code, so you're using a 2-stage compilation process and retain full control while using a very expressive high-level language. I'm going to try this with my next iPhone game, as I'm not all that impressed with Objective-C so far. It looks extremely promising.
There are of course other languages which have compilers that generate C; I believe there are some Common Lisp implementations, although especially for game dev I'm not sure if there's any advantage in using CL over Scheme.
If using a full-blown Lisp feels too high level, there are some Lispy efforts that are essentially very fancy C preprocessors, e.g. BitC[1] or SC[2]. The latter is literally C with an S-Expression syntax; I'd be interested how compactly all of C++'s features could be expressed in such a language. Not that you'd want the arcane contortions of C++ templates when you have real macros.
Moving away from C a bit more, there are of course Forth and other languages in a similar vein.
As I've mentioned, I'll be going down the Gambit Scheme route in future projects, as it has a very nice interface to C/C++ (this is critical when dealing with game development oriented libraries - I'd probably make this the top priority in choosing a language for this purpose), it's very stable and mature, and it's a Lisp. I'll try and write a postmortem of some sort when the time comes.
My main worry is the behaviour of GC in an environment with hard memory limits (no virtual memory or paging), but if the allocator and GC are well written it should be less risky than explicit memory management.
Same here, except that after using C and C++ for over a decade I find switching to another language quite trying. The habit of constantly thinking what machine code is generated from the higher language constructs is really hard to ignore.
Moreover, I actually like C. But it is inconvenient. On the other hand C++ is convenient, but I don't like it. So I started playing with developing a dialect of C that adds support for parametrized code, closures, this pointer and type inference. All the stuff that I use or would like to use in C++, but in a syntactically cleaner way and without all the blubber that C++ accumulated through a design by committee. A hobby project, nothing too fancy :-)
I had the problem of worrying about low-level nuts and bolts for a while too; I guess it must have been the jarring difference in expressiveness between C++ and Lisp that made me stop worrying in the end. I don't think you can realistically wean yourself from C++'s mind pollution gradually - you have to go cold turkey and go with a high level language, dynamic typing, garbage collection, not especially object oriented, etc. Do a couple of smaller projects with a language like that and you'll probably find that you don't really want that "better C++" of yours after all.
A Scheme compiling down to C for game development? That sounds incredibly fun! Gambit even looks mature. (I did some Common Lisp but have no experience with Scheme... wtf, no Scheme books on my Safari?!)
Damn you, pusher! I don't have time for hobbies! Uhm, I mean -- thanks for a really informative post. :-)
C++ isn't bad, per se. It's an incredibly powerful and rich language, verging on being a meta-language that allows you to create your own language. However, that flexibility is a double-edged sword. Just like Perl or JavaScript, any powerful, flexible language can, and will, be abused.
The main downfall of C++ is probably that it's too flexible in every direction. It takes an equivalent amount of effort to do the "right" thing as to do the "wrong" thing. In a sense, the language makes no value judgments regarding design. That's helpful in some ways, because it doesn't lock you into the straitjacket that Java does, for example, but it doesn't encourage users to fall into a "pit of success" either (which a truly good language should, even if it is ultimately flexible enough to let you do those "wrong" things).
Not _quite_ to the point of absurdity. I kinda like it. Admittedly, I dislike it when I have to instantiate four classes to read the contents of a file. But I have also used the same flexibility to great advantage, and been burned in other languages when the standard library wasn't flexible enough.
C++ is definitely not a terrible language. It was one of the first OO programming languages to be widely adopted (though today there are far better ways of doing OO in other programming languages). The problem with higher-level languages is their performance. C++ sits somewhere between C and languages like Java and Python. I would say it's probably a good place to program games, which are both memory and CPU intensive.
I used C++ when I worked on game AI and a few Nintendo U64 games, also for some virtual reality stuff for Disney and SAIC. Right choice for the games, in retrospect probably not for VR.
C++ is fine when you need high performance and can live with more expensive development.
If you are interested in making games for consoles you are pretty much restricted to what the console maker's software development kit supports, which is usually C/C++. It's usually impractical to reverse engineer the hardware and supply your own system libraries in another language.
Computer games are another story, but the same facts are generally true: you should use the language that provides the libraries where most of the work is already done for you. The largest number of game development kits target C++, so that's why C++ is popular for game development.
For what it's worth, I have done a few games using the OGRE rendering engine and plugged in the game elements (physics, AI) with separate libraries, and it has worked fairly well.
"If you are interested in making games for consoles you are pretty much restricted to what the console maker's software development kit supports, which is usually C/C++. "
Post-acquisition (by Sony), Naughty Dog seems to have shifted to C++, but then moved partially back to Lisp (a C++/Scheme combo, as far as I can make out) for their "Uncharted" games for the PS3. (http://www.naughtydog.com/docs/Naughty-Dog-GDC08-Adventures-... Warning: PDF)
It is C++. Note that this perfectly reasonable individual choice tends to ensnare us all in a local maximum.
One route out of this local maximum could be CPUs optimized for garbage collection and a high rate of function calls. I doubt we will see that anytime soon, though.
META: Why the downmod? I assume it wasn't out of mere disagreement, so I must have made a specific mistake, but I can't see it (I answered the question, then stated an opinion which I think was on topic).
I'm with you. Whatever you use, don't choose C++. Sure, everyone on your programming team knows C++ already. Yeah, your libraries are all written in it. And successful games have used it again and again. But if it's not one of the "cool" languages, you shouldn't be using it.
Modern game developers should code in Ruby/Lisp. By the time you've finished coding all of your graphics libraries from scratch, Moore's law will have made computers fast enough to run them.
Because the calling overhead is too fucking slow. Never mind what happens if the library allocates memory and you're expected to free it in another module. This is bad practice, but it happens.
If the library uses weird pointer tricks like "since every data item is 4-byte aligned, we can use the low two bits of the pointer for flags", you'll break the garbage collector. Again, not great practice, but it happens.
I love dynamic languages. Every so often I try to write a game in Lisp, but productivity is about Language + Runtime. You can only justify fixing broken GL wrappers for so long, before you realize you've wasted time that you didn't have to waste.
My current fav mix is C++ without aggressively using all the bells and whistles, so it's easy to wrap for dyn languages, and Python.
I think that best thing to do is look at 1) the libraries you'll be using, 2) what your team is proficient at, 3) what's worked for other groups, 4) optimization potential.
If you think that Ruby/Lisp is your answer to the above points, go for it. But I have the feeling that it won't be for many people.
I didn't say I wanted to use Ruby or Lisp--I was simply denying the parent's point that one would have to "write the graphics libraries from scratch."
Besides, isn't the advice for optimizing every other kind of project:
1. Start in a high-level language
2. Port any bits that profile as slow to a low-level language, and then interface them into your HLL code
Why, all of a sudden, when you're coding a game, is it a better idea to start in the low-level language? Because your team "knows it"? By that argument, all games would still be developed in assembler, because the most senior members of the team would have more experience with that than with any new-fangled language like C++.
That advice never works. The problem is data structures. If your high-level prototype uses complex data structures of e.g. Python, you will not be able to access them from C++. You will need to rewrite the whole thing anyway.
The only way you might get away with this approach is if your application is essentially a number of separate scripts that communicate through files. Then you can rewrite the key scripts in C++. But that is never the case for games.
While rather similar to Python, Lua has a simple stack-based C API that makes this far less of a problem. OTOH, Lua was designed for embedded use from the start, and its stand-alone interpreter comes second.
But I assume they did not write the game in Python and then optimize parts in C++. Quite the opposite: they wrote it in C++ and then found that some part (e.g. the GUI) was growing in complexity while not being crucial for performance, at which point it is a good decision to use a scripting language for it.
That is a pretty common approach, but what was suggested above (writing first in HLL and then rewriting parts in C++) is, to my knowledge, never done in practice.
"There is not a simple set of "hotspots" to optimize!" and also "Will gladly sacrifice 10% of our performance for 10% higher productivity [...] We never use assembly language."
Last you heard must have been a while ago. Even the engines aren't being written in assembly. Hand-writing x86 yields slower, less optimized code than letting gcc -O3 (etc.) compile your C to x86. It simply gets too complex to manage in assembly.
Now, although C++ is a perfectly fine "HLL", I'll grant you that other languages are easier to program in. But patchwork programming can be bad advice some, or maybe most, of the time.
Besides the issues of programming in a language different from what your libraries were written in (and more importantly, designed for), there are serious issues with the patchwork-performance code philosophy. The biggest is that the "routines" that you need to make fast often operate on data. You'll have a container of some sort in your "slow" language, and you'll want to do something with that container in the "fast" language. Do you expose the container's memory to the fast language (hard!)? Do you copy (slow!)? Do you use the more efficient containers in your fast language and expose them to the slow language (insane!)? When exposing complex objects between languages, you'll have similar considerations to make.
You can't look at C++/Python or C++/Ruby programming and assume that you'll simply have to make the same sort of engineering considerations as you would for C/Assembly. It's an entirely different kettle of worms.
Assembly is concerned with telling a computer directly how to handle memory addresses. C, for all intents and purposes, is just a simple abstraction for doing that. On the other hand, higher level languages present a lot of abstractions for doing a lot of things. And mixing those abstractions leads to serious complexity.