John Carmack: Script Interpreters Considered Harmful (codingthewheel.com)
131 points by LiveTheDream on Aug 13, 2011 | 87 comments



Gaming "toy user interfaces" like the one pictured lower in the article generally take so much code because, very often, art directed animation which drives those UIs is often actually assembled entirely in script. Comparing it to an early Unix kernel I guess paints a nice visual image, but is probably not a fair comparison.

In-game, in-world "toy UI" like that is intended to look like the sort of sci-fi UI you see in movies, but also be interactive. The stuff in movies is made by a motion graphics designer with a copy of Adobe After Effects. Replicating that movie-like look, with all the animations and visual effects you see on the big screen, but ALSO making it work as a fully user-aware, state-aware computer interface, isn't cheap, either from an artist man-hours standpoint or from a lines-of-code standpoint.

I imagine its expense could be reduced if a lot of the stuff up on screen were boiled down to some kind of binary data - a lot of work is put into optimizing the performance of engine-native animation files and effects played on the characters and environments of a video game world - but usually UI (including fictional in-world "toy UI") isn't considered part of the main toolchain or process, so it ends up either being highly special-cased by hand, or very highly interpreted (sometimes running interpreted code through middleware which reinterprets the code again), requiring quite a lot of bloat to look and behave in a way that is presentable and on par with the rest of the game. (Usually, at this point in the games industry, people build their UIs in Flash and then run them through middleware like Scaleform, which reinterprets their Flash-based bitmap and vector artwork as polygonal art, and reinterprets their ActionScript into an engine-savvy language... not efficient for the game, but efficient for a former marketing guy who now wants to do game graphic and UI design, I guess.)

(For what it's worth, I agree with Carmack's assessment. Making game UI that is highly authorable and also highly efficient at runtime, at "AAA visual quality", is just a hard problem to solve, and it's a problem which few studios deem worth solving, because after so many years of the current system the benefits of a change have almost become an intangible.)


UI is a solved problem from my point of view, with middleware like Scaleform.


You would also have to find Flash-experienced people willing to work the way console game developers work. They might feel a little bit underappreciated for the stuff they do, and not find the work very rewarding, especially when they have to constrain themselves to memory, CPU, GPU, and I/O budgets that might not be visible on the desktop.

Also, at some point certain integration between the engine and Scaleform is needed, and you would have to dedicate a programmer there to support it.

We tried it, and the consensus was that it's not of use to us. I wasn't the one evaluating it, so I can't tell for sure. But resource budgeting (memory, CPU, video memory (PS3)) is a much-debated thing in console game development, and everyone wants more budget for his needs (audio, animation, builders, UI, etc.)

We also gave an early Flash middleware a try on the Dreamcast for NHL2K2 (2000) - there was only 16MB of RAM, and Flash was taking a significant amount (no Scaleform relation). We actually got some pretty good menus, and for a sports video game you need a very detailed UI for rosters, trades, etc. We could also find people to do it back then.

But then the problem was during gameplay. While in the main menu we had the memory to pull it off with Flash, during gameplay all memory went to vertex buffers, textures, game data, sound, animations, etc. We were thinking of swapping out game data (and reloading it later) when the PAUSE button is hit (and the menu appears), so Flash would have enough memory - but it turned out to have some bad latency - a few seconds. Ideally PAUSE ON/OFF should take no time (otherwise it's annoying).

Another thing was that we still had to render the rest of the HUD (scores, replays, etc.) without Flash - so we had to support TWO technologies for UI.

Hence we turned our backs on it. It was nice, but not for our game.


How does Scaleform compare to a junior programmer writing UI in C++?

Rolling your own UI has always been time consuming, and a breeding ground for bugs.

Any middleware recommendation?


That's a tough question to answer in one post. I'm a senior software engineer, and there are probably hundreds of thousands of junior programmers who are better than me. I would go with the junior programmer - especially if he came in on an internship position; if nothing else he can integrate Scaleform (you still need a programmer there).


RAD Game Tools, purveyors of lots of 'middleware' for games, have developed their own Flash playback engine called Iggy. I don't know how it rates, but it's an option: http://www.radgametools.com/iggy.htm


Carmack is specifically talking about Scaleform, though! And he's not alone -- I know a few game developers who hate that thing with a passion because it's so slow that it's introduced noticeable performance problems in AAA games.

Because he's clearly talking about ScaleForm/Flash specifically, though, I don't think that his comments can be correctly generalized to all "script interpreters". It's kind of the equivalent of buying a Maylong Android Tablet and going around saying that Android sucks or the tablet market is dead. You just happen to have picked the worst possible representative.


He may be referring to Scaleform (or another Flash GUI library; I don't think he specifies), but he's also talking about his experience with scripting in Doom 3 - which I'm fairly sure didn't use Flash.


My favorite scaleform bug was when the game suddenly ran at less than 1fps due to a setting that led to the UI having more triangles than there were pixels on screen. Scaleform is good at a lot of things, but it's not even close to a "solved" problem :(


Whatever happened to Anark? Was it any worse?


Scaleform is great at enabling web- and interactive advertising-trained graphic designers to make video game UI art. It's not as great at being efficient, or truly integrated into your game.


scripting languages weren't really designed for large-scale development efforts involving millions of lines of code. They typically lack the code-reuse abstractions and development toolsets

I find that dynamic languages with first class functions provide much better abstractions and opportunities for code-reuse than languages like C and C++.


That paragraph jumped out at me, too.

I agree with the point, but not its reasoning. The reason why scripting (read: dynamic) languages fall apart at scale is largely because they're too flexible. Abstractions that are convenient in the small can create unmanageable complexity in the large (unless you are extremely disciplined).


Despite the author's characterization, Carmack's quote doesn't say that script interpreters are harmful or evil, but "bad" in the sense that they're not performant enough for Rage.

Lua was certainly fine for, say, Angry Birds, and has clearly been a huge win for Rovio in rapidly porting the game.


That's interesting. Do you have any links on how Angry Birds uses Lua?


You can actually see the Lua usage by peeking into the .ipa iPhone packages (they are just .zip files). Maybe Android and other systems are not much different.

I've seen at least the Lua levels described there.


You can see it on the indie level editor: http://www.badboll.nu/acme/able/

Also in the disassembled .apk (it is compiled Lua by the way, so you might need this[1] to see it more clearly).

[1] http://chunkspy.luaforge.net/


As an aside, are you from Edessa?


His bolded quote is "script interpreters are bad, from a performance, debugging, development standpoint."


The real question is: why do people keep writing interpreters, when writing compilers is not hard?

It's not "interpreters vs C++" as Carmack supposedly said, but crappy slow badly implemented user languages vs efficiently implemented user languages vs a guru writing C++. The middle option is the one to go for.

Especially with LLVM which makes compiling your code very easy, and it's got a BSD-ish license, so no problem linking it with your awesome proprietary game.


Ahead-of-time compilation of a dynamic language with LLVM will scarcely be faster than an interpreter. This is a very common misconception. Good performance for dynamic languages requires dynamic techniques -- polymorphic inline caches, tracing, dynamic type inference, and so on.
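
To make "dynamic techniques" concrete, here's a minimal monomorphic inline cache for property lookup, sketched in C++ (the Shape/Object layout and names are invented for illustration, not taken from any particular VM):

    #include <string>
    #include <unordered_map>

    // "Hidden class" describing the slot layout of one kind of dynamic object.
    struct Shape { std::unordered_map<std::string, int> offsets; };
    struct Object { const Shape* shape; int slots[8]; };

    // Per-call-site cache: remember the last shape seen and the slot offset
    // the lookup resolved to, so the hot path is one compare plus one load.
    struct CallSiteCache { const Shape* seen = nullptr; int offset = 0; };

    int get_field(const Object& o, const std::string& name, CallSiteCache& ic) {
        if (o.shape == ic.seen)                // fast path: shape unchanged
            return o.slots[ic.offset];
        int off = o.shape->offsets.at(name);   // slow path: full hash lookup
        ic.seen = o.shape;                     // memoize for the next call
        ic.offset = off;
        return o.slots[off];
    }

An ahead-of-time compiler with no run-time feedback has to emit the slow path every time; that's the gap being pointed at.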


So use a static language with type inference instead.

Most people don't know about these things because of the poor / non-existent education in computer science and lack of any recognized qualifications, but that's no excuse. One day this branch of engineering will need to grow up.


Err, no thanks? I like my dynamically typed, interpreted + JIT-compiled language, thank you very much. I'm not ignorant of type-inferred languages (in fact I use one almost every day), I just don't like them as much.


Wait, you're both wrong.

It's perfectly possible to write a statically-typed language that looks exactly like Python or Perl or Ruby. It's just that nobody's done it yet.

So in the mean time, you have to pick between expressiveness and ease of development versus safety and speed. There's no concept in computer science that prevents us from having all four other than "it's hard and people are lazy".

Haskell and Go are good examples of good progress in this direction, though.


> It's perfectly possible to write a statically-typed language that looks exactly like Python or Perl or Ruby. It's just that nobody's done it yet

Really? You should definitely show us how.

I've been thinking about the issues of dynamic/static languages for a long time, and haven't yet thought of a unifying design (maybe I'm just stupid).

The big difference isn't type inference/dynamic type system, it's the difference between compile-time and run-time. In Go and Haskell, classes are compiled. In Python, classes are created at runtime, so if you wanted type-inference in such a language, it would have to be performed at runtime... Also, proper typing of OO languages is very complicated, see Scala (e.g. sometimes you want the function to return the type of this object, even when it is inherited...).


Maybe it could look like Python, but for damned sure it wouldn't have the same semantics, unless by statically typed, you really mean "trivially typed".


Why? Python already requires type annotations. For example, if you have a string "42", you can't just add 1 to it to get the number 43, you have to say int("42") + 1 or it will throw a type error.


It throws a type error at run time, not compile time.

To be a statically typed language you would need to throw the error at compile time.

How do you raise (or not raise) an error for this code at compile time and still make it feel like a dynamically typed language?

    x = (rand() > 0.5) ? 42 : "42"
    y = x + 1;


The type of x is Int|String

The type of (+ 1) is Int -> Int

The type of y is Int

Because x is of type Int|String and it's passed to a function that can only operate on Int, the program fails to compile.
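
For a concrete feel of that rejection, here is the same program sketched in C++, with std::variant standing in for the Int|String union (an analogy only, since the hypothetical language doesn't exist):

    #include <cstdlib>
    #include <string>
    #include <variant>

    int main() {
        // x : Int|String, chosen at run time, as in the snippet above.
        std::variant<int, std::string> x;
        if (std::rand() > RAND_MAX / 2) x = 42; else x = std::string("42");

        // int y = x + 1;  // rejected at compile time: no operator+ on the union

        // y : Int only once the String case has been discharged explicitly:
        int y = std::holds_alternative<int>(x)
                    ? std::get<int>(x) + 1
                    : std::stoi(std::get<std::string>(x)) + 1;
        return y;
    }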


I highly recommend you read "Localized Type Inference of Atomic Types in Python" by Brett Cannon (2005); it shines a light on just how much this doesn't work.


So when you said, "looks just like Python," you literally meant, "lexically looks like Python," because you aren't building a language that works just like Python.


That is not a type annotation. Python is simply strongly typed, which has nothing to do with whether the language is dynamically or statically typed.


PyPy's RPython language is a subset of Python that can be compiled. Unfortunately, RPython was designed to bootstrap PyPy and not general-purpose applications.

Python 3 added optional type annotations for function parameters and return types. Unfortunately, Python does not use these annotations for compile-time or run-time type checking. The annotations are just for documentation or tools that want to analyze the type attributes.


The reality is the reliance on tools and sucky education. I wrote a compiler taking a less-sucky C#/C hybrid language to PHP (why PHP? so I could get things done and hack it when needed). See: http://news.ycombinator.com/item?id=226480

Reality: taking it to production was nice, but debugging and other tools are not available. I spent a lot of time making it possible to debug the PHP and trace that back to C#. This was less than ideal.

Then there's IDE support. While I can be happy without an IDE, most people can hardly function without one.

Trying to educate people on it was (a) marketing fail [web devs hate static typing], and (b) too hard for people to grasp (functional programming + new stuff = fail sauce).


The real question is why would you write either when there are perfectly good languages that you can already use. It seems insane to me that Blizzard dumped Lua for their own homebrew scripting language. I wonder what the rationale was.


Both ActionScript and Lua are compiled, one way or another. Unfortunately, not everyone seems to have caught on to this and people are still widely referring to all dynamic languages as interpreted. Dynamic typing, garbage collection, and just-in-time compilation have a cost, which I guess is what Carmack was referring to, but the term 'interpreter' seems poorly applied here.


People usually use "compiled" to refer to languages that lower programs into machine code. Lowering the program into a bytecode isn't sufficient.

Bytecode interpreters must be carefully optimized, and still usually have an unavoidable performance hit from their cache usage patterns (e.g. from bytecode being in d-cache).


Which is why dynamic languages are compiled to machine code nowadays. I'm not sure about ActionScript, but both JavaScript and Lua are compiled to machine code when speed matters.


Turn-around time is what matters most (I work on a console game). You need very fast turn-around time for a designer to test his ideas. If something turns out slow, it can later be put in a C/C++ function and exported.

Also sand-boxing, but this is not of concern to the actual scripters; it's of concern to the rest of the team.

And something to easily deal with tasks/threads, or state machines, and such.

Also a language that allows you to easily declare what needs to be kept persistent (goes in the game save), what can go through the network (a game-play feature affecting everyone in multiplayer), and what can stay a local "effect".
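
For that last point, a minimal sketch of what such declarations could look like if expressed in C++ terms (the Scope tags and field names here are invented for illustration):

    // Hypothetical binding layer: each script-visible variable declares
    // whether it belongs in the save file, on the wire, or stays local.
    enum class Scope { Persistent, Replicated, Local };

    struct ScriptVar {
        const char* name;
        Scope scope;   // drives save-game serialization / net replication
        int value;
    };

    struct DoorState {
        ScriptVar opened  {"opened",  Scope::Persistent, 0};  // goes in the game save
        ScriptVar locked  {"locked",  Scope::Replicated, 1};  // synced in multiplayer
        ScriptVar creakFx {"creakFx", Scope::Local,      0};  // cosmetic local "effect"
    };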


With an interpreter you sort of get an automatic sandbox, because your badly implemented user language is handled by a bunch of switch-case or if-else statements. Not literally, but the 'user language' is entirely contained within the interpreter and doesn't mingle with the real game code unless you code up that specific interface.

Enter compiled code. Now we have the opportunity to run unknown binary code right within the game's address space. Bad News in many ways. To get the Happy Medium (compile code within the game engine, load and run the resulting executable safely) requires implementing operating system concepts. It's a Big Job.
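
To make the "automatic sandbox" point concrete, a toy switch-case interpreter in C++ could look like this (opcodes invented for illustration); the script can only reach real game state through the one case that exposes it:

    #include <cstddef>
    #include <cstdint>
    #include <vector>

    enum Op : std::uint8_t { PUSH, ADD, SET_HEALTH, HALT };

    void run(const std::vector<std::uint8_t>& code, int& playerHealth) {
        std::vector<int> stack;
        for (std::size_t pc = 0; pc < code.size(); ) {
            switch (code[pc++]) {
            case PUSH:                        // push the next byte as a value
                if (pc >= code.size()) return;
                stack.push_back(code[pc++]);
                break;
            case ADD:                         // pop two values, push their sum
                if (stack.size() < 2) return;
                stack[stack.size() - 2] += stack.back();
                stack.pop_back();
                break;
            case SET_HEALTH:                  // the ONLY door into game state
                if (stack.empty()) return;
                playerHealth = stack.back();
                stack.pop_back();
                break;
            default:                          // HALT or junk: fail closed
                return;
            }
        }
    }

However bad the script is, it can't touch anything the switch doesn't hand it; compiled code in the same address space has no such fence.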

This leads my thoughts on a slight tangent: why isn't there a Super Awesome Open Source Game Engine that's the heart of all 3D games? SAOSGE could even provide the extensibility framework (compiling, loading), ready for you to expose parts of your games to player programmers.


If you control the compiler and the language is memory-safe, you get the same guarantees with JIT-compilation that you do with interpretation. All execution runs within the memory boundaries specifically allocated by the runtime system, and any other interaction with the machine must go through the runtime libraries, where you can perform any checks you want.


I don't see this kind of sandboxing as likely to help. Whether your scripting language is compiled or interpreted, if any accessible portion of your system is written in a memory-unsafe language like C, that is where crackers will find the most damaging exploits. E.g., http://stackoverflow.com/questions/381171/help-me-understand... which attacks a native XML toolset (some "data binding" IE feature I don't understand) without breaking any javascript rules.


> why isn't there a Super Awesome Open Source Game Engine that's the heart of all 3D games?

Unreal is the closest thing, but there's still lots of new/strange/innovative stuff going on in rendering, which means that lots of bits are hard to reuse. Rage, for instance, has a pretty nifty virtual-memory for textures scheme, which no one else has really done.


Also, licensing costs. Why pay a rival company money when you can create your own engine in-house? Unreal has various licensing agreements, from (probably) 8 figures for an unlimited license down to $99 + 25% of any revenue above $50k per title (and anywhere in between).

In particular I'm thinking of Criterion's Renderware engine. Originally funded by Canon it was very popular. Then Criterion was sold to EA and everyone dropped it.


> Also, licensing costs. Why pay a rival company money when you can create your own engine in-house?

Because developing a AAA engine from scratch is a multi-million dollar enterprise at this point.

You actually need to develop your system faster than the leading edge so you can catch up. That's an expensive ask. Most companies arrive, after 2-5 years, with something that was as good as UDK or idTech n was ... years ago.

Paying the $500,000 or x% of revenues is often the fastest, cheapest way to get your game out.


Not hard compared to what? It's still pretty hard... even jonesforth is kinda hard to get for most developers.


I hope no one uses jonesforth [or FORTH] for real development. It's a beautiful thing to learn and I encourage everyone to learn how to write a FORTH, but it's an anti-pattern for real world programming. (Edit: this is not a contradiction)


Why do you say that?


I'm surprised no one picked up on this gem, yet.

>". . . but you know one of the big lessons of a big project is you don't want people that aren't really programmers programming, you'll suffer for it!"

Edit: Fixed formatting of quote.


A lot of other companies' games integrate scripting engines to ease level/mod design, because they don't want to ship a compiler toolchain along with the game. Carmack's games, on the other hand, are just eventually open-sourced, and modded "natively."


Why should it be surprising that performance still is of primary concern to Carmack? If you're pushing the state of the art forward then I think that will always be the case.


scripting languages weren't really designed for large-scale development efforts involving millions of lines of code

Neither was C.

There are languages developed afterwards with large-scale development in mind, but you have to ask, what did they add? True type safety? Nope. True encapsulation? Nope. Automatic resource management? Nope. C++ is C with several times more ways to write horrifyingly unmaintainable code. All big C++ shops have very long lists of constructs that must never be used and very rigorous code reviews to ensure that you don't use any of those features by accident.

At this point, we see that C++ has one advantage over C, and that's namespaces. Helpful, but not helpful enough for "large-scale software development". The major innovation that C++ brought was waking people up to the reality that a maintainable codebase must be curated with extensive automatic testing and manual code reviews. Anything else leads to epic failure.

But wait, that's easy to fix! Let's invent a new programming language! This time we'll call it Java. It will be like C++ but with all the ways to write bad code removed. No operator overloading! No multiple inheritance.

And it's true that Java helped in a number of ways. But it didn't solve the real problems. There is still no type safety; null is an instance of every class in Java, but you can't call any methods on it, for example. So instead of a segfault, you get a NullPointerException, but all that means is that the source of the error is easier to determine. But you still have to write a lot of tests to make sure that your code handles nulls properly. (This is compounded by laziness in design like writing "loggedInUser = null" instead of writing a subclass of User that indicates it's not logged in.)

Multiple inheritance in C++ was a mess, but Java's solution isn't much better. What's the conceptual difference between abstract classes and interfaces? Abstract classes are non-composable class-parts that contain API, implementation, and state. Interfaces are composable class-parts that contain API. Why are these two separate concepts? Why not have a generic "traits" feature? (The answer is because Java is mostly a copy of C++, and C++ focuses on irrelevant OO features like 4 different levels of member visibility rather than semantic annotations like "if you compose this method into another class, it should run after the method from the class it's being composed into". Of course, when you have these annotations, multiple-inheritance or multiple-trait-application works perfectly -- see CLOS or Moose. But all C++ had was public/protected/private/friend, and so that's all Java has.)

This is getting ramble-y so I'll get to the point. No modern programming language helps you write huge codebases. If you want to be able to maintain millions of lines of code, you are going to need very rigorous standards and millions of lines of automatic tests. It's the only way we know of. All more modern languages did was push us from "we're leaking memory because we forgot to call free()" to "we're leaking memory because it's pretty convenient to keep all these huge objects around".

They certainly haven't helped us write more maintainable large projects. If anything, a dynamic scripting language embedded in your game means that you'll have fewer lines of code and clearer separation between components. But it's not such a win that you won't have to review code or write tests anymore.


The D programming language has a number of features that are strongly oriented towards being able to deal with very large code bases:

1. checkable function purity
2. checkable transitive immutability
3. code can be divided into checkably memory-safe code and unsafe code
4. anti-hijacking enforcement
5. modules with closed namespaces
6. memory is thread-local by default (shared memory is typed differently)
7. look for non-null pointers in the near future


"No modern programming language helps you write huge codebases."

Have you used Go? This is one of our design goals. Go is a really simple language. Its features are easy to understand and predictable in use. Go code is also very readable in that you don't need a lot of context to understand a piece of code. There aren't any colossal Go codebases yet, but so far things are looking promising.


I haven't used Go for anything huge, but it does feel like good progress for the future. (In a comment a few pages down in this thread, I say as much.)

In the end though, bad code is mostly due to bad programming and bad process. Go is obscure enough that the bad programmers haven't heard about it yet. Start offering high-paying Go jobs to anyone with a pulse and you'll start seeing why people hate C++ and Java so much. It's not that the language sucks, it's that the programmers using it do.

It always surprised me to hear that Google "got by" on C++ and Java, but I lightened up a bit when I was reading some of Android. Normally you open up Java and are immediately stunned by the smell it's emitting, but when I started reading the Android code, this didn't happen. Classes did one thing and delegated to other classes when they needed something done. The methods were small and made sense. Lines of code inside methods were in "paragraphs". There were no comments like "// hack around bug in SomeOtherClassIWroteButAmTooLazyToFix". It was clear that it was the work of someone who knew what she was doing.

I guess I knew it was possible, but was never convinced of it by any concrete code. The standard library, for example, is horrifyingly bad.

Ultimately languages can lead you in the right direction or the wrong direction, but which path you take depends on the programmer. Google requires code reviews for nearly every commit, and they get to hire the top 0.01% of programmers. Imagine what the rest of the world is like, without code reviews, testing, or good programmers.

Go isn't going to fix that little problem :)


"Google requires code reviews for nearly every commit, and they get to hire the top 0.01% of programmers. Imagine what the rest of the world is like, without code reviews, testing, or good programmers."

Yet even with our quality of code we are still struggling with our massive code bases. C++ build times alone are reason to find an alternative. We hope that Go will work at scale while having many of the productivity advantages of scripting languages.


> I guess I knew it was possible, but was never convinced of it by any concrete code. The standard library, for example, is horrifyingly bad.

Between your comment about Android and this one, it looks like your problem is much more with the Java APIs than Java itself.

Besides, I disagree: the Java collections are fairly solid with the right mix of abstraction and efficiency. Compare with those of Scala, for example, which require a lot more work before they become decent.


> There aren't any colossal Go codebases yet, but so far things are looking promising.

Not really. Go keeps all the old errors everybody should know are errors (nullable pointers, shared mutable state, raw types, ...) and then proceeds to add new ones, and packages all of that in a less regular syntax just in case there was any chance to get a good language out of the previous clusterfuck.


Nullable pointers and shared state reflect the way the machine actually works. If you regard these design decisions as mistakes, then Go clearly isn't the language for you.

(also, which language is Go less regular than? Lisp?)


Go strikes a middle ground between low-level and high-level in some awkward ways. It wouldn't be hard to use e.g. nullable types or option types to outlaw null pointer exceptions without restricting the set of possible programs, and with stronger static guarantees of correctness. On the other hand, Go also has mandatory garbage collection, which emphatically does not reflect the underlying machine and also restricts its usefulness in certain situations.

w/r/t regularity: most of the functional programming languages (e.g. ML, Haskell sans GHC extensions, various Lisps) are incredibly regular, especially in the semantic sense of providing a few semantically simple features and milking them for all they're worth. Go has quite a few special cases (e.g. the make versus new distinction, the iota keyword) and some odd omissions (e.g. simulating union types involves what I perceive as interface trickery; const only allows numbers or strings as values.) Coming from C++, Java, &c, Go seems incredibly regular—the lack of OOP goes a long way towards keeping it simple—but it's not a simple language except in the context of "modern, Algol-derived applications languages." Which it is an improvement on, but it's not regular in the strict sense.


> Go keeps all the old errors everybody should know are errors (nullable pointers, shared mutable state, raw types, ...) and then proceeds to add new ones,

And because of the absence of exceptions, Go forces you to deal with errors at the call site (see the number of times you see "ok, err = Foo(); if (err)..."), which is not scalable to large-scale software.


I left this specific point out, because I'm on the fence about it. I do think the C-style way of Go is a genuine mistake. But when type systems force the caller to know what's happening, and the language provides tools which let this be done in a non-absolutely-painful manner (à la Haskell, with the `Either` type being used to report success/error, and pattern matching or monadic lifting letting users either act cleanly or propagate errors without being overly verbose and drowning their own code in explicit error propagation), it works rather well and limits the amount of runtime surprises.

On the other hand, return-value-error-reporting does not give a way for deep callers (caller of the original API when the error happens 6 frames down the stack) to try and recover (instead of just bail out, or more generally customize the error recovery policy) the way condition systems do in Smalltalk, Common Lisp or Dylan.
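
A rough C++ rendering of that Either-style flow, with an invented Result type (a sketch of the idea, not the Haskell machinery):

    #include <string>

    // Minimal Either: a value on success, an error message on failure.
    template <typename T>
    struct Result {
        bool ok;
        T value;
        std::string error;

        // Bind: run f only on success, otherwise propagate the error.
        template <typename F>
        auto andThen(F f) -> decltype(f(value)) {
            if (ok) return f(value);
            return { false, {}, error };
        }
    };

    Result<int> parsePort(const std::string& s) {
        try { return { true, std::stoi(s), "" }; }
        catch (...) { return { false, 0, "bad port: " + s }; }
    }

    Result<int> openSocket(int port) {         // stand-in for a real syscall
        if (port > 0 && port < 65536) return { true, port, "" };
        return { false, 0, "port out of range" };
    }

    Result<int> connectTo(const std::string& s) {
        // No "if (err) return err" at every step; failure short-circuits.
        return parsePort(s).andThen(openSocket);
    }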


Go is an imperative language. You will need a lot of context to understand a piece of code. Sooner or later. ;)


IMO, the absence of real exceptions (defer et al. are not enough) and generics makes Go unsuitable for large-scale projects.


From the Article: scripting languages weren't really designed for large-scale development efforts involving millions of lines of code

In this instance Carmack is talking about something specific - scripting for high-performance games, which still need to deliver high levels of reliability.

He even mentions being tempted by functional languages such as Haskell and Caml. Then he lists a couple of reasons they're not appropriate - performance is one, the learning curve is the other.

He then goes on to say that he'd think about it differently, but that performance is such a dominating factor for them that it drives the majority of the logic.

So I wouldn't draw too long a bow and suggest that he's canning scripting languages in general. I think the author is stretching with that statement.

you'll have fewer lines of code and clearer separation between components

I think that's the most important statement. If you have millions of lines of code, the answer is to _write less_... and that's where dynamic & functional languages are so important.

I've worked on large codebases. It's rare that one language can cover the whole domain without some big potholes appearing (when I say cover, I mean vertically in terms of low- to high-level code, and horizontally in terms of function). That's where your DSLs and dynamic languages come into play.


Out of curiosity, what do you think a language geared towards "big software" projects would/should look like, and does it exist yet? Or would you argue that at some point we hit a design vs. code problem, where choice of language doesn't matter so much as choice of framework, and we push more of the boilerplate code onto the framework?


I'm not sure a language can solve all the problems.

One thing I think would help would be the inability to interface with concrete APIs. If you want to have a variable or instance attribute or argument to a function or method, its type must be declared in terms of an interface, not a particular implementation. This way, you never get yourself into the situation where you say, "I don't want to inherit from Foo, I want to implement its interface myself" but can't. (See the sketch below.)
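
As a tiny C++ sketch of that rule (names invented): callers may only name the abstract interface, never a concrete class:

    struct Logger {                            // the interface: all anyone may depend on
        virtual ~Logger() = default;
        virtual void log(const char* msg) = 0;
    };

    struct FileLogger : Logger {               // one implementation among many
        void log(const char* msg) override { (void)msg; /* write to disk, elided */ }
    };

    void runJob(Logger& log) { log.log("job started"); }   // fine: typed by interface

    // Disallowed under the rule: void runJob(FileLogger& log);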

Beyond that, there is only so much a language can do to help you. It's nice to have guaranteed privacy, but if you use it like the Java standard library does, it becomes the worst language feature ever. To prevent misuse of a powerful feature, you need to be smart and you need to have other smart people reading your code.

The "way forward" is to realize that software development is not easy.


So what you're saying is : "difficult problems are difficult to solve whatever language you use"?


Sounds like someone hasn't used Ada.


As far back as Quake 1 (in fact, before Quake needed version numbers), there was https://secure.wikimedia.org/wikipedia/en/wiki/QuakeC which was an interpreted language, although later on it got a compiler to turn it into a native dll.


I am clueless about this stuff but:

It was interpreted, but it was also compiled. QuakeC was not a scripting language, as one would say. You had/have to compile it. The engine has a VM to handle it.


yeah, you're right. I grabbed the first line of the WP article and missed the part about bytecode and progs.dat.

Quite embarrassing, since QuakeC was one of the first languages I played around with, and I don't remember the compile step. Then again, I was working from tutorials, so it's possible there was just a 'compile & run' batch file or something.


It's refreshing to see such a great communicator. He spoke, off-the-cuff, for over an hour, at a level that seemed pitched to appeal widely.

The best thing, though, was the obvious love and enthusiasm he has for his team and their software.


Google's cached version (site was down for me): http://webcache.googleusercontent.com/search?q=cache:veFy_mx...


Makes me wonder: with all those WoW addons written in Lua, it certainly can't be _that_ bad..


Sure it can. When I played WoW years ago there were addons that could take up over a gig of RAM. The mods that were item-database related could often obliterate performance. I had a beefy PC at the time and there were still a few popular mods I couldn't run.

The perceived advantage of something like Lua is that it's easy to write, allows for fast iteration, and designers/artists can use it. The issue is that designers/artists really shouldn't be using it. To write fast code it requires a programmer and if you have a programmer why not just do it in C++ where it will be several times faster?

A common game development pattern is to write the bulk of gameplay code in a scripting language and at the 11th hour in a desperate panic start moving as much of it as possible to C++ until you fit in 360/PS3 memory and are fast enough.


  To write fast code it requires a programmer and if you have 
  a programmer why not just do it in C++ where it will be 
  several times faster?
Why should I as a programmer not profit from extremely fast design/implement/test/fix/repeat iterations? I'm not saying scripting/high-level languages like Lua, Python or Ruby should always be preferred. But it's equally wrong to assert that C-oid languages are always the superior choice. It's much more a question of knowing when to use what and why, which in my opinion is an important part of being a good programmer.


I've never seen the effort involved in making a nice quick-feedback REPL-style affair pay off. Perhaps for certain types of game it would be worth it, and perhaps amortized over a long enough period the result would become value for money. My experience has been that making a script system work that way, and work reliably, and provide a nice interface, just takes up more time than you end up saving.

Anyway, all the programmers I've known would want good debugging facilities as well, like the ones you get in Visual Studio/ProDG/Code Warrior/Xcode/gdb/etc. - oops, only kidding about gdb! This thing really needs a GUI. This all just expands the time-suck further.

That's not to say it can't be made to work, of course! People can do anything, if they set their mind to it hard enough, and if they wait for long enough, maybe it will even become value for money.


Where it really does work is in explicitly actor-modeled games, like MMOs with thousands of NPCs. You don't want to have to compile a new shopkeeper and dlopen() him on the client; you just want to stream his source and eval() it, then toss it away.
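
A hedged sketch of that stream-and-eval flow, embedding Lua from C++ (the on_interact entry point is invented, and a real game would expose a restricted API instead of the full standard libraries):

    #include <string>
    extern "C" {
    #include <lua.h>
    #include <lauxlib.h>
    #include <lualib.h>
    }

    // Receive an NPC's script as source text, eval it in a throwaway state,
    // call its entry point, then toss the whole state away.
    bool runNpcScript(const std::string& src) {
        lua_State* L = luaL_newstate();
        luaL_openlibs(L);   // illustration only; a sandbox would expose far less

        bool ok = luaL_loadbuffer(L, src.data(), src.size(), "npc") == 0
               && lua_pcall(L, 0, 0, 0) == 0;       // run chunk, define its globals
        if (ok) {
            lua_getglobal(L, "on_interact");        // hypothetical entry point
            ok = lua_isfunction(L, -1) && lua_pcall(L, 0, 0, 0) == 0;
        }
        lua_close(L);                               // "then toss it away"
        return ok;
    }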


My bet would be that most experienced programmers actually would prefer to compile a new NPC, and then load it in as a shared library. This arrangement stands at least some chance of working neatly with whatever debugger you are using, and should provide pretty decent iteration times.

I would certainly win this bet if it involved all the experienced programmers of my acquaintance.


Perhaps I wasn't specific enough with "MMOs." I'm talking about the kinds of games where players can create their own maps, with their own scripted NPCs, and push them onto the server, where other players will then pull them down and execute them in real-time. Second Life, MOOs, etc.

This is basically an equivalent model to web-browsing. You don't load compiled Javascript into your web browser's process address space and jump into it; you put an interpreter between you and it, both for the sake of the code (JS has its own debugger, which treats the browser as a black box that scripters rarely have to think about, and never see backtraces from) and for the sake of the web browser (which doesn't have to deal with any consequences of bad scripting affecting the validity of the browser's own data structures.)


Let me tell you about one project I have been working on. It was a large embedded device, with lots of highly optimized code in C and ASM on SHARCs and Blackfins. The problem was, every time you changed the code, you also had to push it to the device via FTP, then power down the device to reset its memory, then reboot it with the new binary and reconnect your FTP and debugging host. Some of that could be automated, some not.

However, we also had a scripting interface. Update a script, tell the software to restart, done. Easy.


Nice.. I too was doing embedded C++ dev with a Blackfin & SHARC combo, and yes, the standard build cycle was kind of lame. I attended a Blackfin course that just happened to be located at Analog Devices HQ. In the 4 days that I was there I got an OSC (Open Sound Control) library with message parsing and Lua support up and running, hooked up to desktop GUI audio control apps built with a framework I've created in Java. I continued trying to get the OSC message parsing to SHARC communication going for DSP control (Ambisonics mostly), but then Android steamrolled me..

I already had a real-time app dev runtime/framework in Java for J2SE that I immediately started porting to also run on Android. A modern SoC and Android with my framework, on the desktop and on the device, for 3rd-party app dev is now driving my future audio hardware dev efforts. First though I get to do a general release of the software dev framework as a product, which never really would have happened without Android showing up. Things changed a lot recently!

Anyway, good idea with the scripting / Blackfin deal though as that can pay off big.


Surely you profiled to see that it was the interpreted language causing the slowdown?


I think there's something to be said for the ease of patching with a scripting language, though. If you need to update a quest, just ship a few new lua files and it's done - seems much easier than messing with a binary.


Being a PC game, they might have an easy solution waiting for them - LuaJIT. Now Mike Pall is saying that the garbage collector is still the old Lua one, so in your case (item database) it might not give much speedup. Then again it might!


> but you know one of the big lessons of a big project is you don't want people that aren't really programmers programming, you'll suffer for it!

Take that and shove it up your cross-functional agile team :-D



