The "optimizing for life" point is, I think, the most important one here.
Using the hippest languages and techniques is just so much less important than getting things done in a simple, effective way that makes sense for you right now.
A little anecdote real quick. I have an Android app out there (and more on the way). I work with my girlfriend as kind of a team on Android stuff. We needed to implement an "action bar" kind of like the Android twitter app has up top. I saw a horizontal bar with two buttons. She kept insisting I dig through some of Google's open source implementations and go rip out some random code that I didn't write and don't yet understand and try to get it to work properly with my code. She was convinced this was the only right thing to do.
I refused. Whipped up some basic UI XML in a couple minutes with two image buttons for the actions. Done. While many people tweak and tweak, our app is out there. Nobody would know the difference between the official bar and my completely idiot-proof implementation.
As someone who is (usually irrationally) paranoid about optimization, I found this talk to be inspiring. Also, Braid was by far one of the most wonderful gaming experiences I've ever had. So for that, thank you Mr. Blow.
An excellent talk, but the thing that I disagree with is his defense of having functions with thousands of lines of code in them.
It's true that breaking up a function into smaller ones just for the sake of it doesn't make sense, but if you have a function that needs to be over 1000 lines in the first place, it makes me think that the code won't be very reusable in future projects.
It's not going to be reusable anyway. It's application code, by the time it's actually polished, featureful and useful to the end-user it's so tightly integrated and full of special cases that you have to consider it a one-off.
Again, as Blow mentions several times, this is a domain-dependent thing. If you are shipping line-of-business code that sits around for 30 years, yes, that kind of polish becomes important. But games don't (typically) have that lifespan. If you want to do an update 10 years later, you can remake it with all-new assets as well as new tech, and the exceptions to this tend to be on a completely different scale from indie games (MMOs...). So while "in the moment" of bringing the game together, you optimize for broad, fast changes, which favors having a bit of lengthiness and redundancy within the source.
Here's an example from a game I'm working on now:
I have some UI code that makes rounded-rectangle frames. I thought, when I first made it, that I might want to customize the look a bit (roundedness, thickness, coloring, etc.) so I parameterized it. As it turns out, I used the same parameters everywhere. So the abstraction wasn't useful. But at the same time, I don't really have a great need to refactor it, because the problem it causes (dirty-looking code) is so localized to the instances where I create the frames. It's still there and the customization is possible. If I needed something different from a rounded-rect frame I'd be writing completely different code, so it just doesn't make a huge difference.
On the other hand, one thing that I got wrong and want to fix is an architecture problem. I went in with an architecture tuned around a single, serializable scene capable of defining complex entities. But I made a game with a lot of UI/editing screens and one gameplay screen with very simple game elements. So ideally I would have had something with a DSL for defining those rounded-rects and tweens and positioning efficiently, better ways to manage all the scene transitions, a system to iterate on UI quickly (e.g. load a file = restart scene), and a simpler model for the gameplay. As it is, it gets a bit hacky in a few places, and the overall "shape" of the code is likely to be discarded, but I will be able to reuse lots of the pieces if I do a sequel or similar type of game; they'll just be assembled in a different way.
Yes, it's important to make the code clear. Shipping is so much more important than code reusability though. When the lifetime is small, rewriting it to be more modular isn't quite as justifiable as in a long-term project where you actually have the time to get it right.
Pointless internet argument. :) If you have a function that's over 1000 lines, a lot of it is probably cruft or very app-specific. You might be able to reuse part of it elsewhere in the same project, but code reuse across projects doing different things is fairly rare. His reasoning is pretty sound on this: it didn't take me long to find a 200-line function (to be fair, it contains an inner class...) in a million-plus-line OSS project I contribute to. But it's Java, and it does something very specific that isn't going to be used outside of the project (since it makes lots of calls into other parts of the project) and wouldn't really be used outside of its package either. There are places where you could save 5 lines across 3 or so call sites, but adding a utility function and documenting it usually negates any line-count benefit and doesn't do much for understanding. It can also make it harder for newcomers to figure out what the code is -really- doing, as he mentions.
Of course, 200 lines is one thing; 1000 lines is really long, especially in the context of a 1000+ line class or module. In general I agree with you that modularity is good; I really favor a functional style at heart. :) I think appropriate function length really depends on the context of use. Some people can't stand functions bigger than an 80x24 terminal screen, which I think is nuts.
What I disagree with is his defense of not using (correct) data structures. Though if you have to roll your own data structures or they're hard to work with because your language and/or set of libraries suck, that's a problem, especially if you agree with his "optimizing your life" bit. And sometimes what seems like a good data structure actually isn't, which is why people do some analysis before writing code.
Re: the data structure defense. In the context of games, when you first write code you're in a chicken-and-egg situation, where you need finished content to demonstrate both correct behavior and desired performance in your code. So you use the simple structure, which you can rely on, and perhaps put a wrapper around it with methods describing specific functionality, if you have an expectation that you'll come back and rework it eventually. Having done that, you can move on to building/integrating the content and seeing if it meets design goals, and once a performance problem arises, you have a great test case with the correct behavior to work from.
Compared to a naive solution, "putting things in a hash table" should take just as long to implement, not change the size of the executable, not be bug prone, not increase the size of the source code, not take longer to compile, and not take longer to link. The only way all these things could happen is if the programmer decided to implement a hash table from scratch every single time he or she wanted to use one.
This could also happen if the programmer simply never spent the five minutes necessary to get a library with a hash table implementation in it, opting instead to spend a third as much time on hundreds of occasions. I suspect that this is closer to what's going on...
Not to be insulting or anything, but you kind of confirm one of the OP's points. That is, there is a difference between "knowing" something and "deeply, intuitively understanding" something.
The OP talks from experience gathered over many years of "getting things done". I would listen very closely to him.
local resource_name_to_address = {}
local n_resources = file:read_int()
for i = 1, n_resources do
    local name = file:read_str()
    resource_name_to_address[name] = file:read_int()
end
Does the fact that this is easy to write indicate that I have not spent any time programming?
Not at all. That code IS easy to write. And when it's fine, it's fine. But when it's a bad idea, it's a bad idea.
This code allocates a lot of memory, needlessly. Why not just do a single `fread' to read in the TOC as one big lump, followed by a loop calling `strncmp' on each entry each time the offset of a particular resource is required? (You don't even need to malloc in this case; just decide that the TOC will never be more than a certain size, and put the buffer in the BSS.)
I personally would prefer the `fread' approach. It doesn't allocate lots of small pieces of memory, so the list of memory allocations is easier to read. You waste less space on malloc bookkeeping overhead, and/or you don't need to go to the effort of writing a specialised allocator for small allocations. If your hash table is closed, you won't end up with a big hole from where it had to rehash.
I've spent a fair amount of time getting game projects ready for release and it always, no matter how cunning everybody is (or tries to be), ends up with you - that'd be me, by the way - sitting there staring at a big long list of allocations, because something weird is happening. There are big holes appearing in the middle of memory, exactly where you don't want them, or there's just too much being allocated, or things are leaking, and the debugging infrastructure (if any) isn't being helpful for whatever reason. Probably those zillions of tiny little allocations exhausting the pool available to store the debug info.
This sort of thing is an easy way to eat up the man-hours saved by writing code like the above. Though I suppose that if you (and this would not mean me this time) can get somebody else to do the cleanup work, it probably doesn't matter.
Of course, once you've done all the hard work of following my advice, you could just pre-generate the hash table in the file already, maybe using a perfect hash system, and use that instead. (One of my colleagues used to like this.) That's fine, but... what's the point? You're about to LOAD A FILE. You have to assume it will take at least 0.1 seconds before the read can even begin to start. The difference between getting to the entry quickly, and getting to the entry really quickly, will just get lost in the noise.
The points implied by this post are all there in his talk, I think ;)
It would be fairly difficult to create most of these problems if I wasn't writing the bulk of my game code in C or C++. If I wanted to rapidly develop a game worth shipping, I'm not sure why I would write more C++ than what's necessary to expose the APIs I'm using to some other language.
That is, writing 90,000 lines of the product in C++ is likely a case of premature optimization, or precisely what I was just told I shouldn't do. If not, it is at least a decision to spend hours upon hours of my life waiting for things to compile and days more messing with bugs that only show up if I don't compile with debug flags, which doesn't seem very life-efficient either.
The biggest takeaway is that it is better to be direct than indirect. When you have only one component to modify, it is clear that the new code goes into that component. When you have multiple layers of components, it takes a lot more analysis to figure out which one is actually responsible. This is especially true where there are wrappers upon wrappers. It is a current programming style that I hope goes out of fashion quickly.
All this kind of reminds me of a twitter post I made a while back when reading an argument here on HN: https://twitter.com/BeyondHostile/status/20922938324688896