I tell all the junior devs: if you aren't horrified by something you wrote a year ago, you aren't learning fast enough. 20 years in, and I still find things I wrote 8 months ago that I would not do again.
My favourite is when I go back to old code, think "why on earth was that written that way?" and then proceed to rewrite it. Then, half an hour in, I realise that my rewrite is worse than the original approach. At which point I promptly undo my changes and leave a comment for my future self explaining why my superficially bad code works well.
Sadly, those instances are rarer than the ones where my old code is just plain bad.
I don't mean this as a response to you personally, but instead to everyone who works this way.
We've been in this situation lots of times, where we'll write some code, then finish the solution after a few days or a week, then look at it a month later and say "Wow, this is really not a good way of doing it," only to spend another few days or weeks rewriting it.
The idea of spending 10 minutes drafting a solution, throwing it out, spending another 10 minutes on diagrams and guesses, throwing it out, typically gets met with "Just _code_, dammit," but diagramming could compress months of rework into minutes.
You're not wrong, and my programming style is very much like that, albeit I use the code itself as the diagram, since I find that easier for my brain to parse than flow charts. But effectively it's the same "sketch a design and throw it out" cycle that you described.
I suspect quite a few developers who "just code, dammit" follow this same process too. After all, it's not exactly hard to rewrite code, and with tools like Git in place you can easily stash the different implementations so you don't lose any work during the cycles.
Usually when bad code gets written by experienced developers, it's not so much a lack of willingness to conceptualise the design. Often the deadline is tight enough that you're forced into writing a "quick fix" rather than something robust. Or the code base is already a mess (due to the evolution of the product and the aforementioned quick fixes), which means the "ideal world" solution is a significantly larger undertaking than it should be and best not attempted while deadlines depend on it. Or sometimes you see "bad" code simply because the aim of the project is a minimum viable product or proof of concept, so it's more about proving the product works as a concept than about the implementation. In that scenario it can make a lot of sense to throw quick code at a problem with the understanding that chunks of the application will be revisited when you start to scale the product.
I do sketches both on paper and in code. I write all my code in many, many drafts, so that by the time it ships each function has been rewritten, I'd guess, 5-10 times on average. Possibly contrary to intuition, I find this enables me to work much, much faster and have far fewer defects than not doing it.
> My favourite is when I go back to old code, think "why on earth was that written that way?" and then proceed to rewrite it. Then, half an hour in, I realise that my rewrite is worse than the original approach.
...and remember that the "rewrite" is similar to an even older version that you temporarily forgot about.
As I've matured as a developer, I've taken to commenting much more heavily. However, I focus more on capturing why I'm doing something, which sometimes includes why I didn't do it a particular way.
Why vs. what in comments is such a hard distinction to make that I suspect many professional programmers never master it.
In my past few contracts, the devs at each shop fell cleanly into either the overcommenting or the never-comment school of thought. Many old-time C coders hate or love comments for exactly the opposite reasons as new grads writing JavaScript or Ruby, and when conversing most argued that the code should explain itself. But it is very hard to make code explain why it is doing what it does, and this tiny nuance is hard to grasp.
Just associating the code with a bug or feature request often helps the next guy intuit why you did something.
At one job we kept reintroducing the same bugs and the customer was furious. I started reading the version histories more closely and figured out that two developers were duelling over two separate bugs in the same block of code. Each would reintroduce the other's bug. Since then, and due to some other experiences, I spend time looking at how the code arrived at its current state to deduce why it is the way it is. I pride myself on a low regression rate and this helps a lot.
Note that if you value quality over quantity, you're self-selecting for writing less code, but in more critical parts of the application, like libraries or cross-cutting concerns such as security or localization. And you also have to accept that if you insisted that everyone on the team coded like you, nothing would ever get done. The last bit is, IME, the hardest part.
Once I had a situation where I was the developer assigned to a particular interface, so for all practical purposes my code was the code.
Unbeknownst to me, a defect was entered and another programmer made a change. It wasn't really a bugfix but more like a change in the desired functionality. His change worked perfectly well but I wasn't made aware that someone else had changed the code. When I was assigned another defect, I used the version of the code that I had from my last checkout to make the change. When I checked in, I overwrote his change.
We quickly discovered this during testing and I reverted my change. I used his last checked in code as the basis for my change and all was right with the world but we had a confused user and two confused programmers for a couple of hours.
I'm surprised your version control solution let this situation arise, considering this is exactly the kind of problem version control aims to solve. Usually you only hear of these things happening when you have multiple devs FTP/SFTPing files to a shared repository rather than following a managed check-out and check-in procedure via your preferred VCS frontend.
My father was a fairly well-known concert pianist in his day, who also taught students continually. Each week, he would be enthralled with a new practice technique or hand exercise he had arrived at. Last week's technique was abandoned, not mentioned again. My mother would mock this, implying that he must not know what he was doing, since he changed his mind so often about what works.
He explained to me that the exercises were not being abandoned at all. They were being refined gradually over time. He had one goal, which was improving the uptake and resilience of brain and muscle memory for music. Every refinement was an opportunity he'd spotted, deep in his own mental language, to home in on that goal.
The way my coding style (both deep and superficial) changes continuously over time always reminds me of the way my father taught piano. My goal, roughly, is to find the perfect balance of clarity and brevity, while maximizing the ease of continued development. My 27 years of practice has resulted in a long chain of insights about how to achieve that. It's always changing. The way I wrote code 8 months ago wasn't wrong, but I found a set of principles that's better. It wasn't bad, but I would not write in that style today. Why would I? I've learned things in those 8 months.
I disagree. If you consistently think that the code you wrote a year ago is shit, then you are probably just chasing a new fad every year. I'd say that if a junior developer with a year in a language doesn't write code he can rightfully be proud of, then he is focusing on the wrong things.
Meh, I've been doing this for around 10 years now, and I dislike code I wrote yesterday. It's not wrong, it's merely sufficient.
Why?
To paraphrase someone else: I wrote sufficient code because I didn't have the time to write good code.
There's always a better abstraction I could have teased out, a more complete refactor to make that one line hack never have to exist, and better documentation which would read more like English and less like shorthand.
25 years of C++ myself. Back in the 90s I coded up all the grossest memory management blunders many times over. Long time since I produced one of those; these days I'm capable of producing some exquisitely subtle synchronisation and threading bugs ;)
Much of the time what I'm learning today is how to write code that is obviously correct or obviously wrong to everyone who looks at it. It's not prima facie 'better' than the code I wrote before, but it's more likely for the original intent to survive multiple modifications by others, and emergency fixes are less dramatic.
The analogy I'm experimenting with now is stage performers in opera or musical productions. If you see them up close, such as in a video recording, they clearly 'overact'. They use exaggerated facial and arm expressions to project their performance far enough that the people in the middle or even the back can see what's going on. They are expanding the audience greatly, admittedly at the expense of those closest to the action. But on the whole it's a better performance.
When I find bugs in my code, introduced by me or added by someone else, I think about how it looked before and try to determine whether it was just a dumb mistake or whether I tricked them into it. Maybe I grouped the code oddly, or chose an unfortunate variable name that implied something else was going on. If it's the latter, I think about whether I really want to use that pattern anymore. What's tricky is that everyone tries to make their code look like the existing code, so people may be copying the pattern thinking it'll get them a clean code review. If I think the problem is bad enough I may stop and refactor all the places it's used (which only works if you're really, really practiced at refactoring); otherwise I'll just make a note to stop doing that in the future. I might mention it in a team retro.
I want my code to be as plain as it can be and still get the job done. Indeed at this point I positively fume when a 'senior' developer writes clever code, because they should know by now this isn't about them and they're hurting the entire team trying to fluff up their own ego. Look how important I am that you need me to fix all the really hard bugs. Fuck that, and fuck your dazzling bullshit. The only thing that pisses me off more is a manager that thinks 'blame allocation' is a viable management style.
Thank you, I say the same thing every time I see this come up.
That's a young man's game; after you've been doing this stuff long enough, the things you learn are in terms of systems design, maintenance, etc. I regularly go back to code I wrote 2 and 3 years ago and think to myself, "knowing what I know now, that code is mostly alright".
I can't understand how someone can go back and look at some authentication code, for example, and think the way they did it was horrible. Just how many different ways can you write auth code?
It's one thing if you're a young developer, but at some point the improvements to your code are negligible and have nothing to do with the value you bring to a project.
Where do you guys get a job without any time constraints? Also, were you born with all knowledge? You are saying, for example, that the booking system you made in 2009, which was the first time you ever "programmed time", is pretty much perfect? (Change the example to shaders, or physics, or your first Node back end, blah blah.) I'm calling bullshit.
I would think the ones who constantly go back and worry about old code are the ones you would accuse of having no time constraints.
The thing is, if you're going back to the first time you've ever "programmed time", then what you're assessing is design, not code, which falls in line with what I said.
Yes, the scale at which you notice problems with older code changes as you gain experience. You may have it down at the function or class level, but at a system-design level, or perhaps in the way the code addresses business needs, there will likely be improvements that you can see given the benefit of hindsight and experience.
I didn't say I still think my old code is horrible (well, some of it is). I said I wouldn't do it that way again. That's more like, "Well this is alright, I guess, but..."
I regularly ask interviewees at all levels, after they've talked me through some code they've written, how they would do it differently (emphasising that I always wish I'd done things differently). It's one of the questions I've found to be consistently helpful in finding out how a person thinks about making software:
1. it's a good indication about how they think about improving existing code
2. it can show how they learn from what they do
3. it gives them license to talk openly about mistakes (much better than a rote 'tell me about something that hasn't gone well' type question)
That's the best strategy. The most bug-free code is the one that wasn't written in the first place, therefore I refuse to write code unless absolutely necessary :).
Not to mention the best one for code coverage. When junior devs ask me for advice regarding increasing code coverage, I tell them instead of writing more tests, how about writing less code? Our boss really liked that one.
Unfortunately, I've seen IT departments taking this joke as an axiom. It seems common in schools and universities, where the infrastructure is maintained by being so limited as to be completely useless - and hence unused, and thus without problems caused by users...
One of my favorite tricks, which I sometimes play on myself:
Insist the person documents all the quirks and gotchas in a block of code or library. Sometimes the embarrassment of having to justify it will prompt them to fix the problem; other times they (or I) will realize that it literally takes less time to fix the issue than it does to apologize for it.
Something about the writing process unearths simpler solutions to tricky sounding problems. Possibly the same phenomenon that TDD tries to exploit.
Yeah absolutely. I always reduced this to "three months" and passed the phrase around to juniors, etc. And when I interviewed at the last company I worked for, the CEO specifically said, "If you aren't horrified by something you wrote three months ago..." And that's when I knew I wanted to work there (among many other reasons, of course).
I would worry that this would encourage more tech debt than I'm comfortable with. I already struggle with people shrugging off obviously predictable consequences of their thoughtlessness. I kind of expect people to at least attempt to think three months out. Who knows what we'll be doing in six, but c'mon.
Do you really look at code you wrote three months ago -- or any code you've just written -- and not see a lot of things you could improve? This isn't about not having foresight or failing to plan or not designing well.
Every time I am about to do a commit, I look at the code and give myself fifteen minutes to make it better. Always have (probably a habit I learned doing homework in college? I'm not quite sure).
In the grand scheme of things you'd hardly notice that it takes me a little bit longer. Over time those improvements become habituated and I just do a lot of them while I'm coding. Especially when I get stuck.
It's a bit like cleaning while you cook. Once you have a tempo it hardly takes any longer to prepare the food, but you don't leave yourself with a mess that discourages you from doing it again.
You seem to be implying that you only ever need fifteen minutes after any programming task to make all possible improvements that could ever happen to that code.
One of my favorite quotes, "the dev whose code I hate the most is me from 2-years ago."
One of the unfortunate outcomes of the job market encouraging job hopping for salary bumps is many new engineers don't get the opportunity to see the cost of their own mistakes.
The difference between a junior dev and a senior dev is that the senior already knows, while writing it, that it's ugly code and that it's going to bite them some time in the future.
Or maybe they just had a decent feel for design already? I look back at some things I wrote in the first couple of years that I was coding, and it's actually pretty alright. Maybe a bit amateurish and not how I'd do it now, but not terrible. I can't be the only one like this.
> Maybe a bit amateurish and not how I'd do it now, but not terrible. I can't be the only one like this.
Not at all. I started my first real project in September 2015, had a release out by mid-November. And the code worked, really well. I was praised by users of the software for how reliable it seemed to be. Now, 15-16 months later, I can definitely see the warts - there's a lot of global state, some crackpot abstractions and if I ever want to add new features to it I'll have to do some spring cleaning. But at the same time... it still works. The design is appropriate enough for the needs of the project and its functionality.
Judging by some of the 'professional' code I've read over the years, I'd say that's not a particularly high bar...! At least if you're horrified by code you wrote, it shows you care, and that shows some promise :)
I concur. Especially if you define "professional" as "stuff people do at work, for money", the kind of code I saw in my career... well, it wasn't that much different from the code I was writing when I was 13.
I once had to implement "encryption" that consisted of rot13(base64(data)). The class ended up being named SecurityTheatreUtils, to ensure no one else in our team ends up thinking it does any actual encryption...
He basically admits that several of the "improvements" boil down to ripping out patterns that decouple code instead of finding alternate solutions. They're kinda quick and dirty, not agile, and could make refactoring a nightmare. I mean, he's not necessarily wrong, but hindsight is 20/20, and once you're done you can look back, cherry-pick, and say "ahh, well obviously these things would always need to be coupled! I shouldn't have done the gymnastics to decouple them!" But that's a dangerous mindset to start solving the problem with.
The whole topic of global state is very interesting and one I approach cautiously. I'd love to hear what more experienced programmers have to say. When I go with global state I usually rely on someone else's judgement and use a framework. Like OpenGL: the graphics pipeline has global state, and I'm okay with that. I use Cinder/Arduino; those have some global state too, and for good reasons.
But you really try your best to find a stateless solution, and here it feels like he isn't really trying.
> The whole topic of global state is very interesting and one I approach cautiously. I'd love to hear what more experienced programmers have to say.
1. Quake 3 relied heavily on global state. It was good design, and the engine produced billions in revenue.
2. Emacs relies heavily on global state. A global variable is a first-class citizen. When a package declares a global variable, it's a contract between the package author and the user that "You may configure the behavior of this package by binding this variable to a new value, then invoking one of my functions." Emacs Lisp has language constructs which make this easier: `(let* ((foo-state 42)) (foo-bar))` will update the global variable `foo-state` to 42, call the global function `foo-bar`, and ensure that `foo-state` is returned to its previous value even if an error occurs and an exception is propagated. In other words, global variables are the interface to a module, just like a module's functions, and global variables are almost never left in an unexpected or invalid state due to errors.
This system works shockingly well in practice. Emacs is basically a gigantic state simulation, and it parallels a game engine quite strongly.
The Emacs Lisp language itself is cumbersome, which has led many to dismiss it as ancient. The true power of emacs has little to do with lisp and almost everything to do with excellent design decisions.
3. The history of programming informs us that those who hold to dogma are quickly made obsolete. Every tool and pattern has its place. A technique that simplifies X in a certain domain will make X far more complicated in a different domain. Context matters.
> The true power of emacs has little to do with lisp and almost everything to do with excellent design decisions.
Those design decisions didn't come out of a vacuum. The ability to patch and modify almost everything at runtime is something that came out of Lisp and Smalltalk, and wasn't even on the map of the (currently) mainstream languages until very recently. Dynamic scoping is something that AFAIK has strong Lisp roots too.
Emacs Lisp is less of an accident and more of an old, old language. Even the Lisp family has moved on to default to lexical scoping, though it didn't remove dynamic scoping, as in many cases it's still extremely convenient.
Dynamic scoping can easily be included when desired in the form of a key-value store. I can't think of any advantage to having it built into the language itself. I guess you could argue standardization, but most modern languages provide a standard kvstore.
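For anyone curious what that looks like outside of a Lisp, here's a minimal sketch in Java of the "key-value store with temporary rebinding" idea. Everything in it (the `DynamicVars` name, the keys) is hypothetical; it only illustrates the save/set/restore-in-finally pattern that keeps dynamically scoped globals from leaking.

    import java.util.HashMap;
    import java.util.Map;

    // Hypothetical helper: a process-wide key-value store whose entries can be
    // temporarily rebound for the duration of a block of code, loosely mimicking
    // an elisp `let` over a special variable.
    final class DynamicVars {
        private static final Map<String, Object> vars = new HashMap<>();

        static Object get(String key) {
            return vars.get(key);
        }

        // Rebind `key` to `value`, run `body`, then restore the previous value
        // even if `body` throws.
        static void withBinding(String key, Object value, Runnable body) {
            boolean existed = vars.containsKey(key);
            Object previous = vars.get(key);
            vars.put(key, value);
            try {
                body.run();
            } finally {
                if (existed) {
                    vars.put(key, previous);
                } else {
                    vars.remove(key);
                }
            }
        }
    }

Usage would be something like `DynamicVars.withBinding("foo-state", 42, () -> fooBar())`, where `fooBar` is whatever (hypothetical) code reads the variable; the point is that the rebinding cannot escape the block.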
Now that you mention Quake 3, this article [1] by John Carmack himself on the subject is interesting, and then someone reviewed the source code from Doom 3 [2] where apparently they put some of that in practice.
The Emacs property you are demonstrating with let is "dynamic scoping" and not "global variables" (which is leading to a weird and essentially incorrect description, even if it is difficult to notice a behavior difference between that mental model and what really happens).
A global variable is a variable with global scope, meaning that it is visible and accessible throughout the program, unless shadowed.
Emacs added lexical scoping several years ago. When a global variable is declared in emacs using defvar, in addition to becoming globally accessible, the symbol is also flagged as special. This causes it to use dynamic binding whenever it is `let`-bound, even if lexical scoping is in effect.
I pointed out dynamic binding in order to call attention to a technique that allows global variables to be composed into large-scale systems without being hindered by the problems traditionally associated with global state. But if you'd like more precision, feel free to ask.
Game development is kind of different, in that it's rendering the state of a simulation; so aiming for completely stateless solutions ends up making code quite convoluted -- you're hiding state rather than getting rid of it, in my experience.
Now, stateless functions are still well worth it, at least up until it's time to get it optimised for performance ;)
Game development is a fascinating world where a lot of the things I would normally reach for don't apply quite as well. It's quite likely you know all of this already, but I found learning this myself absolutely fascinating, and one of the reasons I still like to tinker with game development in my spare time :)
Coming from a background mainly involving game development and control systems, this is one of the things that I find myself shaking my head at when reading HN. So many people here forget there's a software world outside web code, where memory and CPU time aren't effectively infinite, you're modeling processes and systems with continuously evolving state, and where trying to make everything immutable is just plain silly.
Well, my last job was working on simulations, and I've only worked on performance-sensitive systems...
Making everything immutable may be silly, but it's still a very good goal to have in mind. A continuously evolving system can be designed in a mostly immutable manner as well without sacrificing performance.
In a big-picture sense you are trying to have your system be a function of time. And while for efficiency you sometimes should preserve intermediate states, you can typically accomplish that with clever caching that is not exposed in the interface. So the design remains "immutable" in a sense.
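To make that concrete, here's a tiny, hypothetical Java sketch of the "system as a function of time" idea: the simulation state is an immutable value and each tick is a pure step from the old state to a new one (names and fields invented for illustration).

    // Hypothetical immutable simulation state: advancing time produces a new
    // value instead of mutating the old one.
    record World(double time, double playerX, double playerVelocity) {

        // Pure step function: the World at t + dt, computed from the World at t.
        World step(double dt) {
            return new World(time + dt,
                             playerX + playerVelocity * dt,
                             playerVelocity);
        }
    }

Any caching of intermediate states can live behind an interface like this without callers ever observing mutation.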
It's certainly not realistic to make everything immutable, but it's still worthwhile to move global state into some monadic encapsulation to make your code more testable and composable. This might not be an option on embedded systems, though, or for certain specialized hardware, e.g. GPUs.
Since a lot of my programming experience is writing hobby engines and games (and reading up a lot of engine code and gamedev resources), I sometimes wonder how negatively this impacts my regular career (previously web, now desktop applications).
EDIT: to clarify, I mean the code I write, not my career prospects.
I know several people who used to work in the video game industry, and I used to as well (I still may get back into it eventually). It's much harder to convince interviewers that you can handle the work than it is for you to be able to do that work.
Granted, there was a lot I picked up over the past few years working for large corporations, but not all of it was good. And there's just as much awful code in business applications as there is in game applications, and it's even less fun to debug.
Architectural patterns are pretty similar in both; the primary difference is that in a desktop app you deal with a single user and a lot of non-persistent state.
Decoupling isn't the same as statelessness. The state machine pattern used by OpenGL is a logical decoupling from the physical state of GPU hardware. State machines, buffers, protocols and the like, abstractions that we tend to think of as "low level" or "implementation detail", are in fact the primary means of decoupling used when dealing with stateful systems. Computing hardware itself makes considerable use of these abstractions in providing modular interfaces.
Stateless abstractions are great when you can provide them. But the stuff you're building on isn't itself built out of statelessness.
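As a toy illustration of that point, here's a hedged Java sketch (all names hypothetical) of a small state machine wrapping a stateful resource: callers are decoupled from the raw underlying state because they can only move through the allowed transitions.

    // Hypothetical example: a stateful channel exposed through an explicit
    // state machine rather than through direct access to its internals.
    enum ConnectionState { CLOSED, OPEN }

    final class Channel {
        private ConnectionState state = ConnectionState.CLOSED;

        void open() {
            if (state != ConnectionState.CLOSED) {
                throw new IllegalStateException("already open");
            }
            state = ConnectionState.OPEN;
        }

        void send(byte[] data) {
            if (state != ConnectionState.OPEN) {
                throw new IllegalStateException("open the channel first");
            }
            // ... hand the bytes to the underlying stateful transport ...
        }

        void close() {
            state = ConnectionState.CLOSED;
        }
    }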
> but that's a dangerous mindset to start solving the problem with
A far more dangerous mindset is to think that when starting the problem you are capable of planning the perfect architecture up-front. Often you don't even know what the problem is. Far better to keep things coupled and simple till you know in retrospect that you need to split them out. Same goes for global state.
This is a hard lesson to learn. Abstraction is seductive.
I think about making an effort to loosely couple from the start as building in flexibility, which is useful precisely because "Often you don't even know what the problem is." Flexibility isn't a commitment to any particular abstraction of the solution, on the contrary it's the ability to dump any abstractions that you care to later on with less fuss. (Of course, you can still be unwise and decouple or create modules in ways that commit you to more than you realized.)
We agree that placeholder code shouldn't commit you to too much, but there are patterns that help you with that.
In the parent, the original decoupling turned out to be less performant (I take it) but wouldn't hinder completing the first draft or get in the way of a rewrite with closer coupling.
The main challenge in abstracting for flexibility is that you are competing against your text editor. If copy-paste-modify introduces fewer new sources of error than any abstraction that is near to hand, then you end up in a scenario where it's actually the best option, because it is clearly reusable (in the sense of: hey, I can reuse it by copying it again) and it doesn't burden you with dependencies, so you can quickly try something and then delete it if it didn't work out. And for a lot of features, first-pass attempts, and learning code, that's the space you want to be in. Cheap, dumb code.
This is what I started doing in the last year or two. Now my abstracted code tends to be directly motivated through refactoring the copy-paste stuff, with only a little bit of intentional architecture generated after learning that the abstraction addresses a common concern. It's a very cultivated way of developing the code - picking fruit as it turns ripe.
Abstraction isn't necessarily what loose coupling is about. It's worth some thought at the beginning to structure dumb code, to reduce waste and speed up debugging. I agree that there's a useful element of Darwinism in coding, but modules of cheap, dumb code are better than a big ball of mud made of cheap, dumb code. However, it does depend a lot on how much you like rewriting and refactoring. Most people don't seem to. If you like it a lot, which I don't think is a bad thing AT ALL (it's just not common), I can see the attraction of starting with as much dumb code as possible and evolving from there. (But beware of local maxima.)
Even so, some decisions turn out to be extraordinarily hard to undo. One of my great blunders was a poor choice of data structure (too greedy with RAM it turned out.) The more experience you have, the more likely you are to sidestep those fatal early choices, of course.
> A far more dangerous mindset is to think that when starting the problem you are capable of planning the perfect architecture up-front.
This sounds off to me; if you assume you'll need to change the architecture later on, aren't you better off starting with a decoupled system that can more easily be changed? Or are you essentially arguing for YAGNI, in the sense that decoupling upfront is potentially useless?
> aren't you better off starting with a decoupled system that can more easily be changed?
I think the problem is that you don't know where the boundaries of your abstractions should be until you have a better understanding of what it takes to solve the problem.
My current rule of thumb is: if I know how to write it correctly, then I write it correctly. If I don't feel confident, I write the dumbest / most direct version that works to get the feel of the "lay of the land", and then refactor it later based on the new experience.
The majority of blogging is focused on how to use something or describes some final idea/conclusion. Such posts are not bad per se, but reading about failed ideas, dead ends, etc., the journey itself that led to the final shape/conclusion, is very valuable.
Exactly. But, imo, you should do that at some point. Only then can you learn from mistakes, and learning from mistakes is the best learning. At least for me. Same in real life. Many examples; here's one: I'm confident and careful with ultrasharp cutting tools. But only because I cut myself badly a couple of times. Before that I was more reckless, even though I thought I was doing ok. Also, others telling me to be careful had no effect whatsoever.
If you think of patterns and frameworks as tools, when you're a beginner the only way to master and ultimately discover the limitations of those tools is to use them everyday.
Patterns maybe, but you shouldn't think of frameworks as tools.
In the sense of tools in the real world (like hammers, etc) which are fundamental and basic to the task, used by everybody in the profession, and have staying power.
Frameworks on the other hand, are just some code some random guy (or random community) put together. Some are just awfully coded, despite their popularity. They are frequently rewritten. And they get out of fashion every few years.
I get where you're coming from, it wasn't a particularly good metaphor I put forward. I would argue that there's some legitimacy in claiming that frameworks are just prepackaged collections of tools.
It's funny you mentioned hammers actually, the movie playing in my head when I was writing that post was of a neophyte hammering nails in all day, struggling to remove nails with some contraption, eventually deciding that there must be a better way to remove nails and inventing the hammer with nail extractor head.
Sometimes professional graphic designers end up selling one of their concept pieces to a customer with a request of "no changes". Sometimes it's even the one the designer least liked and produced as number six of the minimum four concepts just to show the customer more options.
Of course I don't literally mean that; you can write code in a way that makes it easier to debug, extend, etc. But too many developers get wrapped up in engineering the perfect system and forget that they're supposed to be, you know, writing a program. I think this is why a lot of hobbyist projects run out of steam.
It's funny but it's true. There's tons of production code out there making MILLIONS OF DOLLARS for its company that 1. compiles with 10,000 warnings, 2. has no unit tests, 3. has global variables and gotos all over the place, and 4. exhibits buffer overruns, undefined behavior, and thread unsafety. In the commercial SW world, you almost never get time to polish the stone. You race to your deadline, release, and then race on to the next mess.
When I was a young engineer I used to think, "One day, ONE DAY, the company will go down in flames for all this bad software practice, and then I can stand triumphantly among the ashes gloating over how I was SO RIGHT to worry about compiler warnings!"
If you replace "bar" with "rung" I think you may evaluate that a bit differently. According to Kent Beck, "Make it work, make it right, make it fast."
Something that has reached the most proper implementation, the fastest execution, and the most compatibility with something else is still useless if it produces the wrong results quickly and prettily.
Going through this with my side project (a game of sorts, C++ and DirectX) but it's more of the opposite issue. I needed to implement a very simple GUI and didn't want to dig into any libraries, so I created a sort of minimalist mouse event flow with GUI objects. It was designed just well enough that it wouldn't be full of bugs.
But as I've been adding all the features I originally intended I'm finding it's a bit of a slog. Every new feature I add is pretty well compartmentalized but it all still smells a little + sometimes requires some refactoring to fix unintended side effects. I'm only working on it a few hours a week (as opposed to 16+ last month), partially because it's just not fun writing code that I'm not proud of, but also because new code requires more and more caution as the system grows.
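For what it's worth, the kind of "minimalist mouse event flow" described above might look roughly like this sketch (in Java rather than C++/DirectX, with all names invented, so treat it as purely illustrative): GUI objects share one small interface, and a dispatcher walks them topmost-first until one consumes the click.

    import java.util.ArrayList;
    import java.util.List;

    // Hypothetical minimal GUI event flow: the 3D scene only receives clicks
    // that the GUI objects did not consume.
    interface GuiObject {
        boolean contains(int x, int y);
        void onClick(int x, int y);
    }

    final class Gui {
        private final List<GuiObject> objects = new ArrayList<>();

        void add(GuiObject o) {
            objects.add(o);
        }

        // Returns true if some GUI object handled the click.
        boolean handleClick(int x, int y) {
            for (int i = objects.size() - 1; i >= 0; i--) { // topmost first
                GuiObject o = objects.get(i);
                if (o.contains(x, y)) {
                    o.onClick(x, y);
                    return true;
                }
            }
            return false;
        }
    }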
Do you ever reconsider your no-libraries stance? There have been times where I didn't use a library but eventually learned that the library exists for good reasons. Some time is wasted, but on the other hand, I have a deeper understanding.
I use libraries all the time. However, all the C++ DirectX GUI libraries seemed way too heavy for what I was trying to do. Part of the fun of hobby projects is creating your own little development style, and digging through old docs on opinionated frameworks to reconcile those choices doesn't make for a very fun hobby project :(
Now that I think of it, the GUI itself was simple enough to do, but the mouse clicks in the 3D scene + various methods of selection were the biggest pain. The GUI just had to show things related to those selections.
I'm going through similar stuff at the moment; fluffing up my opengl application with an actual interface that is forgiving and functional is proving slightly less trivial than I thought - but fun. And oh so nice compared to native Windows.
I like to think of the original code as the first approximation to correct. You can always get closer to correct, but if you ever actually reach it, the universe will hit a breakpoint, and the daemons will halt their processes, grab you, and drag you and your code-that-mankind-was-not-meant-to-know off to coder heaven.
Coder heaven is nice. It has free juice, catered lunches, and foos-pong tables. Individual offices, with plenty of collaboration-friendly huddle rooms. You're only on call 1 eon out of every 4. Unfortunately, the universe is one giant CRUD app.
> I think this is why a lot of hobbyist projects run out of steam.
Thank you! I asked that of myself and tried to create (really) good code. Result: the hobby project felt more like work than actual work did. Now I realize why.
Exactly. Perfect is the enemy of good enough. Granted, you still do have to cross the good enough threshold. I think this is the harder question to answer: Is this code a good balance of correctness and maintainability without being perfect?
At the end of the day, you are either adding value for the company and team or you are not.
I might be biased as I agree 100% with the takeaway, but this is amazing! Not only did he use best practice, but most importantly he drew his own conclusions and learned new things, instead of just accepting things as they were.
When I started coding, I thought you could only have one-letter variables, like a, b, c, x, y, z. I did not question this, but one day I ran out of variables ...
Edit:
I looked at some of his other posts and saw an altitude bed in an old photo, which suggests he probably is or was an athlete. I think it's common among athletes to have the mindset that they want to keep improving their skill level. This can be a good trait in a developer, but such people are hard to manage, as they will quickly outperform their colleagues and get bored. They work best with other people like themselves, or with old, experienced developers. Make sure you protect him so that he doesn't burn out himself or his team. While these "10X" developers can work very hard, they are just humans.
> When I started coding, I thought you could only have one-letter variables, like a, b, c, x, y, z. I did not question this, but one day I ran out of variables ...
At least the minifier wouldn't have to do much work. :)
I started development on Texas Instruments calculators. On most of those, you could only have one-letter variables in TI-BASIC. I lucked out and convinced my parents to spring a little extra for the TI-85, and it had a lot of extra goodies, including named variables.
I credit a lot of my initial understanding of how programming works to making various text-based games on those calculators while only half paying attention to class in high school.
I've been running a browser-based MMORPG for a couple of years, and when I started out I was a total novice in Node.js. I didn't even know how modules worked, so the back end was just one big file. Needless to say, the code is a complete mess, and the refactoring has made it even worse since I started to implement some patterns, only to regret it.
I'm thinking of making the whole game open source but I'm so ashamed of the codebase that I don't want anyone to see it. It's a dilemma since I think the game has potential as an open source project. It's inspired by Ultima Online, Minecraft and Reign of Kings but with a lot of constraints making it easy to implement new features. It also has quite a few dedicated players.
It's actually almost unplayable right now since a bug I haven't managed to pinpoint is messing up all the mines (the ugly codebase doesn't help). They're completely strip-mined and the economy is broken, with a few players sitting on all the resources. I wouldn't recommend starting until I've managed to fix it and restart the world.
I'm currently in the process of rewriting the front-end from the old jQuery mess to React. It's like 95% feature complete but as soon as that is done I will start investigating what's going on with the mines.
Again, I strongly advise against starting right now since you won't experience the game like it's supposed to be. It's hard mode even for seasoned players. :)
> It's actually almost unplayable right now since a bug I haven't managed to pinpoint is messing up all the mines
...
> I'm currently in the process of rewriting the front-end
These two things seem somewhat at odds with each other. Clearly, to players of your game, a fix for this show-stopping bug would offer a fair amount of value, whereas rewriting the frontend, from your description at least, sounds like an exercise that is more for you as the developer.
I'm not criticising, by the way. Just sharing my thoughts based on your comment :)
Yes, you are right. The thing is though that I'm making the game mostly for myself and I always tell my players that they are playing at their own risk. It's very early alpha and they may lose everything.
I tried fixing it initially but eventually gave up and put the whole development on hold for 6 months. Also note that the problem gradually got worse. In the beginning it wasn't such a big deal but eventually became post-apocalyptic when players had a hard time finding even the most basic resources.
Another aspect is the amount of support I had to take care of when the player base was at its peak. It almost made me burn out, considering that it's a side project that I'm not making any money off. So when this major bug started affecting the game and the active players began dropping, it was almost a relief. I never expected that anyone would want to play it to begin with. I just put it out there as soon as I had something playable.
So after a long hiatus the joy of rewriting the front-end made me motivated to work on it again. Perhaps not the best strategic move but it feels good to be back. :)
Great response, thank you for taking my words in the spirit they were intended :) I battle hard to find approaches to my next features that will motivate me to actually get them finished too. I'm learning that this is a big factor in a programmer's experience. So I understand completely.
I say go for it. Everybody learns something about good and bad code design when more code becomes available, and despite what you may think of it, there's still stuff to learn, especially from larger code bases.
> There are many, many more embarrassing mistakes to discuss. I discovered another "creative" method to avoid globals. For some time I was obsessed with closures. I designed an "entity" "component" "system" that was anything but. I tried to multithread a voxel engine by sprinkling locks everywhere.
Dear God, I feel like I'm having my game dev life read back to me...
I have a very similar history (though with a lot worse games). I still have the backups of my first OpenGL programs and game code from ~16 years ago; I might put up a similar article at some point. The difference is that the author's games seem to improve visually over the years; mine actually regressed from a 3D to a 2D world (since the latter is easier to manage) ;).
Some random comments follow.
> Still, I loved data binding so much that I built the entire game on top of it. I broke down objects into components and bound their properties together. Things quickly got out of hand.
Boy, you have to see JavaFX. And I inherited a codebase at work written by a cow-orker who was absolutely in love with bindings at that moment...
Also the big blob of code reminded me why good macros in a language (not C-like macros) are a blessing ;).
> Separate behavior and state.
Yeah, I learned that same lesson too. I prefer to think in systems operating on data as opposed to a web of objects calling each other. One of the reasons is that the web of objects is very implicit; you soon have no idea who actually refers to whom anymore.
> If I start typing self. in an IDE, these functions will not show up next to each other in the autocomplete menu.
That's a good and practical point, though I feel dirty about renaming functions solely for that reason. I wonder if it isn't time to experiment with some "first-class editor semantics": here, the ability to explicitly mark functions as grouped together solely so the IDE displays them that way in autocomplete and the outline view. In my evening Lisping, I'd love to have more control over indentation for macros I write.
> global variables are Bad™
Yeah, that's almost like "goto considered harmful", and about as correct as a soundbite (i.e. not very). Unfortunately, everyone keeps writing about how global variables are evil, and what I'd love to read is some treatment of how to use global variables correctly, for the situations where, as often in games, you absolutely, positively need a lot of globally accessible data.
> Group interdependent code together into cohesive systems to minimize these invisible dependencies. A good way to enforce this is to throw everything related to a system onto its own thread, and force the rest of the code to communicate with it via messaging.
That's a... very self-conscious approach. Odysseus-tied-to-a-mast style.
Agreed on globals. I learned to code making a huge Java + OpenGL game engine some years back, right when scripted web frameworks were really blowing up and people were becoming test-obsessed. Every new article said that global variables and static anything were a deadly sin, so I did some really ugly things to avoid using them. Now I think global state is significantly simpler for certain systems, better than trying to pass that state around when it'll never be instanced.
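A hedged sketch of what that can look like in practice (hypothetical names, Java for brevity): one deliberately global, never-instanced system with its state confined to a single class, instead of threading that state through every constructor in the engine.

    // Hypothetical never-instanced global system: input state that will only
    // ever exist once per process, kept behind one class.
    final class Input {
        private static final boolean[] keysDown = new boolean[256];
        private static int mouseX;
        private static int mouseY;

        private Input() {} // never instanced

        static void onKey(int keyCode, boolean down) { keysDown[keyCode] = down; }
        static void onMouseMove(int x, int y) { mouseX = x; mouseY = y; }

        static boolean isDown(int keyCode) { return keysDown[keyCode]; }
        static int mouseX() { return mouseX; }
        static int mouseY() { return mouseY; }
    }

The global state is still global, but there's exactly one place to look for it, which is most of what you actually want.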
Well, one day I couldn't find anything better to solve my sync issue with queries to the server than putting in a Thread.Sleep(int). The variable name for it was:
private int DO_NOT_TOUCH_THIS_IF_YOU_DONT_KNOW_WHAT_YOU_ARE_DOING_Timeout = 7000;
The code actually made it and is still in production.
> Unfortunately, Sun decided that almost all Java errors must be recoverable, which results in lazy error handling like the above.
Not quite: Java insists that all (checked) Exceptions be handled or passed up explicitly. One can argue that this design decision is bad, as has been done extensively elsewhere.
However, in the particular example given, I'd argue that it would have made much more sense to display a user-understandable error message to the user (e.g. "Required gun resource file is missing, please re-download the game here: (...)"), and then just System.exit() if it's non-recoverable. Maybe print a stacktrace if the game was started in debug mode.
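As a rough sketch of that suggestion (hypothetical names; the actual resource loading in the article's game will look different), the handler could surface a readable message, optionally dump the stack trace in debug mode, and then exit:

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import javax.swing.JOptionPane;

    final class ResourceGuard {
        private static final boolean DEBUG = Boolean.getBoolean("game.debug");

        // Load a required resource, or tell the user what went wrong and exit.
        static byte[] loadOrDie(String path) {
            try {
                return Files.readAllBytes(Path.of(path));
            } catch (IOException e) {
                JOptionPane.showMessageDialog(null,
                    "Required resource file is missing (" + path + "), please re-download the game.",
                    "Missing resource", JOptionPane.ERROR_MESSAGE);
                if (DEBUG) {
                    e.printStackTrace();
                }
                System.exit(1);
                return new byte[0]; // unreachable, keeps the compiler happy
            }
        }
    }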
Real comment: this advice is excellent, don't fall into the trap of perfecting your tools before starting. The article presents the code quality issues as constantly there and always getting more regrettable over time, but the finished products get better and better!
I say, write bad code if it will get you closer to your goal (here, making a game people will pay money to play). Learn from your mistakes, and make new mistakes next time.
As a 2nd-semester CS student, I found that pretty helpful. I have to design a battleship game in Java and actually have no idea what I'm doing. I'm just staring at the cursor and don't even know where to start, and it's comforting to know that other people struggled as well.
I remember having to do something similar in college. I think the first time we had to do it in C, the second time we had to do it in C++ and make it OOPy. No GUI was required but with such a simple game it was possible (I think some students did it for extra credit?) to strap a GUI on top once the text based version was done. That might give you an idea of how to proceed. I was fortunate to have been programming before college but I was always amazed at how lots of those who made it to their junior year and beyond were basically as productive as me for school assignments despite only really starting to program in college. I still remember struggling with the syntax of the for() loop in PHP. My struggles are different now (and mostly related to modifying code that already exists rather than creating fresh new code) but struggle is a good sign that you're learning something. Just be careful it doesn't turn into never-ending masochism!
Haha, this is great! Memories. All those terrible lines of code I wrote when younger led up to less terrible ones later. I too got caught up in some of the cargo cults until the harsh reality of the projects showed me why not. :)
This year marks 20 years since opening up QBasic as a 13 y/o.
Because bindings[10] might have triggered a process that removes bindings 7-9, causing "i" to be out of bounds of the array. I wanted to continue updating the remaining bindings even if some of them got removed.
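One common way out of that trap (a hedged sketch with a hypothetical `Binding` type, not the article's actual code) is to iterate over a snapshot and skip entries that have since been removed, rather than indexing into a list that can shrink underneath you:

    import java.util.ArrayList;
    import java.util.List;

    // Hypothetical sketch: updating one binding may remove others, so walk a
    // snapshot and skip bindings that are no longer in the live list.
    final class BindingUpdater {
        interface Binding {
            void update(); // may cause other bindings to be removed from the list
        }

        static void updateAll(List<Binding> bindings) {
            for (Binding b : new ArrayList<>(bindings)) { // snapshot
                if (bindings.contains(b)) {
                    b.update();
                }
            }
        }
    }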
I want to start by writing something that would have been on the original Atari, just have it look like that style, and then keep moving up a generation in terms of gameplay.