> once Epic's written Unreal, they never have to create a first-person shooter engine again. Instead, they just need to keep adding new features. Day 1 may need to write a first-person shooter engine, but only because we've never done that before. This is why programmers always want to rewrite everything. Most shipping software is the equivalent of a novelist's first draft.
...and Unreal had been licensing their first-person shooter engine since 1996, though it wouldn’t be until two years after this post that they dropped the price on UE3 to the point where hobbyists could afford it, then made UE4 damn near free, to try and grab some of the indie game love that Unity has been getting since its first much-lower-price release in 2005. And don’t forget Id, who’d been licensing their engines since, what, the Doom days?
Big games are still obscene sprawling messes, and so are little games I’m sure, but we are kinda at a point where nobody ever needs to write a FPS engine ever again, unless they specifically think that sounds like a fun project. Or any other kind of basic game format, it’s not like the four people who made Untitled Goose Game had to do much for the initial steps of getting a goose wandering around a 3d modeled world full of objects with a decent caricature of basic physics.
There are cases where the game you want to make is so different that it's easier to write a new engine, like the 4D graphics/physics in Miegakure (https://marctenbosch.com/news/). Ignoring unique features like those, most of what game engines do is magic incantations to make graphics cards work, file format munging, and resource management. That stuff can be copied over between projects pretty easily.
OTOH FPS mechanics like running, jumping, shooting are the core of the game and get re-implemented differently each time. Game engines don't really have much interaction implemented, and the physics they provide still has a lot of bugs, so behaviors end up with custom code most of the time.
Assets are kind of hit or miss as to whether they're reusable, and usually need some tweaking if they are reused.
Overall I'd say the reason games have moved to 3D is hardware support and better gameplay mechanics, rather than code reuse. Untitled Goose Game could have been an isometric tile game made in RPG Maker or whatever, but they would have had to spend more effort on graphics and the physics glitches wouldn't be as funny.
Apparently Minecraft is still the top-selling game in the world; the code there was written completely from scratch.
I wonder if it's really software that's hard though, or that life in general is hard and software failures are just more visible.
Remember, Id is now part of ZeniMax. I don't think they actually license game engines anymore to studios outside ZeniMax. I know for a fact that they will not open-source anything beyond Id Tech 4.
The way I have thought about software's necessary complexity for a long time is by considering what software IS. Software, no matter what it does or how it is written, boils down to instructions and data that operate on transistor gates in a computer's processor and memory. We must write code which orchestrates the flipping of millions or billions of switches, millions or billions of times a second, in exact, perfect synchronicity. The machines these run on, if transistors are considered a 'part', are the most complex machines in terms of part count ever created by humanity, packing up to around a billion parts into about a square inch of space.

Also, our software isn't the only thing running. It must coexist with an unknowable number of other pieces of software flipping the gates in unknowable ways. And we do not interact with the gates directly; we do so at a level which is many layers of abstraction removed from them, like building a car the size of North America from the moon with chopsticks long enough to reach. Oh, and if we, at that level of gates, get one thing out of sync or make a single mistake, it can effectively instantly cascade into taking the entire system down.
So, yes, software is complex. And it will always remain complex. It is doing monstrously complex things under the hood, all abstractions are leaky, and we are definitely past the point at which one person can even reasonably understand the entire stack from transistors all the way up through OS, compiler, language, etc. We must accept this fact of complexity and formulate ways to deal with it, to contain it and reduce it when we can, but it will always be with us.
> and we are definitely past the point at which one person can even reasonably understand the entire stack from transistors all the way up through OS, compiler, language, etc. We must accept this fact of complexity and formulate ways to deal with it, to contain it and reduce it when we can, but it will always be with us.
Beautifully put, and we definitely are. I think the trick now is knowing which bits of the layers underneath you need to understand to interact with things well (in terms of your goal). I came up on computers in the 80's and used C and Pascal, and those early experiences have been useful for 30-odd years because I often have a 'feel' for what the computer is doing underneath that younger (and really capable!) devs lack.
Often when performance is really critical I'll go look in the source code for the tool to get a feel for what it is really doing (even though I haven't written C in a long time), which is regarded as voodoo.
On the flip side of course is I can build things in an afternoon or a day or two that simply wouldn't have been possible with months or years of work and a dev team of dozens back then.
My next side project is a tool for motorcyclists: you drop pins on a map where you are going to be at points in time, and it then pulls the complete meteorological data for those points and does some calculations (are the roads likely to be wet or icy, the direction of the wind, the wind chill at 30mph, 40mph, 50mph, etc.), with the ability to set recurring routes and email you the day before with something like "Tomorrow morning, there may be ice on the roads, wind chill will be 5C, feels-like temperature at 40mph -2C, sunny, low winter sun so wear your shades/visor"
The data to do that didn't exist 30 years ago and the GIS tooling (I'm using open street maps) to process the entire UK would have cost millions.
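The "feels like" maths is the trivial part, for what it's worth. A minimal sketch, assuming the standard North American wind chill index (which is where the -2C at 40mph in the example alert above comes from):

```python
# A minimal sketch of the "feels like" number, assuming the North American
# wind chill index (valid roughly for temps <= 10C and winds above ~5 km/h).
def wind_chill_c(temp_c: float, speed_mph: float) -> float:
    v_kmh = speed_mph * 1.609344  # the index expects wind speed in km/h
    return (13.12 + 0.6215 * temp_c
            - 11.37 * v_kmh ** 0.16
            + 0.3965 * temp_c * v_kmh ** 0.16)

# 5C ambient at 40mph comes out at roughly -2C, as in the example alert above.
print(round(wind_chill_c(5.0, 40.0)))
```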
Have you seen the 30 million line problem* by Casey Muratori (of Handmade Hero fame)? There is an argument that a lot of today's complexity in software is an (unnecessary) byproduct of changes in hardware design.
Simple software is simple. Complex software is hard.
In the choice between simple (and cheap and correct) vs complex (and expensive and buggy) customers and stakeholders with very few exceptions go for the latter.
Our software is exactly as buggy as we want it to be - a decision we make with the tradeoff of cost and complexity.
> In the choice between simple (and cheap and correct) vs complex (and expensive and buggy)
Often simple is not correct. In much of the code I have worked on there is simple code for 99% of cases, but correctly handling the other 1% can be much more complex.
Also, simple is often not fast. An implementation of bubble, insertion, or selection sort is almost always simpler than quicksort, mergesort or especially heapsort.
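To make the simple-vs-fast point concrete, here's a sketch of the "simple" end of that spectrum:

```python
def insertion_sort(xs):
    # Simple and easy to verify by eye, but quadratic: fine for small or
    # nearly-sorted inputs, hopeless at scale compared to heapsort/mergesort.
    for i in range(1, len(xs)):
        j = i
        while j > 0 and xs[j - 1] > xs[j]:
            xs[j - 1], xs[j] = xs[j], xs[j - 1]
            j -= 1
    return xs
```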
Web browsers are a really good example that some software needs to be complex to do its job. Web pages can do pretty much anything so web browsers have to support that which requires a bunch of code off the bat.
Not having correct behavior in the 1% of cases is unacceptable because that introduces security problems. And not speeding up webpages as much as possible will make web browsing very unpleasant.
I think if we got a fresh start, we could redefine what a browser needs to do and make it simple and faster, but supporting the web as it exists today requires complexity.
> Web browsers are a really good example that some software needs to be complex to do its job. Web pages can do pretty much anything so web browsers have to support that which requires a bunch of code off the bat.
The point is that web pages don't inherently need to do "pretty much anything." The web could have been simple and browsers could be simple. Stakeholders decided that no, we want more and more and more and even more features.
And when you say simple is not correct, it is often because someone wants something complicated instead of acknowledging that simple, in fact, does all they really need.
I think op is referring to something you’re ignoring. Simple algorithms can often become quite gnarly if you can’t ignore some base assumption. I’ve seen some really elegant mathematics represented in a single equation turn into thousands of lines of code simply because reality means we can run out of stack space.
Similarly, I can tell you that when dealing with ill-formatted feed data, parsing any given value can be trivial, until you find some random example of data that abuses some separator. Then suddenly you need to do things like try separating on X and see if you get data that looks right, else separate on Y. Oh, and include some data (if it exists) from a prior separation based on Z. I feel bad for anyone who has to modify my code. Hopefully they won’t ignore the unit tests that include all the gnarly cases...
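If it helps to picture it, the shape of that fallback logic is roughly this (the separator list and the "looks right" check are made up for illustration):

```python
def parse_record(line, expected_fields=4):
    # Try the separators we've seen in the wild, strictest first, and accept
    # the first split that "looks right"; here that's just a field-count check.
    for sep in ("|", ";", ","):
        parts = line.split(sep)
        if len(parts) == expected_fields:
            return [p.strip() for p in parts]
    raise ValueError("could not parse: %r" % line)
```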
> Similarly, I can tell you that when dealing with ill-formatted feed data, parsing any given value can be trivial, until you find some random example of data that abuses some separator. Then suddenly you need to do things like try separating on X and see if you get data that looks right, else separate on Y.
Yes, if you accept complexity, this is where you end up. The other alternative is to reject complexity. Stop accepting and trying to make sense of broken data.
At this point, we get back to my point because you'll say that someone (a stakeholder) demands that the program works with existing legacy/broken/misguided systems/users. Sometimes that is genuinely the only reasonable option, but all too often I see people introducing more and more features and complexity instead of figuring out whether it's really necessary or whether the intended end result can be achieved with less.
> And when you say simple is not correct, it is often because someone wants something complicated instead of acknowledging that simple, in fact, does all they really need.
I think you can see evidence against your point and for the GP's point if you look for "Falsehoods programmers believe about X" articles. Whenever software interacts with the real world, corner cases abound.
Just as often, the problem is that the programmer had introduced assumptions (and complexity and problems) where none were required. Did they do it just for the heck of it? Or did a stakeholder ask them to add feature X because it "would be nice to have" (maybe they just think it would be nice to have, without realizing that it is unnecessary or that they could achieve what they want in a different manner using a different, less complex feature Y)?
I'm well aware of those corner cases. And, at work, I'm dealing with... well, not corner cases, but "real world" stuff right now, related to dates and timezones. None of what I'm doing right now would be necessary if people had the guts to decide that internally and in logs, everything is always going to be in a single format such as Unix time. Unfortunately, people made bad decisions and there are stakeholders, so I'm adding complexity to the software.
If it were my software, I would outright reject this complexity. It is not needed for the software to do what its core purpose is.
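The "single internal format" approach really is small. A minimal sketch, assuming Unix time in UTC everywhere except the display edge:

```python
import time
from datetime import datetime, timezone

event_ts = int(time.time())  # what goes into the database and the logs
# Local rendering is purely a presentation concern, done at the last moment:
local_view = datetime.fromtimestamp(event_ts, tz=timezone.utc).astimezone()
print(event_ts, local_view.isoformat())
```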
People like to argue that the real world is absolute and software is strictly inferior if it cannot deal with all the complex cases people in the real world attempt to shove into the software world.
My argument is that a good engineer can look at a (seemingly) gnarly real-world issue and find a way to make it simple. Not all complexity is inherent.
Using OpenBSD after Linux is quite illuminating in this regard. At points it might seem like it's lacking features, but on the other hand you find lots of cases where it achieves exactly what you could achieve on Linux in a simpler manner and with fewer features & less complexity because they took a simpler approach to it, and in doing so, made a bunch of features a Linux user would look for simply unnecessary.
> The point is that web pages don't inherently need to do "pretty much anything." The web could have been simple and browsers could be simple. Stakeholders decided that no, we want more and more and more and even more features.
I did kind of address that in my final sentence. But there is a fair amount of inherent complexity in what I would want from a replacement for the web.
The web interface for GitHub should still be possible. Doing that would require a graphical layout engine (e.g. something like CSS), some way to manage authentication, and some way of submitting user data from a form. None of those require a Turing-complete language, so I would perhaps be in favor of not having a JS equivalent, which would dramatically simplify the task of a browser, but there is still a lot of complexity there.
> And when you say simple is not correct, it is often because someone wants something complicated instead of acknowledging that simple, in fact, does all they really need.
The easiest case for saying that simple is not enough is encryption. I am not aware of any simple encryption algorithm. If you want to achieve communication that others cannot eavesdrop on, simple is not enough.
Numerical stability in all sorts of applications is important and it is hard to achieve. Projecting the real number line into finite bits is hard to do in a consistent way.
You didn't contest my simple vs fast claim, but a good example of that is pathfinding algorithms. Dijkstra's algorithm is simple (relatively), but for many video games, a more advanced algorithm like hierarchical A* search or jump point search is needed.
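For scale, here's a bare-bones Dijkstra sketch (the graph shape {node: [(neighbor, cost), ...]} is my assumption, not any particular engine's API); A*, hierarchical, and jump-point variants all bolt more machinery onto this loop:

```python
import heapq

def dijkstra(graph, start, goal):
    # graph: {node: [(neighbor, cost), ...]}; returns the cheapest cost or None.
    dist = {start: 0}
    heap = [(0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == goal:
            return d
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry from an earlier, worse path
        for nxt, cost in graph.get(node, []):
            nd = d + cost
            if nd < dist.get(nxt, float("inf")):
                dist[nxt] = nd
                heapq.heappush(heap, (nd, nxt))
    return None  # goal not reachable from start
```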
> The easiest case to say the simple is not enough is with encryption. I am not aware of any simple encryption algorithm.
I disagree very much. Most crypto is very simple to use as well as to implement. From DES to ChaCha20, you can fit an implementation on a business card or two.
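For a sense of that scale, the core of ChaCha20 is the quarter-round below; the full cipher adds the 4x4 state setup, 20 rounds, and the keystream XOR, and still fits in well under a hundred lines. (A sketch to illustrate the size, not a vetted implementation; use a real library for anything that matters.)

```python
MASK32 = 0xFFFFFFFF

def rotl32(x, n):
    # 32-bit left rotation
    return ((x << n) | (x >> (32 - n))) & MASK32

def quarter_round(a, b, c, d):
    # The ChaCha quarter-round: add, xor, rotate, four times over.
    a = (a + b) & MASK32; d = rotl32(d ^ a, 16)
    c = (c + d) & MASK32; b = rotl32(b ^ c, 12)
    a = (a + b) & MASK32; d = rotl32(d ^ a, 8)
    c = (c + d) & MASK32; b = rotl32(b ^ c, 7)
    return a, b, c, d
```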
> Numerical stability in all sorts of applications is important and it is hard to achieve.
Counterpoint: I hardly ever need to worry about numerical stability, and most of the time I can just throw more bits at it.
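Concretely, "throw more bits at it" often looks like this toy example of precision loss (nothing application-specific):

```python
import numpy as np

# In float32, a tiny update to a value near 1.0 vanishes entirely
# (machine epsilon is about 1.2e-7); in float64 it survives.
print(np.float32(1.0) + np.float32(1e-8) == np.float32(1.0))  # True: lost
print(np.float64(1.0) + np.float64(1e-8) == np.float64(1.0))  # False: kept
```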
> You didn't contest my simple vs fast claim, but a good example of that is pathfinding algorithms. Dijkstra's algorithm is simple (relatively), but for many video games, a more advanced algorithm like hierarchical A* search or jump point search is needed.
I wish the problem of complex software were just algorithms. Because most algorithms are easy to abstract in a box with loose coupling, and even the more complex algorithms usually boil down to some dozens of lines of code.
No, the millions of lines of code in big bloated applications are not made up of that.
> But everyone on the Chandler team has a different vision of what that means, and in the absence of pressure to ship, Chandler becomes the union of everyone's ideas. Instead of borrowing from successful similar products like Outlook/Exchange, the Chandler team is determined to invent something entirely new from first principles. They want to support user plug-ins and scripting, built-in encryption, storage of messages in multiple folders and infinitely customizable user views. As the project goes on, the simple replacement for Outlook/Exchange grows more and more complicated.
> No decision is ever final. Time and again, the Chandler team hashes out compromises on complex issues, only to hit reset when someone new joins the project with new ideas or when it turns out that someone wasn't really satisfied with the compromise.
God, this whole process happens within my own head on most of my projects, especially games. It's infuriating.
There is a Dunning-Kruger-esque quote on painting, that applies maybe even better (*) to software:
"Painting is easy when you don't know how, but very difficult when you do." (E. Degas)
(or for a more literal translation: "Painting, that's very easy when you don't know how to do. When you do know, it's very difficult.")
(*) Although Ecce Homo by E. G. Martinez is a good image of what can happen to the software you're working on when management adds someone to help you out ;)
I think about this sometimes. When I was 15, and just learning to program, I thought I could code anything. And if you look at history, I was perhaps right.
Now, as a seasoned professional software developer, all I see is pitfalls everywhere...
How timely. I was reading the Unicode Standard the other day, and realized how this seemingly simple project of "modelling a character with a number" has become a huge, sprawling behemoth over the years because they tried to cover every aspect of human language; I'm sure it's going to be even more convoluted and politicized in future, and it will never be complete.
And this brings me to the two sides of the argument: should software capture every aspect of human life with all of its complications, or should humans change their lifestyle to fit a software framework? This is probably a never-ending discussion, and I'm keen to listen to both sides.
What lifestyle change do you suggest? That people in China (as one example of many) switch to ASCII and stop using Chinese characters when they use computers?
Unicode exists because people all over the world want to represent their written language on computers, and that seems like a reasonable thing to want. Software needs to serve reasonable human needs such as writing systems.
Unicode is terrible. They can use a Chinese character coding when writing in Chinese on the computer (perhaps Cangjie encoding), and use ASCII when dealing with computer code, which is based on ASCII.
Yes, although Unicode is still a mess. (One thing Unicode is good for though (if you removed the emoji and a lot of other stuff) is searching documents in multiple languages all from one interface. However, someone on IRC mentioned Duocode, and I think that would be even better for such purpose. But for other purposes it can be variously less good.)
Some people would argue that emojis don't belong in Unicode because they evolve too quickly and new ones need to be added too often, and the Unicode standard simply won't be able to keep up. As such it would hinder the expression of users, and it would stifle the evolution of online expression.
Unicode (ISO/IEC 10646) is without doubt a hugely successful project. But like every huge project, it keeps nurturing itself ad infinitum, and is becoming absurd in the process (for example, Unicode has also captured emojis). As Unicode covers more and more symbols, it becomes less and less useful for its original purpose of communicating the character set encoding standard used in a document. E.g. one basic piece of info about a document is whether it can be rendered with a given font, but Unicode (or character encoding information such as "utf-8") totally fails this basic use case, and one has to build small ad-hoc subsets within Unicode for this purpose instead.
It happens anyway. Humans are more flexible and adjusting than software I guess.
For example: where I am from, people use the Hindi language to communicate but write Hindi using the Latin alphabet rather than the native Hindi script, especially when typing on mobile phones. This is a result of Hindi letters being inherently complex to write and the lack of a proper software ecosystem (keyboard autocomplete, swiping, etc.)
Kazakhstan decided last year to convert from Cyrillic script to Latin, with software interoperability being one reason for moving, so I guess in practice yes, there really are those people - although there's always a bit of nuance in that kind of decision.
This isn’t exactly what you’re talking about, but I’ve read about large companies engaging in tortuous reorgs just to conform to the constraints of their shiny new SAP implementations. I wonder how much truth there is to those stories.
Yes. It will happen whether it’s desirable or not. For example, I think it’s inevitable that the US will use an ISO date format eventually, and further down the line I think many Asian scripts will be replaced with Western ones, etc.
That is a stupendously dangerous question to ask, and that the answer is not crystal clear from the outset is terrifying. When humanity and computers collide, the computer is always in the wrong. Every single time, forever. At no point is it acceptable to ask that any human being alter their culture, lifestyle, or thinking because it fits the boxes the developers were asked to put in place. Doing so would make stupid bureaucratic policy mistakes into cultural catastrophes. I legitimately fear two agents of oppression: 'Sorry, it's policy' used to excuse cruelty, and 'the software doesn't have a place for me to put that.'
And I reduced it. The author had no say in the matter. He could have written in italics if he knew how, or ALL CAPS, and there would be little I could do about it. He could have worn a sandwich board and paced up and down a thoroughfare, but I would not have seen him.
People adapt their behavior to their communication media instinctively and continually, and are just as routinely and deliberately controlled by them.