Don’t starve, Diablo – Parallax 7 (2016) (simonschreibt.de)
323 points by homarp on Aug 21, 2021 | 55 comments



2D games had so many tricks that they often looked better than their 3D replacements for quite some time. If you could do every pixel by hand for a few fixed views, you often ended up with more detail than you could fit into the small number of polygons available early on.


Games in general suffer from the fact that more technology does not necessarily lead to better visual - or narrative - outcomes. There was a period when every game engine producer was very excited about their engine's ability to render hair, and I don't know of many games where the quality of the hair rendering has actually mattered to the graphics. "More realism" runs up against the hard truth that these games are projected onto a 2D screen and can't be realistic. And even if they were, there still needs to be a justification for why that is better - it isn't self-evident that more realistic graphics is a better outcome. The real world is pretty boring; that is why people are playing games.


I'm not a professional artist or marketer, but I suspect the main reason we see the pursuit of realism in games is that it's easy to communicate and coordinate, so it reduces uncertainty across the board. Developing a style is hard and requires a lot of coordination across different teams to understand and internalize what approximations are being made where. At scale with hundreds of artists, it becomes very difficult. If, instead, the approximations are enforced by the tech and the goal the artists need to consider is just to create something realistic, then that's easy to get started with without needing a concept artist to dictate everything. There's probably a crossover point of team size where realism has lower risk and lower cost. In practice, I think what we see is that the more creatively-minded large studios shoot for 90% of assets being "realistic" with some strong stylistic choices for the last 10% that really defines the look.

On the marketing side, there's something to be said for knowing that your visual style will be uncontroversial for the most critical time period for getting sales (launch). Looking horrible 10 years from now doesn't really matter if you're trying to recoup your investment today. It has slowed down now because a lot of the low-hanging fruit has been picked, but a decade or so ago, graphical features were one more reason marketing could point to for someone to buy a game. Some people really do buy games because they look pretty. Even today, there are games notable only for being benchmarks.


Reminds me of the outcry when Wind Waker was announced to use cel shading. Nowadays people love it, but back in the day it was reviled by a lot of people before it eventually won them over.


I think this is true beyond graphics as well. I remember when games started shipping with fully voice-acted dialogue, it felt like all the interesting interactive text based systems (however clunky) were immediately abandoned. This hurt immersion for me far more than text vs speech ever did. I'm excited by the progress in dynamically generated voices in games happening now, although in many cases that seems more like having better placeholders in your asset pipeline than using them as production-level dialogue.


I'd posit that there's a general principle underpinning this that is true for most things (not games, or art, but everything).

This is a bit half-formed in my head, but: Computer tech makes certain important classes of things cheaper and more convenient to create. Sometimes it makes those things feasible in the first place. But it's always via enabling of cheap duplication of an existing process.

As tech evolves in power, it can do more. The high-speed duplicative properties of computers are the thing that makes them useful. For them to be generally useful [and critically, financially useful], crafted, stylised, personal aspects are stripped out. Much of what results doesn't work as well, look as good, or feel as interesting as things from the previous evolution that were built or provided or staffed by knowledgeable, skilled practitioners. But given that skilled practitioners are finite, on average what comes out is better in some useful aspects than most of the efforts of the previous evolution. And on average it's easier to produce/manage/handle at scale. So pick anything: Amazon, Uber, Google search, medical records, 3D games, etc etc, ad infinitum. Almost anything you can use computer tech for.

What's being talked about in the comments: it's not generalisable, so it gets chucked. It's too hard to do, so sand off the sharp edges, cut anything that can't be reduced to an algorithm, and package the inferior but much more convenient version that can scale.

(I assume there is some body of thought with literature relating to this process that I should dig into, but I think people like Walter Benjamin and Milan Kundera have probably described it much better than I can w/r/t art [or maybe Ruskin or Morris, though their ideas are all maybe a bit idealistic] - this destruction of the personal in the service of the machine.)


Agreed. Looking at https://en.m.wikipedia.org/wiki/List_of_best-selling_video_g... only 2 of the top 10 (GTA V and PUBG) have what I would call a realistic aesthetic. Even looking at the entire list, games with a more stylized look are far more prevalent.


Hair I'm not sure about, but I think the sort of "dynamic" trees and foliage that move with wind and rain should add more realism. I hear The Witcher 3 had some kind of system that did that; I'm not sure about other games that do it.
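
For illustration, a hand-wavy sketch of the usual vertex-wind trick (not necessarily The Witcher 3's actual system; all names and constants here are made up): offset each vertex by a sine wave whose phase depends on position, weighted so the tips of the foliage move more than the base.

    import math

    def sway(x, y, height_fraction, t, strength=0.3):
        # desynchronise neighbouring plants by giving each position its own phase
        phase = x * 0.1 + y * 0.07
        # tips (height_fraction near 1) move a lot, the base (near 0) stays planted
        offset = math.sin(t * 2.0 + phase) * strength * height_fraction ** 2
        return x + offset, y

    print(sway(10.0, 5.0, height_fraction=1.0, t=1.0))  # tip of a blade of grass
    print(sway(10.0, 5.0, height_fraction=0.0, t=1.0))  # base doesn't move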


It was a little over the top. But yes it was nice.

https://www.youtube.com/watch?v=LzSS5oxKjUA


A great example of this today is real-time raytracing. The results often look hideous.


Hand-painted/drawn backgrounds were unfairly good back in the day. Something like Planescape: Torment holds up better than its age would suggest because there's really very little to render in real time. Conversely, something comparable like SimGolf looks much worse despite coming out later because it couldn't leverage anything pre-rendered.


Which SimGolf?


Playing through Final Fantasy VII as a kid (albeit emulated) was a great introduction to that dichotomy. The game mixes both 2D and 3D art styles to varying degrees of success, but there's definitely a reason why the game's art direction is so iconic. Looking back on it, all I see is a watercolor, hand-painted haze (as opposed to games from the system that haven't quite aged as well, like Metal Gear Solid).


I actually think of FF VII as one of the PS1 games that has aged the most, far more so to my eye than MGS. The 3D battle models are fantastic, but the overworld models are, like, laughably bad. It wasn't a hardware limitation either, because FF VIII and IX both fixed this with character models that actually look like the characters, but VII looks like polygons were in short supply. They just didn't know how to fully utilize the system yet.


How much things have aged is all subjective, but I think FF7 is charming with its absurd hair spikes and fingerless cube hands. Other games that tried harder to look realistic don't feel like that.


I think this was an art choice rather than technical inexperience.

The proportions of the field models are more similar to the chibi sprites of the SNES games, so we're seeing the gradual evolution of FF's style, with VII being the first transition from 2D to 3D.

That VIII and IX used more realistic proportions is just a later stage of that stylistic transition.

Personally, I find them quite charming. There’s a fair bit of slapstick in the game that IMO works better with the low poly models.


Which is interesting, because the games that combined 2D and 3D (common on consoles) can now actually look much BETTER on the 3D side (emulators often allow upscaling etc.), while the 2D paintings remain the same.


One of the worst examples of a 2D-to-3D transition is from Europa Universalis 2 to 3. The 3D map got all messy and hard to read, and it lagged a lot even on fast computers.

I tried the D2 open beta and the monsters surely stood out way less. For the 3D to be as clear, each monster would have to have a thick 2D border.


On the other hand, the main thing I notice in the first Diablo 2 shots (having had my attention drawn to the rendering -- I'm not sure what the article means by "pay attention, how the pales (?) don't cover the same floor-pixels all the time") is how the torch appears to be casting shadows perpendicular to itself. And how as you walk by the torch, the direction of your shadow doesn't change at all.


By pales they mean the tall spikes (impalers). The spikes visually appear to tilt as you move left/right, seemingly revealing different texture underneath during the transition. Whereas the standard setting version is static, fixed texture, no tilt, no visual shift in the texture beneath/behind the impaler spikes.
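
As a loose illustration of that effect (not the article's actual math, just the idea; the constants are invented): shift the top of a tall object sideways in proportion to its height and its distance from the screen centre, so it appears to lean as the camera scrolls past.

    def top_offset(base_x, height, camera_x, screen_center_x=320, strength=0.15):
        # horizontal position of the spike's base on screen
        screen_x = base_x - camera_x
        # the taller the object and the further from centre, the bigger the lean
        return (screen_x - screen_center_x) * height * strength

    # The same spike seen from three camera positions: its top leans away from
    # the screen centre, uncovering different background pixels behind it.
    for cam in (0, 50, 100):
        print(cam, top_offset(base_x=400, height=3.0, camera_x=cam))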


I will also add: look for the effect as the background moves behind the tops of the spikes (the bottoms are connected to the ground, correctly!).


"Fake perspective" was problematic in early flight simulators as well where texture-mapping was used. The classic example being a long runway.

If the runway was a long polygon and you had a long texture map (with a dashed centerline, for example), the standard code to map the bitmap to the projected runway polygon would not in fact apply perspective to the bitmap — the intervals between the dashes would be fixed rather than foreshortened by perspective.

The solution was to break up the runway into a whole bunch of shorter polygons — each with a bitmap of perhaps one stripe/dash.
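
A minimal sketch of the difference (toy numbers of my own, not flight-sim code): affine interpolation spaces the dashes evenly along the projected runway, while perspective-correct interpolation divides by depth so the far dashes bunch up toward the horizon as they should.

    def affine_v(t, v0, v1):
        # t in [0, 1] along the runway as projected on screen
        return v0 + t * (v1 - v0)

    def perspective_v(t, v0, v1, z0, z1):
        # interpolate v/z and 1/z linearly in screen space, then recover v
        inv_z = (1 - t) / z0 + t / z1
        v_over_z = (1 - t) * (v0 / z0) + t * (v1 / z1)
        return v_over_z / inv_z

    # v runs 0..100 along the runway; the near end is at depth 1, the far end at 50
    for t in (0.0, 0.25, 0.5, 0.75, 1.0):
        print(t, affine_v(t, 0, 100), round(perspective_v(t, 0, 100, 1.0, 50.0), 1))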


The PlayStation didn't have perspective-correct texture mapping despite having a real 3D rendering pipeline: https://www.youtube.com/watch?v=x8TO-nrUtSI


A lot of this has to do with the fact that many Japanese manufacturers of the era basically got to 3D hardware by improving 2D sprite chips so they could scale and do affine transforms. This is why the PlayStation couldn't do perspective-correct or filtered textures and lacked a Z-buffer, and why the Saturn drew with quads instead of tris.

Nintendo partnered with the most advanced Western 3D hardware manufacturer -- SGI. The N64 was kind of a cut down Indigo.


For that reason some games (on PS1) would subdivide polygons on the fly, and have special code to avoid T-junctions (extending by a pixel so that there is no gap). There were also no sub-pixel coordinates, hence the "shimmering". Still, some games brought quite impressive graphics for what the chip offered.
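
A rough sketch of that subdivision idea (the threshold and quad representation are mine, not any particular game's): split a screen-space quad recursively until each piece is small enough that the affine warp stops being visible.

    def subdivide(quad, max_size=32):
        """quad: four (x, y, u, v) corners in order; yields small sub-quads."""
        a, b, c, d = quad
        size = max(abs(b[0] - a[0]), abs(d[1] - a[1]), abs(c[0] - d[0]), abs(c[1] - b[1]))
        if size <= max_size:
            yield quad
            return
        mid = lambda p, q: tuple((i + j) / 2 for i, j in zip(p, q))
        ab, bc, cd, da = mid(a, b), mid(b, c), mid(c, d), mid(d, a)
        center = mid(ab, cd)
        yield from subdivide((a, ab, center, da), max_size)
        yield from subdivide((ab, b, bc, center), max_size)
        yield from subdivide((center, bc, c, cd), max_size)
        yield from subdivide((da, center, cd, d), max_size)

    # One big 256x256 quad (with texture coords) becomes 64 small 32x32 pieces.
    big = ((0, 0, 0.0, 0.0), (256, 0, 1.0, 0.0), (256, 256, 1.0, 1.0), (0, 256, 0.0, 1.0))
    print(len(list(subdivide(big))))  # 64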


Another difference was using small 4-bit textures, each with its own palette, applied to different parts of the character body, instead of one huge skin (like Quake did). Not only did this allow for better sampling - e.g. no blurriness around the shoulders - but also more optimal reuse: most of your hands or legs often share the same texture, so two (or more) different 4-bit textures might serve better than one huge 8-bit one (and the palette for each would be smaller too).
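
A back-of-the-envelope version of that memory argument (generic numbers, not from any particular game; PS1 palette entries are 16 bits each):

    one_big_8bit = 256 * 256 * 1 + 256 * 2        # 256x256 at 8 bpp + 256-entry CLUT
    two_small_4bit = 2 * (64 * 64 // 2 + 16 * 2)  # two 64x64 at 4 bpp + 16-entry CLUTs
    print(one_big_8bit, two_small_4bit)           # 66048 vs 4160 bytes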

But then there were also tricks of reusing textures (mapping) directly from what was drawn on screen (for blurring, etc).


Yes, which is why games that really took advantage of the hardware like Crash Bandicoot just used tons of small triangles.


To be fair, the best N64 games did that too, though for different reasons (pitiful amount of RAM for storing textures).


Nowadays, I make the distinction between maintainable programs and hackable ones, the latter of course being, IMO, way better. What I mean by that is essentially that the ratio of by-design features to simplicity of code is considerably higher for the hackable program than for the maintainable one (but there is no method for writing a hackable program; you either have the sensibility for it or you don't, from my own observation - I'll be happy to discuss the nuances of that statement).

I don't know about today, but Blizzard used to write its games in the hackable way - which is both a feat and rare. Take WoW for example: do you know how the 3D engine basically works? It's so simple and so powerful, you are not going to believe it. WoW is made of levels that are connected by portals. That's it. A level will have portals that get you to other levels, which in turn have portals which lead to other levels. For example, when you are "outdoor" (with quotes, because the engine doesn't actually know outdoor; it knows levels!), you may find a portal that leads you into a tavern. Once you are in the tavern, the program can ignore the rest of the world and focus the whole computer's resources on the tavern, just like that, by design. So you'll have taverns as detailed as anything you can see in the exteriors. Better yet, you can have landscape in each and every level. (Note that when the player is in the air, say on a gryphon, some portals are ignored.)

This is both incredibly simple and powerful. Remember the game ran on early-2000s PCs. Imagine what you can do with that. Caves? Done. Houses that lead into catacombs? Done. Incredibly detailed cities? Add walls and portals, done. Texture streaming? Well, d-o-n-e. Buildings that lead to indoor landscapes? Done.
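
A hypothetical, heavily simplified sketch of that portal idea (names and structure are mine, not Blizzard's actual code): each level knows only its own contents and its portals, and rendering follows only the portals the camera can currently see.

    class Level:
        def __init__(self, name):
            self.name = name
            self.portals = []          # (opening, target_level) pairs

        def connect(self, opening, other):
            self.portals.append((opening, other))

    def render(level, view, visited=None):
        """view: whatever "can the camera see this opening?" test you use (a frustum)."""
        if visited is None:
            visited = set()
        if level.name in visited:
            return
        visited.add(level.name)
        print(f"drawing {level.name}")          # stand-in for real drawing
        for opening, target in level.portals:
            if view(opening):                   # portal opening is on screen?
                render(target, view, visited)   # recurse into the next level

    # Tiny usage example: outdoors connects to a tavern through one doorway.
    outdoors, tavern = Level("outdoors"), Level("tavern")
    outdoors.connect("tavern_door", tavern)
    tavern.connect("tavern_door", outdoors)
    render(tavern, view=lambda opening: False)  # door not visible: only the tavern is drawn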

I wouldn't make a game engine differently today.

As a comparison, I'm working on one of the most played games in the world at the moment, which is a maintainable program. You have no idea how much of a pain in the ass it is, in comparison, just to add simple features. Oh yeah, the code is readable. You can fix bugs. But features? Just forget it; it takes a shitload of money to get one done.


If I may ask, I would love to hear your perspective about the fuzzy, squishy reasons why hackable turns into maintainable from a management perspective.

I think I understand what you mean about the two. I know embarrassingly little about electronics despite being interested in it for a long time, and have boggled at how much a circuit schematic can differ from a PCB layout. On a schematic you can just go "and here are 48 data lines and we'll just draw some dots to illustrate that they yeet all the way over there", but on a PCB you not only have to route them (possibly across multiple PCB layers) but maybe some of the lines have to have squiggly messes in them so they're all exactly the same length and the electrical impulses arrive at the correct nanosecond. The good electronic engineers are the ones who've learned to fluently translate between schematic and PCB layout in their heads, in exactly the same way someone might fluently translate between languages to the point that they forget which language they heard something in; their memory just encodes the meaning directly because the neural routing is that deep/strong/integrated.

A good programmer has learned to understand/accept/integrate(/resign themselves to) the fact that the "focal point" of almost every program is in the programmer's head, not on the screen, and that whatever's onscreen is instead ultimately just a giant pile of little pinball-paddle nudges that hopefully prompt the reader to go "OH" and have the program structure "pop" into focus. Once you have that, you're good to go: that distinct mental model is detached from this class structure or that naming convention or whatever pattern, and thus not bound by the mnemonic/interpretative/structural limitations of the proverbial puzzle pieces that constitute the methodology; it's the superset of all of that and all the machinery that hasn't been set into stone yet.

It is so, so weird how programming slams right up against the edge of design/engineering/bigger-picture-greater-than-the-sum-of-the-parts vs Management™.


I remember the D2 perspective mode was a neat trick, but it made the game kind of fuzzy and broke core graphics features - for example, my character wouldn't turn green when poisoned.


The green was a palette trick. The Direct3D mode used high-color mode, so that trick wasn’t free. It did draw a green light halo around the character’s feet instead, though.
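
For illustration, the palette trick in miniature (made-up colours, not D2's actual palettes): in an indexed-colour mode you can tint a whole sprite by swapping the palette it looks up, without touching a single pixel.

    def poison_palette(palette):
        """Given (r, g, b) entries, return a green-shifted copy of the palette."""
        return [(r // 2, min(255, int(g * 1.5)), b // 2) for r, g, b in palette]

    base = [(200, 150, 120), (90, 60, 40), (255, 255, 255)]   # skin, leather, teeth
    print(poison_palette(base))
    # The sprite's pixels still store the same palette indices; only the colours
    # they resolve to have changed. In a high/true-colour mode there is no palette
    # to swap, hence the green halo workaround mentioned above.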


It did. But it was very subtle.


Sometimes I wonder how much green/blue is/was lost in media due to greenscreen/bluescreen. Does anyone have any idea about this?


You can key on any color; green is the most common mostly because it's pretty far away from the range of human skin/hair.

You also only need to worry about this when you're doing effects shots. If you're doing relatively naturalistic filmmaking where your set/location isn't gonna be replaced, there's no need for any of that.
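
The core of keying, sketched naively (real keyers work in better colour spaces and soften edges; the threshold here is arbitrary): a pixel close enough to the key colour becomes transparent.

    def key_alpha(pixel, key=(0, 255, 0), threshold=120):
        # Euclidean distance in RGB from the key colour
        dist = sum((a - b) ** 2 for a, b in zip(pixel, key)) ** 0.5
        return 0.0 if dist < threshold else 1.0

    print(key_alpha((10, 240, 20)))    # 0.0 -> replaced by the background plate
    print(key_alpha((200, 180, 160)))  # 1.0 -> kept (a skin tone, far from green)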


"Moon Patrol" seemed like such an entertaining game at the time. But it grew stale very quickly. The parallax effect was really popular in home consoles like NES games just a few years later.

This article is all about rendering, but gameplay is what makes "Don't Starve" stand out, IMO.
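
For what it's worth, the parallax trick itself is tiny (layer names and factors below are invented): each background layer scrolls at a fraction of the camera speed, so the slower layers read as farther away.

    layers = [("mountains", 0.2), ("hills", 0.5), ("ground", 1.0)]

    def layer_offsets(camera_x):
        # distant layers (small factor) barely move; the ground tracks the camera
        return {name: camera_x * factor for name, factor in layers}

    print(layer_offsets(100))  # {'mountains': 20.0, 'hills': 50.0, 'ground': 100.0}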


I really love the idea of Don’t Starve, but I felt like I had to read a guide to get better at it, so I lost interest. I adore the artwork, especially.

At the same time, the main game loop of Don’t Starve seems like you have to memorize a bunch of survival tips and constantly read the wiki to survive, or you’re doomed after the first week for sure, and probably before.


The problem with that game is that the rules of the game completely defy common sense.

99% of first-time players who make it to winter will die from freezing, because the first thing they make to fend off the cold is insulating clothing. The game doesn't work that way. No, instead you must use a very specific item that only works in this magic video game world, called a thermal stone. The stone is hot? You don't freeze. How stupid is that?

The entire game is like that. Instead of having a continuous learning curve the game has a discontinuous learning curve. I.e. most items in the game feel like they shouldn't even be in the game. If there was a beginner mode that simply restricted access to all the useless items the game would be significantly easier.

Once you get into summer you run into the heat problem. Either your stuff is burning or plants aren't growing. The solution? An Ice Flingomatic. It doesn't even need ice to run... If you played the game with a cheat sheet it would be pathetically easy, because 90% of the game is intentionally meant to be a red herring that wastes your time.

I say this as a player who survived 300+ days at which point the risk of death was 0%.


> At the same time, the main game loop of Don’t Starve seems like you have to memorize a bunch of survival tips and constantly read the wiki to survive, or you’re doomed after the first week for sure, and probably before.

In truth, I gave up after failing to thrive through four or five attempts. In other games, I hate the nagging tutorials and compulsory/free savepoints. But it seems like DS could've benefitted from some compromises here.

So, I 'cheated'. After telling a friend who was excited about the game that I'd given up, they told me about the cues the game was giving me and how to respond to them. The dynamics of the game are somewhat open world (not quite Dwarf Fortress level, but with lots of great flexibility in interactions/systems). Getting past some initial hiccups and knowing what the threats are (beyond starvation) and their potential solutions means you can have a lot more fun. And it's a drag when you die, but going from a clean start to a workable place is easily doable once you know the basics.

"Don't Starve Together" is a much easier way to be introduced to DS, especially when entering after the team has made some basic progress.


Don't Starve lets you tweak many of the details of the world, so you can remove threats you find too hard (or unfun) and make a "safe" game. Conversely, this means you lose some resources, for example enemy drops or things that result from weather phenomena. In turn that means some items can't be crafted (in Don't Starve: Shipwrecked a rain of meteors comes in the dry season, and some hatch into Dragoons that you can harvest Dragoon Hearts from to make items in the Volcanic tab that you can't make otherwise). But on the plus side it lets you create a relatively peaceful game that can be enjoyed for a long time (especially in a large world) and that gives you the chance to learn the basics without having to constantly struggle against sudden calamity. In a sense, you can make an Easy mode that you can play forever, until you get bored and want some challenge.

Personally, that's how I always play (except to unlock characters, which requires surviving a certain number of days in the standard game). I just like to build stuff and survive, and the constant attacks (e.g. by Hounds) end up annoying me after a while. I appreciate that the game's designers had a vision for the world, but I appreciate it even more that they gave the player the tools to craft his or her own vision in turn. As the menu where you choose your own simulation options says: "Your world, your rules". Very often I've played games where the designers seem hell-bent on imposing their own, personal idea of fun on the player and it pisses me off (my worst example is in Underground Sea, where apparently the designer thought it's great fun watching a slow-moving boat treading water without anything happening for real-world minutes on end).

Don't Starve still takes a lot of wiki reading, though. I think it appeals more to old-school strategy or city-building game players, who would have expected to read a manual and play a tutorial campaign before even starting (like Caesar III, for example).


I genuinely recommend Don't Starve Together as a remote co-op game.

I played it with my son and his friend. In the beginning we got stuck pretty quickly, but the 12-13 year old guys had all the patience to watch the videos and figure it out.

The confused faces of the moms when hearing the loud (non-American-friendly) curses and shouts from two different corners of the house: "We need more meatballs, Deerclops is coming in two days" or "oh no, not a treeguard again". It will be an important memory for me of my oldest son's teens. Hopefully for him too.


I never figured out those cues. I also gave up after a while because the only way to progress was to cheat.

I think if I re-played it today I'd check one of the many gameplay videos on youtube to understand how to get started and then take it from there.


Ah yes, such great tricks to fool the eye. Like programming tricks to fit into small memory footprints, I suspect the "how" of these things will get lost over time.

Contemporary with these game developments was an excellent series called "Graphics Gems" which highlighted different ways to achieve various cool effects. Back in those days I felt a real sense of accomplishment when I got something to both work and "look right" (your eye is a very harsh critic!), and it's a bit sad for me that you now just call the right Unity or Vulcan API and voila, the effect you were looking for appears in your scene.


> it's a bit sad for me that you now just call the right Unity or Vulcan API and voila, the effect you were looking for appears in your scene.

You are free to not call those APIs. You could constrain yourself. I did this and I am having way more fun than if I tried to use the full capabilities of a modern GPU+software stack.


True, there are always interesting ways to create challenges to enhance the experience.


> you now just call the right Unity or Vulcan API and voila, the effect you were looking for appears in your scene

Nitpick:

Vulkan (with a K) is a low-level graphics API, operating on a level that's comparable to OpenGL or DirectX, or perhaps even a bit lower. It's not going to implement any kind of interesting, nontrivial graphical effects for you -- doing that is going to take work, just like it would with older graphics APIs.

With a higher-level framework like Unity, on the other hand... that's a fair comment.


In fact, when you look at the progression from OpenGL 1.1 to 3.0 Core to Vulkan (and the parallel evolution of Direct3D), you'll see that with every step along the way, the APIs became more and more flexible by providing less and less convenience. Just look at the code it takes to render one triangle using each one of these APIs. The difference is that by the time you manage to render a triangle using Vulkan, you're also mostly set up to do very advanced rendering tricks that would be plain impossible with OpenGL 1.1 and would require some contortions to get working on OpenGL 3.0.


Yeah it does feel like this kind of clever graphics trickery is becoming a lost artform. Of course, niche circles can continue it. Retro console/PC programming circles, demoscene, etc. But I'm not sure how long those circles will last once the generations that grew up with that hardware die out.


Fake 3D racing games did it worse, as you just had a trig-shifted road and some background scrolling right and left. When I played Road Rash 3D on the PSX the change was like night and day.

Although some of them did it really well, like Lotus Turbo Challenge I and II for the Mega Drive.
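
Roughly what that "trig-shifted road" boils down to, as a toy sketch (all constants invented, not any particular game's code): draw the road one scanline at a time and accumulate a curvature offset per line, so the far part of the road bends harder than the near part.

    SCREEN_W, SCREEN_H, HORIZON = 80, 20, 6

    def draw_road(curvature, player_x=0.0):
        rows, dx, x = {}, 0.0, SCREEN_W / 2 - player_x
        for y in range(SCREEN_H - 1, HORIZON, -1):        # nearest scanline first
            depth = (y - HORIZON) / (SCREEN_H - HORIZON)  # ~1 near the player, ~0 far
            dx += curvature * (1 - depth) ** 2            # far lines bend harder
            x += dx
            half = int(depth * 30)                        # road narrows towards the horizon
            line = [" "] * SCREEN_W
            for px in range(max(0, int(x - half)), min(SCREEN_W, int(x + half))):
                line[px] = "="
            rows[y] = "".join(line)
        for y in range(SCREEN_H):
            print(rows.get(y, ""))

    draw_road(curvature=0.4)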


Ulillillia's Platform Master also has a lot of really interesting parallax / pseudo 3d effects: https://youtu.be/Tc4djuMvP2k?t=2117


I love that you mentioned Moon Patrol. It was one of my first games back in the day, on the Atari 1040ST. Spent hours in front of it, just me, a friend and a joystick...


Return of the Obra Dinn is a real 3D space, but rendered entirely in 1-bit monochrome using dithering.
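
A minimal sketch of ordered (Bayer) dithering to 1-bit in that spirit - not Obra Dinn's actual shader, which does extra work to keep the pattern stable as the camera moves:

    BAYER_4x4 = [
        [ 0,  8,  2, 10],
        [12,  4, 14,  6],
        [ 3, 11,  1,  9],
        [15,  7, 13,  5],
    ]

    def dither(gray):
        """gray: 2D list of brightness values in [0, 1]; returns 0/1 pixels."""
        return [[1 if v > (BAYER_4x4[y % 4][x % 4] + 0.5) / 16 else 0
                 for x, v in enumerate(row)]
                for y, row in enumerate(gray)]

    # A smooth horizontal ramp becomes the familiar cross-hatch pattern.
    ramp = [[x / 15 for x in range(16)] for _ in range(4)]
    for row in dither(ramp):
        print("".join("#" if p else "." for p in row))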


I wonder why they enforce a redirect from HTTP to HTTPS.

https://simonschreibt.de/gat/dont-starve-diablo-parallax-7/


I thought this was considered a best practice. Years ago people cited performance reasons but I think this doesn't make sense anymore.

Relevant thread https://news.ycombinator.com/item?id=13301280 (but from 2017, can't find a 2021 discussion)


Indeed it is, since browsers now mark http pages as "insecure". Let's Encrypt's certbot, for one, explicitly asks upon installation if you'd like to redirect http to https.



