Hacker News

Wow they didn't use any DB at the start of UO, that is amazing haha

"As Raph also notes, there were no databases originally involved in the storage of game state or player data for UO (disregarding analytics here), everything was kept in flat files. Backups worked by flagging a moment in time where no one was allowed to cross server-boundaries -- during that moment, each areaserver was commanded to fork(), essentially duplicating itself in memory (it's more complicated than this, thanks to Copy-on-Write, but let's simplify). After everyone had fork()ed, the "lock" preventing boundary-crossing was cleared. Then each areaserv began to dump out its huge chunk of memory-state into a file on an NFS server. Those files were then all tarred together and kept as a "backup" of the state of the server. These heavyweight backups happened at half-hour intervals, I believe."




I'm not so shocked... I'm a web developer, not a game developer, but databases were avoided in production stacks in the 90s. Most people tried to stick to flat files. A database layer added complexity and decreased performance. Even most content management systems had an option to write everything out to flat files to be served by a web server. Those were the days when an Apache server and a huge pile of flat files were the norm.

Once web content began to be personalized, that was no longer realistic. Databases became a required part of the stack, web servers ran applications instead of just serving files, we gained massive amounts of functionality, and we haven't really looked back.


By most definitions, I think structured "flat files" in a filesystem ARE a type of database.


It is, but it adds the requirement for installing and configuring another application running in the background. Before SQLite, flat files really seemed like the best option for small projects.


Plenty of big projects use flat files because the techniques to get a batch processed overnight are well understood.


I don't understand what background application you're referring to. If I have an app that reads and writes to a bunch of CSV files how is that any different from using libsqlite3 to read and write to an sqlite file?
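This is the point the commenter is making: an SQLite database is just a file opened by an in-process library, exactly like a CSV. A minimal sketch (paths and table names are made up for illustration):

```python
import csv
import os
import sqlite3
import tempfile

tmp = tempfile.mkdtemp()

# CSV flat file: the app opens it directly, no background service involved.
csv_path = os.path.join(tmp, "players.csv")
with open(csv_path, "w", newline="") as f:
    csv.writer(f).writerows([["name", "gold"], ["bob", "100"]])

# SQLite: also just a file on disk; libsqlite3 runs inside your process,
# so there is likewise no daemon to install, configure, or keep running.
db_path = os.path.join(tmp, "players.db")
con = sqlite3.connect(db_path)
con.execute("CREATE TABLE players (name TEXT, gold INTEGER)")
con.execute("INSERT INTO players VALUES ('bob', 100)")
con.commit()
gold = con.execute("SELECT gold FROM players WHERE name = 'bob'").fetchone()[0]
con.close()
print(gold)  # 100
```

The difference the replies are pointing at is with server-based RDBMSes like MySQL or Postgres, which do require a separate daemon.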


Imagine if you will that database projects do exist, but don't come with easy to deploy documentation. What you have is stuff in source code tarballs and a vague idea of what dependencies you will need and what versions. You can't just `apt-get install mysql-server` and be done. You have to compile the stuff from scratch. Download a tarball, run `./configure && make`, wait ten minutes, come back to see that some dependency wasn't met, download that, `./configure && make` that, find yet another dependency. And this in some cases is the superior method to the RPM hell you might encounter. Oh and don't forget to supply all the necessary configuration flags to get the features you need, or else you won't be able to use the thing you just installed.

If all you want to do is run a small project, or a big one where performance matters and you have deadlines, you might just "do it by hand" and simply create your own pseudo-database out of flat files, memory dumps, whatever.


I meant RDBMSes other than SQLite. For MySQL or Postgres or whatever, you usually need to do a system-wide package install and have a daemon running in the background. Portability of (the compiled version of) your program disappears. That's why desktop apps - games in particular - tend not to use any database other than SQLite or a flat-file structure.


Commenter said "before SQLite". Pay attention.


HN uses flat files instead of a database


Were Oracle/DB2 systems considered viable options back in the day? Obviously not for a small project, but a company like Origin was quite successful then and would have had the capital to buy licenses.


Still tremendously expensive, and not as performant as keeping everything in memory.

The approach described is what people nowadays would call memcache, but the entire architecture is different because client connections are persistent rather than resetting every page load like the web. They just built a standard game server keeping everything in memory in the regular data structures. Fork-and-persist is a nifty way of using the operating system's copy-on-write functionality to checkpoint state.


Their initial strategy involved purposely losing an hour of player data every day. Everyone used to troll newbies by pretending to be nice and giving them free stuff around 11am, and then by 12pm it would suddenly all disappear and revert back to the original owner.


As mentioned in some of the replies to the Quora question, this was also a great time to mess around in PvP and put on all of your best equipment (armor of invulnerability, and weapons of vanquishing) and head out to the dungeons to cause chaos. Or taking 80+ purple explosion potions with you and hucking them at groups of people still fighting in the dungeons. If you died, oh well, like you said everything that happened in that hour got reverted.


Oh, the days of the Chesapeake shuffle, which is what many of us used to call the lag spikes and rollbacks that afflicted the game early on. You could tell your server's cycle time with little effort, and there were days when everything was just out of sync and even movement out of certain areas was not a sure thing.

This model apparently allowed for quite a few dupe bugs, one of which I published on an old fan site. It was quite involved; in fact, I wrote it up so deliberately convoluted that what you were actually doing was far from obvious. Did it work? I wasn't quite sure, but I did suffer both duplication of items and loss from their system through no effort on my part.

It's kind of funny to learn how they did it this late. It reminds me of mechanisms we employed with some early multiplayer door games on BBSes. Supporting multiple players in some early titles was a bear, not the least of which was the instability of most connections, plus just plain bad code that bit you in the butt more often than not.


I don't suppose said fan site was DrTwisTer?


DrTwister and LumtheMad were 2 sites I followed closely back in the day. Wonder what they're doing now…


Lum is working on Shroud of the Avatar, which is supposed to be UO's successor.


This website has (as far as I know; I don't know if they've changed that) used flat files instead of a database since the beginning. Depending on the case, it might actually work and be a valid strategy for persistence.


I also think flat-file storage is a valid solution. If you are able to use advanced filesystem features like snapshots and online replication, the use cases for databases shrink even further.


I built a content pipeline and CMS on top of flat files (JSON) and Mercurial.

You can get repository-level transactions by committing on each consistent state (and squashing before push). This lets you do undo/redo and recover from failed operations easily. Databases sort of give you this, but only for the stuff they actually contain (so unless you're putting your asset files into the DB, you don't get it).


Yeah, I have no doubt that flat files are a valid solution; I think the only headache was related to a dupe bug.

"Server-boundary crossing edge-cases and race conditions persisted for a long time -- allowing for gold and items to be duplicated, though I think we had largely eradicated the big ones by the time I had moved on to UO2."


SQL is so fancy

you can get a long way with join, awk & grep

http://man.cat-v.org/plan_9/1/join


Transactions are really fancy too.


HN itself uses flat files for each of these comments!

Somewhat off topic, but one of the pieces of code I'm most proud of was effectively a recursive XML scraper that resolved each leaf (each node was a web request) into a line (like a file path) of a massive text file, which was then grepped when a user searched for something. I hacked it together really quickly to solve a problem, and it solved it orders of magnitude faster than commercial solutions that tried to be overly clever. I like it because while it was gross and bad and would be embarrassing to read, grep is crazy fast in practice and the OS cached the text file once you'd read it. It's a nice personal reminder that you don't always need to build some Grand Design, especially when you're just building a tool that has no future evolution path.
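The approach can be sketched in a few lines. This is a toy reconstruction with made-up record paths, not the commenter's actual code; the real version shelled out to grep rather than scanning in-process, but the idea is the same: one record per line, linear scan, and let the OS page cache make repeated searches fast.

```python
# Toy flat-file search index: each scraped leaf becomes one path-like line.
records = [
    "catalog/books/fiction/dune",
    "catalog/books/fiction/hyperion",
    "catalog/music/jazz/kind-of-blue",
]
index_path = "/tmp/index.txt"
with open(index_path, "w") as f:
    f.write("\n".join(records) + "\n")

def search(term):
    """Linear scan, like `grep term index.txt`; the page cache makes repeats cheap."""
    with open(index_path) as f:
        return [line.rstrip("\n") for line in f if term in line]

print(search("fiction"))  # both fiction records
```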


That's clever.

Freeshards usually went with a stop-the-world worldsave, which meant saves were infrequent and easily corrupted, leading to "timewarps".


fork - the original NoSQL


MUDs, both the spiritual and technological precursors to UO, also used flat files for storage of both maps and users, so it's not too surprising that UO's storage was the same.


Neither did EQ AFAIK



