2048 implemented in 487 bytes of C (gist.github.com)
123 points by adamnemecek on April 5, 2014 | 45 comments



Like the small JS version this also has the bug where you can summon more tiles by pushing again in a direction where movement isn't allowed.


He shouldn't call his game "2048" if it has that bug. It's a much easier game than the original one, as you are never forced to move a big number out of a corner and you never fear the dreaded position where you have exactly one open row and you are forced to move all your big numbers away from the edge.


Doesn't run correctly under Windows. The code relies on the command "stty cbreak", which is Linux specific.

'stty' is not recognized as an internal or external command, operable program or batch file.

I would love to read comments instead of downvotes. Where am I wrong?


`stty` is not Linux specific, it's common to all Unix-derived systems (including Mac OS X, for example). Windows is not Unix-derived.


HN has gotten really trigger-happy in the last few weeks.

That part of the comment is only partially correct, and it wasn't even the main point of my comment; what I really meant was that stty doesn't exist on Windows, and the main point was that the code is not portable.


A reaction to the new moderation proposals, maybe?


Were you expecting WINAPI wMainMain(...)?


As an obfuscated C example, I was expecting portable code. Anyone hoping that as many people as possible will try their code should aim for that.


Obfuscated C always comes with caveats. The fact that it uses terminal escapes and stty suggests it's not going to work on anything other than a colour xterm, and then only if you're lucky, there are no sunspots, and the wind is blowing in the correct direction...


I'm not sure that's a common goal for obfuscated C code.


Same here. stty is available through Cygwin, although it didn't result in a playable game.


Huh, was not aware of the "Implicit Int Rule" - http://stackoverflow.com/questions/11064292/declaring-variab....



How is that more beautiful?


uglified = minified


While 487 bytes is impressive, it feels like you should round up to the nearest power of two...


That's nothing. Let me know when someone manages to implement 2048 in 2048 lines of Java.


I haven't tried it, but a game of this complexity looks like it could be implemented in roughly the same number of bytes of binary, meaning it fits in a boot sector...

Coincidentally, a sector on optical media is typically 2048 bytes.


ISO 9660 is 2352 bytes, actually. The logical size is 2048 bytes, but the other bytes are needed for error correction and other headers.


    puts(W&1?"WIN":"LOSE");
Shouldn't exit(W&1); be "good enough" while saving some bytes?


Unfortunately it's the same every time it is played. It needs some srand() love in there somewhere.


Author here. Per many requests, here is an annotated version https://gist.github.com/justecorruptio/10248923


Well, bugs or not, 487 bytes is a new high bar... I guess the Lisp lads might need to roll up their sleeves.


On any modern filesystem, the source will take up substantially more than 487 bytes anyway. Perhaps you should round up to the nearest block size.


I'd understand 2048 implemented in 2048 bytes of C, but this I can't understand.


Can I build this using gcc?


Sure:

    gcc -o 2048 2048.c
Ignore all the warnings.


That was the first command I tried, but I get many errors like: error: use of undeclared identifier 'X'. I am running OS X.


If you haven't installed gcc on OS X, the gcc command actually invokes clang.

http://stackoverflow.com/questions/19535422/os-x-10-9-gcc-li...


It sounds like the result of g++


Nothing is returned from function s?


The only usage of the "s" function is down in T(i), and the "i" parameter passed to T() is ignored anyway ("for (i = X + rand() % X;"). It's just there to save an "int i;" declaration in the function itself.


This is called sheer brilliance!


Wow, I'm more than impressed. Wonderful.



No


I never understood why this sort of thing is interesting. Let's say you can program 2048 in 487 bytes of C ... and then someone else comes along and they can program it in 387 bytes of C. So what? Is there a use case in the year 2014 where 100 bytes matter?

How many bytes would it take to code it in C in a human readable fashion? Such that some other programmer could look at it, understand it, modify it, fix bugs, etc?

Is the savings in bytes going from the latter case to the former case really an issue in 2014, and does it outweigh the downside (which I think is enormous) of generating obfuscated code?

Having said all that, as an exercise purely in "how small can my code be, regardless of how obfuscated I make it", I suppose it is ... interesting?

But even still, I have to admit I don't get the appeal.


Dude, it's just for fun. It's done in the same spirit as making speed runs for video games. Does everything have to be completely utilitarian?


It's also very good mental exercise, since it forces you to think even more about the corner cases of the language and its syntax. Even if you won't be using all these corner cases in production code (hopefully), it will make you a better programmer.

I find that reading minified/obfuscated code really helps with spotting syntax errors and related subtle bugs in regularly formatted code. Instead of relying on cues like indentation, you begin to parse more like a compiler, and things like missing semicolons stand out.


Point taken. I think I need to take a vacation.


Although the obfuscated case really pushes the limits of code usability and readability, being able to write miniaturized code is genuinely important in some industries (e.g. defense projects).

Take for example the Kinetis KL02 ( http://www.engadget.com/2014/02/25/freescale-kinetis-kl03/ ). With only 32KB of storage and 4KB of RAM, writing efficient code with a small footprint is important. These microcontrollers can be used to make swarms of autonomous fly-sized drones, and programmers who can cram more procedures into those 32KB are extremely rare and useful.


That could be true for other embedded projects as well, but then it's the size of the executable binary that is of concern, not necessarily the size of the source code on your development box.


Exceptions like C++ templates exist (in which a small amount of source can expand to a huge binary), but usually there tends to be a general correlation between the size of the source and the binary.

Trying to write as little source code as possible also leads to trying to find the simplest, most concise algorithm to do a particular task, and that also has effects on the binary size.


This isn't C++. Just adding whitespace, descriptive variable and function names, and comments wouldn't add any size to the resulting binary, since the code is semantically the same. Using library functions can keep your line count low, but you get that back at link time, at least with static linking. Wouldn't you prefer maintainable code that people can easily understand? I just want this said, since obfuscated minimal C code isn't the same as minified JS.

Algorithms are an interesting case, since you have to ask whether you are optimizing for size or speed. As an example, you can implement a linear search in fewer lines of code than a binary search. Does that mean linear search is the better, more optimized choice? It does if you are optimizing for small executable binary size.


Think of it as a puzzle.

Check out the demoscene stuff and superpacking JS.



