NB: as mentioned in the "AMA" page, the whole team also helped a lot to reduce the size of this app after I got into the game, it's not just golfed by me.
Initially, I was puzzled by how the grid could be showing placeholders (the grey "A1", "A2", etc.) when there's no placeholder attribute defined in the source code. Then I discovered that the code running in the demo is not actually the showcased 220-byte source code at all. The code is still impressively small, but you should fix this — it's misleading as is.
I think that's probably just because it's been revised and the placeholders were added after.
To be fair, it only adds an extra 18 chars, taking it up to 238 bytes.
EDIT Here it is once you add the extra property:
(o=b=>{for(j in a)for(i in a)y=a[i]+-~j,b?document.write([["<p>"][i]]+"<input onfocus=value=[o[id]] onblur=o[id]=value;o() id="+y+" placeholder="+y+">"):eval(y+(".value"+o[y]).replace(/[A-Z]\d/g," +$&.value"))})(a="ABCD")
Yeah, sorry for the confusion. The placeholders and the compact style (no gutters) were added on the homepage for purely aesthetic purposes, but these things take a lot of bytes without really adding functionality, so we decided not to count them in the most golfed code. You can see a version with labels and localStorage persistence at the bottom of the page though. But thanks for your edit!
Judging by some of the comments one regularly finds on HN about the size of apps, this may be considered pinnacle internet technology, ready to be standardized for the mainstream.
Tongue-in-cheek aside, very creative code. Thank you for sharing.
<script>(o=b=>{for(j in a)for(i in a)y=a[i]+-~j,b?document.write([["<p>"][i]]+`<input onfocus=value=[o[id]] onblur=o[id]=value;o() id=${y}>`):eval(y+(".value"+o[y]).replace(/[A-Z]\d/g," +$&.value"))})(a="ABCD")</script>
(o=b=>{for(j in a)for(i in a)y=a[i]+j,b?document.write(`<${i*j?'input':'p'} onfocus=value=[o[id]] onblur=o[id]=value;o() id=${y}>`):eval(y+(".value"+o[y]).replace(/[A-Z]\d/g," +$&.value"))})(a="_ABCD")
Yesterday I was blown away by both tetris and snake golfed out to this degree and now this. I feel like this is the modern reincarnation of obfuscated c.
Can somebody explain to me how:
"a[i]+-~j" is the same as "a[i]+(j+1)" ?
I hate to say it, but I just seem to have broken it. Here's how: I put 0.1 in A1, 0.2 in A2, and =A1+A2 in B1. Now B1 displays "=A1+A2" instead of the result.
If you click on other cells, the values keep increasing; otherwise, it eventually gives NaN. If you reverse the steps and initialize A2 first, it converges to NaN immediately.
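Possibly relevant, though I haven't traced the exact failure: 0.1 and 0.2 have no exact binary representation, so their sum isn't 0.3, and anything that round-trips results through string cell values (as an input's .value is) carries the long decimal expansion along:

```javascript
// The classic binary floating-point surprise:
console.log(0.1 + 0.2);            // 0.30000000000000004
console.log(0.1 + 0.2 === 0.3);    // false

// Round-tripping through a string keeps the noise:
console.log(String(0.1 + 0.2));    // "0.30000000000000004"
```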
> I guess you would have to sanitize when you save and/or load the spreadsheet
Sanitizing? No chance. Either you have a dedicated expression parser, or you run it directly through eval. There is no reliable middle ground. Decades of security failures of so-called "sanitizers" show this pretty clearly.
(Even if you manage to create a perfect sanitizer today, wait a few months, new features are added to the browser, and new loopholes will appear out of nothing.)
But that may be missing the point, because if you want more code quality, more safety and more features, of course you need more code. This demo illustrates the other way around: If you allow for dirty hacks, you can get away with a surprisingly small amount of code.
Blacklisting (checking that the input doesn’t contain any of a fixed set of known troublemakers) is asking for trouble, but whitelisting (checking that the input doesn’t contain anything but a fixed set of known safe constructs) should be fine.
If your whitelist allows a wide range of constructs, it isn’t much easier to check that an input is in the allowed set than to write an evaluator that is limited to that set, so it may not be much of an advantage to have a more powerful ”eval” lying around.
Is there really no middle ground? Sanitizers fail because they try to salvage the clean part, only blacklisting some possible inputs. But what if you turn it around? Only send to eval what fits through a matcher for a very small subset of the language. The matcher can even allow invalid inputs if you know that eval will safely reject them (think unbalanced brackets). That matcher will be much easier and safer to implement than a full parser/interpreter for the same subset.
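Something like the following sketch is what I have in mind (the token set is made up for illustration; real cell formulas would need more care). Note the matcher deliberately lets unbalanced parentheses through, because eval throws a SyntaxError on them without executing anything:

```javascript
// Whitelist matcher: only cell refs (like "A1") and single safe characters
// may pass. Everything else (identifiers, quotes, backticks, brackets...)
// is rejected before eval ever sees it.
function safeEval(src) {
  if (!/^([A-Z]\d|[0-9.+\-*/() ])*$/.test(src)) {
    throw new Error("rejected by whitelist");
  }
  return eval(src); // may still throw SyntaxError, e.g. on "(((" - harmless
}

console.log(safeEval("1 + 2 * 3")); // 7
try { safeEval("alert(1)"); } catch (e) { console.log(e.message); }
```

In a page, a passed-through cell ref like `A1` would resolve to the input element with that id; in isolation it just throws a ReferenceError.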
> Only send to eval what fits through a matcher for a very small subset of the language
That's exactly what I meant by "dedicated expression parser".
(Not sure why you name it "matcher", though. Please be aware that a regex-based matcher will almost certainly fail for that task. You usually want a grammar, i.e. parser, which is more powerful, and shorter, and easier to read and to verify.)
EDIT: To those who downvoted my clarification, do you care to elaborate?
There is a difference between a recogniser, which answers the question "does this belong to the language?", and a parser, which outputs a data structure. All you need here is a recogniser; you then pass the string through to eval, which will do its own parsing. Recognisers are smaller than parsers.
If you relax the rules, as the GP said, you can get away with something like a regex to do the job. While regexes are bad at context-free grammars [0], if you forgo balancing brackets etc., a regex will do just fine.
All that said, with the crazy things JS lets you do [1], a recogniser for a relaxed language is likely to still let potentially dangerous code through.
[0] Yes, with most regex engines you can parse CFGs, but it's not nice, and at that point you _do_ want a grammar based parser
Please note that the term "recogniser" is very fuzzy: it could mean a regex matcher, a parser, or even a Turing-complete thing. Not very helpful for this discussion.
> a parser, which outputs a data structure
Please note that a parser is not required to output a data structure. In classic computer science, the parser of a context-free grammar usually has a minimal (boolean) output: it just either accepts or rejects the input.
If your "recognizer" is too weak (e.g. regexes), you risk not properly checking the language (see below).
If your "recognizer" is too powerful (e.g. turing complete), you risk tons of loopholes which are hard to find and hard to analyze. You probably won't be able to prove the security, and even if you do, it will probably be hard work, and even harder for others to follow and to verify.
But if your "recognizer" is a parser, you have a good chance of succeeding safely with minimal effort. Proving security is as simple as comparing your grammar with the ECMAScript standard.
> you can get away with something like a regex to do the job [...] with the crazy things JS lets you do a recogniser for a relaxed language is likely to still let potentially dangerous code though
That's exactly my point: Sure, you can try to build a protection wall based on regexes, but there's no reason to do that. Use a proper parser right away and don't waste your time with repeating well-known anti-patterns.
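To make the comparison concrete, here is a sketch of a grammar-based recognizer for a toy expression language (the grammar is my own invention for illustration, not the app's actual rules):

```javascript
// Grammar:   expr   := term   (('+'|'-') term)*
//            term   := factor (('*'|'/') factor)*
//            factor := NUMBER | CELLREF | '(' expr ')'
// Returns true/false; only accepted strings would ever be handed to eval.
function recognize(src) {
  // Tokenize: numbers, cell refs like "A1", operators, or any stray char.
  var toks = src.match(/\d+(\.\d+)?|[A-Z]\d|[+\-*/()]|\S/g) || [];
  var pos = 0;
  function peek() { return toks[pos]; }
  function eat(t) { return toks[pos] === t ? (pos++, true) : false; }
  function factor() {
    var t = peek();
    if (t !== undefined && /^(\d|[A-Z]\d$)/.test(t)) { pos++; return true; }
    return eat("(") && expr() && eat(")");
  }
  function term() {
    if (!factor()) return false;
    while (peek() === "*" || peek() === "/") {
      pos++;
      if (!factor()) return false;
    }
    return true;
  }
  function expr() {
    if (!term()) return false;
    while (peek() === "+" || peek() === "-") {
      pos++;
      if (!term()) return false;
    }
    return true;
  }
  return expr() && pos === toks.length;
}

console.log(recognize("(A1+B2)/2"));  // true
console.log(recognize("alert(1)"));   // false
```

A few dozen lines, no regex backtracking surprises, and the accepted set is exactly the grammar written at the top.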
> In classic computer science, the parser of a context-free grammar usually has a minimal (boolean) output: it just either accepts or rejects the input.
All of the literature I remember from my uni days on formal grammars had a recogniser defined as something that accepts/rejects, and a parser as something that builds a data structure.
It's difficult to retrospectively find the literature, because outside of formal grammars recogniser _is_ used more loosely. But a few Wikipedia articles [1] [2] [3] and their referenced literature [4] [5] do agree with me.
> A recognizer is an algorithm which takes as input a string and
> either accepts or rejects it depending on whether or not the string
> is a sentence of the grammar. A parser is a recognizer which also
> outputs the set of all legal derivation trees for the string.
> Either you have a dedicated expression parser, or you run it directly throgh eval. There is no reliable middle ground.
While there is no safe middle ground, using eval directly is the worst case; it's not as if the two extremes were reliable with the greater danger lying in between.
That being said, rejecting everything that fails an expression parser is a form of sanitization.
I was wondering and wondering why it wasn't working for me. Colons break the script. I tried to write "TOTAL:" in a cell and none of the equations would resolve. I wanted in on the fun and I didn't get it until I relented and checked the console. :P
Super impressive and I love code golfing, but the title is slightly misleading since the HTML part of the app, the cells, seems not to be included in the 220b. Still great.
I think I see what you mean, but this code actually generates all the cells (<input onfocus=... onblur=... id=...>). You can execute it in a JS console and it'll work fine.
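If it helps, here is a de-golfed sketch of what the one-liner does (names expanded for readability; not byte-exact, and the comments are my reading of it):

```javascript
var cells = "ABCD";

// The formula rewrite, factored out: "=A1+A2" stored under id "B1" becomes
// "B1.value= +A1.value+ +A2.value" (the unary + coerces strings to numbers).
function expand(id, formula) {
  return id + (".value" + formula).replace(/[A-Z]\d/g, " +$&.value");
}

// o doubles as the build/recalc function AND the formula store (o["A1"]...).
var o = function (build) {
  for (var j = 0; j < 4; j++) {      // for..in over "ABCD" yields "0".."3"
    for (var i = 0; i < 4; i++) {
      var id = cells[i] + (j + 1);   // "A1", "B1", ... (-~j turns "0" into 1)
      if (build) {
        // In the golfed code, [["<p>"][i]] stringifies to "<p>" when i is 0
        // and "" otherwise, which is how each row starts on a new line.
        document.write((i ? "" : "<p>") +
          "<input onfocus=value=[o[id]] onblur=o[id]=value;o() id=" + id + ">");
      } else {
        // Inputs with an id become global variables, so "A1.value" resolves.
        eval(expand(id, o[id]));
      }
    }
  }
};

if (typeof document !== "undefined") o(true); // build once; onblur recalcs
```

So the single call `(o=...)( a="ABCD" )` both defines the grid builder and runs it, and every blur re-evaluates all sixteen cells.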
Well, it's a 3-year-old app, and we like it in this "minimal" form. We've made a lot of other mini apps, tools and games though; you can find the link at the top of xem.github.io/sheet
I'm concerned about the lack of type annotations. What about XSS prevention? Maintainability? An API? How are people supposed to use this code in future projects? Your variable names don't conform to Code Complete. This code smells funny. Haven't you read clean code? And I don't think it's general enough. What if people want to add columns, but preserve formulas? What made you think you shouldn't use React for this? At least Vue, come on.
ah, if only we could all code just what we needed, instead of over-engineering solutions to problems we wished we had
Yes! It's confusing to see it as the top comment, but clearly everybody gets it and appreciates the joke. It's perhaps just too chillingly accurate ;).
All of the things you mentioned are best practices when building something that you want other people to use in production. The author was trying to build this with the absolute minimum amount of code as an exercise so I don't think any of your comments here apply.
Well, this spreadsheet as it is now isn't very useful. In fact it's annoying at best. Only programmers will find it amusing, precisely because of how small the code is. No one is ever actually going to use it.
XSS prevention, maintainability, code reuse, good variable names, good-smelling code, readability, and using popular libraries are not "over-engineering". They are 95% of what makes someone a good employee instead of a shitty one.
With modern languages and tooling, you can have the benefits of most of these things at little-to-no additional cost. If any of these require something complicated and "over-engineered", I'd instead describe it as "bad engineering".
Rule one is making code that works correctly and quickly, in a short amount of time. This is 100% of the job.
If you think that "maintainability" and "readability" and "popular" libraries and "code reuse" and "good variable names" and code that "smells good" to you help someone do that, then great: that's an argument we can settle just by pointing to anyone who achieves rule one without those things.
However.
There's another kind of person, and there are a lot of them, who thinks those things are important in themselves, and that putting them ahead of what I call rule one is acceptable.
The number of times I've heard some "experienced" programmer say with all earnestness that we should ask for another 1/4 million dollars of tin so that he can use his "clean" (but slow) algorithm makes me wary of anyone who uses those words you just used in seriousness.
> There's another kind of person, and there are a lot of them, who thinks those things are important in themselves, and that putting them ahead of what I call rule one is acceptable.
I think you totally understood my humor. Thanks!
I saw a good blog post in 2011 about how it's a waste of time to overbuild things: mostly you're only ever going to use the code for the specific purpose you wrote it for, you'll likely never need to generalize it, and you'll probably chuck it out and start from scratch anyway.
Obviously there are a bunch of caveats to that and it's not always true, but I think it's a very useful counterpoint to the profession's current best-practice obsession. If you keep it in mind alongside all the best practices, and choose your approach in each situation based on appropriateness, rather than robotically applying best practices or giving in to the common engineer's temptation to overbuild, this counterpoint can help you quickly build just what's needed, which is effective, exactly like you said.
I don't think those caveats are obvious to a junior (or otherwise inexperienced) developer. Maybe some of them are obvious, but I don't think anyone runs into these enough daily that programming by dogma actually helps.
And yet I do see value in having rules! Why should we even debate "good variable names" when, most of the time, having them does no harm and not having them does?
It's just that I don't think this is a real problem; I don't believe anyone actually makes that choice.
I think if something doesn't have "good variable names" (as an example), then it's possible the programmer couldn't think of good ones (in which case the rule wouldn't have helped anyway), and it's possible that these are good variable names and it's our opinion that is wrong (perhaps because we don't yet understand what we're looking at). NB: I'm not saying both are equally likely.
To put it another way, how can one recommend programming by brute force: as long as you've got "readability" and "popular" libraries and "code reuse" and "good variable names" and so on? What could we possibly add to that list that would be even a poor substitute for a few decades of experience successfully delivering software?
I mean, security and maintainability are important, but the most secure and maintainable code is always the code that didn't get written in the first place, which makes not writing unnecessary code the no. 1 way of keeping the result from sucking.
Probably you should first consider that our browsers contain a fully capable programming system, which this golfed program abuses as an expression evaluator. :-) By comparison, this C program [1] (hints are at [2]) is a stripped-down spreadsheet program for X, and it only has an RPN evaluator (probably to make room for graphing features?).
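For contrast, an RPN evaluator needs no grammar at all, which is why stripped-down programs favor it: a single stack, no precedence, no parentheses. A minimal sketch (my own, not the C program's code):

```javascript
// Minimal RPN evaluator: each token is either pushed (number) or pops two
// operands and pushes the result (operator). "3 4 + 2 *" means (3+4)*2.
function rpn(src) {
  var stack = [];
  src.split(/\s+/).forEach(function (tok) {
    if (/^-?\d+(\.\d+)?$/.test(tok)) { stack.push(+tok); return; }
    var b = stack.pop(), a = stack.pop();
    stack.push(tok === "+" ? a + b : tok === "-" ? a - b :
               tok === "*" ? a * b : a / b);
  });
  return stack.pop();
}

console.log(rpn("3 4 + 2 *")); // 14
```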
Language detection works on trigrams and is trained on human language. I bet punctuation is stripped during preprocessing, so it's not surprising that code produces false positives.
I'm xem, author of this mini app, but standing on the shoulders of giants:
it's inspired by http://aem1k.com/sheet/
which was inspired by http://jsfiddle.net/ondras/hYfN3/
Cheers to all the JS code golf team who taught me this noble art and continue to learn new tricks every day with me!
You can find a list of all our mini apps here: https://gist.github.com/xem/206db44adbdd09bac424