
Way back in the early 2010s I was very "excited" about CoffeeScript and similar projects. They sounded like they should be great for productivity.

When I actually tried to write a project in CoffeeScript, the results were the opposite of what I expected.

The code was harder to read, harder to modify, harder to understand, harder to reason about.

There's something about removing stuff from syntax that makes programming harder. My hypothesis is this: your brain has to spend extra effort to "decompress" the terse syntax in order to understand it, and this makes reading code unnecessarily difficult.

So I fundamentally disagree with the underlying premise of these projects, which seems to be based on PG's concept of "terse is power".

My experience suggests the opposite: there's power in being explicit. Type declarations are an example of such a feature: they make explicit something about the code that was implicit.

Type declarations add more to the parse tree, and require you to type more, but they actually give you more power.
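
For example (a small TypeScript sketch, names made up by me), the declaration alone carries information an untyped version would force you to infer from the body:

    // The signature makes the implicit explicit: inputs are an array of
    // numbers plus a number, and the result is a number.
    function totalWithTax(prices: number[], taxRate: number): number {
      return prices.reduce((sum, p) => sum + p, 0) * (1 + taxRate);
    }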

The same can be said about being explicit in the language constructs.

There of course has to be a balance. If everything is far more explicit than it needs to be, your brain has to do the opposite of what it does with terse code: it has to spend extra effort stripping away the fluff to get to the essence of what the code is doing.

Being terse is good, up to a point. Same with being explicit.

Languages that try to bias too strongly towards one extreme or the other tend to miss the mark. Instead of aiming for balance, they start to aim for fulfilling some higher telos.




I don't know how much you actually tried CoffeeScript, but I find your opinion strange. Coffee was never anything like J or some other crazily terse language. Its appeal came not merely from making things shorter (it did that, though not by a huge margin), but from adding a lot of useful things to the language: the ? operator, the spread operator, destructuring, classes, ranges, better iteration, etc. Almost every CoffeeScript feature was ultimately added to JavaScript (with very similar syntax), which made Coffee somewhat obsolete. Coffee's lack of brackets and semicolons everywhere, @foo instead of this.foo, and the other features certainly didn't take away any readability or explicitness; if anything, they made it better, the same way those same features make JavaScript better and more readable (and, ahem, "easier to reason about") as long as you _know_ them.
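
For reference, here's roughly what those features look like now that they've landed in modern JS/TS (a sketch with made-up data):

    // CoffeeScript ideas that later landed in JavaScript/TypeScript:
    interface User { name: string; tags: string[]; address?: { city: string } }
    const user: User = { name: "Ada", tags: ["x", "y"] };

    const { name, tags } = user;           // destructuring
    const more = [...tags, "z"];           // spread
    const city = user.address?.city;       // ?. , the heir of Coffee's ? operator
    for (const t of tags) console.log(t);  // saner iteration than for..in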


Coffee added really nice and productive language features to what at the time was still a very primitive and simplistic JavaScript. It also offered a different syntax for existing and new features.

The first part was great and deserves credit for pushing the language to evolve. The second part made it a terrible dev experience. My experience was identical to the GP's: writing it was fast and intuitive, but reading it was much, much worse. Not just reading other people's code, but even my own: my ability to understand my own code degraded not in weeks or months, but in mere days. It was so bad, the overall result was a net negative and I quickly stopped using it. I say this as someone who has enjoyed writing code in over a dozen languages, from assembly all the way to Haskell.


Same here: even though a lot of syntax elements are theoretically unnecessary, they chunk the code into patterns that help you navigate the complexity.

Otherwise it's like improving the efficiency of a closet by ripping out the shelves, drawers, dividers, coat hangers, and so on, until it's just one big volume-maximized empty room.


> Being terse is good, up to a point. Same with being explicit.

I'm thinking of using Civet for an upcoming project; I especially want `do` expressions. Even with how much noise an IIFE (`(() => {doStuff(); return ...;})()`) introduces, it's such a natural idea to me that I end up using them very often.
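
For context, here's the plain TypeScript shape of that pattern (identifiers are made up), i.e. the kind of thing a `do` expression would sugar over:

    // An IIFE computes a value that needs statements to produce:
    const items: string[] = ["a", "b"];
    const label = (() => {
      if (items.length === 0) return "empty";
      if (items.length === 1) return "one item";
      return `${items.length} items`;
    })();
    console.log(label); // "2 items"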

But what makes me question the idea is having to learn a new syntax that is close enough to the JS I already know that I'm going to get confused constantly.

Why would I care about writing `export a, b from "./cool.js"` instead of `export { a, b } from "./cool.js"`? I don't mind those curly braces, I may actually like them a bit; I do very much mind the overhead of remembering these details when I change languages, and there's no way I can remove JS/TS from my life.

Finally, there are expressions like `value min ceiling max floor`. Is that readable at all? You have to actually read each word to know which are operators, which are functions, which are vars... It seems to me much worse than `max(min(value, ceiling), floor)` or a Lispy alternative like `(max (min value ceiling) floor)`.


Even the pipeline operator this supports seems easier to read for this particular example: `value |> Math.min(&, ceiling) |> Math.max(&, floor)`
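
Since `|>` is still only a TC39 proposal, a rough plain-TypeScript stand-in would be a small pipe helper (hypothetical, just to compare readability):

    // A minimal pipe helper approximating the pipeline operator:
    const pipe = <T>(start: T, ...fns: Array<(x: T) => T>): T =>
      fns.reduce((acc, fn) => fn(acc), start);

    const value = 42, ceiling = 100, floor = 0;
    const clamped = pipe(
      value,
      (v) => Math.min(v, ceiling),
      (v) => Math.max(v, floor),
    );
    console.log(clamped); // 42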


I didn't mind the CoffeeScript experience, but it deeply hurts productivity to develop on a platform that doesn't end up winning.


Yeah, I like a lot of the language features here, especially:

- Everything is an expression

- Pattern matching

- Spread in any position

- Dedented strings/templates

However, I wouldn't use it, because the chance of it becoming abandonware that I just have to migrate off of later is way too high. I'll write a few extra TypeScript characters here and there for the stability.


Unless Civet's compiled output is hard to read, you could always just check in the compiled TypeScript source and continue from there if it gets abandoned. Not much of a risk when the migration is built in by the way the tool works in normal use.


The compiled output seems pretty clean, but it isn't necessarily what you'd write by hand. E.g. it adds things like anonymous functions called immediately, where you'd probably just write a named private function, that sort of thing.


CoffeeScript did end up winning, though: all of its important features ended up in the next JavaScript spec. It was a bit sad to transition to the slightly less aesthetic next JavaScript release, but it also felt triumphant. To me it feels like the CoffeeScript community won. Every time I type some JavaScript that's actually not fragile and not absolute garbage, I remember it's because we as a community backed CoffeeScript, and that led to the browsers listening and adding its features to JavaScript.

I am certain we're doing the same thing now with TypeScript.


Is it? I find most of the "winning" tech deeply unproductive. Have you tried developing in a project with Webpack and Redux? It's kind of its own little hell. Everything is way too slow and complicated. Tasks that should take 20 minutes take 3 hours.


“Winning tech” may be less productive for a one-off single person project.

But “losing tech” is unproductive when you have to maintain/upgrade your code over many years and onboard new developers into the team.

It is unfortunate that the tech industry’s choice of tools is largely fashion-driven but that is the reality.


The problem is, the entirety of the javascript ecosystem is built for large teams doing multi-year enterprise projects, and so unless you're launching a social media startup or something, you have to wade through a swamp of unnecessary complexity.


Try again with Vite and useReducer and prepare to have your socks blown off.


Even Vite is slow. Just use esbuild directly. Jotai is much better for global state management.


I think you're talking about two separate things here,

1) Having to learn an entirely new syntax just to save on a few characters (it "decompresses" directly to the original thing)

2) Explicitness vs implicitness/inference

I wholly agree with you about #1 (superficial brevity isn't a very important goal and doesn't justify a whole new language), but #2 is much more of an "it depends"


Go's error handling pattern is a great example of being overly explicit IMO. I personally like it, but I can understand why it causes so much controversy.


The thing about:

    if err != nil {
        return err
    }
...is that it's not even Error Handling. It's "Error Shovelling" (manual work to move the error from one place to another).


I find that Go errs way too strongly on the explicit side, but overall it's still better than many other alternatives.

For error handling I tend to write in a style where errors are either asserted out or "folded". If I do several operations in sequence, any of which could err, I code in a way where I don't check every single op: instead I make some kind of "error accumulator", or write the code in a style such that if the previous operation failed, the next operation becomes effectively a no-op. I then check for errors once, at the end of the process.
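
Sketched in TypeScript rather than Go (all names made up), that "inert after the first failure, check once at the end" style looks something like:

    // Once err is set, every later op becomes a no-op;
    // the caller checks for failure a single time, at the end.
    class ErrWriter {
      err: Error | null = null;
      private parts: string[] = [];
      write(s: string): void {
        if (this.err) return;  // fold: a previous failure short-circuits this op
        if (s.includes("\0")) { this.err = new Error("invalid byte"); return; }
        this.parts.push(s);
      }
      result(): string { return this.parts.join(""); }
    }

    const w = new ErrWriter();
    w.write("hello ");
    w.write("world");
    if (w.err) console.error(w.err);   // one check for the whole sequence
    else console.log(w.result());      // "hello world"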

That said, Go is actually right about treating errors as values and not giving special language constructs to throw/catch them.


Rust is one of the few languages I've seen that really does error handling right.

Errors are still values in Rust - usually as part of the `Result` type - but unlike Go, it actually has tools to let you deal with them in a convenient way, like the `?` propagation operator (https://doc.rust-lang.org/book/ch09-02-recoverable-errors-wi...), or the functions on the `Result` type like `map`, `and_then`, `map_err`, or crates like `thiserror` for defining error types, and `anyhow` for easily converting them when you don't care about the details.
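
For readers who don't know Rust, here's the same shape transliterated into TypeScript (names hypothetical; `?` itself has no TS equivalent, so the early return is written out by hand):

    // A Result type plus an and_then-style combinator, Rust-flavored:
    type Result<T, E> = { ok: true; value: T } | { ok: false; error: E };

    const andThen = <T, U, E>(r: Result<T, E>, f: (v: T) => Result<U, E>): Result<U, E> =>
      r.ok ? f(r.value) : r;

    function parsePort(s: string): Result<number, string> {
      const n = Number(s);
      return Number.isInteger(n) && n >= 0 && n <= 65535
        ? { ok: true, value: n }
        : { ok: false, error: `bad port: ${s}` };
    }

    function connect(s: string): Result<string, string> {
      const port = parsePort(s);
      if (!port.ok) return port;   // this is the boilerplate Rust's ? removes
      return { ok: true, value: `dialing :${port.value}` };
    }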


The fact that the anyhow/thiserror crates are basically required, or you have to make a bespoke enum for your crate's errors and write conversion functions for them, is not great.

The try operator and Result type is amazing though.


Many languages have this discussion about what functionality belongs in the standard library and what is best left to external libraries - to avoid being stuck with a bad design forever because of backwards compatibility, etc.

I don't see a problem with relying on a few super popular basic libraries for almost every project.


Yeah, but Go's solution to errors is a straitjacket. There's nothing in, say, Java that prevents returning a Result type with an error or value and writing code that way.

I guess you can panic/recover in Go, but it's very, very unwieldy and not quite the same.


The problem is that it doesn’t compose, not the verbosity.


All natural languages have a little bit of redundancy. It helps resolve ambiguities more easily, especially where resolving them in a strictly minimal grammar would require re-parsing the entire text from the start. Having both opening and closing parens / brackets / braces is a good example.

Redundancy also helps when transmission is imperfect. And you do have imperfect transmission when writing code (typos), and even when reading (skimming text, missing a character).

CoffeeScript makes every character count, especially punctuation. It's really, really easy to make a small typo in these. But because CoffeeScript eschews redundancy, the typo becomes another valid grammatical construction with an entirely different meaning. At best, you get a cryptic compilation error somewhere else. At worst, it gets accepted but works differently than you had intended.

APL has this property too, but an APL program is very terse: you pay attention to every character in a short string of them. It does not feel like JavaScript, which is traditionally lax in the punctuation and whitespace department, catching you off guard.
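
The canonical JavaScript example of that laxity catching you off guard, equally valid as TypeScript (function made up):

    // ASI quietly turns a whitespace slip into a different program:
    function config() {
      return        // a semicolon is inserted here automatically...
      {
        debug: true // ...so this is a dead labeled block, not an object literal
      };
    }
    console.log(config()); // undefined, not { debug: true }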

CoffeeScript was an interesting experiment, but I'd say its result is negative.


I agree. The Rust foundation (or Mozilla's Rust team in the early stages) tried out a sigil-heavy approach in the beginning, but they ultimately made the decision to steer Rust away from being overloaded with single-character keywords of differing significance.

Sadly, with Steve Klabnik's withdrawal from the core team, the current maintainers are on a path to repeat these mistakes.


> the current maintainers are on a path to repeat these mistakes

Do you recall specific RFCs that are leaning in this direction?


Yes and no.

A good example would be the behavior of the tilde operator. Another would be the current drafting process of keyword generics.


Reading CoffeeScript, and this Civet language, feels like reading prose that doesn't have any punctuation. Slightly quicker to write, much harder to read.


We had the same experience with CoffeeScript. It was not a nice experience. Later we moved back to JS because of maintenance issues.


\tangent Things stated by implication are harder to understand because of the cognitive load.

But I wonder: if the compiler can get by without it, perhaps we can too? With a different mental model/abstraction that simply does not need that information - not even by implication. If there is one, it's probably not easy to come up with.

Like kinematics omitting force (e.g. high school physics, x = x_0 + v t + (1/2) a t^2). https://wikipedia.org/wiki/Kinematics


As terse as possible for me; I think you underestimate the human brain.


I don't underestimate the human brain - however, I know as a fact, through ample empirical evidence, that implicit or dynamic typing makes my head hurt and has me scrambling through multiple code panes trying to understand the input or return types for code that I wrote five days ago.

I also know as a fact that programmers 100x my caliber have nevertheless written great large-scale software without types.

So I don't make generalizations on the human condition and just do what works for me!



