
(For those who were wondering, the article is about DSLs in Kotlin).

DSLs can be very elegant ways to improve code readability (as long as the assumptions and language they use are meaningful to the team).

This is an area that I wish node.js had invested in. Perl 6 has "slangs", which are extremely powerful; see [1] for some examples that just flow elegantly. Combine that with custom operators to write code like "4.7kΩ ± 5%" that would make sense to electronics engineers [2], and you can get some really user-friendly syntaxes.

(Again, it bears repeating that a DSL improves readability by sacrificing generality, so a DSL is for an audience that's usually a subset of that language's users. That electronics code would be useless to the average web developer.)
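To make the idea concrete: here's a minimal, hypothetical sketch (not from the linked Perl 6 code, and the `Resistance`/`kOhm`/`plus_minus` names are made up) of approximating the "4.7kΩ ± 5%" style in plain Python. Python can't define new operators like ±, so overloading plus a named method stands in for it.

```python
# Hypothetical sketch: a tolerance-carrying resistance value via operator
# overloading. Python has no custom operators, so plus_minus() stands in for ±.

class Resistance:
    def __init__(self, ohms, tolerance=0.0):
        self.ohms = ohms
        self.tolerance = tolerance   # fractional, e.g. 0.05 for ±5%

    def __rmul__(self, scalar):      # enables the `4.7 * kOhm` spelling
        return Resistance(scalar * self.ohms, self.tolerance)

    def plus_minus(self, percent):   # stand-in for the ± operator
        return Resistance(self.ohms, percent / 100.0)

    @property
    def bounds(self):
        delta = self.ohms * self.tolerance
        return (self.ohms - delta, self.ohms + delta)

kOhm = Resistance(1_000)

r = (4.7 * kOhm).plus_minus(5)   # reads close to "4.7kΩ ± 5%"
print(r.bounds)                  # ≈ (4465.0, 4935.0)
```

With real custom operators (as in Perl 6 or a Kotlin DSL), the method call disappears and the code reads like the datasheet notation itself.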

[1] https://github.com/tony-o/perl6-slang-sql

[2] https://perl6advent.wordpress.com/tag/dsl/




Wow. The Perl6 slang feature is awesome. I can't believe I missed it.

I haven't had a chance to use Perl 6 but I would really like an excuse. I love Lua's LPeg module. Perl 6 has deeply embedded PEGs (i.e. Grammars, aka parsing combinators) into its design. Knowing first-hand how powerful PEGs can be if the engine and language integration is implemented well, I've never doubted the awesomeness and elegance of Perl 6. It's unclear to me if Perl 6 Grammars' Action feature is as powerful as LPeg's capturing and inline transformation primitives [1], but the fact that you can plug a grammar into Perl 6's parser is crazy awesome.

I'm not surprised Slang exists--I've known it was possible; just surprised that it's a module and idiomatic pattern and I hadn't read about it before.

[1] See http://www.inf.puc-rio.br/~roberto/lpeg/#captures Most PEG engines just return your raw parse tree as a complete data structure and require you to fix it up manually. LPeg has really well thought-out extensions that allow you to express capture and tree transformations much more naturally; specifically, inline (both syntactically and runtime execution) with the pattern definition(s).
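To illustrate what "inline" captures mean, here's a toy Python sketch (assumed names, not LPeg's real API, and regex-backed rather than a true PEG engine): the transformation is attached to the pattern itself and runs as the match succeeds, rather than being a fix-up pass over a raw parse tree afterwards.

```python
# Toy illustration of LPeg-style inline captures: `pattern / function`
# attaches a transformation that fires during matching. Not LPeg's actual
# implementation -- just the shape of the idea, backed by Python's re module.

import re

class Pattern:
    def __init__(self, regex, action=None):
        self.regex = re.compile(regex)
        self.action = action

    def __truediv__(self, action):       # LPeg spells this `pattern / function`
        return Pattern(self.regex.pattern, action)

    def match(self, text, pos=0):
        m = self.regex.match(text, pos)
        if not m:
            return None, pos
        value = m.group(0)
        if self.action:
            value = self.action(value)   # transform inline, as the match succeeds
        return value, m.end()

number = Pattern(r"\d+") / int           # captures digits, yields an int directly

value, end = number.match("42 ohms")
print(value, end)                        # 42 2
```

The payoff is that the grammar and the shape of its output are defined in one place, instead of the parse tree being reshaped by separate walker code.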


Having read a lot of Python code that uses scientific libraries that override operators and key indexing, I'll have to disagree.

The result is easier to read if you're familiar with it, but exponentially more complicated if you're not and still learning. It makes the code marginally shorter, but I feel the sacrifice of the explicitness/verbosity of a normal method or function call isn't worth it.


Those libraries also make it easier to write that code. The majority of people using scientific libraries will be used to mathematical and scientific notation, so the closer the programming is to that, the better for them.
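The tradeoff both sides describe can be shown in a minimal sketch (a hypothetical `Column` class, not a real library): the same filter spelled with overloaded comparison and indexing, in the style NumPy/pandas users expect, versus an explicit, searchable method name.

```python
# Hypothetical class illustrating the readability tradeoff of overloading:
# terse operator spelling vs. an explicit method that a newcomer can look up.

class Column(list):
    def __gt__(self, n):
        return [x > n for x in self]     # comparison yields a boolean mask

    def __getitem__(self, mask):
        return Column(x for x, keep in zip(self, mask) if keep)

    def where_greater_than(self, n):     # the explicit, greppable spelling
        return Column(x for x in self if x > n)

col = Column([1, 5, 3, 8])
print(col[col > 2])                      # [5, 3, 8] -- terse, familiar to pandas users
print(col.where_greater_than(2))         # [5, 3, 8] -- verbose, but self-explanatory
```

Both lines do the same thing; which one "reads better" depends entirely on whether the reader already knows the mask-indexing idiom.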


> key indexing

Sure, that should come with a huge warning sign in the documentation. Edit: and as code should be self-documenting, that warning sign could be an extra function, sure.


> a DSL improves readability by sacrificing generality, so a DSL is for audience that's usually a subset of that language's users.

You'd think that'd be self-evident from the "domain specific" in the name, but I guess a lot of things are obvious once they're pointed out to you...


> 4.7kΩ ± 5%

And now ask somebody to type those unicode chars off the top of their head...

Plus, how do you look up what it does in Google? In documentation?

How do you generate documentation for those ?

Because you have no method name, no class name, nothing.

And then, what's the exact implementation behind it? Does it do something funny? Do I have to absorb a dozen funny details from all the code written by guys who thought that the 20 times they do this super-important operation, they need to create a whole new syntax?


Programming languages characteristically haven’t adopted Unicode syntax because Unicode input methods aren’t “there yet”. But if we start accepting Unicode in programming languages, at least as an option with ASCII as a fallback, then input methods, searching, and so on will be forced to improve.

I currently use Emacs’ TeX input method, so I can type “\forall\alpha. \alpha \to \alpha” and get “∀α. α → α” or ”4.7k\Omega \pm 5%” for “4.7kΩ ± 5%”, which isn’t too bad. And in Haskell at least there’s Hoogle, which lets you search for unfamiliar operators, even Unicode ones. For instance, searching for “∈” gives me base-unicode-symbols:Data.Foldable.Unicode.∈ :: (Foldable t, Eq a) => a -> t a -> Bool and tells me it’s equal to the “elem” function which tests for membership in a container.

Using ASCII art instead of standard symbols introduces some cognitive load for beginners as well. When I was tutoring computer science in college, I had students get confused by many little things, like “->” instead of “→” (“Minus greater than? What could that possibly mean?”) or writing “=>” instead of “>=” because “≤” is written “<=”.


I've been using Fira Code[0] which supports ligatures and (in addition to being a nice looking monospace font) it makes reading code with those symbols so much nicer. I mainly write Python and that generally uses a pretty limited set of special symbols, but it really does make a difference when you see ≤ rather than <= and 'long equals' instead of == etc. I definitely recommend it.

I'd love to see ligature/unicode support rolled out more widely, symbols allow for more easy differentiation and they're so nice to look at when you do have them.


Interesting idea to have it solved in a font.

For now, I've been using Emacs configured in a way to replace e.g. the word "lambda" with a symbol "λ" for display only - i.e. the original text stays in the text file, but on the screen, it gets rendered as a pretty version. I recall Haskell folks developed a lot of such replacements for mathematical symbols, too, but I can't get a proper Unicode font that would make them readable at the sizes I use (I prefer rather small text for code, so I can see more of it at the same time).


I’ve tried Fira Code before, and I think it goes a bit too far honestly. It’s nice that it preserves character widths, but ideally I’d like to get away from monospaced fonts as well—they’re pretty much just a holdover from the technical limitations of typewriters and text displays. The main reason I don’t use proportional-width typefaces in programming is that I like to work in a text-only terminal (to avoid distraction) and that the ASCII-art approximations of symbols tend not to look very good.


You realize your whole comment is actually arguing in my favor, right?


It isn't. Recall the general rule of thumb: code is read many more times than written.

Also related, most of the "code writing" part is still actually just code changes, which don't require as much typing from you.


> And now ask somebody to type those unicode chars off the top of their head...

That's about the only problematic thing with this, though your editor should let you do it relatively easily; if it doesn't, find one that doesn't suck ;).

> Plus, how do you look up what it does in Google? In documentation? How do you generate documentation for those?

Like with any other piece of API code. There's nothing special here.

> And then, what's the exact implementation behind it? Does it do something funny? Do I have to absorb a dozen funny details from all the code written by guys who thought that the 20 times they do this super-important operation, they need to create a whole new syntax?

Check the associated documentation, or source code if available.

I get the impression programmers nowadays are afraid of reading the source of stuff they use.


> > And now ask somebody to type those unicode chars off the top of their head...

> That's about the only problematic thing with this, though your editor should let you do it relatively easily; if it doesn't, find one that doesn't suck ;).

On macOS, and on a qwerty keyboard:

• Ω is Option Z

• ± is Option Shift = (or Option + if you look at it another way)

Many moons ago, HyperTalk supported (and I think AppleScript still does) ≥, ≤ and ≠ as shorthands for >=, <= and !=; they were typed as Option >, Option <, and Option = respectively (fewer keystrokes).


You really need to work outside of your 10x bubble. It's scary to see you completely ignore common dev issues.


I don't ignore it. I refuse to accept that a developer can be allowed to forever use only the little knowledge they got on their bootcamp / university, and never learn anything on the job.

It's like when we were transitioning to Java 8 on a project at work, and someone asked me if I don't think using lambda expressions will be confusing to some people in the company. My answer was: this is standard Java now, they're Java developers - they should sit down and learn their fucking language. We should not sacrifice the quality of the codebase just because a few people can't be bothered to spend a few hours learning.

(Oh, and over time, it turned out nobody was confused for long. Some people saw me use lambdas; others saw their favourite IDEs suggesting lambdas instead of anonymous-class boilerplate - all of that motivated them to learn. Now they all know how to use the new Java 8 features and happily apply this knowledge.)

Programming is a profession. One should be expected to learn on the job. And refusing to learn is, frankly, self-handicapping.



