Examples of everything new in ECMAScript 2016, 2017, and 2018 (freecodecamp.org)
361 points by node-bayarea on April 3, 2018 | 121 comments



> Trivia: the JavaScript spec people wanted to name it contains, but this was apparently already used by Mootools so they used includes

That sounds ... unfortunate, to say the least. Why is the language spec. beholden to one framework written in that language? Doesn't mootools [1] check first for functions it might be overriding?

> Emojis and other double-byte chars are represented using multiple bytes of unicode. So padStart and padEnd might not work as expected!

It's a shame this isn't followed up on. What's the solution? Only use ASCII? Don't use padStart/padEnd? Does anyone know anything about monospace fonts and any guarantees they make wrt. unicode?

[1] https://mootools.net/core/docs/1.5.2/Types/Array#Array:conta...


Basically mootools screwed up, and this is what smooshGate was about as well.

They modified the Array prototype directly with their own methods. They did feature detection on the prototype by checking if a method of that name already existed ("flatten" in the case of smooshGate, "contains" in the case of "includes"), and didn't replace the prototype method if a function of that name was already there, whether or not it was a compatible implementation. Unfortunately, the behavior of these MooTools methods differed enough from the official spec that they cannot be used interchangeably.
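
(A hypothetical sketch of that detection pattern, not MooTools' actual source:)

  // Install the library's method only if nothing by that name exists yet.
  if (!Array.prototype.contains) {
    Array.prototype.contains = function (item) {
      return this.indexOf(item) !== -1;
    };
  }
  // Once a browser ships a native "contains" with different semantics, the
  // guard skips the library version, and sites written against the library's
  // behavior silently get the native behavior instead.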

The consequence is that if "flatten" or "contains" were to become part of the official spec and be implemented by browsers, then all legacy MooTools applications would break, and MooTools is apparently a big enough deal that this ends the conversation.

This is what happened when Mozilla implemented flatten on firefox nightly: https://bugzilla.mozilla.org/show_bug.cgi?id=1443630


That just proves that "contains" is the correct name for the language feature.

Implement it correctly, without regard to any silly things that 3rd parties were doing. 5 years from now, nobody will remember that library.

Do it the way they're doing, and 5, 15, 50 years from now people will still be asking why this stupid language used a non-standard method name for Array.contains.

Extra points for encouraging all the other poorly implemented javascript libraries to fix their things before they break for similar reasons.


I disagree. "contains" might be a nice name, but if it breaks a significant number of websites and forces many to have to update and fix websites that have worked for years, then a simple rename is a fantastic solution in my opinion.

This isn't about being "right", it's about finding the best solution to the problem at hand, and a solution that breaks things but is "technically correct" is worse than one that doesn't break things but has a slightly wonky name.


Only one of those solutions has negative everlasting effects, though, and a bad name definitely does break things.

A broken app can be fixed once, by one developer. A bad name will cause confusion among all developers and waste their time.

Unfortunately, JS already has such shitty names for things that everybody just accepted this. But they were wrong. They should have definitely broken apps built with MooTools, for the reasons I stated, and also because browsers have already broken tons of sites for lesser reasons, such as removing ads.


JS prioritizes backwards compatibility to a very high degree. This has proven to be a strength over the years, as it allows the language to evolve quickly without big drama like Python's.


It's not comparable with Python, which broke correct code to the point where even `print "Hello world"` was broken! This mootools issue only affects a very small number of users.

It amounts to a denial-of-service attack against the language: if you author a popular enough JS library, you can screw things up for everybody in perpetuity. That isn't a healthy way to design a language.


Give the guy a break, he was given 10 days to design the thing. It worked out pretty well in the end. This is spilt milk from 20+ years ago.


Reminds me of the decision for DOS to use backslashes as path separators because some doofus had chosen the forward slash for command-line switches.

Instead of a temporary pain for perhaps a thousand users, ~40 years later millions are still suffering from it.


The Unix-Haters Handbook also has a story about backcompat:

> Way back in the early 1980s, before each of the bugs in Unix had such a large cult following, a programmer at BBN actually fixed the bug in Berkeley’s `make` that requires starting rule lines with tab characters instead of any whitespace. It wasn’t a hard fix—just a few lines of code.

> Like any group of responsible citizens, the hackers at BBN sent the patch back to Berkeley so the fix could be incorporated into the master Unix sources. A year later, Berkeley released a new version of Unix with the `make` bug still there. The BBN hackers fixed the bug a second time, and once again sent the patch back to Berkeley.

> …The _third_ time that Berkeley released a version of `make` with the same bug present, the hackers at BBN gave up. Instead of fixing the bug in Berkeley `make`, they went through all of their Makefiles, found the lines that began with spaces, and turned the spaces into tabs. After all, BBN was paying them to write new programs, not to fix the same old bugs over and over again.

> (According to legend, Stu Feldman didn’t fix `make`’s syntax, after he realized that the syntax was broken, because he already had 10 users.)


Ha, one wonders why it doesn't just allow spaces OR tabs.


Make that billions.


The real pain is from people that don't know better when saying a URL out loud. "domain.com/page" gets incorrectly spoken as "domain dot com backslash page"

Relevant XKCD: https://xkcd.com/727/


The Chrome devs talked more about this here: https://developers.google.com/web/updates/2018/03/smooshgate

Basically, the web is designed with backwards compatibility in mind, or at least with not completely breaking old websites, which is why they chose not to go down that path.


"The Chrome Devs" is not a single entity, it would seem.

These don't seem like the same "The Chrome Devs" who broke every WebAudio app in existence by removing .noteOn within 12 months of .start being proposed as a replacement for it.

And that was a case where there was absolutely no long term harm keeping the recently deprecated method name around. It just seemed tidier to some junior dev someplace and nobody questioned it. There is actually a thread someplace where they moved it back by a build so that they would have a chance to fix Google's "Chrome WebAudio Showcase" site which had broken because of the change.

In this case though, I disagree that we're talking about backward compatibility. That's about not breaking things you previously released with things you release in the future. It takes Microsoft-Windows-esque attention to detail and testing effort to ensure you never break some fool's reliance on an undocumented feature.


> That sounds ... unfortunate, to say the least. Why is the language spec. beholden to one framework written in that language? Doesn't mootools [1] check first for functions it might be overriding?

I've looked at it, and things are actually worse than I thought: not only do the spec and MooTools disagree on behaviour (the spec is shallow while MooTools is deep), the actual issue, the one which already broke `contains`, is that MooTools adds methods to Array (conditionally, in recent-ish versions), but it then copies them over to Element. The issue there is that it iterates Array.prototype and copies over what it finds.

"Spec" methods are defined as non-enumerable, so it doesn't see them, does not copy them over, and thus using these methods on Element doesn't work anymore. Funnily enough this breaks whether it implements its extensions conditionally or not: even if it re-sets the method, it doesn't reset the enumerable flag, so they're still not enumerable, and still not copied over to Element.

This means the method would apparently have to be enumerable (which normally isn't the case) or it would need to be a non-data property with a setter magically setting the enumerable bit. And the committee is apparently unenthused by this prospect.
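
(You can verify the enumerable-flag behaviour in a console; `includes` here is just a convenient example:)

  // Native prototype methods ship as non-enumerable:
  Object.getOwnPropertyDescriptor(Array.prototype, 'includes').enumerable; // false

  // Plain assignment over an *existing* property keeps its descriptor,
  // so re-setting the method leaves it non-enumerable:
  Array.prototype.includes = function () {};
  Object.getOwnPropertyDescriptor(Array.prototype, 'includes').enumerable; // still false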


The problem is that mootools did it first.

But (I think, I’m on my phone and can’t check), they won’t replace the method if it already exists.

So the problem is that if the spec differs in behavior at all from the mootools implementation, existing websites will break (meaning that the website was expecting the behavior of mootools’ implementation and got the spec version instead).

Even if TC39 didn’t care about breaking old websites, the browser vendors do. So they simply wouldn’t implement the spec if it broke a bunch of websites. Backwards incompatible changes don’t really hurt developers as much as they hurt users. And the browser vendors don’t want people to start complaining about how websites are broken in [X].


This is a terrible idea, but if it really is the browser vendors fighting this, then perhaps the browser should detect MooTools (blacklist websites?) and disable the newer versions. Perhaps Google's crawler already has this list.


Existing websites will break if they depend on whatever the implementation difference is. Does anyone know what that difference actually is? Seems significant.


Even if their implementation is identical to the standard, it is still problematic. The key problem is the way MooTools tries to copy over that method (and many other methods):

> Currently, Array.prototype.flatten = mooToolsFlattenImplementation creates an enumerable flatten property, so it’s later copied to Elements. But if we ship a native version of flatten, it becomes non-enumerable, and isn’t copied to Elements. Any code relying on MooTools’ Elements.prototype.flatten is now broken.

> Although it seems like changing the native Array.prototype.flatten to be enumerable would fix the problem, it would likely cause even more compatibility issues. Every website relying on for-in to iterate over an array (which is a bad practice, but it happens) would then suddenly get an additional loop iteration for the flatten property.

> The bigger underlying problem here is modifying built-in objects. Extending native prototypes is generally accepted as a bad practice nowadays, as it doesn’t compose nicely with other libraries and third-party code. Don’t modify objects you don’t own!
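
(A quick illustration of the for-in hazard described above; `flatten` is just a stand-in name:)

  // Plain assignment to a *new* prototype property creates it enumerable:
  Array.prototype.flatten = function () {};
  for (const key in [10, 20]) console.log(key);
  // logs '0', '1', and then 'flatten': an extra iteration that old
  // for-in-over-arrays code never expected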


On Chrome and Firefox, at least, as long as I delete Array.prototype.whatever first, I get an enumerable property when I polyfill it.

Not saying that's the ideal solution, and there are certainly other issues with extending built-ins and clobbering the original name. The Chrome blog also mentions forcing a patch to legacy websites is an unacceptable solution (though I'm not sure I agree with that unless we want to maintain name compatibility with every library forevermore).

However, it's certainly not a matter where MooTools would have to completely rework what they're doing, or where the browser would have to make the property enumerable from the start.

In fact, it'd be easy enough to use a compatibility shim on the site side that'd nuke all the conflicting properties from Array.prototype before MooTools ever loads, if they know they're on ES5 and would never use the native ones.

    > Array.prototype.map = () => { console.log('map, yo'); }
    function Array.prototype.map()
    > [].map()
    map, yo
    undefined
    > for (k in []) console.log(k)
    undefined
    > delete Array.prototype.map
    true
    > Array.prototype.map = () => { console.log('map, yo'); }
    function Array.prototype.map()
    > [].map()
    map, yo
    undefined
    > for (k in []) console.log(k)
    map
    undefined


> as long as I delete Array.prototype.whatever first, I get an enumerable property when I polyfill it.

The key point there is "deleting first". If you assign to a property without deleting it first, it continues to be non-enumerable.

> it's certainly not a matter where MooTools would have to completely rework what they're doing

> it'd be easy enough to use a compatability shim on the site side

MooTools has released a patch for this problem. The problem is indeed the sites that have been built and left to their own devices.

This is not at all dissimilar to Windows dragging around garbage needed "to permit four programs written in 1994 to continue running" [1].

[1]: https://blogs.msdn.microsoft.com/oldnewthing/20031103-00/?p=...


I definitely agree with that last point. I get the desire to keep the top 100 sites going--I used to be on test for Mozilla, and you really don't want to be the browser that can't view a significant chunk of the web.

But that sort of fear-based commitment to backwards compatibility didn't do Windows any favors. I'd really hate to see it become endemic to the web.


I can see both sides of the argument here — from the technical point of view, it is certainly unappealing to maintain such kludges, and from the business point of view, "it's been working fine so whatever changed on the other end is broken, we won't spend money fixing it"…

Do you have any thoughts on how this quagmire can be avoided on the web? I can imagine solutions similar to Ghostery's stub scripts, but putting that in a browser, let alone many browsers, sounds like a large legal problem.


Aside from something like forced namespacing, I don't, really. I suppose being able to pin the JS version in the browser might help, similar to quirks mode of old, but that leads to another version of the BC issue.

But ultimately this is a question of the lesser of two evils, and I do sort of think that the real solution here is that site owners own their sites and their decisions as to what libraries to use, and thus own their own downtime. Browser vendors can't reasonably try to take that responsibility on themselves in an uncurated ecosystem, especially in cases like MooTools where the library vendor did something long known to be questionable.

That may be idealistic, but I think it's the only path that really works in the long run. Unfortunately, in browsers market share is king, so I doubt that attitude will be adopted. But I definitely wouldn't buy any argument that it's for the user's benefit--it's all about not wanting to get blame splashed back.


"but this was apparently already used by Mootools so they used includes"

serious question... does anyone even use mootools at this point in time to even consider this? seems like more and more people are leaving jquery for vanilla javascript every day, so i have to conclude that jquery itself will eventually run its course, like prototype did, and die off, but i would have thought that mootools died long ago.


Approx 460,000 websites [0]. Among them jsfiddle.net, tripadvisor.com.

[0] https://www.wappalyzer.com/technologies/mootools


There are various ways of testing this. For instance, in the case of 'flatten', Firefox released it in their Nightly and it did break a top-100 site; that was enough to end the debate.

https://bugzilla.mozilla.org/show_bug.cgi?id=1443630


Old sites using old versions of Mootools might never be updated. Even if Mootools fixed their feature detection of contains() and flatten(), old Mootools versions will still be floating around on real sites forever.


> What's the solution? Only use ASCII? Don't use padStart/padEnd? Does anyone know anything about monospace fonts and any guarantees they make wrt. unicode?

Well.

The description given of "multiple bytes of unicode" is terribly misleading.

There are multiple reasons you might run into trouble in a JavaScript string. One is that it uses UTF-16 for its string type; this represents Unicode code points as 16-bit units. For code points which fit in 16 bits, it uses one unit, and for code points that don't, it uses two units, so it's a variable-width encoding (all code points of Unicode fit in 32 bits, so two units per code point is the most you'll see in a UTF-16 sequence). This is done via a mechanism which derives two "surrogate" code points from the original, and the resulting two code points are called a "surrogate pair".

Unfortunately, JavaScript leaks this implementation detail to the programmer, which means many string operations can "cut" a surrogate pair in two and leave you with code points that don't actually represent any character, because they're from the surrogate range.

But that's not what's happening in the given example.

Emoji are complicated. Some emoji use only a single Unicode code point, while others are composed from multiple code points, potentially with a joiner code point in between. Here's a comment I just posted in another thread with an example:

https://news.ycombinator.com/item?id=16757317

In the example in the linked article, the "heart" emoji is actually two code points: U+2764 HEAVY BLACK HEART and U+FE0F VARIATION SELECTOR-16. The first is a heart-shaped character that's been around for years; the second is a "variation selector" character which tells whatever's rendering this that it should use a variant emoji-style presentation.

But since that's two code points, again, operations on it can "cut" it in half and cause havoc.

The "workaround" is not to use string indexing or slicing operations in JavaScript if you think you'll be handed emoji or anything else from outside the Basic Multilingual Plane of Unicode, or to be prepared to manually handle them.

As to monospace fonts, this article has a great rundown of how Unicode actually works, and an exploration of how various monospace environments try, and often fail, to handle it:

https://eev.ee/blog/2015/09/12/dark-corners-of-unicode/


Don't get too excited for SharedArrayBuffer. Every major browser disabled it to help mitigate spectre, and there's no current road map for re-enablement.


Speaking of which, while I can see the usefulness of SharedArrayBuffer and Atomics for certain libraries and creating certain functionality, I have had some concerns about these new modules. Specifically, I think it complicates the simple model that ES had going for it for a few reasons:

* There are already many potential spots for side-effects and mutability in ES as it is. Now we've introduced another one, but it works differently from the rest of the model.

* Aside from maintaining order with the event loop and async operations, you really didn't need to worry about shared mutable memory in concurrent environments. Now we need to keep that in mind when we work.

* If one needs to work with a SharedArrayBuffer instance, the functions they write need to assume/test that the argument is a SharedArrayBuffer specifically because it needs to deal with that type using a separate module (Atomics).

* A smaller point, but it does add to the API surface of ES. That could be confusing as time goes on, especially if a developer is unsure of why SharedArrayBuffer is around.

These issues can be dealt with by being very careful and selective of when to use these tools, as well as trying to only use them within libraries or narrow contexts. So they're tolerable for sure, but they are just things that have crossed my mind.

That being said, what future plans are there for SharedArrayBuffer/Atomics? Maybe the future holds some ideas that will make things better for this feature.
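
For anyone who hasn't touched the new API yet, a minimal sketch (assuming a `worker` already created via new Worker(...), and a browser with SharedArrayBuffer enabled):

  const sab = new SharedArrayBuffer(4);  // 4 bytes of shared memory
  const shared = new Int32Array(sab);    // typed-array view over it
  worker.postMessage(sab);               // shared by reference, not copied

  // In either thread, Atomics gives race-free access to the same cell:
  Atomics.add(shared, 0, 1);             // atomic increment of shared[0]
  Atomics.load(shared, 0);               // atomic read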


Having just watched one of Kevlin Henney's many talks on the subject on Youtube, I'm reminded of his mutability and shared state diagram. Since I can't find a copy of it outside of an hour long video, I'll try to replicate it here (sorry for those on mobile):

               Mutable
                 ^
      (Good)     |      (Bad)
                 |
  Non-Shared ----------> Shared
    State        |       State
                 |
      (Good)     |      (Good)
               Immutable
On the top half of the graph you have Mutable state, on the bottom half Immutable. On the left you don't share the state and on the right you do. Everything is fine as long as you don't both mutate the state and share it. Of course, that's what we always feel like we want to do ;-)


I'd remove the "state" part from either shared or non-shared. Shared immutable data is not state, it's just data.


What's the rationale behind non-shared mutable state being good? State is a huge source of bugs even if you're just using it locally, in my opinion.


Simple example: sometimes it's tremendously easier and more performant to write loops with an increment counter. Test it, stick it in a function, and it works just as well as any thing else.


I agree with you generally. However, using persistent data structures can sometimes be tricky and can complicate the code. As long as you contain the mutability you can get some benefits: increased performance, reduced memory usage, etc.

I once wrote a SIP client for Windows mobile in .Net. Resources were scarce and .Net would spin up threads if you so much as looked at it the wrong way. I had a single thread reading from the network into a circular queue -- clearly that had to be mutable or memory allocation alone would have set the machine on fire. I then used a reactor pattern in the UI thread: basically on the windows event loop being idle, I read from the circular queue. Because I could guarantee that there was never any more than 1 thread reading from the queue and 1 thread writing to the queue, I could get away with no locking (you drop packets that will overrun the queue).

For me, that's the kind of thing that this graph is saying: mutability is problematic, but if you are forced to use it you need to be really careful about concurrency.
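
In today's JS, the same single-producer/single-consumer trick could be sketched over the SharedArrayBuffer/Atomics discussed upthread. Hypothetical code; it is safe without locks only because exactly one thread writes the head index and exactly one writes the tail:

  const SIZE = 16, HEAD = 16, TAIL = 17;  // 16 data slots + 2 index cells
  const q = new Int32Array(new SharedArrayBuffer(4 * (SIZE + 2)));

  function push(value) {                  // producer thread only
    const head = Atomics.load(q, HEAD);
    const next = (head + 1) % SIZE;
    if (next === Atomics.load(q, TAIL)) return false; // full: drop the packet
    Atomics.store(q, head, value);
    Atomics.store(q, HEAD, next);
    return true;
  }

  function pop() {                        // consumer thread only
    const tail = Atomics.load(q, TAIL);
    if (tail === Atomics.load(q, HEAD)) return undefined; // empty
    const value = Atomics.load(q, tail);
    Atomics.store(q, TAIL, (tail + 1) % SIZE);
    return value;
  }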


I just mean I wouldn't refer to non-shared mutability as "good". You should aim for immutability by default and drop to mutability if it has a clear benefit which depending on the domain can be rare. Most of the time I'm optimising for developer productivity and reduction of bugs first; performance and memory consumption comes later if it's an issue.


Actors in actor model have local state.


That is a great graph; it shows the advantage of immutable state, but also shows that mutable state is not strictly always bad.


I hope they scrap it for a better model. Shared memory + locks is not the only way to handle concurrency and can be difficult to reason about. I don't quite understand what problem it's solving that message passing doesn't already.

Anyone have any insight on this?


Parallel scanning of large data sets, like object graphs. You don't want to copy the entire graph to each thread, nor do fine grained passing around of a node per traversal.

However, if what this exposes is just an array of bytes, that's less useful as you have to cast your own object model on top of it. Large image data might be still useful as just bytes, with threads running convolutions or NNs on it.


It isn't the only way to handle concurrency, but they are the only primitives that can be used to port existing code (C/C++/Rust/etc) into the browser sandbox without killing performance, or introducing crazier and unsound primitives (like stack manipulation).

I've used SharedArrayBuffers to port things like latex into the browser ( https://browsix.org ), and without it you can't use wasm or asm.js, you need to interpret C code in JavaScript to save/restore the stack on system calls.


> It isn't the only way to handle concurrency, but they are the only primitives that can be used to port existing code (C/C++/Rust/etc) into the browser sandbox without killing performance

I thought a big reason why we have WASM is so that that emphatically does not need to be a concern of JS.


Shared memory + locks isn't about concurrency, but probably about parallelism. If you just need concurrency, then messaging between web workers should be good enough.


Fundamentally, you have to implement every other approach using locks behind the scenes (with the possible exception of immutable data).

It may not be an ideal user primitive, but it serves library makers well.


I think the main use case is to support c++ codebases compiled with emscripten without having to rewrite the whole codebase to not assume shared mutable state.


I wouldn't be surprised if it stays disabled for a long time; there are plenty of non-Spectre-related timing side-channels that are made possible by shared mutable state.


Do you have a link for this, by chance? I hadn't heard.


The browser compatibility section of the MDN article [1] lists the various browsers and has footnotes that provide further info on each browser's handling of the situation.

[1] https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...


"Note that SharedArrayBuffer was disabled by default in all major browsers on 5 January, 2018 in response to Spectre."

https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...


Are you sure about that?

If that's the case, what do you think this means: https://bugs.chromium.org/p/chromium/issues/detail?id=821270


Just for context:

1) Site isolation is not enabled in Chrome yet.

2) As far as I can tell, there are no plans to enable it in Chrome on Android even when it's enabled on desktop.

Yes, this would mean SharedArrayBuffer on desktop but not Android.


I'm surprised we don't have the 'safe navigation operator' yet in Javascript.

It would be very useful, especially in templating engines, complex VueJS views etc.

https://en.wikipedia.org/wiki/Safe_navigation_operator
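
There's a TC39 proposal (optional chaining) along exactly these lines; roughly:

  const street = user?.address?.street;  // undefined instead of a TypeError
  // versus today's workaround:
  const street2 = user && user.address && user.address.street;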



This is one of my favorite proposals, along with the pipe operator and partial application proposals:

  const add = (x, y) => x + y;
  const addTwo = add(2, ?);
  let x = addTwo(5); // 7
  
  x = 5 |> addTwo |> add(?, 10); // 17
The last example is equivalent to:

  add(addTwo(5), 10);
Obviously the pipe operator is more useful when order matters.


I wouldn't mind seeing forward and reverse composition operators (so >> and << from F#) as well. But getting pipe would be a pretty awesome first step.


Looks like there’s a recent proposal for them:

https://github.com/TheNavigateur/proposal-pipeline-operator-...


Although I expect you're presenting a contrived example, I find that syntax horrible. In your example it provides no additional value, and requires more typing. And more thinking to parse.


Which aspect, the partial or the pipeline?

The partial application operator clearly wins over bind, IMO, which is currently the way you'd do that [ addTwo = add.bind(this, 2) ] if only because you don't need to specify the this parameter. It also gives the option to "bind" to other parameters than the leftmost.

The pipeline operator in that example is a little contrived, but coming from any functional background it makes a ton of sense. Depending on what I was piping, I might also format it more like:

    x = 5 
      |> addTwo
      |> add(?, 10)
...which would probably resonate with anyone used to promise chains:

    x = Promise.resolve(5)
      .then(addTwo)
      .then(add(?, 10)) // or v => add(v, 10)
It's still contrived, but it is a reasonably familiar flow in JS. It would also clean up async/await syntax a lot assuming they could be used together:

    x = await foo(await bar(2, await baz(3))) 
...becomes...

    x = await baz(3)
      |> await bar(2, ?)
      |> await foo(?)
...which honestly is a more accurate representation of what's happening.


Thanks for the expanded examples. Maybe I'll like it once I see it in the wild solving real problems.


The point is it looks a lot better than several nested function calls, especially when the order of the calls is important.

In JS you can do stuff like this:

  const rules = parsedTags
    .split(';')
    .map(item => item.trim())
    .filter(item => !!item);
But in some functional languages you would have to do something like this:

  Array.filter(Array.map(String.split(parsedTags, ';'), item => String.trim(item)), item => !!item);
Which makes it a lot more difficult to write/understand. (Note the order in which the functions will be called).

With a pipe operator you can do something like this:

  parsedTags
    |> String.split(?, ';')
    |> Array.map(?, item => String.trim(item))
    |> Array.filter(?, item => !!item);
Now that people are writing more functional JS, a pipe operator becomes much more useful since you can't always chain function calls like the JS example above.


kinda already possible:

  const bind1st = (f, x) => (y) => f(x, y)
  const addTwo = bind1st(add, 2)
  let x = addTwo(5) // 7

it can get a little confusing unless one's used to point-free programming:

  const twostep = bind1st(bind1st, bind1st) // YMMV


this looks... like F#?


Yes, it’s inspired by the pipe operator in languages like F#.



Is there a Babel plugin for pipeline operations?


Thanks! Can we help move that proposal forward?


TC39 delegate hat on: Participate in the proposal issues -- raise any that you find and/or help resolve ones that others raise. One area of concern is that due to the number of syntax proposals being presented, the language would become too "dense" and alienate the beginner. Balancing the power of the language while making it teachable and approachable is important. Feedback on the proposals from JS educators would be helpful.


This is so true. I still don't use arrow syntax since it looks like gibberish (or, more exactly, like "equal to or greater than") to me, despite my best efforts to get used to it. Those examples using pipes just make my eyes go out of focus ;-)



that's because you're using special fonts...

or the JS committee should recommend those ligature fonts


Gigantic sticky header and footer and a huge font size? You can barely fit a single paragraph on the page, this is probably one of the worst website designs that I've ever seen.


You might like this extension [0] for Firefox and Chrome that removes the sticky header and footer from Medium sites.

[0]: https://github.com/thebaer/MMRA


Also goes away if you log into Medium.. but your suggestion is better if you don't want to set up a login!


Thanks to using reader mode in Safari by default... I had no idea what you were talking about. I find few sites delivering any value worth turning it off.


Also, most of their code examples are screenshots, and thus the code can't be copy/pasted.


A quick and dirty hack for this in Chrome:

- Right-click > Inspect on the menu bar

- Move over the DOM nodes until the header is highlighted correctly

- Hit backspace

- Use delete if you were overzealous in your deleting


It's probably in order to maintain an optimal character count (https://baymard.com/blog/line-length-readability) without the paragraph being too small.


> This is because ️ is two bytes long ('\u2764\uFE0F' )!

No, that's not bytes. You're counting code points. Because of the way Javascript works, it can't accept code points greater than U+FFFF, so it has to use surrogate pairs.


Nope, nothing to do with surrogates either. Surrogates are code points encoded with 2 code units of 2 bytes each. A number of emoji, like the heart, are coded using multiple code points, which is even more bizarre.


> Nope, nothing to do with surrogates either. Surrogates are code points encoded with 2 code units of 2 bytes each.

Surrogates are actual codepoints, the range U+D800 to U+DFFF is reserved exclusively for that use: https://en.wikipedia.org/wiki/Universal_Character_Set_charac....

> A number of emoji, like the heart, are coded using multiple code points, which is even more bizarre.

Not really. It's a base codepoint plus some sort of combining/variation selector, not unlike e +‌ ‌́ = é e.g. U+2764 HEAVY BLACK HEART (which HN strips because as of 2018 HN's commenting system remains hot garbage) + U+FE0F VARIATION SELECTOR-16 = a red heart. That mechanic is also used to select skin tones on "people" emoji e.g. take U+1F476 BABY, add U+1F3FD EMOJI MODIFIER FITZPATRICK TYPE-4 and blamo light-brown baby. This allows a multiplicity of variants when useful without having to implement each combination individually.

There are emoji which are actually coded using multiple codepoints (not a base + modifiers): the country flags, which are pairs of regional indicator symbols composing ISO-3166 country codes e.g. U+1F1F1 REGIONAL INDICATOR SYMBOL LETTER L + U+1F1F8 REGIONAL INDICATOR SYMBOL LETTER S = 🇱🇸 (the flag of lesotho). Unpaired regional indicators display as crummy placeholders at best, they're not just modified, here's with an interstitial space: 🇱 🇸

"Family" emoji take it one step further (and into the "hack" realm imo), they're a bunch of independent emoji (possibly with their own variation selectors) "joined" by ZJW.


> Surrogates are actual codepoints, the range U+D800 to U+DFFF is reserved exclusively for that use:

Yes ... and no. In a UTF-16-encoded string, surrogates lose their status of code point (which they would have in a UTF-32- or a UCS-2-encoded string) and are mere code units instead.

The fact that the natural number they represent is reserved as code points is only a trick to band-aid systems that are trying to interpret buffers as UCS-2 although they are UTF-16.

> Not really. It's a base codepoint plus some sort of combining/variation selector, not unlike e +‌ ‌́ = é

I am pretty sure the combining marks and all these other things are technically still code points on their own. They're not glyphs, though: a glyph can be represented by multiple code points, as you mentioned one base code point (or two as in the flags) and potential combining marks/selectors/etc.

And then the families are indeed multiple glyphs with ligatures, AFAIU.


The ECMAScript spec is weird. One example: in ES5 RegExp.prototype was itself a RegExp, in ES6 it became not a RegExp but just an Object, and in ES7 it stayed an Object but all of the RegExp.prototype methods have to specially check if `this` is RegExp.prototype, thereby pretending to be a RegExp, without actually being one.
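
(You can see the ES7-era behaviour in a console:)

  Object.prototype.toString.call(RegExp.prototype); // "[object Object]": not a RegExp
  RegExp.prototype.source; // "(?:)" -- the getter special-cases RegExp.prototype
  RegExp.prototype.flags;  // ""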

Why does the spec have this churn?


I suspect it might have something to do with allowing developers to extend the built-in types - I know that changes were made between ES5 and ES6 to allow subclassing Array, could well be the same issue. See Symbol.species for instance.

Wait, it's actually most likely because in ES6 you can use `new RegExp()` as well as `RegExp()` to create a new object.


That’s nice; lately I’ve started forgetting what’s ECMAScript and what’s a Babel/Webpack feature.

Sometimes it’s nice to have a refresh.


Can someone please explain why for…of needed to be specialized for its async form?

I get why async generators & iterators won't work with plain for…of, but I don't get why the following is invalid:

    for (const i of [1, 2, 3])
      await getThingAtIndex(i);
The very last example in the article uses similar code, but I feel iterating over an array of promises misses the point (as opposed to actually receiving an async iterator with `async next () {…}` via `Symbol.iterator`).

Or am I missing something here? Thanks.


What you've written isn't invalid, it just does something different. Async iterators effectively yield awaitable promises instead of values.

Iterators, normally, are synchronous. Generators immediately yield control to whatever is consuming from it. An async iterator yields things that will eventually yield control.

The best example I can think of is disk IO. In Python you can iterate over a file handle, but in JS you can't (without blocking). Async iterators would enable this: each yielded promise would resolve with the next line of the file.
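
A sketch of what that enables (hypothetical `readLines`; note that `for await` is only valid inside an async function):

  // An async generator: each yield produces a value the consumer awaits.
  async function* readLines(path) {
    // pretend each line arrives only after a disk read completes
    yield 'first line';
    yield 'second line';
  }

  async function main() {
    for await (const line of readLines('/tmp/log')) {
      console.log(line); // the loop awaits each value before continuing
    }
  }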

Edit: the spec is pretty interesting: https://github.com/tc39/proposal-async-iteration/blob/master...


Your example is equal to

  await getThingAtIndex(1)
  await getThingAtIndex(2)
  await getThingAtIndex(3)
whereas the async for loop is equivalent to

  await Promise.all([
    getThingAtIndex(1),
    getThingAtIndex(2),
    getThingAtIndex(3)
  ])
Your example is valid syntax, but it executes each call synchronously.


The article says: `This feature adds a new “for-await-of” loop that allows us to call async functions that return promises (or Arrays with a bunch of promises) in a loop. The cool thing is that the loop waits for each Promise to resolve before doing to the next loop.` That seems to contradict what you say.


i personally think the ECMAScript committee needs to take a break and chill for a bit. they are changing things too fast. it's to the point where one forgets if a certain new syntax is available in whatever version they are on and needs to check the manual several times. it was fine through es5, 6, 7, but if this keeps up at this rate, it's getting too much.


> one forgets if a certain new syntax is available in the whatever version they are and need to check the manual several times

You know that the current standard practice in JS-land is to always target the latest version, transpile specifying what browsers/runtime version you target, right? ...heck, even the way to config things like Babel sort of implicitly assumes this. Or pick something like Typescript that is practically always a superset of the current bleeding edge.

...it took me a while to grok the philosophy, but the only way to survive in modern JS-land is to fully adopt the "always bleeding edge" mindset. Any other way to think about it will end up driving you insane, as all the tools, framework etc. assume the "let tools handle versions compatibilty and always code for latest version" :P

JS is a "tooling ecosystem language", you don't just "write JS" you "write JS using tools X, Y, Z etc." The tools you use practically define the language. You can even say "fuck standardization" from time to time if you like need full-featured macros for a project and pull in some tool that adds this feature to JS and just use it. This kind of theoretically infinite power in the end made me "stop worrying and love the bomb" :)


Seems you're tired of the entire JS ecosystem.

JS itself was in need of updates as it was completely left behind against many of modern languages despite still being in the center of the web.

You see how there are so many AltJS and transpilers because plain JS was dragging behind.


Yet we're always transpiling JS anyway, all the time, for browser support. Might as well just bite the bullet, leave it as a compilation target, and use something better, with fewer foot-guns.


Completely agree!


I really don't understand the need for the infix exponentiation operator. Maybe it's because I just don't do calculation heavy JS code but it seems like nothing but a clever little trick you can impress colleagues with. I would hate for JS to evolve towards needing something like https://www.ozonehouse.com/mark/periodic/
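
For reference, the operator is just sugar over Math.pow:

  2 ** 10;   // 1024, same as Math.pow(2, 10)
  let n = 3;
  n **= 2;   // 9: assignment form, like += or *=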


I see JS becoming more and more complex, while elm becomes more and more simple, yet it can do a lot.


Fixed header and footer wastes enough screen space on a laptop that it makes it annoying to read.


> Until now, if we want to share data between the main JS thread and web-workers, we had to copy the data and send it to the other thread using postMessage . Not anymore!

This would have been great for a complex Chrome web extension (a Gmail "AI" plugin) I developed previously. It relied on quite a significant amount of postMessage'ing back and forth, for which I had to create a hand-built queue system (on both ends) with redundancy and debugging. And even then I still had an endless series of race conditions and silently failing bugs that were difficult to track down :/


Question from a backend dev who needs to do some JavaScript from time to time.

What version should I use? With backend stuff I am usually responsible for installing the versions of the code / language / libraries that I use. With browsers what can I reasonably expect users to have in their browser?


JavaScript on the client is usually transpiled and polyfilled to work with older versions with something like Babel. You'd use something like babel-preset-env, which allows you to specify the level of compatibility you're targeting. The default configuration assumes that your clients support ES5.
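
A minimal sketch of such a config (a .babelrc using babel-preset-env; the browser list here is just an example):

  {
    "presets": [
      ["env", {
        "targets": {
          "browsers": ["last 2 versions", "ie >= 11"]
        }
      }]
    ]
  }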


> What version should I use? With backend stuff I am usually responsible for installing the versions of the code / language / libraries that I use. With browsers what can I reasonably expect users to have in their browser?

this is a tricky question. You need to test each feature you want to use on targeted browsers, really.

Or use a specialized compiler that will compile your JavaScript down to an earlier version of the language (a tool like Babel does that).

A site like https://caniuse.com can help you identify which feature works on what browser.

But at the end of the day, it's like DOM API, your scripts need to be tested and errors monitored in production.


Thanks, that site looks useful.


Depends on your users. For B2B or internal applications it's probably acceptable to say "IE 7 is not supported" and write modern ES6 without transpilation. If it's for a more general audience then using a toolchain like webpack+babel might be safer and more suitable. You'll also need a toolchain for larger apps to be able to use the module system (import statements) for now.


TypeScript is a great transpiler of the latest version of the language (and some Stage 3 proposals for the next version) to older versions that run in most browsers.

It also gives you the option to bring in the benefits of static typing, which makes organizing and managing projects so much better.


In the ES2018 section, there are 2 #8s.


Thanks. I didn't realize some of those 2018 examples just got in. I'm really enjoying the object helpers.


Nice, almost as good as PHP. I loved the whole get-keys segment.


Almost as good? I'm really curious what PHP has over JS at this point...


Simple execution model for server-side apps: each request is practically a separate process, sharing nothing with the others... so basically you can even have an infinite loop in the code specific for one request, some code calling an extension that leaks memory in the code for another etc. The bugs keep piling, but your app keeps mostly workin' ;)

It's in a way like lambda/serverless but on any random server.

I'm kind of sad this model of "shared nothing" + "something just runs your code" has never carried on to other more enjoyable languages...


> Simple execution model for server-side apps: each request is practically a separate process, sharing nothing with the others

It occurs to me that this isn’t a technical impossibility with JavaScript. It wouldn’t be difficult to write a library that spins up a new node process for each request. It’s just less efficient and goes against the grain of the language (which is built around asynchronous IO).


PHP is a bit easier to reason about, IMO.

Even as a seasoned software engineer who has written his fair share of js, the js community finds ways to confuse me.


I am pleased to see that features from E continue to be smuggled into ES, like finally-style asynchronous cleanup and template literal enhancements.

Edit: Downvotes can't undo our improvements to ES. We will drag JavaScript and its users, kicking and screaming, to a capability-secure future.


Of all the languages that have these features, how do you conclude that ES is "smuggling" from E, a language I've never heard of before?


I get to be part of the discussions since I hack on E-related things.

Mark Miller worked on E and then on Google's Caja (stands for 'Capability JavaScript'). He has been leading an effort to port features from Caja into ES via the 'Secure ECMAScript' initiative: https://github.com/drses

Doug Crockford worked on E and ported E's data mini-language, TermL, to JS and it became JSON. He's also done many other things in the JS community. The json.org website still links into the E archeological site from its front page: http://erights.org/data/terml/embeddings.html

The WeakMap feature of ES doesn't mention it in the spec [0], but the 'sealer/unsealer' capability pattern can be implemented on top easily, and in fact having WeakMap is equivalent to having E's brand-maker or Monte's makeBrandPair, as discussed here: https://groups.google.com/forum/#!topic/cap-talk/4hPYemjPK_Y

[0] https://www.ecma-international.org/ecma-262/6.0/#sec-weakmap...


> Mark Miller worked on E and then on Google's Caja (stands for 'Capability JavaScript'). He has been leading an effort to port features from Caja into ES via the 'Secure ECMAScript' initiative: https://github.com/drses

So nothing to do with current ES, and if anything a dead effort, the "Secure ECMAScript" repository hasn't seen an update in 5 years.

> Doug Crockford worked on E and ported E's data mini-language, TermL, to JS and it became JSON.

Do you have any evidence for those assertions as explanations, rather than the rather obvious and much simpler "he was looking for a data format to communicate between frontend and backend and javascript literals could be trivially and quickly eval'd from the client"?

> The json.org website still links into the E archeological site from its front page: http://erights.org/data/terml/embeddings.html

Insultingly misleading. json.org links to that page from a section on json implementations in various languages, because that page has a section on "json in terml". In fact, that's the exact label of the link on json.org. It also links to pages for labview, M or PascalScript; that doesn't mean JSON traces its roots to any of those.

> The WeakMap feature of ES doesn't mention it in the spec [0] but the 'sealer/unsealer' capability pattern can be implemented on top easily

You can implement stuff on top of existing features, news at 11.

> and in fact having WeakMap is equivalent to having E's brand-maker or Monte's makeBrandPair

Of course WeakMap is also and more directly equivalent to having weak maps from dozens of preceding language.

Your comment distinctly looks like you're just seeing everything through the E lens because you really want it to be relevant, hammering the ES peg into the E hole no matter how poor the fit is.


I'm not gonna bother quoting you to fisk you. Your approach towards my evidence is weak overall. I'm gonna throw more evidence at you.

Miller and Dean Tribble have been working on WeakRefs lately: https://github.com/tc39/proposal-weakrefs

Crockford's personal site no longer has the original JSON description, and neither does the Wayback Machine, but his site does still have a section on E: http://www.crockford.com/

In this talk [0], Crockford discusses the origin of JSON. He mentions working with Chip Morningstar (this guy http://habitatchronicles.com/2017/05/what-are-capabilities/) and has oblique references to his prior work. He doesn't bother to explain what he was doing during the 90s here, and at this point social interaction becomes required to learn more.

'hammering the ES peg in the E hole' is wrong. They're hammering the E peg into the ES hole. Not all of us believe in this approach; some of us think that JS is irredeemable and that we should keep iterating with languages like Pony and Monte which expand on the original E concepts.

Like I said, kicking and screaming. I don't know why you all kick and scream so much, but you do.

[0] http://www.youtube.com/watch?v=-C-JoyNuQJs


I've got nothing against E. But I have some ideas about why "we all" kick and scream so much. I think it's got something to do with the way you're packaging your message. If you start with an adversarial tone, there's a good chance the rest of the discourse will go that way too.

And I don't even like javascript.


Are you trolling? What's E?



E is dead, long live JS



