ECMAScript modules in Node.js: the new plan (2ality.com)
210 points by ingve on Dec 26, 2018 | 125 comments



Meanwhile, Deno (alternative server-side JS built from scratch by the creator of Node) keeps chugging along:

https://github.com/denoland/deno

It will be interesting to see which happens first: developers migrate to Deno or Node finalizes its ES module implementation.

I think both groups are doing excellent work, but the Node team seems to be at a disadvantage, having to keep supporting some decisions made early on in Node's history:

https://www.youtube.com/watch?v=M3BM9TB-8yA&vl=en


This is my first time seeing it but so much of it looks great. I don’t think I’m a fan of how it handles dependencies, though. It feels like they’ve just reinvented GOPATH.

“Deno caches remote imports in a special directory specified by the $DENO_DIR environmental variable. It defaults to $HOME/.deno if $DENO_DIR is not specified.

...

Production software should always bundle its dependencies. In Deno this is done by checking the $DENO_DIR into your source control system, and specifying that path as the $DENO_DIR environmental variable at runtime.”

https://github.com/denoland/deno/blob/master/Docs.md

The suggestion to import and reexport dependencies in a central `package.ts` file sounds absurd to me.

Clearly, I’m not a user and am just now reading this for the first time. Has there been discussion about this in the community? Are there plans to correct it? Go has been digging out of this hole for years; I can’t imagine why they’d start in the same place.
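For reference, the central `package.ts` file mentioned above would look something like this (the URL reuses the example module cited elsewhere in this thread; treat the whole thing as an illustrative sketch, not the project's recommended layout):

    // package.ts — every remote dependency is imported once here and re-exported,
    // so version pins live in a single file that gets cached into $DENO_DIR.
    export { test } from "https://unpkg.com/deno_testing@0.0.5/testing.ts";

    // elsewhere in the project:
    // import { test } from "./package.ts";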


Disclaimer: I contribute to Deno.

Unlike with Go, when using Deno you don't need to invoke any command to install a package. You just write your import statement with a full URL (you can pin the version there, e.g. https://unpkg.com/liltest@0.0.5/dist/liltest.js), and Deno downloads it on first run and caches it for later use. Changing $DENO_DIR is roughly equivalent to changing the node_modules location. This eliminates the need for a package manager.
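For a concrete picture, a minimal sketch (the `deno_testing` module is the one quoted elsewhere in this thread; its exact `test` API is an assumption on my part):

    // main.ts — run with `deno main.ts`
    // First run: Deno fetches the URL and caches it under $DENO_DIR (default $HOME/.deno).
    // Subsequent runs: the cached copy is used, so there is no install step and no network access.
    import { test } from "https://unpkg.com/deno_testing@0.0.5/testing.ts";

    test(function worksWithoutAnInstallStep() {
      console.log("dependency resolved straight from the pinned URL");
    });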


I'm tempted to apply a Greenspun-ism, that by "not having a package manager", you're going to effectively have a half-implemented, bug-ridden package manager anyway. :-)

More seriously, dealing with the version hell that is inherent in dynamically assembling publicly shared, uncontrolled libraries (your project has its own unique dependency graph; only Google has a true no-version-numbers-ever monorepo) is nontrivial, so it seems like there has to be something that is "effectively a dependency manager". Perhaps it's just a semantic distinction, embedding that in the runtime "because imports are URLs in the source files and not a separate JSON file"... I'm not sure what fundamental benefit that brings, though.


I feel that the Deno project's goal of getting away from NPM/centralized registries is causing developer-experience pains that you're trying to sweep under the rug.

Maybe it's time to take a look at building a replacement package manager.

The convenience of having a single package manager that handles name resolution, versioning, governance, and security is why NPM is as prolific as it is today.

That being said, the reason NPM even received any traction from the start was because Isaacs bundled it with the Node distribution.

In my opinion, this is the clear chance for Deno to provide its own, better package manager, built on top of IPFS/Dat or some similarly decentralized protocol, and correcting the many mistakes that have been openly accepted by Isaacs and others in the lineage of Node/NPM.


You don’t need package-management dependency hell, just like you don’t need frameworks. This pain is self-imposed, due to a desire for perceived convenience or a lack of competence to see otherwise.

When your total dependency graph is fewer than 8 packages you can easily manage this manually. This is especially true since there are bots that can automatically check for version updates.


I don't have knowledge about early NPM stages, but from my perspective the goal right now is to get rid of the need for a package manager.


How does Deno need a package manager at the moment? I don’t understand your perspective.


I think the point here is that Node.js needs one, and Deno was created to get away from that.


How exactly does Node.js require a package manager?


const packageManager = require('package-manager'); // QED


`const npm = require("npm")` actually works AFAIK


Well, it doesn't technically require it...


Exactly.

Deno doesn't need to solve this; it has already achieved it. An interpreter has nothing to do with a package manager. One problem that Deno is actually solving is the tight coupling of NPM and Node. CommonJS is very much purpose-built for NPM modules.

It's a smart call for the team to tackle this problem as it's one of the biggest pain points of using Node and the biggest threat to its long-term sustainability. But I fail to see how no package manager is a viable solution. As someone who's used Go and Node for the majority of the time they have existed, trust me when I say that a package manager is a good thing.

Node is an incredible ecosystem. We just need a better implementation of its package manager.


> import { test } from "https://unpkg.com/deno_testing@0.0.5/testing.ts";

This feels both very pragmatic and frightening at the same time.


What bothers me most about it is the lack of a checksum, which is something Go modules support. I think that’s a mandatory feature to prevent certain attack vectors. Other than that, I have no problem with this approach.
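To illustrate the kind of check I mean, a rough sketch that pins a remote module to a hash recorded out of band (the lock-file value and helper name are hypothetical; it assumes a runtime with a global fetch and Web Crypto, e.g. Deno or recent Node):

    // Verify a fetched module against a pinned hash before handing it to the loader.
    const expectedSha256 = "<hash recorded in a lock file>"; // hypothetical lock-file entry

    async function fetchVerified(url: string): Promise<string> {
      const source = await (await fetch(url)).text();
      const digest = await crypto.subtle.digest("SHA-256", new TextEncoder().encode(source));
      const actual = Array.from(new Uint8Array(digest))
        .map((b) => b.toString(16).padStart(2, "0"))
        .join("");
      if (actual !== expectedSha256) {
        throw new Error(`integrity check failed for ${url}`);
      }
      return source; // only use the source if the content matches what was pinned
    }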


Package validation (using a checksum or signature) is definitely on our radar. We just haven't gotten around to it yet.


I'm not an "expert" but that feels just as insane as the npm argument people make. I'd love to hear from someone more in the know as to why they aren't the same.


They really aren't if you think about it. Going straight to a URL for a version of a dependency is the same as pulling it from a registry, except it's decentralized from a single source (NPM) and removes the extra hops in between the package vendor and the package consumer.

On the flip side, that extra hop adds a ton of convenience in the form of name-resolution, security and governance. It's the age old double-edged sword of centralization.


Maybe if you turn that URL into a hash, then just use the hash to check whether the package already has a local copy, it won't be so bad. But you will need to include the package version in the URL so that you know you will always have the package you really want in your local cache.
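Something like this sketch, where the version-pinned URL is hashed into a stable cache key (the layout is made up for illustration, not what Deno actually does):

    // Derive a local cache path from the import URL.
    const { createHash } = require('crypto');
    const path = require('path');

    function cachePathFor(importUrl, cacheDir) {
      // Because the version is pinned inside the URL, the same URL always maps to the
      // same cache entry, and a new version is simply a different URL (and hash).
      const key = createHash('sha256').update(importUrl).digest('hex');
      return path.join(cacheDir, key + '.js');
    }

    // cachePathFor('https://unpkg.com/liltest@0.0.5/dist/liltest.js', '/home/me/.deno')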


Because npm's install step is implicitly part of the build pipeline. You won't run 'npm install' on production! You run that on development/certification and then push the validated image to production.

With Deno, there is no distinction anymore, kind of. You can still send to production images that have the Deno cache. The only thing that changes is the default. Previously you would have to explicitly run 'npm install' on the production host; failing that, the code fails. With Deno you can still choose to push to production an image without the caches (same as 'npm install' in prod), but now the default is that dependencies never tested in QA will auto-install without a hash check!

In summary: absolutely no practical change (i.e. no new feature that was impossible before) other than production defaulting to installing remote dependencies that were never actually tested.


Without SRI or similar this is very frightening.

edit: they're thinking about it:

https://github.com/denoland/deno/issues/200

Security shouldn't be an afterthought.


Let's throw everything out of the window and start from scratch. The JavaScript way of doing things is why I don't like to invest much in this ecosystem.


Being a daily TS/Node dev for a while now, I'm super excited about Deno. Any idea of when it will be production ready?


Disclaimer: I contribute to Deno. IMHO it will take 1-2 months before someone deploys simple production services. We're already working on a standard library, which you can see here: https://github.com/denoland/deno_std


Awesome, how has the experience been on the project? I'd love to get involved as well, are there any contribution guidelines or starting points?


The experience has been awesome. This tech is really challenging for me, but at the same time the amount of knowledge I'm gaining is huge. The Deno community hangs around on Gitter, here: https://gitter.im/denolife/Lobby

The question about contributions has been raised on Gitter several times already, and the answer was to look for "good first issue" labels in the GitHub repo. However, there are not many of them right now.


Does Deno support native ES modules or does it implement a require() shim and rely on TypeScript transpilation?


PR for this is here: https://github.com/denoland/deno/issues/975

It's expected to land next month.


Thanks for reminding me about Deno. It really feels like some of Node’s homegrown solutions to things like modules and async are starting to become a burden.

> It will be interesting to see which happens first: developers migrate to Deno or Node finalizes its ES module implementation.

I guess that depends on what you mean by "happens". Only one of them has a clear point where it's done happening.


Hehe,

I want a Deno Lambda runtime :D


You can do this now with Lambda Layers and the Lambda Runtime API, right?

https://aws.amazon.com/blogs/aws/new-for-aws-lambda-use-any-...


"The runtime bootstrap uses a simple HTTP based interface to get the event payload for a new invocation and return back the response from the function."

Interesting. I had the impression it would take more to get your language of choice up and running.


Let me state some things which I think are true:

1) A vast majority of developers hate '.mjs'.

2) In 5 years, most developers will have moved to import statements and stopped using CommonJS' require().

3) Importing from a directory using index.js is both common in popular projects, and a useful way to organize code like components.

4) We've already been waiting years for a standard solution and many of us are getting annoyed at the delays.

5) With transpilers the norm nowadays, it seems NodeJS could make a clean break from the past and simply provide tools to help convert older projects.

Am I assuming too much?


To be honest I don't mind `.mjs` - we renamed everything in our system to that about a year ago, but I do think it's important for it to be consistent with browser standards.

The index.js thing is nice, but I think consistency with browsers is more important and I'd be willing to lose it.


> I do think it's important for it to be consistent with browser standards.

There's no browser standard about .mjs. In fact the web doesn't really notice file extensions; it goes by media types and attributes. A browser knows that something is JavaScript because it is served with the application/javascript media type (or similar), and it knows it is a module because the <script> element has a type=module attribute.

A lot of the talk about "browser equivalence" is extremely loosely worded and leads people to think that .mjs is a browser thing, but it's not. It's a Node thing. What they want is for code to work the same way whether it's running on Node or running in a browser. The problem they face is that Node hasn't got the type attribute to decide whether something is a module or not, so they can't use the same mechanism the browser has.

The solution they have devised is to assume that .mjs is a JavaScript module and .js is not. The consequence is that if you want the same code to work in Node and on the web, you've got to change what happens on the web to follow the Node idea of naming things .mjs. Otherwise when you import a module named .js – which works in the browser – it will fail in Node. So this isn't about making Node match what happens in the browser, it's about making what happens in the browser match the Node approach, which is not a standard in any sense.


Yes that's my point. As a Node dev, I have no problem changing my extensions. However the browser environment should not be pushed to change and what works on the browser should largely work with Node.


I actually don't see why the whole index.js thing is nice. What happened to preferring explicitness? All it does is save a couple of characters, it doesn't make anything clearer. It's just one more noob trap that's neither here nor there for experienced devs.


One nice benefit was that you could change from a module being defined as a single file to making it a folder with the logic split across multiple files without any other files needing to be changed


Mehhhh, to quote Joel Spolsky: "This is the kind of thing you solve in 5 minutes with a macro in emacs".


Yes. You end up with a million index.js files in your project and finding the right one is a PITA.


...? That's what directories are for.


The problem with .mjs is when you want to share code between the server and browser, handling a different file extension is a PITA.


Browsers don't look at the file extension, they use Content-Type [0].

To share code between the browser and server you have to update the server config to set the Content-Type header to text/javascript when serving .mjs files.

According to the WHATWG HTML spec [1]:

> Servers should use text/javascript for JavaScript resources. Servers should not use other JavaScript MIME types for JavaScript resources, and must not use non-JavaScript MIME types.

Although I'll admit I was surprised to see them suggesting you use text/javascript instead of application/javascript. I've been using application/javascript for years, since I don't have to support old versions of Internet Explorer.

[0] https://developer.mozilla.org/en-US/docs/Web/HTTP/Basics_of_...

[1] https://html.spec.whatwg.org/multipage/scripting.html#script...
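If you control the server with something like Node's built-in http module, the change amounts to one entry in the MIME map. A rough sketch (paths and port are arbitrary):

    // serve.mjs — minimal static server that gives .mjs files a JavaScript MIME type
    import { createServer } from "http";
    import { readFile } from "fs";
    import { extname, join } from "path";

    const types = {
      ".html": "text/html",
      ".js": "text/javascript",
      ".mjs": "text/javascript", // without this, many servers fall back to application/octet-stream
    };

    createServer((req, res) => {
      const file = join("./public", req.url === "/" ? "index.html" : req.url);
      readFile(file, (err, body) => {
        if (err) { res.writeHead(404); return res.end(); }
        res.writeHead(200, { "Content-Type": types[extname(file)] || "application/octet-stream" });
        res.end(body);
      });
    }).listen(8080);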


I wonder why they kept the ugly .mjs extension. This is a hold-over from an earlier plan, and doesn't seem necessary in this iteration.

This new plan seems to be going down the path that `import` is for modules while `require` is for everything else. If that's true, then they shouldn't need a special file extension for modules - it should always be clear from the context which is which.

The only exceptions I can think of are executable scripts (which could use a command-line option) and packages in node_modules (which can use a special package.json field).


They say that it's required for ECMAScript modules that neither import nor export (for instance, modules that place things in the global namespace), but I would argue those are mostly shims for a pre-module era and they should just be `require()`'d. Or perhaps they could use some special comment at the top to let Node know it should be treated as a module even though it doesn't look like one.

Otherwise, just work down the `import` tree from index.js: anything that is imported is an ECMAScript module, and everything that is require()'d isn't. Of course, this is just an armchair-quarterback opinion, but I'm curious to know why that approach wouldn't work.


>I would argue those are mostly shims for a pre-module era and they should just be `require()`'d

Modules are always strict, so loading something intended to be a module as commonjs could break the code within, which expects strict mode.

Consider the following source text: `delete Object.freeze({ a: 1 }).a`

It will return `false` in sloppy mode, and throw in strict mode. So loaded as commonjs, it will do nothing, and as a module it will be an uncaught exception.
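Concretely, the same source text behaves differently depending on the parse goal:

    // As CommonJS (sloppy mode): the delete silently fails and this logs `false`.
    // As an ES module (always strict mode): the same line throws a TypeError instead.
    const result = delete Object.freeze({ a: 1 }).a;
    console.log(result);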

>perhaps they could use some special comment at the top to let Node know it should be treated as a module even though it doesn't look like one.

This was discussed, but one of our goals is to be able to know what type of file something is before reading its contents. Our ESM implementation, unlike CJS, doesn't assume that everything it uses comes from the local fs.


Right. In other words, if you `import` a file with that line, it should crash. If you `require` the file, it will do nothing. The context can determine the file type unambiguously, and you don't need an extension.

> This was discussed, but one of our goals is to be able to know what type of file something is before reading its contents.

The file extension came from an earlier plan where `require` and `import` could be used on both types of Javascript. Now that the plan uses `import` only for modules and `require` only for scripts, you can get rid of the extension.

> our ESM implementation, unlike CJS, doesn't assume that everything it uses comes from the local fs.

If Node is `import`ing from HTTP, it can use content-type like browsers do. This is only a problem for local file systems, which don't have a content-type. For those, you can just assume everything is a module. If people need to load CJS, they can use `require` (as the proposal above already says).

Bare import specifiers like `import { foo } from 'lodash'` aren't URIs, so you don't need to maintain compatibility with the browser. For this, you can just use the existing [package.json/module](https://github.com/rollup/rollup/wiki/pkg.module) field to determine whether to enter the ESM world or the CJS world.

I know Node rejected the package.json solution back when they were trying to make `import` and `require` interchangeable, but now that they aren't, I think the original performance worries should no longer apply.


>In other words, if you `import` a file with that line, it should crash. If you `require` the file, it will do nothing

This is the situation we don't want to be in, where the consumer is choosing. The consumer doesn't know. Even if you, a human, read the file, you don't know what the author's intention was.

> you can just use the existing pkg.module field to determine whether to enter the ESM world or the CJS world.

We really can't though, because it's already full of files that are intended to be used with rollup/babel/etc, which have incomplete implementations of ESM.


> This is the situation we don't want to be in, where the consumer is choosing. The consumer doesn't know. Even if you, a human, read the file, you don't know what the author's intention was.

If I am importing a file URI, then I am the author, and I know what it is. File URIs are primarily used for linking together stuff in the same git tree. If I am using third-party code, it will be a bare module import specifier from node_modules, not a file URI.

I agree that there are weird situations where this isn't the case, but you don't need to worry about those. Why? Because module support in Node is a new feature, and therefore opt-in. The people who choose to opt-in know what they are doing, and are probably running a pure ESM codebase already. The goal should be to make something useful to the 99%, and let the other 1% keep doing it the old way.

If you are already using Rollup, then this will be plug & play. Rollup already assumes that everything you `import` is a module, and puts them in strict mode.

> We really can't though, because it's already full of files that are intended to be used with rollup/babel/etc, which have incomplete implementations of esm.

That's a legitimate concern. Even if using "module" itself is too risky, the approach itself seems solid. If I want to publish something to NPM that supports native ESM in Node, it's easy for me to add something to package.json that indicates my support, without renaming all my files.

My packages on NPM have both CJS and ESM entry points. The ESM entry point is for tools like Webpack & Rollup, which do not understand the .mjs extension. I tried using the .mjs file extension for this, and it just broke things. There is no way we can re-write every Webpack and Rollup config file in existence to be compatible with this.


Even in the same project, you might have multiple people working. The specifics end up not mattering, because the goal here is to always be unambiguous by default. If you want to have your own setup in your project, you can use a loader resolve hook and come up with your own rules.

Assuming that everything you import is esm is doable, but we want cjs to continue being a first-ish-class citizen, since so much code uses it.

Right now you can opt into a dual mode package by having a file.js and a file.mjs and setting main to `./file` without an extension, and it will "just work". You can also have index.js and index.mjs without a main field in package.json.
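For anyone following along, that dual-mode layout is roughly: ship both `file.js` (CJS) and `file.mjs` (ESM) next to each other, and point `main` at the extensionless name (the package name below is made up):

    {
      "name": "my-dual-mode-lib",
      "version": "1.0.0",
      "main": "./file"
    }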


This disagrees with my experience working in this space every single day.

Most projects in the Node ecosystem will continue to use CJS. If you are writing "Node software", like web servers, there is no reason not to just use CJS. It will remain a well-supported and well-used option for as long as Node stays relevant. CJS stays a "first-class citizen" by not changing anything, so you get your wish by default.

On the other hand, activating Node ESM support is a conscious choice. This will typically happen on a project-by-project basis. When I say "project", I mean a single GIT repository, a single NPM module, a single Web site, or something like that. If your organization is big enough to have multiple teams (mine is), each project typically has its own team.

The interfaces between these projects are usually package.json files. This is true even for giant code-bases like Babel that live in a single GIT repo but have dozens of tiny sub-projects. Within a project, people use local file references like `import('./util/foo.js')`. Between projects, people use bare import specifiers like `import('lodash')` or `import('@babel/parser')`.

So, I think it's safe to say that when a team decides to switch on ESM support, they know what is going on within their own little world. Their `import` statements can be ESM-only for local files, and they will have zero trouble. Meanwhile, if bare module specifiers like `import('lodash')` rely on package.json metadata, nobody needs to care what the other teams are doing. Any project can enable / disable ESM support at any time, and nobody will know the difference.

This all goes back to an old organizational principle that the boundaries within a company often manifest themselves as boundaries within the software architecture. This has to do with the communication overhead of coordinating changes between teams. It's easier to codify the organization structure into a versioned interface than to face the chaos of just "winging it". In the Node ecosystem, this translates practically into versioned package.json files and bare module specifiers. If your requirements don't take this into account, then I am genuinely curious to hear what your workplace culture is like.


I disagree. Knowing what format a module is written in is no different than knowing its API. You can't blindly start using a module without knowing stuff about it and how it's intended to be used.


Node disagrees. You can `require('./a')` without knowing if its `a.json` or `a.js` or `a.node`. All you need to know about `./a` is the interface it exposes.
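That is, all of these resolve through the same call, and the consumer's code never changes when the implementation behind `./a` does:

    // Any of ./a.js, ./a.json, ./a.node, or ./a/index.js satisfies this line;
    // the caller only depends on the interface the module exposes.
    const a = require('./a');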


Right, that's an antifeature that destroys performance for no real gain (packages are the only instance where the file name is abstracted from the importer).


I wouldn't miss this feature, actually


Exactly! You have to know whether something is a script or a module before you can parse it, but with this plan, that's always clear from the context.

This is exactly how browsers do it, by the way. If you `import` something, it's a module, period. The `<script>` tag is the only place that supports both, so they use a `module="true"` flag to indicate which is which.


> This is exactly how browsers do it, by the way. If you `import` something, it's a module, period. The `<script>` tag is the only place that supports both, so they use a `module="true"` flag to indicate which is which.

This is not entirely true. If you import something and it's served with a content type associated with JavaScript modules, then it's interpreted as a JS module. But if it is served with a wasm content-type, it may in the future be interpreted as a wasm module. In node there's the additional content-type of "CommonJS file" which has to be handled somehow as well.

Non-module script tags aren't really relevant because node itself never supported scripts (at least not what the browser and the ECMA standard calls scripts).


Small correction: it’s <script type="module">, as distinct from <script> or <script type="text/javascript">.


If it's not needed, then I'm also questioning it. Requiring that extension breaks a lot of previously "very nice" things.

Being able to create a file like `thing.js` and require it by doing `require('./thing')` made it easy to switch that out with a directory later and put an `index.js` in there without having to change any import statements. That is something I've used countless times and it tends to really improve my development experience, since modules can grow more naturally and it avoids lots of one-file directories; with `.mjs` it's entirely gone.


The .mjs extension isn't required in browsers, but absolute paths are.

So even if the .mjs extension requirement is dropped, I don't think the implication is that './foo' can successfully resolve to './foo/index.js' as there is no such thing as a directory in HTTP and you don't want browsers to make multiple guesses for which path exactly exists.


Isn't the absolute path (not exactly true, since dots are supported) thing just a matter of punting till later? If I recall correctly, they couldn't agree on resolution rules and perhaps more specifically how node-like rules would translate to the web, so they punted and went for an "absolute or bust" kind of approach till someone comes up with a better idea rather than painting everyone into a corner. Mapping identifiers and path resolving functions have been suggested as ways to make identifiers "nicer" I think.

This seems reasonable. I reckon the reason why node's behind on figuring out how to make ESM work is because of all this implicit loading magic, and a reluctance to break existing code. (A commendable goal I think.)


> you don't want browsers to make multiple guesses

The webserver does that work.


But that would mean that every single webserver would have to implement that logic. Consistently.


Except no? Only servers where stuff making requests expects the server to behave that way. The interesting thing about this is that every server in the world already works like this: they expect the person calling the server to know how the server works.


You know what, I had completely forgotten about that, and it completely changed my view.

Sure, we can have webservers serve up different things based on parsing the URL (a GET for `/users` could return `/users/index.mjs` if it wants), but having that required by the spec would be a mess, and not having it in the spec but still allowing it in node would cause annoying bugs when trying to use a library on both sides.


This is a pretty ubiquitous web server behaviour already.

If you link to /foo and it happens to be a directory, the server redirects you to /foo/. It then looks within the corresponding directory for an index file and serves that.

Most web servers have this as the default configuration, but the default list of potential index files doesn't include index.js (it's usually just index.html, or a list like index.html index.htm default.htm, etc.). It's usually a one-line change to include index.js. This has worked fine for decades; there's no need to include this in the JavaScript spec – it's up to whoever wants their server to respond that way to configure it appropriately.


But if node.js were to use a different default of how to do filesystem lookups (look for foo.js, then fallback to foo/index.js if not found), then you'd start seeing modules which wouldn't work on the web without the web server setup to use that exact same system. That's now an implicit contract.

This is in all honesty a small problem in the grand scheme of things, and it's one that can be equally solved by better editor tooling (auto-refactoring that renames files and also changes all imports, since imports are for the most part statically analyzable now).


> But if node.js were to use a different default of how to do filesystem lookups (look for foo.js, then fallback to foo/index.js if not found)

They won't do that because they are clearly intent on having code work between environments. They repeatedly tell people this whenever people complain about .mjs. So we can discount this as a possibility.

> you'd start seeing modules which wouldn't work on the web without the web server setup to use that exact same system. That's now an implicit contract.

The inverse is true for their .mjs solution, which is essentially "Use Node's .mjs on the web or shared code will break in Node". This requires adopting the Node naming convention on the web and altering web servers to serve .mjs files as application/javascript. The .mjs file extension isn't a standard and doesn't make sense on the web; ESM modules are both of those things. If you think changing web server configurations for an implicit contract is not acceptable, you should be against .mjs.


It isn't required when importing something, it's required for the filename. You can continue to do `import './thing'` where you have `thing.mjs`.


That's not what the linked article says, and even if it is true, it still won't do a lookup for `index.mjs` inside the `thing` folder, so it still hurts that usecase.


It sounds to me like it is a temporary holdover from the earlier plan to speed up the implementation of Phase 1. The idea is that it's better not to add the complexity to parsing while the minimal kernel is figured out, and then layer it on in Phase 2 once the most important bits are working.

I do find it a little annoying personally, and it will hurt my personal adoption slightly...but I understand why they would tackle it in that order.


I'm sure there are valid reasons but I disagree with the extension solution. That is not in any way meeting the stated goal of

> One of the goals of the Modules Team is “browser equivalence”

There are already tons of browser libraries not using the .mjs extension.

Not clear to me why just using 'import' makes it a module and 'require' not. I'm sure that's spelled out somewhere


>There are already tons of browser libraries not using the .mjs extension.

Browsers don't care what extension you serve something as, so we can't really directly match that in the default behaviour. What you can do is use a resolve hook to tell node to interpret certain files as esm even when they don't have a `.mjs` extension.

>Not clear to me why just using 'import' makes it a module and 'require' not.

import doesn't "make something a module". if you try to require something that resolves as being esm (rn we check that with .mjs) it will throw, because esm resolution is async and require resolution is sync. You can't require esm but you can import anything.


Why does import have to work with cjs? It doesn't in the browser. Why does require have to work with esm?

Why not just have a package bump its major semver, and those packages that want to start using it need to import it? That seems no different than the browser. AFAIK you can't import a non-module and you can't non-import a module in the browser, or am I wrong?


>Why does import have to work with cjs? It doesn't in the browser. Why does require have to work with esm?

`import` doesn't have to work with cjs, but we want to do so because there is so much code in the ecosystem that is cjs.

`require` won't and can't work with esm, because esm resolution is async and require resolution is sync.


Let this be solved in userland through compilers, like everyone already does today. The stubbornness from current node maintainers on this subject is astounding. Years of feedback, like you see in this thread, ‘but-what-if’ and ‘you-dont-get-it’ handwaved away.


> Not clear to me why just using 'import' makes it a module and 'require' not. I'm sure that's spelled out somewhere

I'm probably wrong but I think it has to do with parsing rules. If you try to load a module as CJS and hit incompatible syntax (e.g. keywords such as import or export) then it has to restart parsing from the top. Likewise, if you parse the file as ESM but hit CJS syntax such as `exports.foo = bar` then again it has to restart.

There are probably a bunch of similar issues but if I recall correctly it basically came down to parsing and performance, you don't want to load the same thing twice essentially. With a special extension you don't have to even load the file before deciding which parsing rules to use, supposedly making this more efficient.

I've heard several proposals to resolve this issue, but my favorite two are probably the unambiguous module syntax and making import/require only compatible with their respective module formats. The first would be a breaking syntactic change, in that every ESM module would be required (no pun intended) to include the top level `export` keyword at least once, regardless of whether or not the module actually exports anything. The second would be to make node's `require` function only compatible with CJS modules and make the `import` keyword only compatible with ESM. Both of these have drawbacks of course, but I'd like to think they're mostly temporary and rather easily overcome. The unambiguous module syntax less so perhaps, but still.
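For reference, under the unambiguous-syntax idea an ESM file that exports nothing would still carry a marker, something like:

    // An empty export list creates no bindings, but it only parses as an ES module,
    // so a loader can classify the file without guessing or re-parsing.
    export {};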


The other issue was that a module that doesn't import, export, or require anything and doesn't specify strict mode will get parsed differently depending on whether it is CJS or ESM, and there is no in-band way of telling which is which.


That’s right, thanks for the reminder! Some syntax is ambiguous yet the results differ. I think that’s why I liked the unambiguous module syntax very much: it made things explicit without much cost.


> I disagree with the extension solution

As do I. I loathe it. But I'm struggling to come up with strong rational reasons why. Part of me says that hurts interoperability, but I think that's mostly rationalization.

Why do I care so much about a file extension? Because I really really do.


The reason I care is that node is effectively imposing its standards on the browser. Effectively, node is requiring all browser-based modules to use .mjs even though they shouldn't have to care. Except they will care, because they'll start with .js, then make an npm package, then someone will want it to run in node, then it has to be renamed .mjs, and then all non-node projects are affected only because of node's chosen solution. It seems bad to me for node to effectively bleed its problems onto the entire JS ecosystem.


The Michael Jackson Script is an absolute joke and has arisen solely from NodeJS attempting to solve its own require legacy. It’s not the business of any other party to do such a ridiculous thing.

.mjs is an aberration, a workaround for the poor implementation of one web server, not a solution. NodeJS needs to do the hard work of solving its own problem and stop attempting to manufacture “real world solutions” with .mjs.


If you feel you can come up with a better solution for module syntax ambiguity please feel free.

Over the last three years we (including hundreds of devs like you who come in guns blazing calling us all idiots) haven't been able to come up with something that works better.


That is a self-inflicted problem. Step away from the ambiguity by dropping cross-compatibility as everyone has been begging for.


Proposing or fixing a bug in a third party product should not affect a web standard. If you want my personal help to assist the NodeJS fix, yeah, I would be good there, but I can’t fix the toxic bikeshed culture that you obviously enjoy. MJS is NodeJS solution. It is not a web solution.


You could support .es file types, which would work just as well and is more succinct than .mjs, but AFAIK the group won’t because of “branding” issues. Bull. Use .es. Standard semantics for ECMAScript, which is the name of the language; no games or pretence.


Node will likely end up with multiple ways to disambiguate modules. Until then, you might dig https://npmjs.com/esm. No .mjs required.


Since I've worked on International Standardization before I sort of know how the sausage is made, but you always hope for a better sausage.

Anyway - browser equivalence? That, in a lot of things, especially where URLs are concerned, means HTTP equivalence.

A file extension having a specific meaning is not HTTP equivalence or browser equivalence; it is at best Windows OS equivalence, where the lack of MIME-type understanding meant you had to use file extensions to identify what 'type' of thing something was.

A hack solution that was maybe necessary (I don't know Windows internals enough to disagree) and that caused me a lot of problems in the years 1999-2009 approximately.


Browsers don't care what extension you serve something as, so it doesn't break the interop we want with browsers if you put your files as `.mjs`.


my point exactly, browsers care what mime type you use and the extension is meaningless. I thought I was pretty clear on this matter?


Both old "script"-style JS and the new module kind have the same MIME anyway


Instead of forcing this joke of a file extension, it would be far better to make a breaking change in a semver major release and do away with commonjs.

There is no usecase for having two module syntaxes at the same time. All the big projects already use babel or typescript. The community would catch up eventually.


I completely agree. As I pointed out below, following the path you suggested, if the filetype has to change to fix the require regression, they could use .es. Would it be surprising to use .es with ECMAScript? No.


Requiring users to adopt a new extension is a massive change and as an end-user I have nothing to gain from this - I'm already using babel and typescript along with a huge part of the community.

If this creates regressions like you said, let users deal with their code that deletes things from frozen objects. Allowing bad behavior to continue and stall the module evolution is questionable at best.


I think it can be a bit difficult for users of compile-to-JS setups to understand why this is better. As of right now, one can just `import foo from './bar';` and the compiler just makes it work. This has become fairly tried and true. What might be confusing is that node's internals, and perhaps their design goals do not align with this paradigm. While I'm certain that they have reasons, it's going to be awfully hard to convince developers. (It already has been, look at the backlash on `.mjs`.) The solution? Continue to use TypeScript/Webpack/ESMloader forever. I personally like the semantics of importing the way it currently works. If it never works like that natively? Fine.


The issue is that transpilers didn't implement correct ESM (both TS and Babel have opt-in flags to enable a more-correct mode, but it's still not 100%). They implemented the syntax but skipped the actual semantics, like dependency-graph verification. Node is an actual implementation of JS, so we must perform these tasks even though transpilers don't.


Honest question: Does anybody actually care that it doesn't?


It matters. Projects that don't support IE ≤11 could ship ES Modules to the browser today. However, you can't ship most code that's written as "es modules" because this code is typically written for a transpiler rather than a real runtime modules environment.

Some of this is a limitation in browsers, like support for bare module specifiers. That is, you should be able to write code like `import {foldl} from 'lodash';`. Work is underway to support this in the browser without requiring multiple round trips to the server to guess where 'lodash' should be loaded from: https://github.com/domenic/import-maps


I don't think it's as much a matter of people caring or not as it is that node is implementing ECMA-262 ESM, not Babel ESM or TypeScript ESM, etc.


What happens if people seem to prefer the babel-esm semantics? I'm not saying that babel's semantics ARE better, but what if developers prefer them?


Then they can continue using them, and not use the real ESM semantics? I don't think anything would change for them.


This is a pretty good breakdown, but to the author: I would like to see a better explanation of defaults. Node will ship with the defaults needed to perform tasks logically and unambiguously. This, however, doesn't mean that a user's use case is locked away. ESM in node has hooks which allow people to override resolution behaviour, which means you could put ESM in `.esm` files, or CJS in `.sadness` files, or have all files in a specific directory be ESM, or read package.json and pull metadata out of it, etc.
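As a rough sketch of what such a hook can look like (the loader API is experimental and its exact signature has changed across Node versions, so treat this shape as an assumption rather than the final interface):

    // my-loader.mjs — used via something like `node --experimental-loader ./my-loader.mjs app.js`
    // Treat files ending in `.esm` as ES modules; defer everything else to the default resolver.
    export async function resolve(specifier, context, nextResolve) {
      const resolved = await nextResolve(specifier, context);
      if (specifier.endsWith('.esm')) {
        return { ...resolved, format: 'module' };
      }
      return resolved;
    }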


Is this documented anywhere?


I'm not strongly opinionated or informed about this, so why does Node need file extensions when other languages don't?

Is it because of non-JS imports? I know that when Ryan Dahl announced Deno, he mentioned it as an obviously good change, but I'm not sure what the problems with extension-less imports are.

(Edit for context: at work, we either use Closure or put everything under a global root, and only import JS, so maybe I've been shielded from the state of the world.)


> why does Node need file extensions when Other languages don't?

Node allows you to require things that aren't CommonJS JavaScript (JSON, native extensions, etc.).

> when Ryan dahl announced Deno, he mentioned [removing extensionless resolution] as an obviously good change

I think a lot of people disagree with Ryan Dahl in this case (I certainly do). The ability to have a dependency expose an interface without knowing what type of thing that dependency is is very powerful, and I think it is one of the reasons people like require so much.


> The ability to have a dependency expose an interface without knowing what type of thing that dependency is is very powerful

Is it? I honestly can't say I've actually used that feature -- although I can say I have gotten burned by it by accidentally importing the wrong thing.


JavaScript is environment-agnostic, so JavaScript doesn't know what a directory is. Many other languages have a standard library that defines these things, but JavaScript deliberately does not, to avoid presuming specific environmental constraints that apply in some conditions but not others. A directory is a convention of a file system. When executing in a web browser, there is no file system.

If they wish to achieve convention and functional compatibility with code that works in the browser, certain constraints are necessary to prevent assumptions that hold in one place and not another. Also, consider the work that happens when resolving files from a directory:

1. You have to determine if the thing is a directory or a file.

2. Get the list of files in the directory.

3. Read each file.

4. Determine whether that file is an ES module.

5. Load it as a module only if the file is ESM.

These steps would not work in a browser.
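To make the contrast concrete, here is a rough sketch of that filesystem-dependent lookup, which is exactly the part a browser cannot do (simplified; real CJS resolution also consults package.json "main", other extensions, and so on):

    // resolve-sketch.js — Node-style guessing that relies on a local file system
    const fs = require('fs');
    const path = require('path');

    function resolveLikeCjs(spec) {
      if (fs.existsSync(spec) && fs.statSync(spec).isFile()) return spec; // exact file given
      if (fs.existsSync(spec + '.js')) return spec + '.js';               // ./foo -> ./foo.js
      const index = path.join(spec, 'index.js');
      if (fs.existsSync(index)) return index;                             // ./foo -> ./foo/index.js
      return null;                                                        // every guess above needed an fs probe
    }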


I appreciate the work going into this but the timeline is a bit disappointing.


Even with new features, node chooses to do things differently. Take worker threads for instance. Node added worker support much later than the browser, and chose to mysteriously use an API that's significantly different.

I wish interop with the browser would become more central for upcoming features. Where the browser is lacking, node can supplement, but supplanting those APIs is just inconvenient.


Is there any reason we can't just have a `'use esm';` declaration at the top of a file instead of a janky filename extension?

It's just as arbitrary, but there's precedent with `'use strict';`, and instead of renaming all of my files I just prepend them appropriately.


The `esm` loader allows specifying a parse goal pragma as one of several ways to disambiguate source – https://npmjs.com/esm.

Node will likely end up needing more than just an extension to disambiguate as well.


Did I read that correctly that we are going to lose the ability to import paths and modules, e.g. import {} from './services' or import {} from 'lodash'?


First: none of the exact rules are written yet. But current discussions seem to suggest "Yes, this will go away" for "./services" being resolved to anything but an absolute location ending in "/services". But importing so-called bare specifiers ("lodash") will almost definitely be supported.


> current discussions seem to suggest "Yes, this will go away" for "./services" being resolved to anything but an absolute location ending in "/services"

To be clear, this is still heavily in contention. Node made a good choice to support requiring something without knowing exactly what type of thing it is, and a lot of people feel we shouldn't throw that away.


Can you give an actual use case for this? Because it seems insane. How can you use something without knowing what it is?


No you did not.


from https://nodejs.org/api/esm.html

> For now, only modules using the file: protocol can be loaded.

Is it the plan to allow modules to be loaded from any protocol? Because that seems dangerous, from a security viewpoint. I've been trying to find mention of anything that mentions SRI support, or something similar, but I haven't found it yet.


No one will use this interoperability hack in light of the fact that Rollup and Webpack perform this for you seamlessly:

    import {createRequireFromPath as createRequire} from 'module';
    import {fileURLToPath as fromPath} from 'url';
    const require = createRequire(fromPath(import.meta.url));
    const cjsModule = require('./cjs-module.js');
Bundlers will likely not be able to produce efficient static code for that bizarre interop code.


Can a script using ES6 modules start with something besides an import statement? Then it would be easy to know whether to use ES5 or ES6 module syntax. Why was the ES6 module system made to break the ES5 module system? Why not just use ES5 modules? Why are ES5 modules so slow? Are ES6 modules faster?


Can anybody explain to me why ES didn't simply adopt the require syntax from Node.js?


Import statements must be asynchronous in latency-constrained environments (mostly browsers). CJS is by definition synchronous and has different ordering semantics from what an asynchronous loader produces.
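In a nutshell (a sketch; the point is that the second form returns a promise, which lets a browser fetch the module graph over the network without blocking):

    // CommonJS: synchronous — execution blocks right here until './config.js' is read and evaluated.
    const config = require('./config.js');

    // ESM dynamic import: asynchronous — the host can fetch and link the module's whole
    // dependency graph in the background and resolve the promise when it's ready.
    import('./config.js').then((mod) => {
      console.log(mod.default);
    });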


But it could simply be implemented in browsers using the same routine as `import`. I don't see why `require` can't be asynchronous. There's an implicit join after the last import. Maybe it would have been enough to restrict calls to `require` to the top of a file.



Webpack with CSS, Less, and Sass assets, with React as an SSR library. What's the current solution now?



