Deno continues to seem very interested in making marginal improvements to the Node/npm ecosystem that (if adopted) would be massive breaking changes for everyone. It's hard for me to imagine many people being interested in this.
The parent poster's point was that it isn't reasonable to consider moving off a platform unless the move offers significant improvements. Now, I realize I did not painstakingly spell out the point I was making, but it was not that React === Node. Rather, since upgrading part of your stack to a new version can involve difficulties similar to moving to another language, maybe moving to a different language isn't the big problem the parent poster implied.
There are multiple times during the life of a long-running site when you may need to do significant rewrites: to move between vendors, to upgrade versions of libraries, frameworks, or languages, or similar things, all for incremental benefits.
The risk in upgrading React is lower than the risk in changing your server runtime. React still to this day guarantees a lot of backwards compatibility and the upgrade path has been thoroughly battle tested inside Facebook.
Moving to Deno is full of unknown unknowns. It’s very early.
I know it's hard to know this since not everyone has access to JSR yet - but it's incorrect to say that adopting JSR (as a module author or consumer) will result in breaking changes. JSR is additive to npm, and can be used alongside npm in existing projects.
Node.js / npm module consumers will be able to use packages published to JSR with minimal differences from packages published to npm. Here's what that will amount to:
- The install command will be different ("npx <jsr command> i @foo/bar" versus "npm i @foo/bar")
- Some of the configuration will end up looking different when you inspect it. JSR's npm integration makes use of npm aliases and requires a minor configuration change to .npmrc (the install command makes that change for you; roughly sketched below)
But at the end of the day, JSR is integrating with npm using its existing extension points, not replacing or circumventing it. As a practical matter, your Node.js code that imports dependencies from JSR will look the same as code that imports dependencies from npm. It's all coming from the node_modules folder at runtime.
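For the curious, here is roughly what that looks like after an install; the exact scope and alias naming here reflect my understanding and may differ in detail. The install command adds a line like `@jsr:registry=https://npm.jsr.io` to .npmrc, and the dependency lands in package.json as an npm alias:

```jsonc
// package.json (excerpt): the JSR package "@foo/bar" is resolved through
// the @jsr npm scope via a standard npm alias, so npm/pnpm/yarn treat it
// like any other dependency and install it into node_modules.
{
  "dependencies": {
    "@foo/bar": "npm:@jsr/foo__bar@^1.0.0"
  }
}
```

Your import statements don't change either way: `import { bar } from "@foo/bar"` resolves from node_modules as usual.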
Whether or not module authors choose to use JSR is more about DX than platform support (JSR works well in Deno and any kind of project that uses a node_modules folder). If as a module author, you would find it useful to:
- Publish actual TypeScript source files to the registry rather than build artifacts
- Have API docs auto-generated for your package from source code
- Use a registry with an explicit goal of being usable across JS runtime environments (rather than being tied to Node specifically)
Then JSR will likely be worth checking out. We'll be opening up JSR to everyone to try soon, so hopefully you can take a look and decide for yourself then :)
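To give a feel for the authoring side, here's a minimal sketch of publishing TypeScript source to JSR (the package name and entry point are made up):

```jsonc
// jsr.json (a deno.json with the same fields also works in Deno projects)
{
  "name": "@myscope/greeter",
  "version": "0.1.0",
  "exports": "./mod.ts"
}
```

From a Deno project you'd publish with `deno publish`; from a Node project, `npx jsr publish`. The API docs are then generated from the TypeScript source itself rather than from separate build artifacts.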
The disappointing part is that all their choices are the correct ones. I want a fresh start for the server side JS ecosystem that’s all ESM code, no NPM, etc etc. But you’re right, the momentum simply isn’t ever going to be there to make people switch to such a world.
I comfort myself with the knowledge that unlike Deno and Bun, Node isn’t propped up by VC funding. I can’t justify investing in either because I don’t know that they’re going to be around in the long term.
There's a snag in making the correct choices of course.
What are correct choices now, turn into incorrect choices tomorrow.
Node/npm is filled with incorrect choices right now that they're keeping around for backwards compatibility, and there's no way they're going to do a quick break à la Python 2 -> 3. Instead we have a slow transition, leaving us with:
- CommonJS instead of ESM
- No permission system
- Callback-based APIs (which have luckily been getting replaced with promise-based APIs; contrast sketched after this list)
- No built-in node-version manager (e.g. nvm)
- etc...
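To make the callback point concrete, here is the classic contrast using plain Node fs APIs (nothing Deno-specific):

```typescript
import { readFile } from "node:fs";
import { readFile as readFileAsync } from "node:fs/promises";

// Callback style, which much of Node's API surface grew up with:
readFile("config.json", "utf8", (err, text) => {
  if (err) throw err;
  console.log(text);
});

// Promise-based equivalent that has been gradually replacing it
// (top-level await assumes this file is an ES module):
const text = await readFileAsync("config.json", "utf8");
console.log(text);
```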
But as the language evolves, we gain new capabilities that the runtime will take a really long time to adapt to. Looking ahead, these features will likely change what we want in a runtime's APIs:
- Promise cancellation
- Better stream APIs
- Explicit Resource Management (the `using` keyword, like D's `scope(exit)` and Go's `defer`; sketched after this list)
- Records and Tuples (to make certain data immutable)
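As an illustration of the third item, TypeScript 5.2+ already supports `using`; this sketch assumes a runtime or polyfill that provides `Symbol.dispose`:

```typescript
// A resource that cleans itself up when the enclosing scope exits.
class TempDir implements Disposable {
  constructor(public path: string) {}
  [Symbol.dispose]() {
    console.log(`cleaning up ${this.path}`); // e.g. remove the directory here
  }
}

function build() {
  using scratch = new TempDir("/tmp/build-scratch");
  // ... do work with scratch.path ...
} // scratch[Symbol.dispose]() runs here automatically, even if an error was thrown
```

Runtime APIs designed around this (file handles, sockets, locks) would look quite different from today's close()-by-hand patterns.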
Now that I've gotten used to pnpm and yarn, as well as monorepo tools (nx), I feel like my preference would be to continue improving npm and node without forking.
I mean, there's yarn, pnpm and other package managers.
Then there's alternate JS runtimes, e.g. Deno or Bun.
Both of which are primarily interesting to me because of first-class TS support without transpiling.
I feel that the ideas of all these same-but-different approaches are in dire need of a shared roof and efforts on standardization and interoperability.
Not sure if corepack[1] is a step in the right direction, because it basically adds another abstraction layer on top of the runtime to choose and install the package manager...
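For what it's worth, my understanding of how corepack works today: you run `corepack enable` once, and afterwards Node resolves the package manager version pinned in package.json (hypothetical version shown):

```jsonc
// package.json (excerpt): corepack reads this field and transparently
// fetches and runs the pinned package manager when you invoke `pnpm`.
{
  "packageManager": "pnpm@8.15.4"
}
```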
All of the mentioned tools bring laudable and serious effort to the table, but I'd love for convergence to happen as soon as possible.
I care a lot about using great tools, but I agree. The thought of having to hold another widely adopted backend JS runtime in my head to be an effective ecosystem developer is not pleasant. I would rather spend that concentration thinking about tools that complement JS/NodeJS in massively more meaningful ways, like Golang or Rust.
I have a tremendous amount of respect for the Deno team, their point of view, and what they have done and continue to do. In an alternate reality, maybe everyone would be happier if they had displaced NodeJS. (I don't know.) But from my point of view in the trenches, seriously imagining the widespread adoption of another major NodeJS-incompatible runtime (the browsers are bad enough!) is almost depression-inducing. I just finished migrating my monorepo source code to ESM (except for the lingering .cjs config files for commonjs-only dependencies). Holy hell, did that make me long for Laravel. It does not have to be this way. Enough already.
NX is a double-bladed lightsaber. I love it when it is working. The problem (and I say this with appreciation for the difficulty of the task) is there are too many bugs. Bugs in your monorepo tooling are scary and dangerous, especially for young or lower-maturity shops without very comprehensive QA.
A few months ago, I managed to get all the way to final production artifact verification on an iOS app after a casual NX upgrade and multiple major monorepo-level commits to discover that the "fileReplacements" configuration for the application environment variables had stopped working. Boy did that ever stress me out, because that would be a disaster to release, and it is so low in the tooling stack that you can be lulled into expecting that sort of thing to just...work because the team supporting it understands the consequences of a breakage in that sort of functionality. This is the type of bug that results in incidents that developers write their "it-finally-happened-to-us" blog posts about. I mention this because it was a particularly memorable incident for me, but there have been other bugs that were less dangerous and more just unpleasant, but any failure to properly compute affected packages in a production build could get very gross. This feels like a memory, but it might just be a fear.
I still voluntarily keep NX and like a lot of things about it. The engineers seem to care a lot and have made some impressive features. But I stay on my toes, and my attitude has shifted away from excitement about the latest-and-greatest to a focus on keeping my options open and minimizing lock-in.
It feels like the NX decision-makers are trying to do too much, too quickly. They are building and releasing with notable speed. How many NX production releases are there for each NextJS production release? And NX is at a lower level of the tooling stack, with a far broader range of integrations. I state that merely as an off-the-cuff anecdotal observation, not as some greater opinion on release cadence. But this does feel like a "more releases, more bugs, wrong area of the tech stack for that" type of situation.
Re nx, I can't tell if it has improved since two years ago, but I feel it's an inconvenient abstraction layer at times.
OTOH: I rarely encountered nx-specific issues at work, and all of them have been caching-related.
It's mostly doing well at organizing a monorepo with lots of code in different languages.
Just recently I saw something in this direction... Ah yes, here it is.
WinterCG - Web-interoperable Runtimes Community Group
> This community group aims to provide a space for JavaScript runtimes to collaborate on API interoperability. We focus on documenting and improving interoperability of web platform APIs across runtimes (especially non-browser ones).
I see Cloudflare, Deno, Node.js, Netlify, Vercel. I guess Bun hasn't joined yet, but they seem to be intentionally implementing web-compatible interfaces.
Npm is already broken beyond repair. It doesn't enforce anything, so the result is a total mess. It's the Wild West of Javascript development. Even if people are not interested now, they might be interested later when and if Deno gathers momentum and they become fed up with fighting the npm mess.
The last time I pointed this out, some npm dinosaur said npm allows publishing of any type of package so it cannot enforce a structure. Wow, really, that’s exactly what I’m saying. Whose fault is that? So the result is that nobody knows how to publish anything so npm is in shambles.
You can publish a package that has zero files in it, even if it mentions them in main/exports. That’s a very basic check they could do, but they don’t.
Ideally you wouldn’t be able to publish a type=module package whose code contains “require”, but if npm doesn’t even want to validate the existence of the files, we’ll never get to validating anything else.
At the very least warn the user that they’re publishing a broken package, but still allow it if you must.
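To illustrate with a hypothetical package: npm will happily accept something like this today, even though every entry point is missing from the published files, so consumers only find out at require/import time.

```jsonc
// package.json of a publishable-but-broken package: "main" and "exports"
// point at files that are not included in the published tarball at all.
{
  "name": "my-broken-lib",
  "version": "1.0.0",
  "main": "./dist/index.js",
  "exports": { ".": "./dist/index.mjs" },
  "files": ["README.md"]
}
```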
Is JSR or any other package manager any better? Do pip, cargo/crates.io, Go modules, gems, NuGet, etc. enforce anything? AFAIK they are all exactly the same: any person with any level of experience and any API design can publish a package.
Pip and gem don't install multiple versions of the same package in nested dependency folders like npm does. You need to pin your versions. Installing a package does not normally trigger a never-ending cascade of dependencies like it does with npm, cargo, or Go.
Npm is worse in every way because it doesn't even enforce that documentation be shipped with the package, nor does it provide easy means to do so like Perl's PODs or Python's docstrings and docs. The result is that most modules don't come with any docs, or if they do, each and every one uses its own structure and tools, contributing to the general mess.
I wish they had never allowed packages to be published as ESM-only, which makes them unusable for many Node.js users. But I feel it's more a JS ecosystem issue than an npm issue.
I worked on a project with "forced typescript" and poor code review practices (since fixed). It ended up with overuse of `any` and all the same problems you run into in a pure JS codebase. I had to fix most of that, which was a massive undertaking. Whoever wants to (and knows how to) use TypeScript already uses it without being forced to. I don't think other people will benefit from that at all.
Eh, I feel like there’s some part of the JS community that gets a kick out of snubbing Typescript entirely, so when I import those packages I get to provide my own types. I just don’t want to facilitate that, so a ‘TS only’ repository would save me the trouble.
Like, it’s not a huge issue, but I just don’t want to spend the time thinking about that stuff.
I don't understand. If you don't want to use packages without types, can't you just do that already?
I guess if you have to use a package without typings, it's because no good alternative with typings exists, and I'm not sure how creating a separate community will help with that.
Yes, exactly. I use pnpm as well, which is why I specifically mentioned the registry, as I can absolutely see how npm the CLI tool can indeed be improved.
I think there's a lot of room to innovate in the JavaScript package manager and registry space.
• support for optional features and dependencies tied to them, à la cargo. How many npm vulnerabilities are introduced through the command-line parser of a package you only use as a library?
• step away from package.json. It's fine to keep it for the registry or even the runtime, but development should use a file that supports comments.
• some automated version compatibility enforcement via typedefs or registry side tests.
• packages with more than one artifact, so types, original source, and platform-native artifacts can be published in one package but downloaded only when needed.
• that’s a solved problem: packages shouldn’t bundle CLI tools. Sindre Sorhus has been separating “CLI” from “API” for many years. “Unused dependencies” in general isn’t a solvable problem because a dependency might only be used if a certain parameter is passed. By the time that piece of code is reached, you either have it installed or it’s too late.
• that’s an issue that never got solved for the same reason: “package.json is never going away so we won’t add more ways to do the same thing” — ok so node/npm is going to suck forever, fantastic.
I think features with dependencies tied to them are a better model than simple package splitting.
> For example, let’s say we have added some serialization support to our library, and it requires enabling a corresponding feature
> In this example, enabling the serde feature will enable the serde dependency. It will also enable the serde feature for the rgb dependency, but only if something else has enabled the rgb dependency.
Npm has optional dependencies, and there are libraries that error when you use a code path that requires them. Features would formalize that pattern and improve it with clear errors and composability.
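Today that pattern looks roughly like this (hypothetical package name); a cargo-style features mechanism would turn the runtime failure into an explicit install-time contract:

```typescript
// Library code where a heavy dependency is only needed for one opt-in code path.
// "plotting-engine" is listed under optionalDependencies, so it may be absent.
export async function renderChart(data: number[]): Promise<string> {
  let plotting;
  try {
    // @ts-expect-error: hypothetical optional dependency, no types installed
    plotting = await import("plotting-engine");
  } catch {
    throw new Error(
      'renderChart() requires the optional dependency "plotting-engine"; ' +
        "install it to use this feature.",
    );
  }
  return plotting.render(data);
}
```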
I felt like I wanted this sometimes, but always concluded that the "scripts" commands should be self-explanatory.
There are also plenty of metadata fields.
And for detailed descriptions, why not just put a README.md next to it?
I think here, Douglas Crockford's original motivation for not allowing comments in JSON pays off perfectly well. If package.json allowed comments, they'd already have been abused as metadata.
Also, as someone who sometimes has a hard time coding and naming things concisely and to the point: there just should be no need for comments in this file.
I know what I'm looking for in there and in which fields. Comments would clutter the file and encourage silly comments such as "this command starts a local development server, compiles your code using HMR, writes its output to folder XY, listens on port Z, etc etc". This can go into a README. At best, the commands should be self-documenting.
This is a good example and I pondered it while writing the comment you replied to.
But my conclusion in this case is:
If there is some issue with breaking changes that prevents updating, or a deliberate decision to stay on a certain version for a certain reason, it should be explained in a README; and in the former case, the package manager should point out the conflicting dependencies if you try to update.
> support for optional features and dependencies tied to them, à la cargo.
For those of us who rely on trunk-based development and feature flags, the lack of this feature has been a major pain! I can't remember how many times we had to keep a feature in a branch because it needed a change in package.json.
The heavy emphasis on “deno.land/x” every single time it appeared was kind of puzzling too; I found it quite distracting compared to the rest of the text.
OP should also probably consider running future posts through a spell checker. There were quite a few obvious misspellings, and words I had to double-take on because they were completely the wrong word for that context and some other word should have been used instead.
"Package manager" is clearly an incorrect title (the issue is on the HN submission side); it should be "package registry". Package managers are a dime a dozen these days.
I'm a bit mixed on this. I get the desire to be interoperable with npm; there's a massive ecosystem out there. But I kind of liked that everything was a fixed HTTP endpoint. I also get the desire for compatible versions across dependencies.
I have access, but haven't worked with it yet, since my personal time has been spent more on Rust recently. I should probably convert one of my libraries to Deno first and see where the pain points are, both with JSR and with deploying to npm, as well as checking npm compatibility.
A lot of what I've worked on has been feature complete and only updating dependencies now and then. So rolling into Deno testing and formatting could be interesting. I've been using Rome/Biome for a lot of this the past couple years.
TS/Deno first is definitely easier than the other way. I've also found Deno to be great for shell scripting.
Could you explain why not? I have a utility library (Project A), and whilst messing around with Turborepo, I noticed I could point to the source files in my package. I haven't progressed down that route, but something about it seemed nice.
So in other words, if Project B depends on Project A, either I could build Project A and then build Project B or I could just build Project B (which consumes Project A as a typescript package).
Kinda confusingly written but I would love to know why that is an issue.
One thing that comes to mind is different TypeScript versions.
JS has ES6 and ES2020 etc., and that's enough to deal with; TypeScript is younger and has been moving faster, so compatibility might be difficult around the edges.
Longer term I agree with you though, I’d much rather see TS source than JS plus types when I’m looking at the source code of a module.
I don’t know why you’re being downvoted, my experience has been that minor version TS releases usually create a cascade of hundreds of type check failures even though the syntax itself is backwards-compatible.
It would be impossible for a project pulling TS sources directly to achieve stability (or even compile all dependencies with a single version of TypeScript in the first place).
In Deno with JSR, only the public API gets used for type checking because publishing enforces that the public API can be determined without type inference. So it's similar to declaration files (d.ts files) and you wouldn't see errors that occur at the non-declaration level unless someone explicitly opted out of those publishing checks (which is heavily discouraged).
I tried to do exactly what you describe last week, but I couldn't get it to work. The Turborepo documentation seems to indicate it's possible with the whole concept of "internal packages", but it all depends on Project B's compilation step being configured to compile and bundle workspace dependencies, and a simple "tsc" wasn't doing it for me regardless of the different tsconfig settings I tried.
I made a private repo whose package.json had `private: true`, `type: module`, `main: src/index.ts`, and `module: src/index.ts`. I didn't progress down that route, as stated, but I was able to consume that package. Maybe none of what I said here mattered, in which case the only other thing that may make a difference is that I was using pnpm with workspaces for the first time.
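In case it helps, the setup I described boils down to roughly this (pnpm workspace assumed; the consumer's tooling has to be willing to transpile TypeScript pulled out of node_modules):

```jsonc
// packages/project-a/package.json: entry points go straight to TS source
{
  "name": "project-a",
  "private": true,
  "type": "module",
  "main": "src/index.ts",
  "module": "src/index.ts"
}

// packages/project-b/package.json (consumer, excerpt):
// "dependencies": { "project-a": "workspace:*" }
```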
The only downside that I see is that the runtime needs to be able to compile older versions of TS with all the quirks of that specific TS version, so likely it would need to ship multiple TS compilers.
If only the ECMAScript Type Annotations proposal would finally be implemented, we wouldn't have all these issues.
Usually it is hard to understand the underlying design; I guess the intentions and plans are not well communicated, or they are kept private, perhaps for monetization or other purposes. I would hope to see some support, at least from everyone who has ever used Node or npm. It is the same author who created Node in the first place, and the team is obviously trying to innovate, from angles we probably don't have visibility into. Like many other resources, this one will also be optional for you to use or not. I am very happy with how the many options are shaping up and being perfected in their own ways. Looking forward to the future excitedly.
Why? I’m not primarily a JS/TS dev, but when I’m dealing with the JS ecosystem, operating across two languages is one of the pain points (e.g. clicking a function in my editor and getting the typedef instead of the code). Now that runtimes like Deno and Bun support TS natively, why shouldn’t we avoid that completely?
Perspective from another not-primarily-JS/TS dev: in this context, JS is the equivalent of a .jar full of .class files. Instead of a -sources.jar you'd have a sourcemap. The original TypeScript would be more like a repository dump, and chances are the build depends heavily on an exotic fork of Python or something equally unexpected.
I get the analogy, but I don’t see what this actually solves in the JS/TS case.
The jvm can’t run Java source files directly and needs a build step, so to the extent that the build step is not portable, it makes sense to ship the built version. Jars are also used for dynamic imports where doing the build at runtime would be expensive.
By comparison, runtimes like Deno can run TS directly now. There is no separate build step required. After all, it’s a scripting language! The fact that it has a compile step at all is just baggage at this point.
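A minimal example of what "no build step" means in practice, assuming a Deno install:

```typescript
// greet.ts: run directly with `deno run greet.ts`; type check with `deno check greet.ts`
const name: string = Deno.args[0] ?? "world";
console.log(`Hello, ${name}!`);
```

No tsconfig, no emit directory, no separate compile phase to keep in sync with the source.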
It can run one specific version of TS, plus JS that has been compiled from any version of TS.
That ability might be very valuable for the development loop, but it cannot be the intended path for dependencies, because source compatibility is very low on TS's priority list.
In JVM terms: where Scala seems to have given up on cross-version compatibility of the compiled code, forcing cross-building upon library authors and dependency registries, but succeeds at sufficient source compatibility to enable that cross-building, TS does the inverse, since it is bound to baseline JS as output anyway. The output works across versions (how could it not, when it's meant to work across languages), but with that a given, it can take liberties with input (source) compatibility.
It's being renamed to a no-slow-types lint rule that occurs on publish and in deno lint. Essentially it enforces that the types of the public API can be determined without inference. This enables a few things:
1. Type checking TS sources is really fast when Deno determines a package does not use slow types. It may entirely drop packages and modules that aren't relevant for type checking. It also skips parsing and building symbols for the internals of a package (e.g. function bodies).
2. Documentation is really fast to generate (doesn't need a type checker).
3. A corresponding .d.ts file for relevant typescript code is automatically created for Node.
You can search for "no_zap" under the `denoland` org if you want to have a bit more context.
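Roughly what this means for package authors, with my own made-up example:

```typescript
// Flagged on publish: the exported return type can only be discovered by
// inferring it from the function body (a "slow type").
export function loadConfig(raw: string) {
  return { port: Number(raw) };
}

// Accepted: the public API is fully described by explicit annotations,
// so consumers can be type checked (and docs generated) without analyzing the body.
export interface Config {
  port: number;
}
export function loadConfigExplicit(raw: string): Config {
  return { port: Number(raw) };
}
```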
I believe it's a form of type checking where, simplifying, things like function bodies are removed and type checking is only done across top-level items.
If you read the article, they’re coming to replace the registry as well, instead of being a new developer front end (npm, yarn, etc.) against the same npmjs registry everyone is used to.