Node.js 16 Available Now (nodejs.medium.com)
305 points by ilkkao on April 20, 2021 | 173 comments



The new stable timers API in Node 16, combined with top-level await, means that you can now easily sleep in an ESM Node script, like this:

    import { setTimeout } from 'timers/promises';

    await setTimeout(1000);
    console.log("awake");
(But note that you'll have to activate ESM mode to write this script, e.g. by writing it in a `.mjs` file instead of a `.js` file or by adding a setting to package.json.) https://redfin.engineering/node-modules-at-war-why-commonjs-...


You can also do this with a wrapper around setTimeout, if you're in an older environment:

  /**
   * A sleep function that returns a promise.
   *
   * @example
   * Sleep for 100ms
   * ```
   * await wait(100);
   * ```
   */
  export async function wait(ms: number) {
      return new Promise(resolve => {
          setTimeout(resolve, ms);
      });
  }


... or even more easily by:

const wait = require('util').promisify(setTimeout)


That's interesting. util.promisify normally only works with functions where the callback is the final argument, which isn't the case with setTimeout. I didn't know you could customize promisify behaviour, but looking at the docs apparently you can just by setting a property on the object using the util.promisify.custom symbol [1]. That is what setTimeout is doing, which is why that code works.

[1] https://nodejs.org/dist/latest-v16.x/docs/api/util.html#util...
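
For anyone curious, here's a minimal sketch of the same mechanism applied to your own callback-first function (`delay` is just an illustrative name):

    const util = require('util');

    // Callback-first, like setTimeout, so plain promisify would mis-wire it.
    function delay(callback, ms) {
      setTimeout(callback, ms);
    }

    // Provide a custom promisified version via the well-known symbol:
    delay[util.promisify.custom] = (ms) =>
      new Promise((resolve) => setTimeout(resolve, ms));

    const delayAsync = util.promisify(delay); // returns the custom version
    delayAsync(1000).then(() => console.log('one second later'));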


You don't need `async` here, since your function explicitly returns a promise.


I do that too. In TS, it helps a tiny bit - it enforces that the function will return a promise. So if that ever changes, the TS compiler will be your backup anchor.


While this is true, I've found over time it's easier for me to quickly see in my code if I need to `await` on something if the function is an `async` function and not just that it returns a Promise.

I prefer

async function foo() { return await new Promise(...) }

as opposed to

function foo() { return new Promise(...) }

They're the same thing for the most part, but with the latter I potentially have to dig deeper into the function to confirm it returns a promise, compared to the former.


The problem is that this is more than just a syntactic difference. There's a chance it will take two ticks to resolve instead of one.

For documentation purposes, I recommend block-commenting the `async`:

const foo = /*async*/ () => new Promise(...);
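
If you want to see the extra-tick ordering for yourself, here's a quick sketch you can run:

    const p = Promise.resolve('x');

    async function viaAsync() { return p; } // adopts p via extra microtasks
    function viaPlain() { return p; }       // hands back p directly

    viaAsync().then(() => console.log('async'));
    viaPlain().then(() => console.log('plain'));
    // logs "plain" before "async", even though it was registered second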


I did not know this and appreciate the insight! Will definitely store this in the back of my mind and try to remember to use block comments when needing to explicitly return a promise.


That's not true if you just return the promise instead of using return await, right?


No. `async` on a function automatically defers it even if the return value is immediate.

  async function foo() { return "hello"; }
  foo().then(console.log);
  console.log("after");

  // prints "after", then "hello"


const sleep = ms => new Promise(r => setTimeout(r, ms));

It doesn't need to be that tricky.


I get that the arrow function notation is less verbose, but I disagree with calling function syntax "tricky". It really doesn't matter much. You need the type on ms though (if using TypeScript), and the export.


What’s the overall feeling on using .mjs files?

My initial thoughts were “urgh I just want to have .js files in my projects” but I’m wondering if I’ll warm to them given that they make things a lot easier.

Are people eventually just all going to use .mjs files for everything?


If you have a package.json file you can still use .js files with "type": "module" in package.json.

https://nodejs.org/api/esm.html#esm_enabling
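
For reference, a minimal package.json along those lines (name and version are placeholders):

    {
      "name": "my-app",
      "version": "1.0.0",
      "type": "module"
    }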


Yeah but it seems like it might be easier to just start using .mjs everywhere, no?


I don't think so.


I know it’s a bit of an apples and pears comparison, but which part of changing the file extension feels more difficult than editing package.json?

I’m thinking about new projects rather than converting existing projects.

One benefit I can see to .mjs is that if all extensions are .mjs it’s clear what type of project it is without the need to open up package.json.


With the context you've added now, I might agree. Renaming all the files in an existing project would be a pain compared to adding a single line to package.json.

About new projects, I'd have to read more about this new extension; for some reason it feels like a temporary solution that'll just get merged into the normal .js extension in a future release.


Yeah it totally has the feel of a temporary solution.

From what I’ve read though, there doesn’t appear to be much of a plan apart from the current solution.

My initial reaction is that I want to keep all my files with .js extension.

One thing that could potentially be problematic is for files that need to run in the browser and on the server.

Really curious to see where the community’s tea leaves settle. My guess is that people will eventually just start using the new extension.


Can I have an entire stand-alone app (not a module to be published on npm) as a module in package.json? I understand it can no longer use require() anywhere, that's ok.


The question is, why bloat the standard library with something that would take a couple of lines to implement yourself or as a library? More languages should release new features as libraries instead of forcing them into the "global" space of the standard library.


I think this is the only "Javascript has too few third party libraries" take I've ever heard.


Never said it should be a third party library, the core team can release libraries as well if they wanted to you know.


I think what you are arguing (which I agree with) is that the Node ecosystem would benefit from a set of "core" libraries that could just be installed as a separate NPM module, that are blessed by the Node core maintainers, but aren't part of a specific Node version.

Deno basically does this with the Deno standard library, e.g. https://deno.land/std@0.93.0, and I agree that I think it's the right approach.

There is nothing "special" about wrapping setTimeout as a Promise, indeed pretty much everyone has done it at some point, so it would be nice if there were a single, blessed standard version that I could just add as @node/standard in my package.json as long as I was on any supported Node version.


The core team does release a library. It comes with every install, and that's why we call it "standard".


The original point was that not everyone wants all these syntactic sugar "features" shipped with the core runtime...


What’s the syntactic sugar feature here? Top-level await is part of the language spec.


One you still have to opt into, for now.


I don’t think that’s true. It’s supported in any ESM module, per the ESM standard, without any config or runtime flags. If you mean using ESM to be the opt-in mechanism, that’s not a meaningful distinction. CJS modules and require have always been synchronous and likely always will be. Changing that would break too many things.


Because I don't want to have to download some yahoo's library just to have a timeout that behaves like a promise.

I prefer my batteries included. Also, importing from "timers/promises" doesn't touch the global namespace.


I can never remember if I'm using promise-sleep, sleep-promise or any of the other available packages. Or if they stop working on a node upgrade. It's also nice if people use and do the same simple things across projects.

I like the most common stuff to be part of the standard lib, either Node or JavaScript, but it works fairly well now I guess, so no biggie.


This one is not that bad. At least it takes AbortSignal, so it can be cancelled. Bonus points for that.

That said, I'll probably spend more time searching the node docs for how to import/use it than just implementing it myself.

OTOH I use a promisified timer in almost all of my web scraping scripts, to reduce the load on the server, so I'm glad I'll be able to drop this thing from my utility library.
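
For example, cancellation looks roughly like this per the Node 16 timers/promises docs:

    import { setTimeout } from 'timers/promises';

    const ac = new AbortController();

    setTimeout(5000, 'woke up', { signal: ac.signal })
      .then(console.log)
      .catch((err) => {
        if (err.name === 'AbortError') console.log('sleep cancelled');
        else throw err;
      });

    // Something else decides we shouldn't wait after all:
    ac.abort();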


That kind of reasoning got us into the whole pad-left misery.


No, mutable package management is the main reason, together with "humanness" as the author pulled the package from the registry. The core team would hardly pull a package like that.


Like left pad


Large standard libraries are good. The more things are in the standard library for a language, the more likely it is that other 3rd party libraries play nice together.


Is anyone in the HN community using Node.js for mission critical backends? Even though I am perfectly happy to do that, and do, especially with TypeScript support, I have seen an increasing number of backend devs who are more comfortable using a statically typed stack like Java or Go. Wonder if Node.js will ever get wider adoption like Java got.


We use it for a mission-critical backend in TypeScript. After 20 years (was previously primarily a Java programmer for the first part of my career) I feel I have finally hit environment "nirvana" with having our front-end in React/TypeScript, backend in Node/TypeScript with API in Apollo GraphQL, DB is Postgres.

Having the same language across our entire stack has huge, enormous, gargantuan benefits that shouldn't be underestimated, especially for a small team. Being able to easily move between backend and frontend code bases has had a gigantic positive impact on team productivity. Coupling that with auto-generating client and server-side TypeScript files from our GraphQL API schema definition has made our dev process pretty awesome.


> was previously primarily a Java programmer

Same here, except for Apollo GraphQL; we still use REST. It is indeed nirvana. Wish the community would finally settle down on a stack for at least a decade.. shifting the backend every few years doesn't do any good for dev productivity.. (Had to recently use Go due to peer pressure. Performance wasn't a concern; it was purely due to the "feel" that Node.js is not good enough for serious backend work - lack of multithreading, potential future performance and scalability, etc.)


> lack of multi threading

That aspect of Node.js can actually be a very good thing for serious backend work, as it encourages process decoupling using API's. Decoupling ends up scaling better later on: you build a distributed system that can be scaled out horizontally across a fleet, rather than running up against vertical scaling limits of how many threads you can get running on the same local hardware.

Whenever I've seen applications that use worker threads for backend systems they end up regretting it and wished they had decoupled into a separate process that could have been scaled independently onto other hardware over the network. Spinning up new threads on the same machine is a temporary crutch that bites you later on in your growth.


I’ve seen plenty of serious backends that use multiple threads and are horizontally scaled.

I don’t think this is a particularly compelling argument for the lack of multi-threading. The worker thread pattern is fairly useful and there is no reason a well architected application couldn’t use both horizontal and vertical scaling. Independent scaling is only really useful if there is a large variation in the amount of work done by the workers per query. If it’s well bounded, then it might be easier to just horizontally scale the entire backend.

There are also other use cases for threads that don’t fit into the worker pattern. For example, background tasks that happen outside the serving path.


Same here, same stack. Was considered a big risk when we started 3 years ago. The code sharing possibilities, much easier team transitions (couple that with client development on all platforms via React Native), much easier hiring. I'm not sure that out of all contenders Javascript/TS as a language was worthy of such a role, but here we are, and it's working.


> huge, enormous, gargantuan benefits

Would you mind enumerating them? I have an idea of what they are but curious about other perspectives.

My sense is that code sharing is not that common between a Typescript frontend and backend. You mostly need generated request/response data types but I don't think there's that much shared behavior because you can't import any of your backend-y logic (database, auth, external APIs) transitively into your frontend.

I think primary gain is what you've hinted at: 1 ecosystem and it's easier to onboard fullstack devs.


Notion shares a large amount of code between the front-end and back-end. We have many algorithms, collections, helpers, etc. that we share. Here's an example: we have a shared loadPageChunk function that takes a cursor and a loader implementation, then traverses our data graph to gather the data needed to render part of a page.

    // shared code - implement the algorithm
    export async function loadPageChunk(
     args: LoadPageChunkArgs,
     loadRecordValue: loadRecordValueFn
    ) {
      // ...
    }
    
    // client code - use the algorithm, provide client-specific IO
    // eg on Android we'd use Sqlite.
    const records = await loadPageChunk(cursor, SqliteService.loadRecordValue)
    
    // Server code - same, but use the server's data stores.
    // Behind the scenes, these loaders batch, etc
    const records = await loadPageChunk(cursor, useCache ? CacheService.loadRecordValue : PostgresService.loadRecordValue)
Even if we only shared types, there's a significant benefit. We try to push as much logic into the type system as we can; for example we use discriminating unions to define different groups of related types. Eg, we have a union type called ContentBlock that has all the specific block types that can have children, `Page | Text | Column | ...`, sharing this type and its helper functions like `isContentBlock(block: BlockValue): block is ContentBlock` means both our front-end and back-end code rely/expect/enforce the same invariants.
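
To illustrate the discriminated-union pattern described above (the block shapes here are illustrative, not Notion's actual types):

    type Page    = { type: 'page'; children: string[] };
    type Text    = { type: 'text'; children: string[] };
    type Column  = { type: 'column'; children: string[] };
    type Divider = { type: 'divider' };

    type BlockValue   = Page | Text | Column | Divider;
    type ContentBlock = Page | Text | Column; // blocks that can have children

    // Shared guard: front-end and back-end narrow blocks the same way.
    function isContentBlock(block: BlockValue): block is ContentBlock {
      return block.type === 'page' || block.type === 'text' || block.type === 'column';
    }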


One example: input validation logic is something that is nice to run in the front end (for performance and immediate responsiveness to users) but that you also need to run in the back end for correctness.

Previously I've done this with 2 implementations (JS on frontend and Java on backend), but then keeping the logic in-sync is a nightmare. With a single language you can just share the library.
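
For instance, a hypothetical shared validator module that both the React app and the Node API import:

    // shared/validation.js - runs in the browser and on the server
    export function validateEmail(value) {
      if (typeof value !== 'string' || !/^[^@\s]+@[^@\s]+\.[^@\s]+$/.test(value)) {
        return 'Please enter a valid email address';
      }
      return null; // no error
    }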

You are correct, though, the biggest benefit I see is not sharing code, but making it trivially easy for a front-end dev to add a small piece of backend code that they need without needing a back-end dev to do it, and vice versa. It just makes the overall team much more productive because there is very little "waiting on the back/front-end" to do it.

Oftentimes we'll have either team write up the schema for a new endpoint using the GraphQL schema definition language, then from that we autogenerate the TypeScript types, then usually the front-end team creates a simple mock in the backend so that they can fully implement the UI, meanwhile the backend team works concurrently on the real implementation. This process allows for much more parallel productivity than if, when something is broken or needed from the other team, you just have to submit a ticket and wait.


I’ve got a similar stack, what are you using for ORM? Started using Prisma to replace TypeORM/TypeGraphQL but it’s new and unproven. Also are you caching with Redis, any other utilities helping with that? GraphQL-codegen is a lifesaver for generating gql types and resolvers.


We are not using an ORM. I am a pretty strong advocate against ORMs, but that is a topic for a different discussion. We have a set of DAO components that access the DB using Slonik, https://github.com/gajus/slonik (overview explaining the rationale for this library is at https://medium.com/@gajus/bf410349856c ).

Our app doesn't have a huge need for caching, but we use a mix of in-server-memory caching ( https://github.com/isaacs/node-lru-cache ) and Redis when we need a global cache.
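
For the curious, a rough sketch of what a Slonik-based DAO function looks like (table and function names are illustrative; check the Slonik docs for the exact API of your version):

    const { createPool, sql } = require('slonik');

    const pool = createPool('postgres://localhost/mydb');

    // A DAO function: plain SQL, safely parameterized by the sql`` tag.
    async function getUserById(id) {
      return pool.maybeOne(sql`
        SELECT id, email, created_at
        FROM users
        WHERE id = ${id}
      `);
    }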


I highly recommend you not use node-lru-cache: https://github.com/isaacs/node-lru-cache/issues/63 (as you can see from the last comment on that bug, these truly bizarre performance characteristics were not solved, and I would recommend drop-in replacing it with fast-lru, the library we use).


I've worked with developers that have a hard line against ORMs. I see a lot of their points, especially around the performance of raw SQL. But I cannot get past how much boilerplate code has to be written over and over again. And then you have to test that boilerplate code. I find that most people are far more productive using an ORM. To me this makes sense because it's fewer things to type. Less is often more.

We have services that don't use ORMs and I have to wonder if it's worth the cost.


While I don't write a ton of "boilerplate", the primary thing for me is that writing boilerplate, if necessary, is trivially easy. What's hard is debugging when problems arise, or trying to do some slightly more complicated join that isn't supported out of the box by your ORM tool of choice.

Basically, in my opinion ORMs just make the easy stuff slightly easier, but they make the hard stuff much harder, and when you're stressed out trying to fix some critical production DB query all they do is get in the way.


We use slonik and a few helper functions. Very little boilerplate.


I'm curious how you handle migrations and schema documentation, which to me are the huge benefits of using something like Django or ActiveRecord. Do you version-control at least your forward migrations as SQL or SQL-via-Slonik? And do you have processes in place to ensure columns are documented in a central location?


For migrations we actually use knex, but primarily just because it was in place first before I knew about Slonik.

Writing a migration tool is pretty trivially easy - all knex does is let you write an up migration and a down migration, and it keeps track of which migrations have been applied in a DB table. Main thing is that all of our migrations are each in an individual versioned file in our source repo.

We use postgres COMMENT functionality to apply comments to all of our table columns.
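
A hypothetical knex migration in that style, with the COMMENT applied in the same file (file and column names are illustrative):

    // migrations/20210420_add_last_login.js
    exports.up = async (knex) => {
      await knex.schema.alterTable('users', (table) => {
        table.timestamp('last_login').nullable();
      });
      // document the column in the database itself
      await knex.raw(
        `COMMENT ON COLUMN users.last_login IS 'Last successful login, UTC'`
      );
    };

    exports.down = (knex) =>
      knex.schema.alterTable('users', (table) => {
        table.dropColumn('last_login');
      });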


If you’re already doing codegen, you might take a look at Zapatos[1]. It generates types from your database schema, and provides type safe query builders (raw SQL via tagged literal, some simple ORM-like functions for basics).

1: https://jawj.github.io/zapatos/


Prisma does something similar: it provides a typed schema and a query engine written in Rust.


Maybe not an ORM but a query builder: Knex has been of so much help to me; it works pretty well and it's really easy to use. I used Sequelize in the past, but it can get incredibly complicated, so this was a really positive change.


ORM? What is this, 2004 J2EE?

Joking aside, in a dynamic language like Javascript, especially in modern coding style which is not OO anyway, you don't need an ORM.


You write SQL statements? And turn the response into classes by hand in every app you write? Ruby and Python have ORMs too. Am I missing something?


You write classes? What is this, 2005?

Joking aside, I do write SQL statements (or use a query builder, which is not the same as an ORM).

I don't "turn the response into classes by hand in every app I write" however, because the responses are perfectly usable as they are (in a more functional style), and OO is not the best way to model records anyway.

I know Ruby and Python have ORMs, ActiveRecord, SQLAlchemy and so on. They're not really needed. Heck, I've read authors of ORMs saying you don't really need one...


This seems like a sweeping judgment you're making, saying don't use ORMs, or classes. But this seems like just an odd religious take people have, so I'm out...


While some people are hard-lined enough on this that I agree it sort of becomes weird, I can tell you my beef with ORMs comes from being burned a couple of times by a super inefficient aggregation ActiveRecord did on GROUP BY queries: 1. it not only took a really long time to figure out why a particular page in our app was loading slowly, but 2. we ended up having to write raw SQL to fix it.

I think the answer depends on the type and load/volume of the app you're working with combined with the dynamics, size, and skill level of your team(s). I'm extremely comfortable writing, profiling, query planning, and debugging SQL queries. Others aren't, and therefore having an ORM to query data in the DB with the syntax of the language you're using in your projects makes way more sense, if nothing else in order to speed your team up.


I think the mistake is thinking an ORM is actually going to let you be free of knowing or caring about database fundamentals. That only applies in the most simple cases. An ORM has other benefits though.


How much work do you have to deal with if a column gets renamed? Be honest.


1. Don't rename your columns, but if you have to

2. project search and replace on $COLUMN_NAME

If your column name is a common keyword, variable name, etc. in your code base and it's difficult to find using project search, that's unfortunate, but we organize our backend code and tests in a logical enough way that it's never taken longer than an hour to create a PR to create a migration to rename a column and update all places in code that reference it.


Renaming a column is an operation which showcases the weakness in not using an ORM. Other operations of the sort do exist, anything that operates on columns across multiple arbitrary spots in the code.

IMO an hour for a change you have so little confidence about is not acceptable when the alternative allows you to do it in a second with full confidence.


While this is fair, and I don't disagree that changing a column name (or any schema change for that matter) using an ORM and its migration scripts is potentially going to be faster than without one, in my experience changing a column name in a database is rare. Choosing an ORM because it might (emphasis on might) make renaming a column a bit faster and easier, instead of for other more meaningful people- or efficiency-oriented reasons in the day-to-day developer workflow, is probably a poor approach.

Keep in mind I've installed and used an ORM in projects where the ORM is used only for migrations, but not in application code, and this is absolutely a fine reason to use one imo. But adopting an ORM for migration purposes and then forcing its use in application code simply because it's installed isn't necessarily a good approach.


Like I said, it's one example.

ORMs are to SQL what static types are to programming languages. The conversation we just had was me giving you one example of the benefit of static types, which happens to showcase a huge weakness in dynamic typing:

- How painful is it to rename a class attribute?

=> It's a long search & replace exercise which results in a less-than-certain outcome.

The obvious take from this isn't that "renaming class attributes is rare".


This is a very simplistic view of the situation.

ORMs have very basic support for current SQL standards and database-specific features.

This means that using an ORM reduces the power of the database choice you made.

Also, things like arbitrary SQL support imply you have to manually create return value typings.

Having to leave the nice ORM wrapper functions for arbitrary SQL means you lose all the ORM niceties like soft-delete or updated_at fields.

Overall I see very little use for ORMs.


You don’t need an ORM, but (unless you’re a total masochist) you probably want _something_ to smooth the interaction with SQL.

We have a TypeScript Node.js API in production, and we wrote Zapatos to be that something: https://jawj.github.io/zapatos/


I know several people who currently/used to work at PayPal and this is their stack (the people I know were on the wallet team, but I got the sense it's a fairly prevalent thing across the org)


We have a similar setup and it works pretty well; the only thing missing is the auto-generated clients, and that sounds like it could be really useful. What are you using there?


Not OC but we generate the client types with Apollo codegen (https://www.apollographql.com/blog/typescript-graphql-code-g...) using the schema file generated by our NestJS backend


Check out gqless which generates a full Typescript schema without any need for strings.


We do, for managing stuff with many $1... zeros for serious top Fortune companies in business critical projects. Java/Go type systems are primitive compared to TypeScript. Shallow or no dependencies, functional, OCaml-like modules, pervasive use of algebraic types provided by TS (previously Flow), several years in production, nice codebase, several successful, non-trivial major releases, constant updates with the codebase worked on every day by many people, several deployments per month.

Problems I personally have with it:

1. no exact object types in TS as in Flow - means they have to be emulated by destructuring; sad, but you can live with it/you have to be careful

2. transpile times - but recent experiments with swc for transpilation and deferring typecheck to run concurrently while tests are kicked off after swc finishes look promising

3. the type system could be a bit smarter in a few places, but no blockers so far

I'd recommend but with caution - the spectrum of developer competency is closer to PHP (almost anybody can do it) than to languages like OCaml/Haskell/Rust and others (where the entry bar is higher). Vet your dependencies, hire competent developers and it can work very well.

Some of libraries we're using:

- https://github.com/appliedblockchain/assert-combinators - light, runtime type assertions ("parse, don't validate" style to avoid illusion of type safety at io boundaries)

- https://github.com/appliedblockchain/tsql - functional, tagged template based combinators for sql generation
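
To illustrate the "parse, don't validate" style those assertions enable, a hand-rolled sketch (not the actual assert-combinators API):

    type Assert<T> = (value: unknown) => T;

    const string: Assert<string> = (v) => {
      if (typeof v !== 'string') throw new TypeError('expected string');
      return v;
    };

    const object = <T>(shape: { [K in keyof T]: Assert<T[K]> }): Assert<T> =>
      (v) => {
        if (typeof v !== 'object' || v === null) throw new TypeError('expected object');
        const out = {} as T;
        for (const key of Object.keys(shape) as (keyof T)[]) {
          out[key] = shape[key]((v as Record<keyof T, unknown>)[key]);
        }
        return out;
      };

    // At an IO boundary: the result is typed, not merely "validated".
    const user = object({ id: string, email: string })(
      JSON.parse('{"id":"u1","email":"a@b.co"}')
    );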


Many thousands of people and companies use Node for mission critical backends every day. When I worked at a very large publisher (for 10 years), most of the backend was moved to Node and it was far better than our previous Java backend. It doesn't mean Node is better than Java, it just means it was adapted well to suit our needs.

I personally ran mission critical node services that interfaced with over 100,000 simultaneous compute nodes in AWS.


The SpaceX rockets use Node for parts of their user interface, which is as close to the spirit of "mission critical" as you can get


Do they? Node.js is only for backends/build tooling, so they could be just running JavaScript on a Chrome-like instance, with no Node.js involved at first. They probably have services running on Node.js that poll sensors (or read message queues) to reduce complexity, but I'm not 100% certain.


Yeah, the parent comment here is a little off.

Node is not for UIs, so they're definitely not using Node for that. It was confirmed that the UIs in the Dragon capsule (which, sorry for being pedantic, is not really the "rocket") ran on top of Chromium. It's possible SpaceX uses Node under the hood somewhere, but I don't believe that has been confirmed anywhere.

Also kind of interesting to note that it's only the Dragon capsule with humans that has controls, the capsule (and rocket) are both autonomous. The controls on the capsule are only there "just in case".


Electron’s “main” thread uses Node. You can then communicate between it and browser windows over IPC easily with their APIs.

So if they’re running their UI on Electron, it could be on Node.


Ah I was wondering why their missiles keep crashing...


Walmart (the company) switched to NodeJS in ~2013 and saw a >80% reduction in cloud compute compared to LAMP. Source? I saw the dude from Walmart Labs give a talk at Mozilla about the transition.

Here was the meeting:

https://www.meetup.com/pdxnode/events/142646682/

* Ben Acker will share about some awesome drawings and tales of Nodejs within Walmart Labs.


That's pretty meaningless. It all depends on how bloated your code is, not on the stack. I have production PAMP apps that process requests in 1-2ms. Most of the request roundtrip time tends to be network latency. It's pretty similar performance-wise to equivalent node code. Except that PAMP is naturally multi-core capable, while node is not. It's much easier to mess up node's performance (latency) by doing too much compute, compared to the PHP app, where the multi-process model will save you, to a point.


Please forgive my ignorance, but what does PAMP stand for?


Sorry, LAPP. Just postgresql instead of mysql. Not sure what I was thinking... I still love Linux. ;)


How is it meaningless when it is literally the metric by which to determine if a stack is bloated? And I'm not going to argue with the dude that is responsible for Walmart's online shopping platform. Maybe you should talk to him.


It's meaningless to compare platforms based on just one implementation pre-rewrite and post-rewrite. If they re-wrote it into the same stack with performance as a goal, they'd get significantly less compute resource usage, too.


I've worked at several fintech companies that all used NodeJS for the backend. Engineering management decided it was suitable for the high IO demands of payments processing.


If your hosting costs are not significant (e.g. you're not a youtube, netflix kind of company where hosting costs eat up a lot of your profits) I think you should optimize for developer productivity instead of computing efficiency. Does it really matter if you spend 20k on hosting instead of 10k if you have to pay your devs 10x that? Just my opinion, curious to hear what other people think


Reducing costs was not the priority during my time there. Fintech startups had a runway that could loop the equator several times - a stark contrast to cash-strapped startups where I did contracts at. It was a real eye opener! It was always a race to get the products out, regardless of cost.


Reminds me of the very first time I went to work for a trading firm. We had been having some performance issues with a particular software system, and so I told the CTO “After looking at all this, if you give us a month I’m pretty sure we can improve performance by about 30%.” He looks at me and then says “Or how about I just get you guys twenty more machines? I can have them here in a couple of days.”


That's very encouraging to hear. I have been trying to sell our product to enterprise companies and a few fintech companies showed interest; however, they are still almost 100% Java stacks. In fact I got a big list of concerns/requirements when we said we are based on Node.js - starting with XML processing and distributed transactions. They cannot imagine using our stack for core business logic because of the lack of enterprise features like distributed transaction support. That's when I realized how mature Java is in enterprise adoption compared to other stacks (with .NET as an exception). Wonder if a common standard like JDBC and JMS would help Node.js gain more adoption with enterprises.


Somewhere else I saw NodeJS take the lead was in parsing vast quantities of large XML files in the streaming media world.

DDEX is the industry standard for metadata communication (artist, track, licensing, credits, etc.).

One 20-track Xmas classical music compilation album could easily have a single 150MB XML file!


I hypothesise that you’ll find a big variance in backend languages based on when the companies were started.

Companies started 10 years ago tend to have Java / Ruby backends, but companies started in the past 2-3 years will often have NodeJS/Typescript backends.


> and a few fintech companies showed interest

I said Node is dominant in web/APIs, but I don't expect fintech to go for Node - not for trading, but also not for CRUD work.


Fintech doesn’t have anything against Node; institutional banking is where old habits die hard.


Is IO the same as requests? i.e. high request demand?


The payment processing was literal parsing of data files on the local disks. It was where modern fintech interfaced with archaic banking technology. Apparently they ran benchmarks and concluded NodeJS was the most suitable for their requirements.


Not necessarily, you could have a few large requests that each do a lot of database work at the backend, that would also be high IO. IO is just any low level network or disk activity that is typically done by the kernel, not the userspace (nodejs) itself.


We (Transloadit) have been running Node.js in production the longest, since 2008, processing many petabytes, globally, on hundreds of machines if not thousands, and it has not let us down. Lots of faith in Node over here.


Wow, that's very heartening to hear. With such a use case have you ever considered moving to another stack, like Go? Or does any part of your stack already use more performant runtimes?


We do use Go in three places, yes: launching instances, as there was a better Go AWS SDK available at the time; uploading to S3 (long story, but we want this out of our main processes and Go has faster startup times); and tusd for receiving resumable file uploads, mostly because our tus.io lead loved Go so much :)

But this is (way) less than 1% of code and typically performance is not the problem with Node, even for our use case. ~Everything we build feels fast the first time. If on rare occasion it does not, it’s a matter of rearranging the building blocks, not swapping them out for something else entirely.


Plenty of us. While Node.js might not be talked about as much as Go or Rust is, it's battletested in my opinion and a breeze to work with. I'm currently using Node for a large-scale content platform that is comprised of an ingestion engine, a parser, a queue and scraper. 4 Node servers powering 40k+ sites, sitting behind a load balancer, NGINX and PostgreSQL. It's not using anything overly fancy, but is stable and easy to fix if anything goes wrong.


At AWS, we use Node.js with Typescript for several mission critical backends in our service - especially for lambda architecture, it's a strong use case


We're microservice based, 100% of our backend code is Typescript, including all of our REST APIs. Everything runs on Express. All in Kubernetes which provides rock solid uptime for us - easy to scale too. Front end uses GQL, which is backed by Apollo server for us. Everything can scale horizontally.


There's no shortage of mission critical PHP backends, so I suspect that's the same for other popular dynamically typed languages.

That is, where "mission critical" means "critical to some particular business". Not guiding rockets or critical medical use, etc.


Yes, running 24/7 as a daemon using the builtin cluster module, processing tens of thousands of requests per second.

Also as a data service for connection pooling databases that don't support connection pooling with supplied drivers or server-side.
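
A minimal sketch of that kind of cluster setup (using Node 16's cluster.isPrimary; the port is illustrative):

    const cluster = require('cluster');
    const http = require('http');
    const os = require('os');

    if (cluster.isPrimary) {
      // One worker per core; restart any that die.
      for (const _ of os.cpus()) cluster.fork();
      cluster.on('exit', () => cluster.fork());
    } else {
      // Each worker shares the same listening socket.
      http.createServer((req, res) => res.end('ok\n')).listen(8080);
    }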


After starting my career in Ocaml/C#/Java, for the last 10 years I've worked primarily in node in critical systems.

Live medical data processing, large ETL pipelines, and coordination systems that set 100% uptime as a goal and any incident would have _very_ thorough RCA, retrospectives, and accountability reports.

While traditionally these were built with things like Erlang or Java ecosystem tools (Camel, etc), the only time that using Node had serious downsides was when integrating with particular language ecosystems as a second-class citizen.

Examples:

• Kafka. Until recently the node-rdkafka wrapper had quite a few bugs compared to the java client. We ended up writing our own internal client in typescript. Performance was worse, but we had easy scalability of our producer services and we were able to track down and resolve any issues quickly.

• z3. There hasn't been an official build for node yet. We had built a wrapper to use a specific fork of z3 that we had, which allowed identification and distribution of shared subsets of problems. This worked with an etcd-like streaming consumer where you would get live-pushed keyed-problem subsets and utilize those in a local in-memory cache to speed up solvers that overlapped.

• Distributed Actor-Like framework. Obviously Erlang, Akka, Akka.NET, etc are the prior art here. It didn't take too long to have our team analyze these and build out mimics in typescript.

• Wrappers for specific C++ statistics libraries. Some of our ETL pipelines would enrich data with a pass on certain identifiers/classifiers/aggregators. For a few of these we created js/ts wrappers, but it would have been nice if they existed.

• Standard Library. We took a microsoft-like approach and just created a standard library for ourselves. Tested and with lots of features, it meant that we rarely had to reach outside of our ecosystem for Collections, Encoders, Crypto, etc, and that they were all documented to our internal standards. If there was an issue, you had someone you could ask and get an answer within the hour.

Unfortunately all of the above is proprietary and we weren't allowed to release any of it.

We did retrospectives on technology choices and limitations once a year, to learn from decisions made going into the future. Each time when Node came up, the general consensus was "we could have done it in Java I suppose, but the Typescript/Node combination was much quicker to iterate on and we felt more confident in the solution after the fact".


I'm using it for a major system in the industrial process control segment--our piece is a dashboard displaying timestream data and KPIs based on it, and detecting alert conditions and notifying people.

Were I to do it over again, I probably wouldn't choose node, but my problems with it haven't had to do with static typing or type errors--we don't use typescript. Where we've continually struggled is with indeterminacy in process control and error handling, and with writing robust services in light of that.


Can you provide some more details wrt issues with error handling?


Not the parent poster, but I'll tell you some of the pain points I've experienced:

* Async operations destroy stack information.

* It's very easy for someone to miss an error handler and end up with your process in an ugly state.

* Default error handling is "crash the process", no matter what else is going on.

* Lots of libraries that rely on buggy native code.


All of these, plus not being guaranteed that an error will actually be thrown to be caught rather than just hanging the process. In our case, we have a variety of tasks to carry out repeatedly on a set of entities, and the only reliable way we found to do it was to spawn new processes per task per entity. Reliable recovery from errors is basically impossible in long-running processes; you need to rely on idempotent tasks and short-lived processes that are actively reaped if they take too long (in part this is also a consequence of the single-threaded execution model for JavaScript entailing co-operative multi-tasking).
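
In sketch form, that spawn-and-reap pattern might look something like this (./task.js is hypothetical):

    const { fork } = require('child_process');

    function runTask(entityId, timeoutMs) {
      return new Promise((resolve, reject) => {
        const child = fork('./task.js', [entityId]);

        // Actively reap tasks that run too long.
        const timer = setTimeout(() => child.kill('SIGKILL'), timeoutMs);

        child.on('exit', (code) => {
          clearTimeout(timer);
          if (code === 0) resolve();
          else reject(new Error(`task for ${entityId} exited with ${code}`));
        });
      });
    }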


Yup! We do that at Coral!

Typescript/Node.js/GraphQL back-end with React/Relay/Typescript on the front end.

https://github.com/coralproject/talk

It's pretty nice having the whole code base share types, syntax, structure, etc. It's served us well for many years!

Some of our clients include: The Washington Post, New York Times, Wired, USA Today, and Financial Times


We were using Node.js for pretty much all critical services at Azlo; it worked really well. Other than switching from JavaScript to TypeScript I don't think switching was ever considered. At this point Node is a well established platform with a huge ecosystem that's easy to leverage.


Uber was using Node for a good portion of their backend services back when I worked there (2016-2017). I’m not sure what they’re doing now, but when I left there was a project to move a lot of stuff to Go.

IIRC, the performance of Node was ok but clearly worse than Go/Java/etc. Uber was using JS not TS back then, but the real issue (at least when I started) was lack of a defined interface for the API/mobile app communication. That was eventually addressed by adopting a forked version of Thrift.


It still uses Node.js as glue between microservices if the project frontend is web (the Eats website, for example).

It also used Node for a very core part of the app, and it was Node 0.10 to boot, but my understanding is that that's on the way to deprecation.

Microservices themselves are all in go or java these days.


All of our backend is built on node. And to be honest, Typescript is one of the best languages that I've ever worked with. And for Node's niche — IO-heavy, CPU-light servers that have complicated and rapidly evolving business requirements, respond to HTTP requests and do a lot of RDBMS/cache requests in the process — even better than Rust.


> Are there anyone in HN community using Node.js for mission critical backends?

Using Node.js for (large scale) mission critical backends, mostly in JS, but (on the topic of typed stacks) more and more of it is becoming Typescript.

> Wonder if Node.js will ever get wider adoption like Java got.

I'm not sure if it will ever go as wide but it does seem to be going that way.


I spent years working in Java, I've entirely moved to TypeScript as a replacement in recent years in the front and back end and it's been a huge improvement.

The dramatically larger ecosystem and community is great, but notably the static typing is far more powerful in practice. I really wouldn't pick Java to improve type safety nowadays.


>Wonder if Node.js will ever get wider adoption like Java got.

Huh? Node is almost dominant for all kinds of API and web backends...


Pretty sure Substack uses node in their backend, at least based on the information from their jobs page[0].

[0]: https://jobs.lever.co/substackinc/69f5ed72-9a51-404d-9db1-20...


At https://checklyhq.com we run millions of monitoring workloads - HTTP checks and Puppeteer / Playwright scripts - each day using just Node.


Out of curiosity, what do you use for job scheduling?


AWS Lambda. With custom job scheduling logic.


A year ago I worked on an analytics project and used NodeJS as the backend. It has been really good so far and we are happy with the performance we get.


Good to hear that..


We are using Node for a lot of mission critical backends but if it was up to me I would rather use Go :)


If you are looking for a typed stack, you can give a spin to TypeScript.


I love Node - does anyone have good experiences with using Node with Rust/Java/C++ for interop as necessary performance wise?

I know it's possible and that some teams do it, but the story wasn't great with (much) earlier versions of node. Some teams just wrote their stuff in another language and used a child process in node to call it, serializing everything as a string and deserializing it in the other language. The problem with that though is that you suffer a pretty decent performance penalty serializing and deserializing, and though it still might be worth it, it's also not great since some teams just called it similarly to how you'd call a shell script.

Is it much better than that now?


Node has always supported native addons, and since 2017 it also provides a stable ABI making the process a whole lot easier[1].

That said, in my own experience it was seldom worth it to rewrite something in C++ for performance's sake. After rewriting some computationally heavy part as a native addon, I often ended up gaining only ~20% more performance at best when compared to a properly optimized JS implementation, and even that was not guaranteed since V8 improved rapidly. That was not a good enough reason to keep a whole different toolchain around, so I'd end up going back to JS.

[1] https://medium.com/netscape/javascript-c-modern-ways-to-use-...


What about with wasm? I assume there's a good interop story there like there is in the browser, and you don't have to worry about building for different platforms


WASM has its moments; as you can see in this[1] benchmark, it outperforms JS and native addons on certain tasks.

Since the bottleneck with native addons is usually data copying/marshalling, and we have direct access to WebAssembly memory from the JavaScript side, using WebAssembly on this "shared" memory might become the best approach for computationally heavy tasks. I wrote about it a bit here[2].

[1] https://github.com/zandaqo/iswasmfast

[2] https://medium.com/swlh/structurae-1-0-graphs-strings-and-we...
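
The rough shape of that approach, heavily sketched (the stats.wasm module and its computeInto export are hypothetical):

    import { readFile } from 'fs/promises';

    const wasmBytes = await readFile('./stats.wasm'); // hypothetical module
    const { instance } = await WebAssembly.instantiate(wasmBytes, {});
    const { memory, computeInto } = instance.exports;

    // Suppose the module writes 1000 float64 results into its own linear
    // memory and returns the byte offset; we can view them in place,
    // with no copying across the JS/WASM boundary.
    const ptr = computeInto(1000);
    const results = new Float64Array(memory.buffer, ptr, 1000);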


Yup wasm is the way to go as you can just distribute the wasm binary over npm and don't have to worry about further compile steps on the consumer side, but it's not always possible to use wasm.


I have written a native C++ module to speed up JSON parsing and manipulation by a factor of 10x+


Sell it to Rockstar. Their native C++ JSON parsing seems to be 10x slower than JS/Node.


If you want to interface with Node from Rust there's a great library called Neon[0] that wraps the C/C++ Node addon api.

[0] https://github.com/neon-bindings/neon


What excites me most about Node upgrades are the introduction of new native Javascript capabilities, because of the underlying V8 upgrade. You can figure out what those capabilities are on this website: https://node.green/.

You have to scroll down all the way to "Node.js ES2021 Support" to start seeing features that work in Node 16 but not Node 14 (the current LTS version). Of course, it's possible to use Babel to bring those features into Node 14, but I enjoy leaving it out of my toolchain when possible.
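
Concretely, a few of the ES2021 features that run natively on Node 16 but not on Node 14 (run as ESM for the top-level await):

    // String.prototype.replaceAll
    console.log('a-b-c'.replaceAll('-', '_')); // a_b_c

    // Logical assignment operators
    const config = { retries: undefined };
    config.retries ??= 3; // assigns only because retries is null/undefined
    console.log(config.retries); // 3

    // Promise.any resolves with the first fulfilled promise
    const fastest = await Promise.any([
      new Promise((resolve) => setTimeout(() => resolve('slow'), 100)),
      Promise.resolve('fast'),
    ]);
    console.log(fastest); // fast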


> Node.js v16.0.0 will be the first release where we ship prebuilt binaries for Apple Silicon. While we’ll be providing separate tarballs for the Intel (darwin-x64) and ARM (darwin-arm64) architectures the macOS installer (.pkg) will be shipped as a ‘fat’ (multi-architecture) binary.

Apple presenting some major hardware news today. Perfect time to release v16 :)


Ooh, and there's an official darwin-arm64 binary too :) https://nodejs.org/dist/v16.0.0/


Indeed! It works great.


> This update brings the ECMAScript RegExp Match Indices, which provide the start and end indices of the captured string.

I'm curious, because I'm useless at RegEx.. But will this break current RegEx implementations?


Apparently it's an additional property "indices" on the returned array.

> ..We propose the adoption of an additional indices property on the array result (the substrings array) of the RegExpBuiltInExec abstract operation (and thus the result from RegExp.prototype.exec(), String.prototype.match, etc.).

> This property would itself be an indices array containing a pair of start and end indices for each captured substring. Any unmatched capture groups would be undefined, similar to their corresponding element in the substrings array. In addition, the indices array would itself have a groups property containing the start and end indices for each named capture group.

> NOTE: For performance reasons, indices will only be added to the result if the d flag is specified.

https://github.com/tc39/proposal-regexp-match-indices

From the given example:

  const re1 = /a+(?<Z>z)?/d;

  const s1 = "xaaaz";
  const m1 = re1.exec(s1);

  // indices are relative to start of the input string:
  m1.indices[0][0] === 1;
  m1.indices[0][1] === 5;

  s1.slice(...m1.indices[0]) === "aaaz";


I want to have control over the Node.js main loop to integrate it with GUI toolkits in C.


I've done this kind of work in node before. You don't want that.

The correct way to do this is to write a native addon that uses libuv to create your own thread (`uv_thread_create`) that does all the interaction with the GUI APIs. You then manage your own message queue to pass messages between your GUI thread and the V8 thread (using `uv_async_send` to invoke a C function that dispatches events from the queue into JS-land).


I've worked with tools that do this sort of thing and the only way to get the sort of synchronization you're asking for was to essentially peg an entire CPU core at 100% utilization.


Still hoping for official promise-kill+timeout support.


What's that? Some alternative to AbortSignal to gently prevent async code from continuing all the way to its logical conclusion?

Also please be sensitive and don't use "kill" in new code. ;) We already have to deal with "abort".


Yeah, kill is very tasteless. We're migrating our code base to use the new term "fuckingAnnihilateRuthlessly" instead.


Personally, "abort" is kinda gross to me due to calling to mind certain unsavory images, but that's clearly my problem, not Node's!


> What's that? Some alternative to AbortSignal to gently prevent async code from continuing all the way to its logical conclusion?

Yes, a way to terminate down-stream. It turns out to be a pretty messy problem. I didn't know AbortSignal was in Node now. It's been a while since I revisited this issue. I should read more.

> Also please be sensitive and don't use "kill" in new code. ;) We already have to deal with "abort".

Point taken!


This is something TC39 would be responsible for, not the Node project.


Do you mean it is a language issue and not an implementation issue?


Wouldn't Observables be a lot better for this kind of use case?

"The right tool for the right job" and all that jazz.


> Wouldn't Observables be a lot better for this kind of use case?

Observables are neither a part of the language nor a part of the Node API. I suppose that was the parent's criterion.


Correct, NodeJS native. I can't edit my post now, but thanks for clarifying.


[flagged]


Probably because black lives still matter


But I'm not a citizen of the apparently racist US, so why do you(?) try to push me to solve your core problems?

I've no interest in trying to make you less racist, especially since your BLM is focused on one specific group instead of universal values like equality of opportunity.


Well if it bothers you that much, it would take less work to Right Click -> Block Element via your adblocker than it did to write your comment.


I'm more worried about forcing the whole world to fix US core problems.

I have to change my nomenclature and tech terms (master, white/blacklist) just because people in the US are biased and kind of racist?

While it may be reasonable, how often will this happen? Every time there's some nation-wide drama in the US?


underrated comment


Because racism is still a thing


I have always been wondering how Node.js would do as a backend option. I feel it's in decline; a quick Google Trends search confirms it: https://trends.google.com/trends/explore?date=today%205-y&q=...

I invested quite some time in Node.js and eventually bailed out, and am now using other alternatives. It did not work out, as not all applications need that async logic, which made the code unnecessarily difficult.

Nowadays for me, Node.js along with npm/yarn is more of a frontend tool, which is still very useful and essential.


With serverless functions being the hype, along with things like Next.js, Node isn't going away as a popular "backend for frontend" technology.


What does "serverless functions" mean?

Is it a good idea to make hype a relevant factor in choosing a web or app architecture?


> Is it a good idea to make hype a relevant factor in choosing web or app's architecture?

Yes! Hype = active community and support, interested developers, possibly novel solutions to problems, etc...


It can just as easily mean that nobody knows the solution to its novel problems yet. It potentially means nobody wants to touch it in a few years once the largest problems have revealed themselves en masse.

If something has hype it's worth investigating, absolutely. But it should also be treated with a great deal of scepticism; there's often a lot of money in it for businesses and individuals moving you onto the latest shiny tech.


It's probably best to look at the trends of other platforms/languages, alongside of Node.js. Many languages showed a similar, slight decline, which might indicate something else is going on: https://trends.google.com/trends/explore?date=today%205-y&q=...


I think node is great for stateless micro services. I know many people here have a vendetta against micro service architecture, but it has worked wonderfully for me in my career. Development is fast and I can keep cloud bills down with autoscaling.


Are you meaning to refer to JavaScript rather than Node.js? I don't know of a way to use the Node.js runtime inside of a browser (the frontend), my feeling is that's pretty redundant.


I took the comment to mean it's only useful for something like npm/webpack (dependency management and hot loading), which I strongly disagree with.


I think they mean using Node.js tools like yarn/NPM for compilers, linters, etc, as opposed to using it for a server.


Yes it is; it's still essential for frontend development. Actually, it's pretty much a must-have.



