Request Node lib used by 48k modules is now deprecated (github.com/request)
150 points by lootsauce on Feb 18, 2020 | 126 comments



Previously on HN (72 comments): https://news.ycombinator.com/item?id=19604148


> request will stop accepting new features.

> request will stop considering breaking changes.

> The committers that are still active will try to merge fixes in a timely fashion

Sounds to me like the library is "done".


Hardly a hot take, but a lot of people seem to think that if code isn't actively being worked on, you can't use it, as if there can never be a point where fewer and fewer bugs are reported and fewer and fewer features are requested.

when's the last time Knuth wrote a check for TAOCP? Is it "dead"?


> when's the last time Knuth wrote a check for TAOCP?

Pretty recently. I just got one in the mail today.


Heh, that's kind of a bad example, because Knuth is still actively writing TAOCP, and presumably does still send out his (now symbolic) reward checks. Volume 5 expected by 2025! https://www-cs-faculty.stanford.edu/~knuth/taocp.html


He is still working on it and the latest check I know about was from a few months ago: https://nickdrozd.github.io/2019/05/17/knuth-check.html but I still agree with your first sentence.


> when's the last time Knuth wrote a check for TAOCP?

Got one in the mail yesterday (check was written Feb 10). In fact you can figure out how many checks were written in the last month, roughly: compare https://web.archive.org/web/20200110074014/https://cs.stanfo... with https://web.archive.org/web/20200219035903/https://cs.stanfo... I imagine most of these checks (definitely all $7 of mine) were from finding bugs in pre-fascicles, i.e. bleeding-edge drafts he's been putting up online. (See near the bottom of https://cs.stanford.edu/~knuth/news.html)

On the other hand, while you mention Knuth and "done", TeX and METAFONT are better examples. He declared them "done" in 1990 (https://tug.org/TUGboat/Articles/tb11-4/tb30knut.pdf), but still does bug fixes:

> I still take full responsibility for the master sources of TeX, METAFONT, and Computer Modern. Therefore I periodically take a few days off from my current projects and look at all of the accumulated bug reports. This happened most recently in 1992, 1993, 1995, 1998, 2002, 2007, and 2013; following this pattern, I intend to check on purported bugs again in the years 2020, 2028, 2037, etc. The intervals between such maintenance periods are increasing, because the systems have been converging to an error-free state.

(The latest round, in 2013, did surface one bug: the debug string representation of an "empty" control sequence was missing a space.)


Nice to see multiple of us are on here.

> I imagine most of these checks (definitely all $7 of mine)

The 0x$2.00 check I got, dated 10 Feb 2020, was for one typographical item in Volume 4A and one pedagogical improvement in Volume 1, Fascicle 1. So it is certainly possible to still get checks for material that is already published.

Since it is now 2020, anyone with bug reports from the typography stuff should send them in soon, wouldn't want to miss the deadline.


That's interesting, can you go into more detail on those two -- what was the pedagogical improvement, and the typographical item? Just curious...


Sure.

- On page 715 of Volume 4A, he had something like \`a when he meant to have just à.

- In Volume 1, Fascicle 1, there is a convention that the "main" entry point of an MMIX program begins at LOC #100. The convention is established early on and repeated throughout the text. However, at no point is it explained why LOC #100 was chosen (instead of LOC #0, LOC #80, or whatever). It could be gleaned through careful study -- LOC #0-#80 are reserved for trip/trap handling and one more location before #100 is reserved for a special second entry point -- but you basically had to read the entire fascicle to find /all/ of these. A naive user would be likely to try writing a program beginning at LOC #0 and wonder why it didn't seem to behave correctly. My suggestion was to just add a note explaining why LOC #100 was used. He agreed and you can find the added note in the latest errata for Volume 1, Fascicle 1.


Oh thank you. I looked up page 715 of my copy of Volume 4A and can see the \`a. :-) I also see both corrections you mentioned in the online errata, that's cool. :-)


He no longer writes checks to people for bug-hunting and hasn't done so for years due to concerns about the proliferation of his banking information. He mails out certificates.


Why do checks have your banking secrets on them?


...so that they can be used?

Those numbers across the bottom are your bank's routing number and your account number. I'm honestly surprised I don't hear more about e-check theft done using those numbers.


> Those numbers across the bottom are your bank's routing number and your account number.

Those aren't secrets, are they? Isn't your bank account protected by separate secrets and or physical authentication tokens? You can't just take money out at a bank by giving a bank account number, surely?


> You can't just take money out at a bank by giving a bank account number, surely?

That's exactly what a check is. It's a legal document that says "I give you X dollars from account Y." You can use a napkin instead of the pre-filled sheets of paper your bank sends you, and it's still the same legal document.

Typically your signature is checked against one on file, but only for large transactions, and of course handwriting signatures can be forged. And, if you are writing checks against other people's bank account, you'll probably go to prison. That is the check on the system.

(It's no different than signing a contract. I'll do this work and you'll pay me when it's done. What happens if they don't pay you? You sue them. The root of trust is the judicial system.)


Right, but the bank is still entitled to check with me before handing over the cash. They have no way of knowing that I signed the contract. Account numbers and squinting at a signature isn't any kind of proof!


> the bank is still entitled to check with me before handing over the cash.

They're allowed to, but have no (legal or socially expected) requirement to do so. I've never heard of a bank doing so for small (<$10k or so) amounts, and then only if fraud alerts are already present.


Is it possible to have an account in the US which forbids anyone paying from it using cheques?


Those numbers are not secrets. They're literally just the bank's routing number and your account number. Using those numbers anyone can withdraw/deposit into that account. Madness isn't it?


My bank authenticates with me before honouring a check - is this not common?


I'm from the US and I've never heard of a bank verifying permission before releasing funds; they simply release them. As far as I know, US banks no longer check the signature (if they ever did) and no longer validate the date on the check (at least at my bank you can no longer post-date a check).

We recently lost a book of checks and the bank totally wigged out. They demanded that we close the account immediately and it took a decent amount of back and forth to talk them into waiting a week (we wanted existing checks to clear). To my mind, this implied that they did no validation on checks.


I haven't actually used a check in a looong time so I don't know exactly what you mean.

I do know that I keep the numbers off a check from my checkbook I received when originally opening my bank account like 10 years ago in Lastpass. When sites that don't accept credit need payment information (my student loans mainly) I just copy/paste the numbers into their payment form and the money gets taken out of my account. No verification whatsoever, Nelnet is able to just withdraw the money from my account using those numbers.

I assume anybody with a debit processing backend or service can do the same if they have the routing/account #. It's kind of a wonder peoples money doesn't just disappear all the time really.


If someone knows your account number, they can take money from it, there is no separate secret or anything. It would be illegal, but they can do it.


> If someone knows your account number, they can take money from it

You can seriously just say a bank account number at a bank in the US and walk away with a bag of cash from it, with no other security checks at all?

In the UK they’d make you enter a password or PIN or use 2FA.


No, you would have to do some authentication at a physical bank itself.

You can use the numbers to request a wire transfer from the account. In fact, when you're setting up an ACH transaction for bills and such, they tell you to get the numbers from a check.


what??

You can deposit without a PIN, but to withdraw you need photo ID and a verbal password or PIN in every bank I've ever been in.


Are you in the USA? Our banks don't exactly do "security" at all.


Check forging was a big problem years ago when checks were widely used; these days checks are almost dead. (See the film 'Catch Me If You Can' if you haven't already!)


In the UK if I write a check my bank checks with me before honouring it. They don’t just let you wave a piece of paper with numbers on it and empty the account lol.


In the UK fraud barely gets noticed by police. In USA there's serious jail time for check fraud


> In USA there's serious jail time for check fraud

There's no serious jail time for it in the UK because the system doesn't let you do it in the first place!


The superior system, then.


Direct debits are surprisingly easy to set up in the UK.


The last fascicle of TAOCP was released last November.


Yea, makes me think of how every news blog reported mp3 as "dead" when the patent expired, when they should have been saying mp3 is now license free.


Also, "mp3 is license free" rhymes :)


License-free mp3 is good to see. But what we really need, in my view, is for AAC to become license free.


People don't wanna pay for CDs

Now every other house holds got PCs

They download MP3s,

People please be reasonable (yeah)

How am I gonna make my g's,

If you got my album before the release,

The quality's rubbish and there ain't no sleeves,


I don't think anyone really uses mp3 for audio compression anymore, though. There's FLAC, mp4 (AAC), and free Vorbis (which claims to be competitive with both); and DVDs and whatnot are still using AC3, EAC3, and various other Dolby Digital audio formats. Given that landscape, I don't know why you'd pick mp3.


Well, most pirated music albums and songs are still mp3s. Even on Bandcamp I chose mp3s for iTunes; maybe it's confirmation bias, but I don't believe mp3 went out of style.

Do you have a source for this? Is this audio compression for music production people, or in general? I just can't see mp3 being out.


I keep a library of music, ripped from cds I own. My master copy is FLAC, but I use mp3 for on-the-go copies because it is not patent encumbered and it is supported everywhere.

If you have a suggestion for an encoding+container that will work out of the box both natively and in every major browser, across every major OS (Windows, macOS, Android, iOS and GNU/Linux, specifically distros like Fedora or Trisquel that don't ship nonfree codecs in their default repos), I will gladly switch to it.


Does Ogg Vorbis play on iOS? In my experience it has pretty good support across most other platforms. You do have to install Web Media Extensions on Windows, but that's little different from having to install libvorbis on Linux.


No, it doesn’t... on a stock build. Get the VLC app though, and then you can play (through the app, of course).


FLAC meets a different requirement. AAC isn't license free as far as I know. Many devices won't play Vorbis. So I encode to mp3 for now.


MP3 is good enough for most casual uses, and not being patent encumbered makes it easier to use. I honestly don't know why I should bother using anything else for personal audio consumption.


I read "done" wrong the first time. I read it as "stick a fork in it, it's done," as in, the library is done for. Not long for this world.

It seems like you meant it more in the sense of this library is "complete." Just in case anyone was confused.


Done as in complete, yes.


I wouldn’t mind if most of the software I used was deprecated like this.


Reading the issue laying out the reason for taking this step, it all seems quite reasonable and mature: https://github.com/request/request/issues/3142

TL;DR: at over a decade old it's a primordial Node lib which has become increasingly difficult to evolve as the platform introduces new features, so rather than break every user with a massively disruptive new version that embraces modern features it's going to attempt to quietly sunset to free up the oxygen in the room so that new libs can grow.


I'd agree, if it weren't for it having to remain compatible with future Node releases. If it were (and I don't know if it is) part of the Node.js test suite, then this seems like the ideal way to end active development on a library.


The annoying thing is, since it is marked as deprecated, everyone installing a package that relies on it will get a deprecation notice. I already had some users opening issues on some of my projects for this.


Deprecating is a weird decision as there is nothing wrong with the "request" module. It doesn't lead to bad code, bugs or security risks.

It would be better to say this is the last version, except for security upgrades. These upgrades can be done by other maintainers that are assigned to the project.


There doesn't have to be something wrong with a solution to deprecate it, there just has to be a better alternative.


"Deprecated" to me means the author says you shouldn't use it, or with more nuance, that "you shouldn't start using it now, and you should switch out of it ASAP if you already are a user, because it will very possibly stop working within the lifetime of you own project".

"A better alternative" is not enough of a reason to not use another alternative, because "better" is not an absolute value and choices tend to have many axis to balance (for example, prior experience).


Well, looks like your only difference is on that ASAP part.

Deprecated functions within a module will surely stop working in the near future, but deprecated modules probably won't. It's not great that we see the same warning for both, and we should not treat both the same.


If the alternatives are better, then devs will choose those solutions and "request" will fade over time. It doesn't have Promise support, so new projects will likely not choose it.

request is a very simple and straightforward library that solves a very common problem.

Deprecating it causes a lot of work for every project out there, whether it uses request directly or indirectly.

I have a lot of respect for the maintainer, and he can choose to do whatever he wants with this module. That's his privilege.

However, the fact that he got "bored" with this solution, doesn't mean he needs to deprecate it. He can just stop working on it, and give ownership to someone else.

If it had a security issue and he is not going to ever update it, then that would deserve the deprecation so people should be warned.


> However, the fact that he got "bored" with this solution

That's a horrible mischaracterisation. There's a clear ecosystem-wide shift to async/await instead of nested callbacks, putting a project that depends heavily on callbacks into maintenance mode rather than releasing a new, entirely incompatible version that uses async/await is completely reasonable.

Also worth pointing out is that the same person who posted the maintenance mode announcement also released a request library that does use async/await: https://github.com/mikeal/bent


>It doesn't have Promise support

That's a feature. https://medium.com/@b.essiambre/continuation-passing-style-p...


While this isn't really the place to discuss this, that your most complex example is a simple waterfall of three nested async functions doesn't do much to sell the superiority. Start mixing in conditional async calls with downstream async branching. And look at the Promise-based solutions to async.js (https://caolan.github.io/async/v3/), a library nobody misses. Still not sure if your blog post is a joke or not though, frankly. If it's a serious post, then it seems troll-y to inject such a fringe opinion any time someone casually mentions promises being good.



Why is an alternative needed? To me it sounds like the library is “done” and has all the features it needs. I wish more code was in that stable state.


fwiw they actually did mark their package as deprecated on NPM which means that all installations will see a warning that urges the user to consider other software.

It's kind of like if you were to maintain your own promise implementation, and then `Promise` is added to the stdlib. You wouldn't say that your project is "done"; you'd want to encourage your users to use the native Promise.

Request was created back when all we had was the stdlib `http` module.


> Request was created back when all we had was the stdlib `http` module.

What stdlib module supersedes `http`? Browsers have `fetch` and there's `node-fetch` and `axios` modules, but it's sad that core node hasn't updated the core http module.


Node has an http2 module as well, but it's still pretty low level. I don't think there's anything wrong with the stdlib module being lower level than the third-party ones.


Makes sense.


The maintainers claim that the community would be better off without the package. Someone has to stick around to do security fixes for it, and that's time that could be spent making more innovative contributions to the Node.js package ecosystem.


Wouldn't it make more sense to ask the community what they want to do, without taking time from the original maintainers?


It is slightly archaic in that it mostly uses a callback convention, when most of the node community has moved on to promises.

And there are more modern alternatives like axios that do have a promise interface.


node-fetch above all, it's standard, and works in the browser https://github.com/node-fetch/node-fetch/blob/master/package...


This, I use it for all Node.js backend requests. Personally I think we need to move to isomorphic solutions and work to unify backend and frontend JS development as much as we can. This one's a no-brainer


I don't get why it doesn't support cookies but instead tells you to parse the Set-Cookie header manually and implement it yourself.


there's https://www.npmjs.com/package/fetch-cookie for node

  // fetch with cookies:
  const fetch = require('fetch-cookie/node-fetch')(require('node-fetch'));


When did maintenance mode become deprecated? Why isn't that hyperbole?


They have marked the package as deprecated on NPM. https://docs.npmjs.com/cli/deprecate

See the banner on this page: https://www.npmjs.com/package/request

They are urging people to consider the enumerated alternatives.


> No new changes are expected to land. In fact, none have landed for some time.

That's not what "deprecated" means. That's "maintenance mode".

Deprecated means no new clients should use it, and existing clients should migrate to library X or Y, because it's going to be deleted by 202X.


They don't want to maintain this software forever and are definitely urging people to use other software. They even admit that they will merge fix PRs with a disclaimer of "no promises".

I mean, the developer themself has said it's deprecated and has marked it as such formally on NPM, with a link to a post that urges people to use alternatives, so I'm unsure what you're arguing or what it achieves. It seems to satisfy your own definition; they just don't have an exact date for when their maintenance charity will run out.


>They have marked the package as deprecated on NPM.

Fair enough, the OP link didn't actually mention deprecated anywhere.


They don't expect maintenance mode to last forever and they don't want new developers to use it.


For people looking for an alternative, got [1] is well written, maintained, and as a bonus has a nice comparison chart to other libs. I started w/ node-fetch but quickly moved to this. Its author is also prolific in the Node community.

[1]: https://github.com/sindresorhus/got#comparison



Specifically:

The first version of request was one of the first modules ever created for the Node.js ecosystem. [..] The patterns at the core of request are out of date. [..] A version of request written to truly embrace these new language patterns is, effectively, a new module.


And IMO that's a really good way of solving this issue!

Rather than try to push a paradigm shift under a major version change in a library, and rather than just hand the library over to someone who may or may not be vetted enough, they are gracefully shutting it down while still sticking around to fix any security issues that may crop up.


This is what semantic versioning is for, though. Breaking changes to the API that still serve the same function in an app would not unreasonably be a new major version of the same package. No one is obligated to upgrade to the latest major version.

A complete rewrite would be a Ship of Theseus situation. It's all new code, and the API may be different, but it still exists to serve the same purpose in an app. Why pretend it's a new module when it's really just a retooled version of the same one?

EDIT: Ok, I misunderstood what was going on with the request maintainers; thought they were planning to start up a new package as a rewrite of request. That practice specifically is what I was arguing against here -- it's not good practice to start a new node module with a new name just because you have breaking changes to your API. I do not in any way mean to suggest that someone else should jump in and turn request into something new.


The author of that issue linked above says it better than I ever could:

> A version of request written to truly embrace these new language patterns is, effectively, a new module. I’ve explored this space a bit already and have a project I’m quite happy with but it is incompatible with request in every conceivable way. What’s the value in a version of request that is incompatible with the old patterns yet not fully embracing the new ones? What’s the point in being partially compatible when there’s a whole world of new modules, written by new developers, that are re-thinking these problems with these patterns in mind?

> The best thing for these new modules is for request to slowly fade away, eventually becoming just another memory of that legacy stack. Taking the position request has now and leveraging it for a bigger share of the next generation of developers would be a disservice to those developers as it would drive them away from better modules that don’t have the burden of request’s history.

Now view that in contrast to react-router, which has gone through 3 or 4 really major rewrites, and had their API change those few times in completely incompatible ways, even shifting paradigms multiple times over the course of the library until now.

They are still committed to maintaining most of the past major branches, and the docs are still around for those versions, but each one is so different that it adds a lot of confusion. And I don't mean to say that react-router is bad; the reasons for the changes are damn good ones, and I actually like the library a lot. But the confusion around it is real, because the 2.x branch vs the 3.x branch vs the current branch are all so different that they should really be different libraries in my opinion.

If I had to choose between those two things, I'd choose what request is doing, both as a library author and a library user.


I misunderstood -- I thought the authors of request were going to keep working on a rewritten version, but were going to start a brand new node module for it.

Strictly speaking, react-router is doing it correctly. It's the same library, just different versions. The normative way in the npm community to handle breaking API changes is to increase the major version, not to completely rename the package.


Yeah, but think of it from a (Linux OS) maintainer's perspective: if you don't have a supported way to install parallel versions of the same package, you're stuck creating it under a new name (e.g., request999) because packages won't update.

This is the solution in, e.g., Fedora where "junit" means junit4 and "junit5" is a separate package. Over time, junit might become deprecated and be removed from Fedora when all deps have moved on and junit5 takes over the junit package name, perhaps a junit6 will emerge. Or junit4 gets recreated for the last two legacy apps using junit to live on.

Who knows! But from a maintainer perspective, better to have a separate name than a new breaking version... :-)


Who said anything about Linux packages? This is NPM.

I'm a library maintainer, and it doesn't make any sense to me to create totally different packages with different names just to handle a breaking change. This is what semantic versioning exists for.


I am now looking for another library that is as expressive with http streaming. I have enjoyed using request for complex things like piping file downloads while rewriting headers and adding cookies, without buffering the entire response in memory.


“got” has support for streaming. We have a migration guide for “request” users: https://github.com/sindresorhus/got/blob/master/documentatio...


I have a question:

Is it the responsibility of the package manager to keep users safe? By that, I mean, if there was a security vulnerability that the maintainers refused to fix, what would the process be?

Should NPM refuse to install packages marked as deprecated, perhaps after a certain age of deprecation (say, 6 months)?

Compare this to the browser, where I believe the expectation is that Firefox, Chrome, Safari et al. keep users safe by updating automatically if an issue is discovered.


I don't see why NPM should be so heavy-handed as to prevent installation of deprecated software. It already shows you a warning.

I think it's a good example of when systems over-act on information they do know (a developer has marked their software as deprecated) in ridiculous contrast to all the information they don't know (99% of developers not even bothering to deprecate their package when they abandon it).


It is the responsibility of the developer who added the dependency, or the person who reviewed the PR that added the dependency.

If neither of those are around it is the responsibility of the person who took over either of those responsibilities or the person who now maintains that package.

If that person isn't around then it is unmaintained, and should not be used.

If that sounds complicated it is because it is. In any given package you might have 1-100ish people with responsibilities, but they also have sub-responsibles that they might not know about.

The package management system has no responsibility except to serve the exact version of the exact package you requested. If you expect anything more from them you are not looking for "package management" and npm is probably not the right place to look.

This is one of the reasons I try to not have transitive dependencies in JS projects.


Are they being paid to do so? If not, then you've got to look the gift horse in the mouth yourself, and not expect the giver to do it for you.


Can anyone recommend a library with the same API as request?

For those of us who would find it a hardship to rewrite the node-request portions of large stable services, but who also can't/shouldn't have deprecated libraries?


This is going to sound glib, but you can just keep using what you have now.

If you take a look at [1], the maintainers of request aren't "deprecating" it in the sense that they won't touch it ever again. It really just means that there won't be any more active development on it, no new features or breaking changes will be accepted, and some housekeeping had to be done to allow it to accept security fixes without relying on one specific person to be around.

The code will still work (and probably for quite a while), and security issues will still be addressed, it just won't be changing much if at all from now on.

[1] https://github.com/request/request/issues/3142


I totally agree with this approach, however it seems like a good idea to use alternatives for future work.


Agreed, but if you are looking for alternatives with an identical API, then it doesn't seem like a good idea anymore (at least to me).

The main theme behind the deprecation of Request is that the API is old and doesn't really fit with the rest of the node ecosystem. Any other API-compatible library is going to have those same issues. So if you are going to start something new, but want to keep an identical interface, then you might as well just use Request.


Not identical, but I am considering got because of its nice docs and solid streaming support: https://github.com/sindresorhus/got#comparison


48k? Isn't github showing 4.4m?

Everyone I see seems to use axios these days.


> Everyone I see seems to use axios these days

Axios was somewhat abandoned a year or so ago.

A new collaborator was added in Dec 2019 which apparently is picking it up.

https://github.com/axios/axios/issues/1965

Who knows for how long though...


What would be the alternative to Axios?


On the browser I generally use fetch(). It's a bit weird at first but it's easy to write a thin wrapper if the default behavior doesn't make sense. Eg: throwing when you receive a 400 or 500.

On Node I've been using this lately: https://github.com/node-fetch/node-fetch
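The wrapper really can be thin. A sketch (`fetchImpl` is injected so the same code works with `window.fetch` or node-fetch; the URL and stub are made up for the demo):

```javascript
// fetch only rejects on network failure, so a thin wrapper can
// also throw on 4xx/5xx statuses.
function makeFetchOk(fetchImpl) {
  return async (url, options) => {
    const res = await fetchImpl(url, options);
    if (!res.ok) throw new Error(`HTTP ${res.status} for ${url}`);
    return res;
  };
}

// Demo against a stubbed fetch so no network is needed.
const stub = async () => ({ ok: false, status: 404 });
makeFetchOk(stub)('http://example.test/missing')
  .catch((err) => console.log(err.message)); // prints "HTTP 404 for http://example.test/missing"
```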


Github shows which repositories use request. 48k is the number of modules on npm.


axios FTW!


Real title: "Alternative libraries to request". The "deprecating" part is simply the submitter's invention, in order to stir up drama.


The maintainer put it in "maintenance-only" a year back, and officially deprecated it a week ago: https://github.com/request/request/pull/3267


Okay, so not totally invented, but it's still not the original title (which is perfectly descriptive); it's clickbait.


Not stirring anything up; it's the first headline on the readme. https://github.com/request/request#deprecated


I checked and one of our servers is using this; the real challenge will be seeing if my old dev environment still works because we haven't updated this service in years.


This little maneuver is gonna cost us 51 years.


I don't see the point of downvoting if you don't get it.


We get it, it just adds nothing to the conversation.


Was hoping to make someone laugh...


I think we all have "NPM/JS bad" fatigue.


Ok, I have been known to have strong opinions on HN, and each one is open to being changed and is rooted in extensive personal experience.

I have said that comments are a code smell.

I have written extensively in favor of decentralization and even formed two companies to promote it in increasingly sophisticated ways (qbix.com and intercoin.org)

So I’m gonna say something that may get me downvoted...

Package Managers are almost as bad as closed source software, which is almost as bad as closed software in “the cloud” (the fake, centralized one) which you don’t host.

Society today still has to rely on feudal lords for many things, just as we used to rely on the post office, printing presses and telephone switchboard operators.

If you are going to use a package manager, you should be at the very least pinning all your versions and personally vetting any changes that are pulled from upstream. You can outsource this security check to some third parties (at least GitHub has those alerts when vulnerabilities are found) but we need the security audit people in the loop, signing releases. Not just pull from upstream, MUCH LESS pulling thousands of new commits across hundreds of packages!
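Concretely, pinning means exact versions in package.json plus a committed lockfile (the versions below are just illustrative):

```json
{
  "dependencies": {
    "request": "2.88.2",
    "left-pad": "1.3.0"
  }
}
```

Exact versions, not ranges like "^2.88.2" that let `npm install` silently pull new upstream code between audits.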

We have seen this introduce security bugs all over the place, in the past. Deprecation is not as bad as that, but you gotta vet what goes into your code.

One of many examples, how can we address this?

https://www.theregister.co.uk/AMP/2016/03/23/npm_left_pad_ch...

I speak a bit flippantly, but when you’re building a PLATFORM or FRAMEWORK for apps, this matters. A lot. Linus’ rant about diffs and patches being far better than svn is a version of what I’m talking about. At least svn is inside an organization. This is out on the internet!

The bazaar may be better than the cathedral, but not for security, and the more power your framework has, the more responsibility you have not to skip security checks and just pull code. It’s worse than executing a shell script downloaded from the ‘net, because you’re essentially shipping this shell script downstream to everyone who uses your code!!


Sure, taking on dependencies introduces pros and cons. But isn't this just the classic HN trope where you fly off the handle on a loosely related rant just because TFA has some sort of triggering buzzword like "NPM"?

Not sure how your rant is related. In fact, the deprecation of such a popular library is itself a lesson for unaware developers about one downside of depending on large libraries.

Someone upset by this deprecation that wasn't yet aware might, in the future, seek out a more timeless solution for making http calls like the Node implementation of the browser's native window.fetch(). All without having to read a rant.
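For anyone making that switch, a minimal sketch of the same GET-and-parse flow (assuming a fetch global is available, i.e. Node 18+ or a polyfill such as node-fetch in older versions) might look like:

```javascript
// Minimal sketch: a request()-style "fetch a URL and parse JSON" helper
// built on the fetch global, with no extra dependency on newer Node.
async function getJson(url) {
  const res = await fetch(url);
  // fetch does not reject on HTTP errors, so check the status ourselves.
  if (!res.ok) {
    throw new Error(`HTTP ${res.status} for ${url}`);
  }
  return res.json();
}

// Usage, e.g.:
// getJson('https://api.github.com/repos/request/request')
//   .then((repo) => console.log(repo.full_name));
```

Unlike request, fetch won't reject the promise on a 404 or 500, hence the explicit `res.ok` check.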


The rant might be off topic, but everything he says is reasonable. In a "this is why we can't have nice things" way.


I agree with a lot of the ideas behind what you say, but what is your solution?

That we should use fewer dependencies? Sure.

That we should review changes in our dependencies? Sure.

But if I were to take a nodejs app and move it from ubuntu version X to ubuntu version Y, node version Z to node version A, and update npm dependencies, the code review itself would take me a year.

Is your point that we should only do this to npm dependencies?

Is this a wider problem with how we think about software development as never being finished instead of solving a problem and moving on to the next? Yes.


My solution is to have a specific job: independent people who sign off on all changes, whom you can reasonably trust before you pull, as opposed to just the developer. And everyone should have a reputation.


It was about time. I did some benchmarks a few years ago and the results were disgusting compared to the native lib. Plus, why so many dependencies for a simple request library? I will never understand why it gained so much popularity; it was worthless since day one.


There is no reason to be disrespectful.

As a user of Request for the past couple of years, it has made my life absolutely simple. I never had to worry about making HTTP requests and catching all those errors, because it did this and a lot more.

I've worked on several projects, and not all of them ever sat down and worried about benchmarks at very high scale or the number of dependencies it pulled in.

It's not a criterion for all the projects in the world.

I have utmost respect for the maintainers and contributors of Request and so should you.


Seems like a disrespectful way of describing someone else's work, regardless of shortcomings.


You know how many packages out there are better than request? Now compare all the work those great developers did. They gave their souls for what? 50 stars on GitHub? C'mon, let's face the real problem here, which is the balance between good code and popular code. I'm speaking on behalf of every great developer out there who spends hundreds of hours and whose project fades into obscurity. I have seen hardcore projects that every tech company could envy get like 50 dollars in donations, and projects that were made in one day get thousands.


That's just the power law. Like it or not, I don't think there is a workaround.


Like judging a book by its cover or its number of sales. No one said programmers are geniuses; they're just masters of some tools that forge the digital era, nothing more, nothing less.



