Why I Don't Use React Router (jkk.github.io)
122 points by jkkramer on Sept 16, 2016 | 75 comments



Justin, I think you are conflating a couple of issues here:

  1) OSS critiques should be more polite

  2) People are stupid for not doing due diligence
First, I agree criticism should be constructive. Yet there is nothing wrong with 100 people saying here that they don't like React Router; in fact, such feedback is crucial for the authors and the community. Maybe you agree, but ironically, the tone of your post seemed to have a bit of the scorn and derision that you are asking people to avoid.

Secondly, it's already obvious that people are responsible for their own projects. The motivation for stating the obvious seems to be to emphasize that people should blame themselves before criticizing others. Wrong. These things are orthogonal: regardless of one's due diligence, it's perfectly acceptable, indeed beneficial, to critique OSS when done in a productive way.


Yeah maybe. To me it feels like many in the community complain and shirk responsibility, and some stern words were needed to counterbalance that position. If the community wants stability in its projects, it needs to learn what it takes to achieve that.


Further thoughts: choosing an unstable library and then criticizing it for being unstable seems silly. There are two sides: library authors need to value things like careful design, real world testing, and backwards compatibility. Library consumers need to advocate for the same things, plus learn how to identify risk (semver isn't going to save you), and take ownership of their choices.


> Further thoughts: choosing an unstable library and then criticizing it for being unstable seems silly.

If so many people (apparently) missed the "expect this to be unstable" bit, I wonder if it's just a question of it not being signaled effectively enough on Router's home page. Which I can understand, since no developer actually expects, say, their v3 to be completely revamped into a v4. If they knew ahead of time, they'd presumably just have chosen the v4 design.

I guess it's kind of a catch-22. Maybe the right thing here is to explicitly say "we currently fully expect this to be stable for the foreseeable future, but cannot predict the future, and are prepared to break everything if a better design is discovered"?

EDIT: I suppose another way to alleviate the problems would be to pledge support for the previous version for a period of time... but no developer working in their spare time really wants to do that. (For very understandable reasons.)


If you're looking to jump ship and your projects use Redux, you might find https://github.com/FormidableLabs/redux-little-router to be a nice alternative. RRv4 still hoards URL state within a component, while Little Router just puts it in the store. This makes deriving most of your app from URL state a reality.

Check out https://formidable.com/blog/2016/07/11/let-the-url-do-the-ta... for more on how we differentiate from the RR philosophy.


There's also Navigo, a minimal router - https://github.com/krasimir/navigo


It's great to see someone express this point. Though it's hard to beat the convenience of dropping someone else's library into your codebase, each new dependency adds more security surface area and bloat to your application. I wish people considered this balance more carefully.

In general I think a little NIH is a good thing. Even if there exists a library that does what you want, it might also include much more that you don't need, and perhaps the kernel of what you want fits into a small function you can write and vet yourself.
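To make that concrete, here's a rough sketch of the kind of kernel I mean: a hash-based router in plain JS. The names are mine and it ignores plenty of edge cases, so treat it as a starting point rather than a drop-in replacement.

  // rough sketch of a hash-based router kernel (hypothetical names, edge cases ignored)
  var routes = {};

  function route(path, handler) {
    routes[path] = handler;
  }

  function dispatch() {
    var path = window.location.hash.slice(1) || "/";
    if (routes[path]) routes[path]();
  }

  window.addEventListener("hashchange", dispatch);
  window.addEventListener("load", dispatch);

  // usage
  route("/", function () { console.log("home"); });
  route("/about", function () { console.log("about"); });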


One idea behind open source is that your own implementation is very likely buggier, slower, and more poorly specified than the existing state-of-the-art open source implementation.

https://en.wikipedia.org/wiki/Wisdom_of_the_crowd https://en.wikipedia.org/wiki/Dunning%E2%80%93Kruger_effect


Even if your code is better than the OSS alternative now, are you going to be able to maintain it at that level, given all of your other responsibilities?

I'm dealing with a bunch of people now who did something like that. At the time they made these decisions they might have had good reasons, but now they're doing other stuff and the custom things they wrote are a huge liability.

Don't write it unless you intend to own it.


I think a good heuristic is: is it your business?

I'd try really hard to avoid implementing address verification. Unless I worked at UPS or FedEx.

I'd try really hard to avoid implementing spellcheck. Unless I worked on Word.

If it's core to survival, yeah, you should probably roll your own. If you can't beat the OSS state of the art, well, you've got a problem.

The other heuristic I think is generally good: go for a couple of big dependencies over a whole bunch of small dependencies. Coping with the interaction between 3 huge libraries is so much easier than between 100 tiny libraries.


Sure, I use this one too. If it's not your core competency, buy it from someone else.


At the same time, depending on external contributions has risks.

My rule is don't introduce code unless you intend to own it.

If it is big, hard to follow, and has crazy dependencies, maybe you don't want to own it.


"My rule is don't introduce code unless you intend to own it."

Or more accurately, you do own it, whether you intend to or not. All its bugs are your problem now. Each upgrade to the next version, with all its new bugs and features, is your problem now. Any bugs that other people haven't noticed yet are solely your problem now. Any extra code that your use case didn't really need, but that opens up more surface area for hackers, is your problem now.


> Even if your code is better than the OSS alternative now, are you going to be able to maintain it at that level, given all of your other responsibilities?

If someone was able to produce better code in some way before, despite any OSS alternatives available at the time, why would anyone assume they could not also maintain and develop that code more effectively than the same OSS community and projects in the future? That makes no sense.

A lot of developers seem to make dubious assumptions today about the quality of something they could import compared to the quality of what they could build in-house. There’s little reason or evidence to support a lot of those assumptions, but people continue to make them regardless, because hype and inexperience are things.

If you’re talking about a huge OSS project with many contributors, quite a few experts involved, well-established infrastructure and funding, and so on, then sure, it’s a tough thing to beat. I couldn’t set up and maintain a new operating system with the same capabilities as something like Debian or FreeBSD, and neither could any other small team.

But most OSS isn’t like that. Lots of OSS projects have only a single main developer, or maybe a small team of contributors, and those people may or may not be as skilled, experienced, dedicated or simply available as your in-house team. Lots of OSS projects effectively get abandoned, sometimes even well-known ones with lots of contributors, major commercial backing and a large user base. Lots of OSS projects are highly unstable, and if you depend on one that needs constant updates to keep it working, that’s an overhead of its own. There is precious little evidence that OSS quality is better in general than something a good team could have built in-house.

And even if none of those things were true, your own in-house development would still be focussed on your specific needs and priorities, instead of trying to be a generic tool with potentially a lot of functionality (and risk) you don't need, and potentially being steered in a future development direction that doesn't meet your needs as effectively.

> Don't write it unless you intend to own it.

The flip side of that is that if you do want to own it, writing it in-house may well be a better option.

Obviously there has to be a balance, and reinventing the wheel (or a sports car) for every project isn’t necessarily a good use of time and resources. Bringing in a good external tool or library that solves a problem for your project effectively can be a huge win, particularly if it’s in an area that isn’t a core part of your own project.

However, I believe the current culture in some parts of our industry is crazily biased towards bringing in external libraries to do every little thing. That is a very dangerous trend that we must challenge, because we’re writing an awful lot of awful software as a result.


True, but JavaScript libraries tend to be pretty small and deal with well-known concepts. Many experienced devs have seen a router ten times over by the time they come to React Router, and the entirety of the source code can be read in a single day.

If you both (a) have experience and (b) have read and understood the entirety of a library, then you are in the best position to claim that you can do it better in house.

The prevalence of libraries doing the same thing in JavaScript shows that a lot of people have different ideas and, due to the culture, decide to open-source them instead of keeping them in-house.

Other communities do the reverse. There are probably a million homebaked Java frameworks that will never see the light of day, because people in Java land don't think MVC is so extraordinary that they need to release their specialized in-house framework.


One of the weaknesses of JavaScript is its absolutely terrible standard library. In comparison to languages with solid, comprehensive, "batteries-included" standard libraries, you spend a lot of time re-implementing basic functionality. Or you import jQuery, or Lodash, or Underscore, or use pieces of Ember or Angular, or React, etc., etc. Or some bastard abomination of all of those.


I'd argue that's a strength. Note how Scala is modularizing its standard lib in Dotty, and TypeScript is doing the same for stdlib typings.

Standard libs are great, but they need to be modular. If they're modular then they are versioned separately from the language, and at that point there's no difference between a well-specified and maintained lib (e.g. lodash) and a stdlib.


>> Don't write it unless you intend to own it.

> The flip side of that is that if you do want to own it, writing it in-house may well be a better option.

An unspecified question is "What is the cost of ownership?" (which you alluded to in discussing the size of the project). I've written plenty of code that could have been replaced by a library. I've put out OSS projects that I actively discourage people from using, because the function of the library was so straightforward that it's just not worth the dependency (even if it's my own, completely flawless code ;) ). Moreover, in most of these cases the code churn, even over several years, is almost zero. That's a metric that should be looked at a lot more often when evaluating dependencies.

To your point, the cost of ownership is completely negligible in most cases, and all developers are doing by including a new dependency is saving a small amount of time up front in exchange for a heavy cost down the road.


Not all open source is the same. The language you use might come with an open source standard library, which is likely very high quality. On the other hand, piping in a few dozen packages from GitHub or npm does not come with the same sort of quality, and here the "crowd" could actually just be a single developer.


It's much easier to implement something for your application than something for a library. You have a specific problem in a specific context, but the open source library must solve a general problem in thousands of contexts.


As programmers, we suffer from various maladies. NIH is one, laziness and shiny-chasing are others. As in life, balance is needed. Don't forget the cost part of cost/benefit analysis.


This is very true for a wide range of applications and projects, and is definitely required in many contexts (for instance, don't roll your own crypto). But for simpler components, such as a router, it may be preferable to write a small, focused class instead of creating a new dependency.
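As a sketch of what such a small, focused class might look like (this assumes the HTML5 history API; the names are made up):

  // minimal pushState-based router class; a sketch, not a full implementation
  class Router {
    constructor(onChange) {
      this.onChange = onChange;
      // fires when the user navigates with back/forward
      window.addEventListener("popstate", function () {
        onChange(window.location.pathname);
      });
    }
    navigate(path) {
      window.history.pushState(null, "", path);
      this.onChange(path);
    }
  }

  // usage: re-render whenever the path changes
  var router = new Router(function (path) { console.log("render for", path); });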


I would agree with this except that I know that 95% of everybody is laughably awful at tasks as simple as reliably parsing or constructing a URL.

The class of difficult problems that you rightfully include crypto in is a lot wider than people want to admit to themselves. Hell, most people, and a number of programming languages I have worked with, can't fathom the idea that other spoken languages might put words in a different order than your native language does.
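To go back to the URL example, compare naive string splitting with the standard URL constructor (available in modern browsers; the example URL is made up):

  // naive parsing breaks on perfectly legal URLs
  var href = "https://user:pass@example.com:8080/a/b?q=1#frag";
  var naiveHost = href.split("/")[2]; // "user:pass@example.com:8080" -- not the hostname

  // the standard URL API handles the grammar correctly
  var u = new URL(href);
  console.log(u.hostname);              // "example.com"
  console.log(u.searchParams.get("q")); // "1"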


You make a reasonable point, but can the current join-the-dots culture of software development really be entirely unrelated to there being fewer "developers" who can actually perform basic development tasks themselves?


It's the "smart" ones who are the worst, unfortunately.

But I think there is space for both the cut-and-paste crowd and the deep-knowledge folks. In most industries the tool makers are separate from the users. The latter aren't worthless or subhuman because they don't do the former.

In most industries there's a place for fast people and there's a place for perfectionists. Tooling is not a volume business but needs a very low failure rate and good support.

We just haven't sorted ourselves into camps yet, but I think the era of Stack Overflow is the beginning of that process.


> In most industries the tool makers are separate from the users.

True, but I’m not sure software development is one of those industries.

Someone who makes pencils is not necessarily an artist, and vice versa. They require different skills and have different goals.

But where do someone who draws with those pencils, someone who makes colour-by-numbers books, and someone who uses the pencils to fill in those colour-by-numbers books fit into this analogy?


My thought is that we're too young for it to be separate, but it'll come, and probably not too long from now.


Yes, but if my version only has the features I need, then it's lighter and most probably faster. Since OSS tends to support a smorgasbord of use cases, it's more often than not bloated and unnecessarily complex when used.


NIH is also kinda what leads to the proliferation of so many very similar, but slightly different, modules in npm. Everybody thinks they can solve problem X better than the previous dozen people who solved it. Maybe it's driven by ego, maybe a sincere belief that their new way is better enough to justify a whole new project (and the division of talent available for solving that problem), etc.

There used to be a feeling in OSS that forking was a solution of last resort. Now, with the process of writing software becoming much more focused on micro-libraries (and the tools for using those micro-libs getting better enough to make it not so painful), the barrier to entry on writing a new library to solve a specific problem is often very low. Routing is not a huge problem. A single developer with some experience can build a reasonably complete one in a week. So...here we are. There's, what, a dozen popular routers? All mostly the same. Maybe one or two use promises, and maybe that seems much more modern, so they get some uptake. But, with one developer behind them, and maybe a couple of occasional other contributors, you have little feedback pushing for stability. The same desire that led to wanting to write a new router (to use the latest technology and ideas) is the same desire that leads to breaking changes.

I'm feeling particularly overwhelmed by the size and...um...inconsistency in quality, of the npm ecosystem. I really have very little of the NIH drive. I'm perfectly happy to put together Lego projects from off-the-shelf components, when possible (my business partner brings enough NIH to the table for both of us). But, I barely know where to start in node. NIH seems to have been elevated to a religion.


There's a big difference between acting on NIH for your own project, and acting on it and then publishing it. It takes a considerable amount of work to prepare code for general use. On the other hand, if it's used in a specific way within one codebase, that may actually make much more sense than reaching for a library that is designed to do that one thing and a dozen others.


I recently started working seriously with node.js (I've tinkered over the years since it was launched, and we provide some support for it in our products, but never actually built anything with it). I went looking for a library to deal with logins, authentication, password resets, etc. Normal stuff that most web frameworks have some solutions for.

I found a package on npm that sounded like it did everything I wanted (plus a few extra things, but I figured I could ignore those). It took longer than I expected to install...so, I did a little digging. It had installed over 53,000 files, and the resulting directory was 110 MB in size!

I was absolutely flabbergasted. I couldn't believe installing one package, for something seemingly simple, could balloon up that large. I won't name names, as I did a little more poking around and realized that most npm installations pull in thousands of files via automatic dependency resolution, though this one was a particularly egregious example. I've gotten to where I only install stuff via npm when I'm on a free connection; I normally work on mobile broadband, which is very expensive (and added up to almost $300/month even before I started playing with npm).

Now, to be fair, it was pulling in a web framework...maybe Express or Hapi, I don't remember which, and all of its dependencies, so it was actually a lot more than just the login module. The kind of annoying bit was I already had a global installation of both of those frameworks from following tutorials, but it still seemed to insist on pulling in its own preferred versions of stuff, and putting them into the project directory.

I come from the Perl world, where if you don't spend at least half your time looking for and evaluating libraries before you start writing code, you're not being very productive. I'm, frankly, overwhelmed by how big and unfiltered the npm ecosystem is. I've found myself relieved to start tinkering with more "all in one" libraries and frameworks, because I don't have the time or knowledge to evaluate libs on my own. I ordinarily prefer a more a la carte approach, where you just pull in what you need, and so big libraries and frameworks don't fit that. But, I can't make sense out of the ecosystem without some guidance. There are over 70,000 npm packages! Curation really has turned out to be one of the big problems in computer science.


Please, name names. It'd really help me not to accidentally pull these things into my own repository.

I'd love it if npm would list total package size including dependencies.


I've been using a chrome extension to inject a link to this npm dependency viewer whenever I'm on a module's page: http://npm.anvaka.com/#/view/2d/react-router

Really recommend it. As you can see, react-router's deps are actually extremely conservative. Webpack, another awesome project, gives a good example of a big dep tree: http://npm.anvaka.com/#/view/2d/webpack


> I've gotten to where I only install stuff via npm when I'm on a free connection; I normally work on mobile broadband, which is very expensive

You could always use this:

https://www.npmjs.com/package/npm-proxy-cache

It caches the package listings and the packages that you download. It acts as a pass-through with a limited TTL on the cache, but there is an option to fall back to the cache if you can't connect to upstream.

Granted, you have to have already installed something for it to work as an offline cache.

Also, part of the problem with all those files is that npm allows packages to install pinned dependency versions. If package-a requires lodash 2.x and package-b requires lodash 3.x, then both will be installed within the respective package's directory. For example, let's dive into the node_modules/ in one of my projects.

  $ ls node_modules/**/lodash.js
  node_modules/cordova-lib/node_modules/lodash/chain/lodash.js
  node_modules/findup-sync/node_modules/lodash/dist/lodash.js
  node_modules/findup-sync/node_modules/lodash/lodash.js
  node_modules/globule/node_modules/lodash/dist/lodash.js
  node_modules/grunt-contrib-less/node_modules/lodash/dist/lodash.js
  node_modules/grunt-contrib-less/node_modules/lodash/lodash.js
  node_modules/grunt-contrib-watch/node_modules/lodash/dist/lodash.js
  node_modules/grunt-contrib-watch/node_modules/lodash/lodash.js
  node_modules/grunt-curl/node_modules/lodash/dist/lodash.js
  node_modules/grunt-curl/node_modules/lodash/lodash.js
  node_modules/grunt-legacy-log-utils/node_modules/lodash/dist/lodash.js
  node_modules/grunt-legacy-log-utils/node_modules/lodash/lodash.js
  node_modules/grunt-legacy-log/node_modules/lodash/dist/lodash.js
  node_modules/grunt-legacy-log/node_modules/lodash/lodash.js
  node_modules/grunt-legacy-util/node_modules/lodash/lodash.js
  node_modules/grunt-ng-constant/node_modules/lodash/dist/lodash.js
  node_modules/grunt-ng-constant/node_modules/lodash/lodash.js
  node_modules/grunt-protractor-runner/node_modules/lodash/lodash.js
  node_modules/grunt/node_modules/lodash/lodash.js
  node_modules/jshint/node_modules/lodash/chain/lodash.js
  node_modules/lodash/chain/lodash.js
  node_modules/phantomjs-prebuilt/node_modules/lodash/lodash.js
  node_modules/preprocess/node_modules/lodash/lodash.js
  node_modules/protractor/node_modules/lodash/lodash.js
That's 24 copies of lodash.js installed, each of which could be a unique version of lodash used only by that module.
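The runtime consequence is that each copy loads as a distinct module. A hypothetical illustration using the nested npm 2.x layout above (version numbers are examples, not exact):

  // two separate copies of lodash end up in memory under the nested layout
  var lodashA = require("grunt/node_modules/lodash");
  var lodashB = require("lodash");
  console.log(lodashA.VERSION);     // e.g. "0.9.2"
  console.log(lodashB.VERSION);     // e.g. "4.13.1"
  console.log(lodashA === lodashB); // false: distinct module objects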


This seems like maybe a really big area for evangelism in the node community. I've watched a ton of talks and tutorials lately, and several of them made a strong point of saying "use versioned dependencies", because libraries aren't practicing good semantic versioning, so even minor version changes can be breaking. So maybe a lot more attention needs to be paid to using semantic versioning religiously.

I'm new to this ecosystem, so I'm definitely not an expert, but it's certainly been an intimidating point for me; maybe the most difficult thing to wrap my head around. I'm used to being able to spelunk into my project, and read everything I'm depending on, or at least skim it and kinda grok where things happen. How would one even do that with 53,000 files? How can anyone trust any application they build with these tools? I mean, the security implications alone are breathtaking, to me.


You're being unfair. You're obviously using npm 2.x; we moved to 3 a long time ago, where the dependency tree is flattened and this issue is avoided.


It's better, but it's still an issue when multiple versions of a module are needed.


That node_modules directory was created using Node.js v5.10.0 and npm v3.10.5.


Do the 24 copies all get bundled into the final JS file in the case of a browser app?


Of course not.

Also the modules he lists here are mostly for tooling purposes.


Are you sure? I could see multiple downstream versions of utility libraries like lodash and jQuery used in runtime dependencies.

Maybe not as bad as 23, which doesn't seem as likely for non-tooling stuff, but I'd still expect some divergence.


The problem isn't pinning, the problem is that it's routinely accepted for dependencies to disagree about which version of a library is ready for production use, because npm doesn't treat that as a disaster that should block deployment until the community coordinates their acceptance testing.

Basically, if lib1 and lib2 each use lib3, I don't want to upgrade anything until both lib1 and lib2 agree that a newer version of lib3 works.


What if there is a breaking change in lib3 and only lib2 upgrades? Does that mean lib1 can just never be used again because of something lib3 did?


If a new version of lib2 moves to a half-baked version of lib3, I want to stay on the old version of lib2 (and lib3) until the community agrees that the lib3 change is good and everyone has migrated to it. Having two different versions of lib3 in process is begging for bewildering bugs, and it was a serious mistake for npm to let that happen (much less by default!)
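The classic symptom (hypothetical package names) is an identity check that silently fails because two copies of the same class are loaded:

  // lib1 and lib2 each bundle their own copy of lib3
  var Thing1 = require("lib1/node_modules/lib3").Thing;
  var Thing2 = require("lib2/node_modules/lib3").Thing;
  var t = new Thing2();
  console.log(t instanceof Thing1); // false: same class name, different constructor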


While I'm not defending running multiple versions of a library, I've never seen it cause bewildering bugs, or any bugs at all. The deeply nested style of npm may have issues, but it makes it really easy to avoid thinking about dependency problems.


I really need to get my blog up and running so that I can write about things like this.

My view is that dependencies are someone else's solution to my problem of technical debt.

I'd be a straight-up liar if I claimed to be proud of every line of code I've written, either for an employer or for myself. Sometimes you just have to hammer a square peg into a round hole and be done with it because deadlines. Or laziness. Or boredom. Or whatever; this project is going nowhere anyway, so wtf? Hack the shit out of it.

I always tell myself I'm going to get back to that later and clean it up, but I often don't because, well, moar deadlines.

Dependency updates--particularly breaking ones--are things I love to hear about. Dependency updates give me an excuse, both professionally and for my side projects, to revisit stuff that I knew was janky and crappy and broke when I wrote it, but have since come to accept.

Security updates are absolute gold in this game of not wanting to suck but still having to meet deadlines.

"Sorry boss, but there's a vulnerability in lib x. We have to update. But it's breaking. So now we have to refactor. Two weeks, at least. Maybe more."

I just got rid of a crap-ton of bad code while I was updating for that dependency. Oops.


I didn't get my blog up and running until I a) started keeping a journal on 750words.com and b) started writing in a spiral-bound notebook every spare half-hour.

Those two habits and https://blot.im made blogging nigh-effortless. Like dandruff, I get it for free.


Oh my problem isn't that at all. I have no problem writing endless amounts of crap that no one will ever read. It's just that I really care about my writing, and I want it to have a perfect home. So I'm constantly writing and rewriting blog engines.

In this case, the perfect is the enemy of no one. A blog engine is the one side-project that I don't just hack. It's my one and only place for writing pure, elegant code.

And I won't let myself write at length again on someone else's platform until I get this exactly right.

Everyone wins: I don't clutter the internet with inane crap; you aren't tempted to read it.

If I ever decide to start publishing my writing, the best part of it will be the code that presents it.


I was (am?) in the same boat as you. I don't like using other platforms because they don't do exactly what I want. I go through the same decision points while trying to maintain a todo list. I guess that is the drawback of being a developer: you know you can do a better job (for your use case) of developing a great application, and eventually, writing a blog or maintaining a todo list becomes an exercise in yak shaving (http://www.dailymotion.com/video/x2est2c_yak-shaving_fun ).

I am trying to get over this habit. Any advice/suggestions would be appreciated.


Thanks for sharing this and introducing me to 750words, I just finished my first entry! The analysis provided after just a single writing session is awesome - I can't wait to see how this affects my writing, focus, and ability to communicate with myself and others. Did you see a shift in sentiment, focus or elsewhere in your analysis as you prepared to make your writings public? For instance, I notice a pretty even distribution of Emotion and Mindset which I imagine will become way more focused as I work towards writing in public.


The only writings I make public are short essays I write during the day about victories and frustrations at work. When I am following the "morning pages" habit, I am usually extremely short on focus.

I am extremely pleased with how journaling improves my communication — when I find myself quoting that morning's pages, I get feedback on how decent my predictions were.

I feel more prepared: having hypothesized these circumstances in the morning, the afternoon offers fewer surprises (and novelty still fits nicely in contrast).


haha, yes.

Got a React project running for a year now. Hundreds of deps. I just don't know anymore...

Last year Flummox got deprecated and we had to replace it with Redux, good times...


As the person who wrote the top comment in the thread that you link to, I will admit: You are right for the most part. The choice to use an unproven library is my fault as it was my choice and it is also my responsibility to deal with the consequences and costs of the instability. This is why I will be removing React Router from my project as soon as I can find a stable and suitable replacement.

Recognizing the responsibility for your own actions does not preclude you from being frustrated and at times outright angry at the outcomes of those actions. In this case I (wrongly) assumed that after a 1.0 release the library would be relatively stable and have been repeatedly duped into believing that this time the major version release would be the stable one. I don't really see anywhere that people were issuing an outcry for V4, so I am still confused as to why it was so urgent to release it. It wasn't perfect but it was fine. Unfortunately in my frustration and confusion I chose to write a very strongly worded comment that apparently some people did not like.

> To be clear: developers that release libraries and then iterate the API in public do not deserve personal scorn for doing so

I have never gotten the impression that react-router was someone's personal library; rather, it is a community project that is maintained by notable members of the JavaScript community, and it is my belief that delivering half-baked stuff to the people who counted on them, and who they led to believe could count on them, is not a fair thing to do. I don't believe it is unreasonable to be frustrated with fickle leadership from people who stepped up to lead the project. If they can't deal with the criticism, or don't have the time/effort/inclination/whatever to lead in a way that is agreeable to most of the community, then perhaps someone else could lead. When projects have thousands of stars on GitHub, making rapid successions of breaking changes throws all of those people for a loop.

If you do in fact view the repo as your personal project and want to make huge changes all the time, like every 6 months, I don't think that the version with thousands of stars is really the place to do that. Why not just do it on your own, where it isn't going to affect so many people?

Note: Neither this nor my original comment should be taken as personal attacks on the maintainers, although I am aware that both are probably overstated. I'm sure they are all extremely talented developers and kind people.


Thanks for the thoughtful reply! I don't have a problem with strong opinions and harsh words when warranted, with the understanding that we're all learning.

What I wrote wasn't entirely in response to the HN thread. Similar thoughts about the JS community have been brewing for a while and the discussion prompted me to speak my mind.


We use React Router on our project. Looks like 4.0 does break pretty much everything.

My thoughts on the upgrade are pretty much:

- Do we actually need to upgrade? Old react router works fine.

- If we do need to upgrade - how long will it take? If it's just a few hours to shift some code around, maybe it's not that bad. Especially if it's moving to a cleaner more "react" API.

At the end of the day, we could write our own routing class, or use another one. We'll probably stick with react-router though, but save the upgrade for a day when we've got nothing else to do, or when an engineer has some free time.


> but save the upgrade for a day where we've got nothing else to do, or if an engineer has some free time.

Or when you stumble on a show-stopping bug right before a deadline, that is only fixed in supported newer versions.


Is that more or less likely with an older version of something like React Router, or a fairly new release?


This can happen with any release of any open source project. The only surefire way of dealing with it is diving into the source and fixing it yourself.


I'm reminded of https://medium.freecodecamp.com/you-might-not-need-react-rou... which discusses how to write such functionality yourself from scratch. I often find that before adopting third-party code you really have to consider it carefully, and perhaps even leave it out and feel the pain, before accepting that it's necessary and adopting it. And even once you do, you should be open to maintaining or replacing it. There's a cost to rolling your own, though: you might need to write additional documentation. For routing, though, it's usually one function and easy enough to grok on its own.
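In that roll-your-own spirit, the core of it can be as small as mapping the current pathname to a component. A sketch, not the article's exact code; it assumes React, and the page components are hypothetical:

  // route by mapping pathname to a component (HomePage etc. are made-up names)
  var routes = {
    "/": HomePage,
    "/about": AboutPage
  };

  function App() {
    var Page = routes[window.location.pathname] || NotFoundPage;
    return React.createElement(Page);
  }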


Don't get it. It's FOSS. npm install the version you want. A new version released you don't like? Keep using the old version. Don't like the old version? Write your own. Fork it. It's free and awesome global collaboration!


I might be missing something (after reading the article twice) but the author says nothing about why he doesn't use React Router...

The only mention I can find is "But the API smelled funny to me, and had not settled, so I continued to wait", which hints that the API is changing and looks weird. But other than that, there is no constructive criticism here.

I'm all for being able to write freely about software we find bad, but without any concrete examples or even pointers to what is bad, I don't think this article has any merit (except that it's good to make sure you understand the dependencies you have).


It looked like a risky dependency, simple as that. Despite the HN title, the point of the article was not to talk about React Router.

EDIT: Additionally, I'm not saying React Router is bad per se, just that it's not fully baked. I'm glad to see people working on the problem. I look forward to leveraging the fruits of their labor in the future (in fact, I already do - I use the history library on which RR is built).


I still feel that you're not giving reasons why it's a risky dependency or not fully baked. And probably we/you should rename the title if it doesn't convey the article body...

Not that I am defending or even using React Router myself; I'm just saying that it's usually better for everyone if the feedback is better explained than just "It's bad" or "It's risky to use".


If anyone is wondering what leads to a Not Invented Here mindset, this would probably be a good example of it. Sure, it's not the fault of the open source library author that you depended on their work. However, when OSS fails to deliver, or we criticize others for counting on it, we only undermine confidence in the dependability of OSS and push others away from the community. If we care about the continued adoption and growth of OSS then we'll need to have higher standards as well.


This is a very important dimension to consider when making technical decisions. I've never been able to find an easy way to assess this risk, beyond hearsay and word-of-mouth.

Is anyone aware of an easier way to assess these longer-term risks of a piece of technology? Things like API stability, community strength and responsiveness, backwards compatibility, upgrade paths, etc.


A large margin of safety / margin for error / margin for refactoring is the correct way to deal with uncertainty of any kind.

Humans are very poor at reasoning about low-probability events that have serious consequences, e.g. large earthquakes with very long return periods.

For this reason, it's difficult to assess the 'risk' of external code because risk management processes don't work very well when high uncertainty is involved.

If you decide to use external code that has a high degree of uncertainty, just make sure to have a generous allowance of time/money/effort for future problems.


It's hard. 1) Gain experience. 2) Talk to tech veterans and journeymen. 3) Look to other communities for lessons.


Yep, this is a thing I push with my team all the time. Always vet your deps. In the long run it is WAY worth it. In the short run the cost is rather minimal, unless you have devs who cannot actually deliver on the needs the dependency covers.


> unless you have devs that cannot actually deliver on the needs the dependency covers.

You say this off-handedly like it's not a big deal. No single person or company understands all the dependencies from the app all the way down to the electricity. That's the real reason we use dependencies: for things that aren't core to our business, that we don't want to have to understand.


I guess I just assumed everyone understands that no team will have the ability to cover all your needs. So you actually need to judge when your team can't deliver, then pick the dependency to pull in.


I just wanted to say that the Clojure ecosystem is littered with the carcasses of half-baked libraries, including some of my own. It's not a JS problem; it's the beauty of open source and low-friction sharing.


You're not wrong. But it feels worse in JS. In both cases lib authors would do well to set proper expectations: if your lib is half-baked, be up front about it.


I generally agree. I've been using RR for a while, and I'm not the biggest fan of its APIs. Nothing wrong with it; it just doesn't do what I want it to do. Or rather... it does too much stuff, most of which I don't use. Just my opinion.

With that being said, they claim you'll be able to migrate slowly to v4, so hopefully it's not so bad. If they just broke backwards compatibility without a migration strategy, I'd certainly feel much more frustrated. I went with RR largely because it's widely used. I also don't think it's an unreasonable expectation for a widely used library to provide a migration strategy. Not that they're under any obligation to do so, of course.

My personal bar for adding dependencies is asking myself: would I feel comfortable debugging and fixing this? I recognize that it's not a free ride.


That's why they call it Angular 2. It's finally out and ready to be used. Works really well with routers. Although you don't get the constant updating, you do get codeless directives that are really easy to understand.


Every ecosystem is full of "half-baked" libraries. JS has more as the ecosystem is bigger and the barrier to entry is very low. It's the nature of open source, it's also the nature of open source to evaluate dependencies.



