I am surprised at all of the hate that Yahoo! is receiving for this in the comments section of Hacker News (not surprisingly). This is great in my opinion; I think React.js is definitely the future of SPAs, especially when combined with the Flux architecture. As someone who has been using React on a daily basis for the last few months, I have a severe man-crush on it. It just makes so much sense combined with something like Browserify and an isomorphic workflow (a codebase shared between front end and back end).
And to those who say Yahoo! are just jumping aboard the React or Node.js hype train: you have it all wrong. Yahoo! have been using Node.js for the last few years; in fact, early 2010 is around the time Yahoo! engineers started playing with Node.js, long before it was considered mainstream cool or used in any high-profile, large-scale environment.
It is rare that a company the size of Yahoo! truly embraces moving at this kind of pace and adopting new open source technologies, languages, frameworks and libraries. Now that Yahoo! have openly declared their use of React on such a large scale, expect it to explode even more in 2015. For an open source project that is a little over a year old, React is getting the kind of user base and adoption that most open source projects can only dream of.
This news excites me. I honestly cannot wait to see how it all turns out.
PS. I have noticed a few people in the comments section getting confused. Yahoo! Mail is NOT using React just yet. The current mail product is still using YUI and plain HTML/Javascript. If you read through, it mentions 2015...
I really need someone to explain to me what they love so much about React.js. I tried it out and I absolutely hated the way components didn't understand their relation to other components. I had to chain a callback all the way down to my ListItem component just so it could set which item was selected and let other components know about it. If you ever end up adding another parent component, you have to rewrite all the components in between to pass that information down.
That's because you used React alone, which works as a top-down view engine. If you want to modify something in the view, you have to go back to the top and modify the model. Instead of doing it manually, Facebook has created the Flux architecture to help you scale this process efficiently.
Which was a poor decision, imho. React only really makes sense with Flux, so why separate them? Furthermore, the examples in React's docs use the pass-down-the-event-handlers madness instead of Flux. React+Flux is a good solution, but the documentation leaves a lot to be desired.
React components can be brought into any architecture. Using them with Flux certainly simplifies many things, but by the same token, Flux apps can be written using other libraries (React-based like Om, or even full frameworks like Angular). The decoupling is good. It lets you try out a component from http://react.rocks/ without diving into a fully architected SPA.
I think if you need to pass callbacks several levels down you should consider some form of event system so that you may decouple your components from the hierarchy.
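For what it's worth, here is a minimal sketch of that kind of decoupling using Node's built-in EventEmitter (usable in the browser via Browserify); the bus and component names here are made up for illustration, not anything from the article:

    // A shared event bus lets a deeply nested component announce a selection
    // without a callback being threaded through every intermediate component.
    // selectionBus, ListItem and SelectionSummary are made-up names.
    var React = require('react');
    var EventEmitter = require('events').EventEmitter;
    var selectionBus = new EventEmitter();

    var ListItem = React.createClass({
      handleClick: function () {
        selectionBus.emit('select', this.props.id);
      },
      render: function () {
        return React.createElement('li', { onClick: this.handleClick }, this.props.label);
      }
    });

    // Any other component can react to the selection without knowing where ListItem lives.
    var SelectionSummary = React.createClass({
      getInitialState: function () { return { selectedId: null }; },
      componentDidMount: function () {
        selectionBus.on('select', this.handleSelect);
      },
      componentWillUnmount: function () {
        selectionBus.removeListener('select', this.handleSelect);
      },
      handleSelect: function (id) { this.setState({ selectedId: id }); },
      render: function () {
        return React.createElement('div', null, 'Selected: ' + String(this.state.selectedId));
      }
    });

A Flux dispatcher plays essentially the same role, just with a stricter unidirectional flow on top.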
That's because React only makes sense if used with Flux. Otherwise just use Ractive.js - components can communicate with each other via events, so you don't have to pass event handlers down the hierarchy.
Why on Earth would people who aren't merely enthusiasts of "cool async JavaScript with V8", or who began as webdevs and know nothing better than JS and PHP, choose a single-threaded solution, which blocks the whole app if a single function blocks, and which forces programmers to write spaghetti wrappers around asynchronous call-back hells in a non-functional but GC'ed language?
I am really too stupid to get it. "Single language for a whole stack" is a pretty stupid Java-esque argument, a naive assumption that one single language is good for all kinds of tasks.
Node supports parallel processes (granted, not threads), promises are a nice alternative to nested callbacks, and Javascript supports basic functional style. You've named some legitimate challenges of Javascript, all of which have been addressed in some capacity.
Sharing code between client and server lets you reduce duplication and use the same templating/rendering on both sides. It fits very well with complex single-page apps.
That said, I'm not sure there are that many apps that really need this vs. the tradeoffs you have to make going all-in on JS and the complexity this can bring.
I don't know why it bothers me so much, but it really does. What happened to just saying that the same code is run on the client and server? Avoids the jargon, doesn't make math nerds mad, isn't that much longer, and says exactly what you mean.
As for blocking, it's something you deal with. Realistically speaking if you're doing CPU intensive tasks then Node is definitely the wrong solution. However, if you're doing tasks that are IO oriented then Node is fantastic.
Actually, I think that even a lot of CPU-intensive tasks could run on Node. Split them into smaller tasks (which you probably have to do anyway if you want to use multiple threads in another language) and run those as promises. Of course JS is bad for most numerical calculation, but for some things it might work. Or combine it with Dart or asm.js.
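As a rough sketch of that chunking idea (assuming a Promise implementation is available, either native or via a library like bluebird); the sum-of-squares loop is just a stand-in for real work:

    // Split a long-running computation into slices so the event loop can
    // service I/O between them. The work itself (summing squares) is a stand-in.
    function sumSquares(n) {
      var CHUNK = 10000;
      return new Promise(function (resolve) {
        var total = 0;
        var i = 0;
        (function runChunk() {
          var end = Math.min(i + CHUNK, n);
          for (; i < end; i++) total += i * i;
          if (i < n) {
            setImmediate(runChunk); // yield to the event loop, then continue
          } else {
            resolve(total);
          }
        })();
      });
    }

    sumSquares(10000000).then(function (total) {
      console.log('done:', total);
    });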
You can also use multiple worker processes for your more intensive cpu tasks, keeping that out of your main loop altogether. Pooling and queue limits can prevent too many of those bound processes from running at once, and you can distribute workers across several systems.
> Realistically speaking if you're doing CPU intensive tasks then Node is definitely the wrong solution
Here's the answer:
Spawn $(nproc) more Node processes, use IPC if you need to.
Good luck seizing the theoretical performance gains one might see from threading (while not falling victim to diminishing returns related to context switching) in another comparable language.
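A minimal sketch of that with Node's built-in cluster module, forking one worker per core plus a naive respawn:

    // One Node process per core. Each worker runs the same HTTP server;
    // the master only forks and respawns crashed workers.
    var cluster = require('cluster');
    var http = require('http');
    var os = require('os');

    if (cluster.isMaster) {
      os.cpus().forEach(function () { cluster.fork(); });
      cluster.on('exit', function () { cluster.fork(); }); // naive respawn
    } else {
      http.createServer(function (req, res) {
        res.end('handled by worker ' + process.pid + '\n');
      }).listen(3000);
    }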
If you have callback hell, it is an expression of muddled thought.
If you have broken your code up into small enough modules and functions, and you know which tasks depend on other tasks, you will not have callback hell.
If you are unclear on the logic of your code, you can often make it work in a messy way with the "laundry list approach". This involves formulating a sequence of tasks that happens to work, without needing to understand the actual dependencies. Many synchronous languages facilitate a laundry list approach, as their inefficient blocking paradigm removes the need to formulate the laundry list as a sequence of nested callbacks. It's still messy code.
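To make that concrete, here's a small sketch of the same idea: each step is a named function and the dependencies are stated once at the end, instead of nesting anonymous callbacks. loadUser, loadOrders, showError and render are hypothetical app functions:

    // Small named steps instead of anonymous callbacks nested several levels deep.
    // db, showError and render are assumed to exist in the surrounding app.
    function loadUser(userId, done) {
      db.users.findOne({ id: userId }, done);
    }

    function loadOrders(user, done) {
      db.orders.find({ userId: user.id }, function (err, orders) {
        done(err, user, orders);
      });
    }

    function renderPage(err, user, orders) {
      if (err) return showError(err);
      render({ user: user, orders: orders });
    }

    // The "laundry list" becomes one readable composition:
    loadUser(42, function (err, user) {
      if (err) return renderPage(err);
      loadOrders(user, renderPage);
    });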
> If you have callback hell, it is an expression of muddled thought.
No. It's an indication that you're using a FOTM runtime environment. There's a reason why no production-ready runtime uses continuation-passing / callback models.
I know a lot of people running node in production, and getting very high performance out of it. It may anger hacker news that such a thing is even possible, but there you have it.
I never worked much with JS, but from my experience I find it hard to imagine writing such a big application in JS. It is so easy to do things wrong in JavaScript. Weak types, global namespace, ambiguous and unexpected behaviours.
I feel like people stick fingers in their ears and ignore the ecosystem whenever JS is mentioned. There's a wealth of transpilers and tools that fix all the problems mentioned here (if you even agree they are problems), and yet they are brought up every time.
Weak types: Check out Facebook Flow or TypeScript.
Global namespace: Not true with require/modules.
Ambiguous/unexpected behavior... please explain? You could use any number of libraries to add consistency there. It's incredibly easy to include a JSLint config in your repo and run it as a git hook to ensure consistent syntax usage. ES6 introduces lots of nice features for writing code, and is usable today with transpilers.
Yes, but if you're using a language that transpiles to JavaScript, you might as well use another language altogether (for backend programming). Debugging becomes much simpler with a language that doesn't have to turn into something else.
For frontend development, a lot of frameworks and libraries only work consistently with Javascript. I've looked at TypeScript for instance, and it indeed looks nice. But it has a bunch of edge cases with Angular and/or Backbone. You're just piling on complexity with these languages.
Oh well, at this point we're moving into subjective opinion.
But you need to write your frontend SPA in JS. And the isomorphic part, if designed in from the ground up, adds almost no extra code. So really, it's almost free to render first with Node if you're starting a fresh SPA. I don't see why you wouldn't want to do it.
And we can all agree that ES5 has some warts, so you may as well transpile. Whether it's ClojureScript, CoffeeScript, TypeScript, or anything else doesn't matter; I just think it's a solved problem. Webpack makes debugging them easy with sourcemaps and live compilation. In fact you can get hot-reloading[1] in React, so your app re-compiles without reloading.
I'm happy with ES6 + Flow. It's all ES6 syntax anyway, works with all libraries, and is fast to write, good looking, and functional.
If you write very modular code with good test coverage, the odds of you needing to fire up a debugger are greatly reduced. Regardless of the language you choose.
Given that JS has first-class functions and many libraries for following functional patterns, you can write very stable code with almost no chance of side effects within a given function/context.
I think the real problem is that this has nothing to do with the article on hand.
Yahoo is writing a greenfield application, and they are avoiding exactly those problems by using React. So, literally your pain point is exactly what they are writing about avoiding, and yet you still comment on this article with the same anti-JS nonsense.
True, there's a plethora of JS tools someone can use. The problem is that they have to be discovered. And there's no standard. I found out about Browserify 2 days ago and realised all the problems it could have solved for me. Now I have to migrate the code I have to it.
Utilities should be a part of the language's "battery".
Ambiguous/Unexpected behaviour: I mean I can redirect you to the popular "Wat" talk. Moreover, isn't that the entire premise of "Javascript: The Good Parts" anyway?
Flow isn't mature enough yet, but even TypeScript eliminates pretty much the entire Wat talk.
Also, it compiles to CommonJS, which means it works well with both node and browserify.
Yes, we need a blessed, "batteries included" stack for JS made of components that play nicely together. Here is a nice list to get started with (IMO, ymmv)
1. TypeScript or Flow
2. React
3. Promises: bluebird, when or p-promise
4. Observables: rxjs or most (they play well with promises)
5. immutable-js for immutable data structures
6. lodash and/or Ramda.js
7. browserify
1. Not a problem, though I don't use either, I tend to use very small modules (not necessarily via npm, but require'd in my own project, or outside modules)
2. React, and even the Yahoo flux tools are pretty nice. React by itself is less useful.
3. I'd go with es6-promise here, which complies with the spec. I wrote i-promise as a module to give an ES6-compatible promise library, or the native one when available (a quick sketch of that pattern is below this list).
4. Observables are pretty evil... the whole flux architecture is to avoid direct observations in favor of a unidirectional data flow.. but that's more opinion.
5. completely agreed... immutable data flows work to prevent side effects. Even if you don't use these libraries, avoiding side effects is a big thing.
6. Also agreed, though you can usually get the parts you want without the weird deep dependency chains in lodash.
7. Agreed here, though webpack is interesting, imho it breaks too much with node's approach to includes, and doesn't work as well with reusing code on the server-side (imho).
Other things to look into include csp, streams, events and the gulp build tool.
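To make point 3 concrete, here's a minimal sketch of the "native Promise if available, polyfill otherwise" idea; the file name and the es6-promise fallback are just one way to do it:

    // promise.js -- export the runtime's native Promise when present,
    // otherwise fall back to the es6-promise polyfill.
    var P = typeof Promise !== 'undefined' ? Promise : require('es6-promise').Promise;
    module.exports = P;

Everything else in the app then just does var Promise = require('./promise') and doesn't care which implementation it got.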
Well, observables would just be used as a mechanism to implement a flux-like architecture or to decouple various parts of your server-side codebase (making them open to extension). A bit like event emitters, but without the stringly typed event names. I don't think that's evil? [1]
Or they can also be used in place of asynchronous lists (like "object streams"). I find them handy for various tasks :)
Regarding es6-promise, it's too minimal for my taste, and missing my favorite feature (provided by bluebird, when and p-promise): long stack traces on (possibly) unhandled errors, which are quite invaluable on the server side when debugging longer chains of events. p-promise has the additional advantage of being a relatively small library, which is pretty awesome :)
I also agree with just about everything here, and thanks for the tip on es6-promise. I'll have to check it out.
I actually have this entire stack running without observables currently, and my goal is to build a variety of applications using it to really sort out the best techniques with it, and then open source it as a platform.
As for #7. I use webpack and love it. It does work wonderfully server side as well, you just have a build step with webpack that strips out anything you don't want for the server, and then have your server run that built file. Not perfect, but works well. It also avoids any need for gulp.
If you're already at the point where you are utilizing compilation, why not implement a compiler-level async/await solution (probably based off of a loop + a state machine, a la regenerator)?
Even with generators we still need to yield something or await on something. C# awaits on Tasks, in JS Promises would be a logical choice. You definitely need a value though - you can't await for node style callback functions as they provide no value to yield (or otherwise pass to the runtime) between calling the function with a callback and your callback getting called.
Even so, lambdas, observables and some FP constructs such as map/reduce/filter/etc can potentially eliminate the need for imperative style code. There are also some semi-imperative solutions to certain problems that work okay [1]
Another advantage of functional code is that its easy to extend - adding a combinator that implements filtered (typed) catch for promises is a lot easier than implementing a language construct such as `catch (e if e.code == 404)` for generators.
    var only = (predicate, handler) => e => {
      // re-throw anything the predicate doesn't match, so other errors still propagate
      if (predicate(e)) return handler(e);
      throw e;
    };

    promise.catch(only(e => e.code == 404, e => {
      // handle the 404 case here...
    }));
Here is another example (it's a bit of overkill though :)
Ambiguous/unexpected behavior... nothing fixes it.
To make matters worse, there's always the promise of some magical library or tool that's going to fix your problems, until you discover that it sucks, or that it is unmaintained, or that it doesn't interoperate with another library and then you're back to search between thousands of pieces of crap that do the same thing.
I remember the days when people were having boners over CoffeeScript. Boy, that was a bad move.
I prefer straight JS over CoffeeScript/TypeScript/Flow etc...
I don't find require/modules to be too much of a mess.. unless you are using a lot of really large tool-chains with deep dependencies... npm dedupe helps point out problems there.
Breaking your logic into idempotent functional components without side effects as separate files/modules reduces a lot of what can be considered ambiguous or unexpected.
If you work with a functional flow, avoiding OO in JS as much as prudent, you can get a lot done without nearly as much confusion.
For now co/generators/promises get you really far along. In the future await/async will take it farther.
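A minimal sketch of that generator style (assuming co 4's promise-returning interface, and that getUser/getOrders are app functions returning promises):

    // Yield promises, write the flow top-down; co drives the generator.
    var co = require('co');

    co(function* () {
      var user = yield getUser(42);
      var orders = yield getOrders(user.id);
      console.log(user.name, 'has', orders.length, 'orders');
    }).catch(function (err) {
      console.error(err.stack);
    });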
Don't you feel that it's problematic when a language's shortcomings are addressed by adding more of it in the form of duct tape? I loved JavaScript when it was doing simple things on the DOM, not trying to recreate the desktop in the browser: making it render across different devices and browsers, making it crawlable by running a headless QtWebKit process to serve pages that Google can't see, using a slow and unstable document-storage database (often advertised alongside an old JavaScript engine) that requires you to write chains of callbacks for even the simplest operations, all while believing in the back of my mind that somehow all this complexity has helped me, because, why, look around, everybody's doing it. That Java/PHP/Apache garbage that has worked well but didn't feel cool or faddish enough? Let's kick it.
With node, you can run multiple processes per server... the challenges of inter-process communication really aren't any harder, and are very similar to when you have to scale to multiple systems anyway.
As for blocking, if you are doing something that is blocking in your main event-loop, you're probably doing something wrong. Break it off into a req/res queue and let it flow.
Regarding call-back hell... you have next-generation tooling like co/koa as well as ES6 Promise patterns you can use. Not to mention the async library.
You can readily follow functional patterns in JS and Node... just because there is prototype based inheritance doesn't mean you need to use it, and most of the time, I don't.
Also, Node itself isn't really single-threaded... the main event loop is, which is where your JS code runs. The underlying platform calls are running against a thread pool.
For stateless web requests (solving the C10K problem), this is not inherently bad for most applications.
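A tiny illustration of that split: both calls below return immediately, libuv's thread pool does the actual work, and the callbacks come back onto the single JS event loop when it's done:

    // The file read and the key derivation are dispatched to libuv's thread pool;
    // only the callbacks run on the JS event loop.
    var fs = require('fs');
    var crypto = require('crypto');

    fs.readFile('/etc/hosts', function (err, data) {
      if (err) throw err;
      console.log('read', data.length, 'bytes');
    });

    crypto.pbkdf2('secret', 'salt', 4096, 64, function (err, key) {
      if (err) throw err;
      console.log('derived', key.length, 'bytes');
    });

    console.log('event loop is still free while both run');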
> which blocks the whole app if a single function blocks
This is behavior you'll see in many other languages.
Management of execution time is very prevalent in lower-level languages. The only languages not susceptible to this problem are those that are high-level, preemption-capable, and have some form of green threading/scheduler, e.g. Erlang/Elixir.
IMO, if you need to rely on a scheduler to help ensure that blocking doesn't occur in a disruptive fashion, you're in no position to deride those who "began as webdevs and know nothing better."
> asynchronous call-back hells
This is a matter of taste, opinion, and necessity.
I personally hold the opinion that "callback hell" is a symptom of poor design, and properly managed callbacks are not unpleasant to work with (I may be subject to Stockholm Syndrome).
Also, try implementing an equivalent non-blocking webserver around an event loop in a lower-level language and let me know what non-callback strategies you come up with for the coalescence of asynchronous events.
As an aside, there are hundreds of different ways to improve or eliminate the composition of callbacks in JS: emulated coroutines via generators, C#-style implementations of async/await, promises, the list goes on.
> non-functional but GC'ed language
Which functional programming languages in common use aren't garbage collected? (again: in common use)
Which other common, productive, high-level programming languages (often tasked for C10K web servers) aren't garbage collected?
What does "non-functional but GC'ed" actually mean? What does the language's programming paradigm have to do with its memory management model in this context?
> I am really too stupid to get it
- People are productive and enjoy JavaScript.
- Browsers, by and large, only run JavaScript code.
- JavaScript on the server-side is a very capable, battle-tested building block for performant web software.
- Admittedly: JavaScript is not a great language when compared to other modern programming languages.
- Aforementioned opinionated gotchas aren't enough to dissuade communities from using JS.
It's not, but having a single language to glue together all of the other ones is a pretty good idea. That's what I use Node for. With the addition of Edge.js I can run any .NET language right in the same runtime as Javascript (or, run Node inside my .NET app).
I've been a Javascript and Node skeptic for years now but I think the tide is finally turning. So much time and energy has been poured into making JS better that it's finally starting to pay off. Javascript with all the ES6 enhancements is really not a bad language. The runtime performance is already very good and continually getting better. Tools like Typescript and Flow make dealing with larger code bases much easier.
I don't think the other "scripting" languages like Ruby, Python, and PHP stand much of a chance against it in the longer term. They just don't have the resources to compete.
> The runtime performance is already very good and continually getting better
Have you checked out the Computer Benchmarks Game? Here are some benchmarks they provide, comparing the JavaScript V8 to Python [1], Ruby [2], and PHP [3].
Looking at statistics such as these, I'm left thinking, "1. This is awesome! 2. Is there something about these other languages that prevents V8-like performance, or is this more a matter of browser performance being highly invested in by Google, Mozilla, etc.?"
V8 does precompilation to native code and JIT optimization, which can typically turn numerical computations into efficient, and often highly optimized, machine code. These benchmarks seem to primarily be testing algorithms which require a lot of numerical computation and can take advantage of Javascript's typed arrays, which V8 is designed specifically to optimize.
A somewhat more fair comparison would be V8 vs. PyPy, Rubinius/RuJIT, HHVM, and LuaJIT. LuaJIT and V8 would likely still come out on top due to the amount of work and sheer excellent engineering put into the compilers (and in LuaJIT's case, also the interpreter), but PyPy's performance should be a lot better than CPython's for these benchmarks.
I also imagine (though am mostly speculating) that a benchmark involving a full-stack frontend framework like Angular or Ember, with hundreds or thousands of JS objects and lots of DOM manipulation, should put V8's performance a bit closer to CPython's.
I don't think more code will be a big problem for JavaScript. It seems to mostly affect startup time. Look at the asm.js tests with Massive. Also think about all the JS projects that run anywhere from 10,000 lines of code (LOC) to 100,000 LOC. I haven't heard anyone say that the runtime ran into any problems. There are always some problems with organizing that amount of code, but that is true in almost any language. And JavaScript is actually quite effective to write in, meaning a JS program is probably much shorter than the same program in Java (at least before Java 8).
Well, it would be a different comparison; and I dare say a website showing such a comparison would attract a lot of traffic, if only someone would "please take the program source code and the measurement scripts and publish those measurements."
> I also imagine (though am mostly speculating) that a benchmark involving a full-stack frontend framework like Angular or Ember, with hundreds or thousands of JS objects and lots of DOM manipulation, should put V8's performance a bit closer to CPython
Interesting idea! How would we make a version of the benchmark for CPython though?
You couldn't do the DOM parts, but you could compare Python dicts with Javascript objects, and compare performance when there are a lot of heap allocations and GC calls.
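Something along these lines, as a rough sketch of the JS half (a Python version would do the same with dicts); absolute numbers mean little across machines, the point is to stress allocation and GC rather than arithmetic:

    // Rough allocation/GC stress sketch: build and throw away lots of small objects,
    // retaining a few so the GC has survivors to promote.
    function churn(iterations) {
      var keep = [];
      for (var i = 0; i < iterations; i++) {
        var obj = { id: i, name: 'item' + i, tags: [i % 7, i % 13] };
        if (i % 1000 === 0) keep.push(obj);
      }
      return keep.length;
    }

    var start = Date.now();
    churn(5000000);
    console.log('elapsed ms:', Date.now() - start);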
benchmarksgame is not a serious comparison site, it's in the name.
The comparisons they do are not of idiomatic code... for example, in Python they do a lot of array programming and don't use NumPy (http://www.numpy.org/). So for most languages the results there are completely meaningless.
The name "benchmarks game" signifies nothing more than the fact that programmers contribute programs that compete (but try to remain comparable) for fun not money.
The performance of those programs with CPython explains the existence of numpy, and if you look you'll find that numpy programs are shown.
That's not an objective comparison, since Javascript has been optimized to deal with arrays and numbers, as that's one main use-case that people have in the browser. Take a look at the source code and tell me if that's the kind of code that you write.
No, it's not an objective comparison because that's not the kind of code that we deal with in 90% of the cases, except when doing games, but then again plain Javascript is shitty for doing games, unless you develop in a strict subset that isn't meant for humans.
I disagree. Every time I try to get into node, I am stonewalled by poor debugging support. I end up wading through obtuse stack traces only to find poor quality libraries are the culprit. Tracing dynamic languages is so hard already. The only thing I can tolerate is Python with Werkzeug.
I'm a 70%-90% back-end guy with 10-30% front-end work (when necessary).
As the companies I work(ed) for evolve from JS => jQuery => Backbone.JS => Backbone.JS + Marionette => EmberJS, so does my skill have to evolve. I also have to write some JS code on top of PhantomJS + CasperJS (not for automation testing but instead for performance monitoring) to support enterprise product at the moment.
I used to hate JS with a passion, but now, after seeing Chromebooks, Chrome Apps, NodeJS, and my current favorite, MeteorJS, I just have to suck it up and learn JS, to be honest...
Would I bet an enterprise app to use NodeJS if it was using my own money? Probably No. Would I bet my weekend projects and ideas on NodeJS/EmberJS/JS? Yes.
I think the JS ecosystem is still unstable and a big mess but let's hope it moves to a better direction.
I won't take the raw package counts of NPM vs Java Maven at face value. Keep in mind that the Java SE API is way more complete than Node's basic APIs.
Also keep in mind that in Java, most people have 1-2 choices at most and those choices tend to be way more mature and stable so we tend to feel comfortable with those (at most) choices.
You can slice and dice my analysis any way you want. You can say a lack of choices is bad, and millions of Java devs will disagree with you. You can say that NodeJS is evolving like mad, and thousands of devs more experienced than myself would argue that the NodeJS ecosystem is playing "see which one sticks". Your values are different from mine.
The Stack Overflow stats you posted don't mean anything. Literally. We know JS has been around for ages. The fact that JS's claim to fame is being smaller/leaner than Java, yet it ranks #2 behind Java's humongous ecosystem, is a question mark in my mind: "What the heck?"
It can go either way.
Gulp vs Grunt, NPM and Bower. Java has Maven. One tool. Yes, there are Ant and Gradle but let's be honest, Maven is king whether you like it or not. With Java, I declare the archetype (web-app or simple app) and dependencies and away I go (unit-test is part of the build, integration-test requires a few additional lines).
With JS, I have to .. ? Let me know if it's simpler than Maven. Let's not waste time arguing about XML vs JSON or whatnot; that's a matter of taste. Let's focus on the required steps to build, run tests, and package it up, whether as a deployable or as a dependency of others.
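For comparison, here's roughly what the npm side of that looks like: a minimal package.json where the specific tools (Browserify, Mocha) are just one common combination, not a blessed standard:

    {
      "name": "my-app",
      "version": "1.0.0",
      "main": "lib/index.js",
      "scripts": {
        "build": "browserify src/index.js -o dist/bundle.js",
        "test": "mocha test/",
        "prepublish": "npm run build"
      },
      "dependencies": {
        "react": "^0.12.0"
      },
      "devDependencies": {
        "browserify": "^6.0.0",
        "mocha": "^2.0.0"
      }
    }

With that in place it's npm install for dependencies, npm run build to build, npm test for tests, and npm publish (or npm pack) to ship it as a dependency of others. Whether that's simpler than Maven is a matter of taste.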
'Most modern' ecosystem? That's likely, though it's honestly got the same things as every other programming ecosystem. I think the Perl ecosystem might be the most 'complete' programming ecosystem that exists today, in terms of implementing modern aspects of a programming ecosystem.
'Most stable' ecosystem? How do you figure that? It's only been around a short while in its current form. Other languages have had their current ecosystems hanging around for decades.
'Most evolved' ecosystem? Technically, server-side JavaScript has existed since 1994, but other than JScript on ASP and Lotus Notes xPages (lol), nobody really used it for non-frontend work until Node came around in 2009. AFAIK, jQuery was really the beginning of the 'modern' JavaScript age back in 2006, and the ecosystem has slowly been adopting the practices of other programming ecosystems since. I would wager V8 was the biggest boost to JS development in the past 10 years, as JS finally stopped being the slowest interpreted/scripting language in the world. Is there even a CPAN equivalent for JavaScript yet? It seems they've only implemented an equivalent of PAUSE or PPM.
Anyway, it's obviously getting better, but it's still pretty young.
I'm not as familiar with CPAN, but how is NPM significantly different?
As someone who has used JS server-side from Netscape LiveWire, to ASP JScript, and even via Synchronet and other JS runtimes... I've always appreciated the core language, long before "The Good Parts"; Crockford just got a lot of coverage for what I already knew. Same goes for some recent adjustments regarding Object.create (and even inheritance chains in general).
JS at its core is a decent language. ES5/ES6 enhancements make it very usable. I'm currently developing against Node 0.11.x --harmony, with an es6ify pass for browser support. Given Node's very good async I/O interfaces, I've written a lot of bridges in the past few years between systems that have trouble communicating directly.
Namely, SOAP providers are a pain to consume a lot of the time; it's not a problem to create a JSON endpoint in Node that wraps them just fine. Need to process a huge XML or CSV file? Node is actually pretty nice... and can generally ship the final data where you need it.
Node is imho the ultimate service middle-ware... And to me, this extends to being the servers/services that web applications talk to... relaying to other backend systems/databases.
'Most stable' in the sense that both NPM and Bower are quite reliable.
'That's likely, though it's honestly got the same things as every other programming ecosystem'
I disagree on this point because dependency systems like Gems (for Ruby) and PIP (for Python) are miles behind NPM and Bower.
I'm also not aware of task runners in other programming languages that are popular in the same way Grunt or Gulp are.
Perl isn't a language I know well so I can't argue on that.
By 'Most evolved' I didn't mean Node.js but really the NPM / Bower module ecosystem. You rarely if ever have to write something from scratch due to a lack of plugins.
I agree with you, though, that thanks to V8, JavaScript has a performance level that is close to compiled, lower-level code.
> I'm not sure why you consider the JavaScript ecosystem unstable?
Well I'd say his opening might be the reason:
"As the companies I work(ed) for evolve from JS => jQuery => Backbone.JS => Backbone.JS + Marionette => EmberJS, so does my skill have to evolve."
It's evolving fast, and this makes it unstable in terms of knowledge and what's being used. Depending on outlook (or age?) this can make it exciting. It's also a pain in the proverbials to keep up with, and everyone's at different places (while the HN crowd appears to have moved onto Gulp, many people I know are just discovering & starting to use Grunt).
Thousands of plugin says nothing about whether it's a stable ecosystem either: the majority could all be high quality, or they could be scratch-an-itch-and-move-on projects.
Even though frameworks evolve quickly in JavaScript, nobody is stopping you from sticking to an "older" one if it works for your project.
It's true though that your value as a developer shifts quite fast depending on the trends. But I feel that a fast moving world is merely a consequence of super large communities like the JavaScript one.
By "stability" I meant that NPM and Bower are quite reliable.
Also, if you look at the usage stats for the top plugins, they still show stability, in my opinion, since you have a lot of devs contributing through pull requests.
I asked myself the same question and although I'm not sure what the answer is, my guess would be that the situation you describe would be valid for any (relatively) new programming language.
It shouldn't be particular to JavaScript or Node.js.
What annoys me in the JavaScript world is those "and"s; there is always an "and" with many commas before it, too many options. When you suffer from choice paralysis like me, you lose a lot of time setting up your stack before being productive.
I switched from WPF, C# dev to Angular JS with Typescript and Web API with relative ease and I'm digging it. Typescript definitely helped though. I know I have plenty more to learn, but it's not as shitty as people make it out to be.
If you're using Web API (built on ASP.NET) then you're not using Node, which is a Web server written in Javascript.
But I do think it's cool that someone can transition from WPF to Web development with Angular without feeling too lost. Dynamic Web development used to be a very, very different ballgame from client application development.
I don't see why Ruby, Python or PHP can't compete, if Dart, TypeScript, Coffeescript, JSX, Elm, purescript, ClojureScript, etc. can all still be things.
"Javascript with all the ES6 enhancements is really not a bad language."
File under "damning with faint praise", methinks. Not sure what it means if the "not all that bad" version is one that doesn't run natively anywhere yet...
I think in this instance JavaScript has to compete with Java, C#, go, erlang, etc. rather than the other "scripting" languages. I don't think many would argue JavaScript isn't comfortably #1 within that domain.
I'm genuinely curious why you think JavaScript is comfortably #1 within the domain of Python, Ruby and PHP. I've been writing (client side) JavaScript for a very long time, and I generally enjoy it, but I don't see myself abandoning Python on the server side for JavaScript anytime soon.
But is that because you like Python, have a large code base that would take time to convert, are missing some tools, don't think JavaScript is mature enough, or something else?
I use C# myself and would not like to convert our whole app to js but I could be ok with Dart or TypeScript instead (or Go but that is not a scripting language).
I didn't argue JavaScript is #1 on the server side, simply that it's the language with the most traction when it comes to that group of languages (of which Python is not a part, by the way; PHP and Ruby are, though). I wouldn't touch JavaScript on the server side with a 10-foot pole.
What an odd trend. First Netflix moves part of their infrastructure to Node (https://news.ycombinator.com/item?id=8631022) and now Yahoo is doing something similar. Node.js is great but I don't think huge enterprise systems for some of the largest brands in the world are necessarily the best fit. I wish they'd provide some insights on why they're making that particular move.
Speaking from personal experience, I'm reluctantly moving from my favorite stack (C# MVC) to Node.js in order to be able to build an isomorphic SPA in the most straightforward way. Node is the stack of choice for JS and the obvious choice if you want to render on the client and the server using as much shared code as possible.
That said, you're still free to develop your API using your favorite technologies. To be honest, this is where the heavy lifting is, anyway, so it makes sense to have stronger typing, richer data types, etc. at this level.
>...your API...is where the heavy lifting is, anyway, so it makes sense to have stronger typing, richer data types, etc. at this level.
I've had the opposite experience with the SPA projects I've worked on, one of them pretty large.
With the possible exception of authorization/authentication, the API for all my projects has been pretty straightforward, whereas the client-side app has been where I've missed stronger typing and rich data types most acutely, to the point where I've been actively evaluating compile-to-js alternatives with a better/safer type system.
However, with the advent of Facebook's Flow, I might just stick with JS.
I guess what I meant was that there can be significant data manipulation at the API level, and this calls for more language and runtime support.
Agreed that client code can become very complex. It's one of the reasons I avoided doing too much on the client in the past. My solution to this has been to use TypeScript, which is fantastic for solving this sort of problem. Yes, it moves away from prototypal inheritance. It also doesn't play nice with JSX. It's still hugely worth it for me. I haven't tried Flow but it looks like it addresses the same problems.
As a C# developer, I too am building isomorphic apps currently in NodeJS (with ReactJS).
Then I found out about ReactJS.NET [0] which allows for server-side rendering of ReactJS components from ASP.NET. I haven't had a chance to try it out properly yet but my preliminary test of it made it seem plausible for creating isomorphic apps in ASP.NET. Have you given it a shot? If so, what are your thoughts?
I've experimented with it. I think it's a great solution if you really want to stay with a uniform stack (as much as is possible, anyway). What I came to realize, though, is that the Razor views were just scaffolding. You would need to re-implement any such scaffolding in your client-side views. The next problem is the router.
Any client-side routes will need to be mirrored on the server if you want isomorphism. This means you're using two routers. Many Node solutions will use different routers on the client and server, but there are a growing number that can be shared.
The router is my biggest stumbling block right now in my road to React. There are a variety. There seems to be a consensus around react-router, but now we have the Yahoo contribution. There are two problems that I'm having. The first is that, if you want to use HTML5 History (instead of the dreaded hash bang), you're kind of on your own. There is usually little documentation on how best to handle this. If you do this, you have to intercept and prevent navigation, but only for the relevant local routes, of course. This is intimately tied to eventing. If you're using React, you'll probably use React events (vs, say jQuery).
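For what it's worth, the interception itself doesn't have to be much more than this sketch; router.handlePath is a stand-in for whatever your client-side router actually exposes:

    // Intercept clicks on same-origin links only, push the URL onto the history
    // stack, and hand the path to the client-side router (a stand-in here).
    document.addEventListener('click', function (e) {
      var a = e.target;
      while (a && a.tagName !== 'A') a = a.parentNode; // walk up to the link, if any
      if (!a || a.host !== window.location.host || a.target === '_blank') return;
      e.preventDefault();
      window.history.pushState(null, '', a.pathname);
      router.handlePath(a.pathname); // hypothetical router entry point
    });

    // Back/forward buttons re-enter the router the same way.
    window.addEventListener('popstate', function () {
      router.handlePath(window.location.pathname);
    });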
The next issue is that most of the examples out there are simplistic. They will show a simple page with one component embedded that swaps out a part of the view. Any real app will require layouts, views, and components. Some of these levels will vary based on routes. ASP.NET has an elegant solution for this. I haven't seen an analogy with React + router.
I think part of my problem is that I'm still ramping up. I'm guessing that this is largely teething pains.
Hey, I'm the developer of ReactJS.NET. Thanks for trying it out!
> Any client-side routes will need to be mirrored on the server if you want isomorphism. This means you're using two routers.
A while back I worked on a library called [RouteJs](http://dan.cx/projects/routejs) that would expose certain ASP.NET MVC routes to JavaScript. The use case here was to have a way to build URLs client-side rather than hard-coding them (similar to the `Url.Action` helper in Razor) but a similar technique could be used to do client-side routing. That's actually a really good feature request for RouteJs :)
> The next issue is that most of the examples out there are simplistic. They will show a simple page with one component embedded that swaps out a part of the view.
Yeah, that's true. I have one page using server-side rendering in production, and it's a page on my site/blog: http://dan.cx/socialfeed.htm. It's also a simple page but I used it for building the original proof-of-concept for server-side rendering. I haven't actually tried using ReactJS.NET to build a single-page app, but I think that would be an interesting use case.
Hi Dan, I think ReactJS.Net is a great project. I've got to say, it was very reassuring to see that it was there when I began exploring React. This is definitely the approach that I will use when not building an SPA. The criticism about routers was not directed at this project. It was a more general criticism of some of the JS routers that I've seen, which are sometimes light on documentation. Cheers!
I haven't really given the routing issue too much thought regarding ReactJS.NET. I assumed it would work just as well as my isomorphic apps in NodeJS: one route in server-side land that handles all URLs and returns a View containing the server-side rendered React app. I may be missing something about why that wouldn't work in ASP.NET land, but like I said, I haven't worked with it that much.
react-router is all right, and they've got server-side rendering working, but it's still missing one very important facet that is required for most isomorphic apps: the initial server-side render should be capable of fetching whatever data is needed to fully render the page. Right now the best you could do is deliver a page with a spinner if the page requires some data on the initial render. Andrey Popp's react-router-component solved this issue by using react-async, which uses fibers to allow getInitialState to work asynchronously. The react-router guys think the "fibers stuff is stupid"[0] but they're still working on their own solution to the problem.
The only reason I'm still keeping tabs on react-router instead of abandoning it for react-router-component is that it doesn't look like Andrey Popp is going to keep maintaining react-router-component, and react-router seems like the only other game in town when it comes to routing in React.
P.S. I don't know enough about Node and fibers to understand why fibers are stupid, but they solve the problem, and from what I've read about fibers, it seems like the Node community just doesn't like them because they resemble threads, and they don't like threads in Node.
One of the exciting things about React, and Node.js more generally, is how quickly it is moving. This usually means that there are sharp edges and rough patches and that's been my experience so far. I'll take that any day over a backwater that gets no attention.
I am going to spend some time with react-router, since it looks like the one that is gaining the most traction. To my comment above, they do support nested views, which appears to solve the layout issue for me.
Yes, that was strange. I noticed it was removed shortly after I posted my comment. I can only assume they want to clean up the examples in the announcement and integrate them into the docs. Here's where I saw the original link:
I feel that if you go the whole React and SPA route, you should use the tools mostly used with it: Grunt/Gulp, npm/Bower, Browserify/webpack. Even MS is going this route; the VS 2015 projects will use Grunt/npm/Bower in the starting template. Don't expect to have updated NuGet packages for all those JS libs for long, and there are no NuGet packages for most libs you will have to use with React. The only thing that makes it interesting is the initial render, but I don't know if that is easy to set up.
You're moving to an entirely different stack simply to make a web app searchable? After you chose to use an architecture (I mean SPA paradigm) that doesn't work with search engines very well? I just don't get it. No, really. I know that's what many people do, but I still don't get it.
It is possible to build extremely dynamic websites that are not SPAs. It is possible to do it in a relatively straightforward fashion. (Using something similar to web components.) So why move?
Highly debatable. I've had great success developing highly dynamic websites using progressive enhancement and a directive/web-component-like approach.
People often smirk when I mention progressive enhancement these days. But the same people often claim that something is "impossible" to do using PE, while it's not only possible, but downright easy. This makes me think most of them never even tried this approach seriously.
My overall strategy is to create a crude "data model" of the application in pure HTML, then identify the additional behaviors I need to make it look/work the way I really want. I implement each behavior as a separate JS library. The libraries are configured by adding additional directives to my markup, so using and re-using them requires no coding per se.
You would be surprised how much you can accomplish this way. Moreover, it forces you to write highly decoupled components that are easy to reuse.
CSS3 is a great help in this regard, because (in most cases) you do not have to specify look-and-feel of anything in the library itself. You can simply generate some additional markup that can be styled separately for every app.
This is so true. The greatest gift of the transactional model of HTTP applications, where you submit your action to receive a page for your next step, was freeing application development from the bind it was in: heavy client state with a myriad of small transactions triggered by events.
To put it another way, when you design transactional first, you actually design a very good approach to your business layer public interface. If you do that step right, you can do the next two with minimal effort:
1. An application API;
2. Client side javascript for improving the UI of common operations.
Single page applications often smell like the Windows Forms applications of old: A convoluted spaghetti of application states linked by insignificant events and event handlers, where the boundary between business logic and user interface is fuzzy or non-existent.
The notion that SPA automatically gives you better user experience is a fallacy. Better UI design gives you better user experience, and good UI design is possible using the classic approach to web development. As an added bonus, you get searchability for free and you don't have to move away from your existing server-side stack.
I used to think the same way, but trying out a non-SPA site really feels jarring now. The intermittent blank screen and the 2 second wait period makes for a shit user experience. I'm much more used to the fluid desktop-grade UI.
My only gripe with SPA was the initial load-time (looking at you Gmail), but with virtual-dom, being able to generate the html on the server gives you the best of both worlds.
You don't have to go SPA to avoid the page reload. AJAX works just fine with regular webpages as well, judiciously applied on the most latency-sensitive actions.
BTW, you're posting this on Hacker News, which is the ultimate in retro Web-1.0 technology. Heck, it even uses tables for layout.
SPAs use Ajax to sync data with the server, but the templates/assets are on the client already. In a regular webpage, Ajax would have to pull the templates for the new components as well.
I think this is what Gmail does, though it still takes forever for the initial load. Anyway, I can see this becoming a complicated mess very quickly, and find Ractive/React to be much simpler solutions.
As for HN, I doubt anyone comes to HN for the design. PG and YC's brand helped create a solid community with good content and good discussions, which is what keeps bringing us back. But let's call a spade a spade - the UI design here is a joke, even more so considering that its target demographic is the tech community.
Single page applications allow for more flexible UI design, and that flexibility allows for lots of potential improvements. Forcibly re-drawing the canvas during transitions is quite limiting, and I don't know any visual or interaction designers who couldn't do a better job without that artificial constraint.
Now, whether the benefits of getting rid of that constraint are worth the costs depends entirely on what you're building.
You do not have to re-draw the canvas during transitions. Believe it or not, it is possible to do AJAX declaratively and with progressive enhancement. My approach is to develop forms normally, then add a directive that intercepts them via AJAX, "cuts" certain pieces of the target page and "pastes" them into the current page. It is a very simple, generic and flexible approach.
Normally, the only thing I need to add to the markup to go from non-AJAX to AJAX is submit-to="[CSS Selector Here]". Possibly, some IDs. Everything else works automatically. The library adds a certain attribute to the "pasted" parts, so I can add visual transitions via CSS3. I can use the form's action URL to rewrite the current URL via pushState.
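As a rough sketch of what such a directive library might do under the hood (jQuery used purely for brevity; the submit-to attribute follows the convention described above, and data-ajax-pasted is a made-up hook for the CSS transitions):

    // Any form with a submit-to="<selector>" attribute is posted via AJAX; the
    // matching fragment of the response replaces the matching fragment of the page.
    $(document).on('submit', 'form[submit-to]', function (e) {
      e.preventDefault();
      var form = $(this);
      var selector = form.attr('submit-to');

      $.ajax({
        url: form.attr('action'),
        type: form.attr('method') || 'POST',
        data: form.serialize()
      }).done(function (html) {
        var incoming = $('<div>').html(html).find(selector);
        $(selector).replaceWith(incoming.attr('data-ajax-pasted', 'true'));
        history.pushState(null, '', form.attr('action')); // keep the URL honest
      });
    });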
Yep, node.js is the new PHP. Not by how it works, but by "shit everyone uses". Now, I won't try to convince anyone that it's crap (while I do believe it is), but it's the modern crap technology. Welcome to 1998, we are all "modern".
My point, though, is that I'm not referring to "bleeding edge startups." I'm talking about almost every major publisher here: Conde Nast, Hearst, Wenner Media, Gawker, the list goes on and on. These are massive old and new media publishers that are embracing node, not startups in the valley.
Node is at a tipping point right now, and talking about how "dumb and shitty" it is won't change that. The technology has shortcomings no doubt, but there's just too many advantages to it from the business strategy side for the shortcomings to matter.
NPM chokes for me about 1 time out of 10. The main repo has reliability issues. A lot of the libs install executables that require sudo. Java solved dependency management years ago. I don't know why every language doesn't just rely on a classpath.
I've never had any issues with npm and I started coding about 2 years ago. In fact I didn't have any issues with npm when I first started. Maybe you need to have a friend show you how to do it correctly. I can't imagine what your issue is.
I use several frameworks/libs for different tasks: Webmachine, N2O, Cowboy
I don't believe in RoR-type frameworks. I believe in a clear separation between server and client. We are steadily moving towards a web of websockets and "one page JS applications". I won't comment on whether that is a nice thing or not; I am not sure myself. For the client we are stuck with JS. Sucks, but it's a fact of life.
Erlang (or other things using this model, though, nothing as mature exists) is the only sane way of writing "multi-user" or "mega-user" servers. For me, server side is a solved problem (because of Erlang). And I'm talking about huge, clustered backends. Node.js feels like a child's toy compared to Erlang.
I would suggest that everyone who claims to be a web programmer should at least know how to use Erlang. Otherwise, you aren't really aware of how big the world really is. And what is possible actually. This may sound elitist, and it probably is, and I am probably not a good person for talking like that.
But you can't really argue with things like this: http://blog.whatsapp.com/196/1-million-is-so-2011
Once you have experienced things like this, and once you really understand why it works and how to steadily reproduce it on every project you work on... well, that's why I claim that server side is a solved problem.
As someone who knows their way around Erlang, it needs some work for more 'average joe' programmers to get much done with it. I'm not talking about the syntax, but some set of libraries or a framework or something that gets people up and running quickly. Chicago Boss is interesting, but it's not very Erlangy (it uses compiler magic).
Edit: that said, a big company would certainly have the resources to make Erlang work well for web development, and, yes, it's way more solid than Node.js from an architectural point of view.
So does this mean Erlang only makes sense if you work for a big company or have a large team to implement it properly? (I'm responding in light of davidw's edit which you may not have seen).
You can do web stuff in Erlang. It's just that it's not like Rails, where you start it up and it's easy to do so much with so little code. Since there's not as much infrastructure, you have to do more yourself.
That can still be worth it, given the advantages the run time gives you, but I think you'd really want to know exactly what you're doing.
Thanks for your suggestion, I will definitely look into erlang. I have tried to look into it but I have found it quite unapproachable. You have to be really motivated on your own to learn it.
Every time someone points at Erlang, they bring up WhatsApp. And every time they do that, I tell them Facebook, the company that owns it, is a PHP shop (at least, that was the core driver; things might be different today with HipHop).
Writing something in PHP made sense in 2004. Although Erlang was around at the time, it wasn't at all well known.
And no, you are not absolutely right, Facebook is not a PHP (only) shop. Their chat backend is in Erlang and that's a much bigger tell-tale than their legacy code.
Also, a lot of us Erlangers speculate that Facebook bought WhatsApp (not only but also) because of their huge infrastructure know-how and (Erlang) talent.
Not true anymore. They switched away from Erlang because of reliability and scaling problems. So saying that mega-user server side is a solved problem because of Erlang feels like a stretch.
Thanks. Sounds like they didn't really figure out what the problem was with the Erlang implementation but instead reimplemented with C++ systems that were existing, built in-house, and better understood. Makes total sense but shouldn't be taken as a point for or against Erlang as much as it's a point for using a stack your team is familiar with (or has a deep motivation to get familiar with).
> Isn't that argument bad for erlang? That you need to pay an atrocious price for erlang know how.
That erlang know-how commands a high price on the market is a good argument for developers to invest in developing that know how, and for organizations to work on fostering it in their staff.
Its only "bad for erlang" if you assume the only way to adopt erlang is for an organization to wake up out of the blue one day and decide "from this day forth, all our work will be in Erlang, and we're going to go out on the market today and hire up Erlang talent to enable that to occur."
Can you make some recommendations for Erlang IDEs?
What's your typical day-to-day setup, and what's your workflow look like?
I love Erlang and its concept, but the lack of friendly developer tools and easy testing put me off after the one successful (commercial) project I completed a couple of years ago.
I'm all geared up to become an Erlang evangelist (particularly for the reasons you give - Node.js really does feel like a toy in comparison), it's just the process of putting the code together feels painful compared to the tools I'm used to. (whether Visual Studio or PHPStorm or...)
If you can spend a few minutes discussing your setup, I would find it immensely useful.
Erlang is great and so is Elixir but node serves a different purpose which really amounts to many different purposes of which none include things like building massive real-time and fault-tolerant messaging services (you could still use node and make it work but it certainly would be a suboptimal choice for that).
Just to follow up on topic of WhatsApp and Erlang, here is a presentation given at Erlang Factory 2014 about their goals with scaling WhatsApp to billions of simultaneous users.
I would argue the discussion is less about which framework and more about the suitability of the language. It's a debate worth having but I struggle to think of good arguments to move infrastructure of this sort of scope to JavaScript over the more "typical" options (Java/Scala, .NET, Python, go, erlang, etc.). JavaScript obviously has a place in web application development, I'm just not convinced that place is on servers in the vast majority of cases.
I've got first hand experience with this at a major airline - node is a surprisingly good fit for Bigco because it is encouraging modularization and has very predictable dependency management but most of all it allows you to fit it into your current systems easily and organically. There is not much convention (also an argument against it in some cases) other than being in many ways a very "unix" kind of environment (streams etc) with the added benefit of having Javascript's functional/event-driven roots.
Most of all to me node is fun which is especially important when you have to conquer problems in un-fun environments (most Bigco systems).
Hmm, those are strange results indeed. I would not be surprised if those measurements were made a year ago. Back then, promises were really slow.
But then bluebird came out and changed everything, and many of the other promise libraries followed suit. Most promise libs are now really lightweight (with the only remaining exception being Q).
For example, bluebird (and most other promise libraries today) have 2 to 3 times lower overhead than caolan's async and are comparable to the most hand-optimized raw callbacks.
I've watched a presentation where (I think) Linkedin talked about their use of node.js. They used it as an API Gateway (http://microservices.io/patterns/apigateway.html). The main requirements for this are making a lot of service calls quickly and transforming several JSON documents into a single JSON response. Node.js is actually pretty reasonable for this.
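The gateway shape is roughly this sketch: fan out to the internal JSON services in parallel and merge the results into one response. Express, the internal URLs, and the getJSON helper are assumptions here; Promise.all does the fan-out:

    // One incoming request fans out to several internal JSON services in
    // parallel; the responses are merged into a single payload.
    // getJSON is a stand-in for your HTTP client of choice.
    var express = require('express');
    var app = express();

    app.get('/api/profile/:id', function (req, res) {
      Promise.all([
        getJSON('http://users.internal/' + req.params.id),
        getJSON('http://orders.internal/by-user/' + req.params.id),
        getJSON('http://recs.internal/for-user/' + req.params.id)
      ]).then(function (results) {
        res.json({ user: results[0], orders: results[1], recommendations: results[2] });
      }).catch(function (err) {
        res.status(502).json({ error: err.message });
      });
    });

    app.listen(3000);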
It's interesting to note, despite the memory leak, the incredible amount of traffic WalMart was handling with its mobile apps running on NodeJS. That's what was more impressive to me.
I'd say the benefit it provides is more personnel-related than anything else.
While there are definitely dedicated server-side and client side JS people, speaking the "same language" helps within teams and allows for some shared responsibilities.
That's appealing to large organizations with decent turnover.
I think you've touched upon the strongest argument for using JavaScript within a server stack. That said, that argument seems less strong for companies the size of Netflix, Yahoo, and so on. They don't exactly have problems recruiting top engineers. Curious stuff.
Recruiting? No, absolutely not. But maintaining? I mean if you can move in and out of server and front side, that might build some lasting engagement with developers.
It also encourages an ecosystem wherein almost any developer out there can work on either side for you.
I don't think they have mail storage, spooling and sending implemented in node.js. I think they use both PHP before and node.js now as a front tier. And this is okay, it's where node.js shines.
See, this confuses me. From working with Node and understanding its core architectural concept, the single event loop, I find Node's worst use case to be front end development.
Server-side rendering takes time, even 10 ms. On a single event loop, that is terrible: with a 10 ms render you can only serve 100 requests/second per process. While it is nice that you can move a request into the backing queue while waiting for data, you're still capped at 100 requests/second.
Where I think Node shines is low level network management. Netflix could use it to pipe video data from one of its boxes out to a user. It's got that kind of work built right into it. As a result, I think that storage, spooling and possibly even sending would be best in Node. Those are largely I/O bound, schlep data from port to port operations.
My understanding of PHP is that it's really hard to have truly global variables. Compare this to Node.js, where JavaScript does this naturally. I can't find it now, but I remember back when Node was young a guy had an issue with his shopping cart system: people's orders were getting mixed up. Turns out he had missed a `var` in a function. PHP, I don't think, could do this easily, since the widest scope you can screw up is a single file. So you're still limited to request scoping.
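To make the failure mode concrete, here's a sketch of the kind of bug described above (the handler and chargeCustomer are made up):

    function chargeCustomer(userId) { console.log('charging', userId); }  // stub

    // In sloppy mode, forgetting `var` creates a property on the global object,
    // so this value is silently shared by every request the process handles.
    function handleCheckout(req) {
      cartOwner = req.user.id;        // oops: no `var`
      // ...async work happens, another request runs and overwrites cartOwner...
      chargeCustomer(cartOwner);      // may now charge the wrong person
    }

    // With `var` (or 'use strict', which turns the typo into a ReferenceError),
    // the variable stays local to this call.
    function handleCheckoutFixed(req) {
      'use strict';
      var cartOwner = req.user.id;
      chargeCustomer(cartOwner);
    }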
Well... there is the `global` keyword, and it takes into account the entire running "stack of files" - so include your config.php and lib.php wrappers, which in turn include other files for your application/framework, then use a `global $var` declaration in every function, and you get a reference to a truly global variable.
Why is that terrible? Throw several worker processes in and you can easily handle 1000 requests/second per VM (rough sketch below).
And no, you don't really need global variables in the front tier that much; it's long-running backend PHP programs (which you should not write in the first place, of course) that end up with long-lived globals.
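A rough sketch of the worker-process point, using node's built-in cluster module (the busy loop just stands in for a ~10 ms server-side render):

    var cluster = require('cluster');
    var http = require('http');
    var os = require('os');

    if (cluster.isMaster) {
      // One worker per core; each worker gets its own event loop.
      os.cpus().forEach(function () { cluster.fork(); });
    } else {
      http.createServer(function (req, res) {
        var start = Date.now();
        while (Date.now() - start < 10) {}   // pretend rendering costs ~10 ms of CPU
        res.end('rendered by worker ' + process.pid + '\n');
      }).listen(3000);   // workers share the port via the master
    }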
Not OP, but I'll chime in. Enterprise software has some unique challenges. Enterprises deal with a galaxy of small systems, a mixture of proprietary and purchased software, a mix of IT staff and consultants, leadership who may or may not have technology as a core competency, frequently a lot of bureaucracy and compliance rules, and lots of long-running maintenance - and as a corollary, you may retain a lot of lower-tier talent if you're recruiting developers to maintain an 8-year-old expense reporting system. All of these scream for safety and risk mitigation more than pushing the envelope: static typing, schemas, battle-tested development tools, lots of vendor support, a big pool of talent to recruit from. There's no appetite for risk-taking, especially when the benefits are so hard for management to understand. As a developer at a consulting company, my peers understand technology very well but clients are 50/50. They mostly standardize on Java or .NET, and I don't think I could even put together a cogent argument for why node would help them at all. Maybe that's personal bias, because I think node is kinda shitty and my limited exposure has been nothing but painful.
Two reasons - mobile and support. Mobile, because everything on the web is moving towards API-based backends decoupled from the presentation layer. Support, because "everyone" knows how to write JavaScript.
A little off-topic, but is there anyone else in the community like me who has a problem liking JavaScript?
I have worked with JavaScript for years, but it was always for DOM manipulation. When it comes to building an app with JavaScript, I feel like it is too fragile to depend on.
Same here, to a greater extent. Why? At the highest level, because it's a language that was chosen based on a whim (not merit) and is now being extensively worked on and shoved down everyone's throats. I like languages, plural. I like new languages. I like to have a choice. (And no, I cannot "simply choose something else" if I have to work with people who say every other choice is invalid by default. And yes, this is the mentality I commonly see.) In a way, JavaScript is the new Java.
The core language (syntax, available directives) hasn't changed much since the time everyone deservedly hated it. Its performance was optimized, it got many more libraries, and it is still the only language supported by browsers. But the question is: why does all of this apply to JavaScript and not Python/Ruby/whatever?
A lot of the benefits people claim for it simply weren't around when they chose to stick with it. It's like saying you chose IIS over Apache because of features of C# 5.0 while ignoring the fact that you chose IIS in 2004, when C# 5.0 was not around. It's a dishonest argument.
Speaking of which. How can people claim that JS benefits from the same language running on client and server when 90% of server-side libraries do not work on the client and there are potentially major differences even in core library (e.g. forEach)?
Also, a huge part of the JS ecosystem is kludges, crutches and workarounds that make the stack way, way more complex than it should be. (E.g. source maps. Imagine someone minifying PHP for faster parsing and then asking everyone to use some source-mapping technology for debugging. They would be laughed out of any conference or discussion.) This mentality permeates everything. I mean, seriously, require directives need to be implemented as a library? Namespace isolation is a closure-based hack? Can you imagine stuff like that being tolerated for any other language?
>Imagine someone minifying PHP for faster parsing and then asking everyone to use some source mapping technology for debugging. They would be laughed out of any conference or discussion.
Source maps or debugging symbols have been accepted for compiled languages for a long time.
I agree there are some serious problems. But a lot of the problems are worked on, like modules. I agree they should be in the language and it looks like they will be in ES6.
Regarding the different languages on the server and client, you don't really have a choice. JS is what runs in the browsers. If you want the same on the server, the server has to change. Very few people seem to want Java in the browsers again. And you can write gateway systems in Node that then connect to servers further back which may still use Java/C#/Python or something else. This seems to be the natural route for many projects.
Are you kidding me? Source maps allow you to debug preprocessed as well as minified code. This allows seamless integration of multiple languages into the same ecosystem, libraries, etc.
You develop a language that is sent to the client in plain text. Then you write a lot of code in it. Then you realize that with so much code, file transfers take too long. You write a utility to strip out spaces and comments, as well as change variable names, so files are smaller. Okay, so now you have unreadable plaintext files. After juggling minified and unminified versions of the same code for some years you realize that debugging like that sucks. You develop a new format that allows you to map minified files back to unminified ones. This format requires support on the browser side, as well as in the minification utility, and it also requires the author of the minified library to provide you with two additional files. You declare victory and pat yourself on the back. The whole process only takes a few years.
Meanwhile, people were debugging binaries for several decades before your language was even developed.
Are you kidding? I can't remember the last time a node.js or javascript related article on the HN homepage didn't have a barrage of highly up-voted "javascript sucks" themed comments. I'm beginning to wonder the opposite - is there anyone like me in the community that doesn't have a problem liking javascript?
I've been leaning toward functional programming and libraries-over-frameworks for a while now (influenced largely by the Clojure community), and when I realized that the Javascript world embraces these views, I was able to make peace with Javascript.
I'm in the same boat. It's a necessary evil though and it's not going anywhere anytime soon so I try and force myself to like it, CoffeeScript helps, planning to try out TypeScript soon as well.
Any particular reason why you feel like server-side JavaScript feels fragile? Also if by DOM manipulation you mean libraries like jQuery, have you tried more declarative approaches found in frameworks like Angular and React?
I'm no zealot, and don't think these are absolute truths by any means but I've worked extensively on enterprise solutions in both C#/.Net and Node.js and can respond to your points below.
1. It turns out the reactor pattern is a good way to build network servers that scale reasonably well with minimal developer effort. In my experience developers tend to avoid writing threaded code in synchronous languages, and threads don't show up very often in run-of-the-mill solutions. To be sure, JavaScript's lack of threads is a blemish, but oddly it forced the community to focus heavily on distributed architectures, and as a result most of the tooling encourages distributed systems, which is a win for some common use cases.
2. Type safety is a hotly debated topic and I don't think you can find resolution on this one. For some people, "weakest of weak typing" is a benefit.
3. Static vs. dynamic, see #2.
4. Do you mean that everything must be enforced by convention instead of static code analysis, compile-time checks and type safety? If so, I think you're making the same point three times.
Erlang is a beautiful language and may be the most correct in some sense, but it doesn't look nearly as attractive to the kinds of teams I work on when you start to consider developer availability, library ecosystem, tooling, IDE support, documentation and community.
I think your answers to 2-3 are not real answers. You're not giving an explanation that would convince someone who doesn't share your opinion. You're simply saying that you don't care about their concerns, with nothing to demonstrate that your viewpoint is somehow more valid.
I'm just conceding that these argument cannot be won. Tomes have been written and the argument becomes unproductive very quickly. It's like people who believe in God arguing with those who do not. It's not about JavaScript or Node, it's about your entire belief system.
An addendum to #1... node's i/o tooling and other libraries do run multiple threads against a pool, listening for events, then resolving those listeners into the event-loop. The main JS/v8 event-loop is single threaded.
1. This is rarely a problem. You want to be careful about writing code that will take a long time to execute, but most long-running APIs are async which helps a lot.
2. Weak typing is a nice feature for prototyping, but for larger projects a stronger type system is better at catching bugs. Many JavaScript programmers (myself included) are used to using separate systems to check types for them. My team uses the Closure Compiler, which, along with compiling the JS to a more optimized version, is also happy to check all your types and fails the build if your types don't line up (a small sketch of what that looks like follows after this list).
3. Again, I believe this is something that you can make sure that the compiler catches. And, of course, anyone can write bad code (in any language); if you're overwriting stuff halfway across the codebase, then that's "bad code" and you should avoid doing that (and during the code review stage, you should be making sure that your coworkers don't do that either).
4. Like for any language, have guidelines for how you write code, and enforce some of your conventions with the compiler and with linting tools. It leads to a more consistent codebase.
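For anyone who hasn't seen it, the Closure approach mentioned in #2 is roughly this: plain JS plus JSDoc annotations, and a compile run with --jscomp_error=checkTypes refuses to build when the types don't line up. The function below is just an invented example:

    /**
     * @param {string} name
     * @param {number} retries
     * @return {string}
     */
    function describeAttempt(name, retries) {
      return name + ' has ' + retries + ' retries left';
    }

    // The compiler flags this call at build time: the arguments are swapped.
    describeAttempt(3, 'alice');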
All of these are valid points except 1. Node's async IO is one of its strengths. Contrast with Rails, for example, where the standard practice for concurrency is to spawn multiple processes (or, less commonly, threads). How many Rails processes can you fit on one machine? 5-10? Node can handle thousands of concurrent connections, all on a single thread. And when you hit those limits, you can always continue scaling with multiple processes like you would with Rails.
Doing CPU-bound computation in your application server is an anti-pattern. IO-bound computation, however, is where Node excels.
The thing to realize is that pre-emptive multitasking is costly. It is convenient for the programmer (the programmer doesn't need to worry about blocking and locking up the rest of the program), but it comes at a cost. Lightweight "green" threads, or equivalently, Node's evented dispatch mechanism, are a much more efficient use of the CPU. For applications composed of short-lived computations (e.g., < 1ms), it doesn't make sense to interrupt them and context switch. It's more performant to let them complete and then switch. You just have to make sure you aren't doing any CPU-intensive computations in the app server—which you probably shouldn't be doing anyway.
The downside of Node's approach is callback hell. And that's why we have Go.
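A toy illustration of that anti-pattern (fib is just a stand-in for any heavy synchronous work, and the timeout stands in for a DB call):

    var http = require('http');

    function fib(n) { return n < 2 ? n : fib(n - 1) + fib(n - 2); }  // deliberately slow

    http.createServer(function (req, res) {
      if (req.url === '/report') {
        // CPU-bound and synchronous: every other connection stalls until this returns.
        res.end(String(fib(40)) + '\n');
      } else {
        // IO-bound: the event loop stays free to serve other requests in the meantime.
        setTimeout(function () { res.end('ok\n'); }, 50);
      }
    }).listen(3000);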
Cooperative vs. Preemptive multitasking is not the same as Light threads vs. OS threads/processes. Erlang (and also Haskell for example) does preemptive multitasking with light threads. And that's what I'm talking about.
Yeah, preemptive multitasking may be costly. But as you say, when you have lots of users, most of their tasks are sleeping or waiting for timers. That's where preemptive multitasking excels. You can have millions of sleeping tasks and several (at times) doing real work. That's why Erlang is "scalable" and node.js is not. Especially if you need to have state in your workers and if you need them to live longer.
Saying that node's cooperative multitasking excels in IO-bound computation is a clear sign that you haven't tried Erlang.
I have another counter-point for this, as a Rails developer tinkering with golang recently. I think Go got this correct in many ways: lightweight goroutines that scale well, and good IO that does epoll/libuv-style waiting in the background transparently when you read/write. It's an easy-to-understand language for multiprocessing in general. I have no idea why it's not taking off in web development, though.
> How many Rails processes can you fit on one machine? 5-10?
Wat?!
By the end of the '90s, people started the "C10K" effort, aimed at adjusting Linux and Apache so that a high-end machine could support 10k simultaneous connections, all doing IO at the same time. And they were successful.
loxs' argument (and as an Elixir guy I fully agree with him) is that those options are all terrible. The Erlang VM (BEAM) is extraordinary in how it solves that problem: it gives you both preemptive scheduling and green threads, and it supports SMP and clustering across nodes out of the box. I highly recommend taking a look at BEAM and how different it is from everything else out there.
For enterprise companies the ability to move programmers around from frontend to backend (or demand that they're doing both at the same time) is probably a benefit.
Not a good idea, but that's rarely a hindrance for both enterprisey and startup "human resources" management. Reminds me a bit of how we treated torchbearers and henchmen in D&D...
No, they aren't. The only thing "first-class" in Java is classes. They added lambda expressions in Java 8 but they still must be defined inside of a class method. It is still illegal to define a function outside of a class in Java. And Java methods are still not lexical closures; there are some tricks you can do to sort-of emulate this but closures are not a language feature.
> The only thing "first-class" in Java is classes.
And lo and behold, they have decided that an anonymous class with only one method is equivalent to an anonymous function object, and given us syntactic sugar to invoke it without the class declaration boilerplate.
You're essentially doing the same as complaining that conditionals aren't a Smalltalk language feature.
An anonymous inner class with only one method. The "inner" is important. It still has to exist inside of an explicitly declared class. Also, a variable holding a reference to a lambda function cannot be called as a function (no function pointers).
Contrast with JavaScript, where functions are first-class, can be declared at top level, can be passed as arguments to and returned by other functions, can be called by dereferencing a variable, can be nested, can be partially applied, etc.
The minimal syntactic sugar for anonymous inner classes added in Java 8 doesn't even begin to approach the power of the function support in languages like JavaScript. Language-level support for this stuff matters.
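For concreteness, the sort of thing meant here, trivial on purpose:

    // Functions are ordinary values: declared at top level, passed around,
    // returned from other functions, and closed over.
    function greet(greeting, name) { return greeting + ', ' + name; }

    // Partial application via a closure.
    function partial(fn, first) {
      return function (second) { return fn(first, second); };
    }

    var hello = partial(greet, 'hello');
    ['alice', 'bob'].map(hello).forEach(function (line) { console.log(line); });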
They have exactly the same "power," they're just spelled differently. "func()" doesn't let you do anything that you can't do with "callable.call()". It's trivial to do everything that isn't syntactic sugar (for example, partial application, passing as arguments for abstraction) that you mention in Java, if a bit tedious (and often pointless, since the standard APIs generally weren't designed with that in mind.) _A Little Java, A Few Patterns_ came out in 1998; give it a read.
The fact you put the word "power" in scare quotes indicates to me that you don't understand what it means in this context. Here, have a pg essay: http://www.paulgraham.com/power.html
I think what really happens is that Node enables a new type of stack[1]. It doesn't actually step on anything the SPAs aren't already doing. Instead the app becomes isomorphic, so the initial render is done by Node, but the data sources, validation, and heavy backend logic all stay behind an API. That API could certainly be Erlang-powered.
Also to respond to your points...
1. This I see as the most valid argument, but not a deal-breaker (see Walmart, LinkedIn using Node without problems scaling).
2. Check out Flow by Facebook [2] or TypeScript.
3. Not true with require/modules.
4. It's incredibly easy to include a JSLint file in your repo and as a git hook.
> 1. Cooperative multitasking. Really? In 2014? Hello?
I find this sentiment odd. Obviously it's not ideal for many workloads, but for some things it's the sane option. I work with Tornado in python at work and cooperative multitasking is probably the thing I'm thankful for the most. The server is very IO bound, so it keeps logic seemingly synchronous but hasn't shown any throughput issues thus far.
1. Yeah, Erlang beats JS in that regard without contest. Still, it's better than race conditions and deadlocks. (Few mainstream platforms solve this really well, and Erlang is one of them.)
2,3,4. TypeScript, Flow, PureScript add varying degrees of strictness, checking and purity (from low to high, in that order)
not OP, but some issues I get with JS are mainly due to refactoring. I think the tooling isn't as mature as in other languages, and moving around code has proven difficult (oh the joys of `this`!)
Software development is one part creation, one part understanding existing software, and one part changing existing stuff. Javascript has proven a bit difficult on the last part.
This is likely an issue caused by code that is too monolithic in nature. Writing idiomatic node code involves making many small npm modules. Refactoring becomes very easy within the tiny code base of a module, and moving them around is a matter of require them somewhere else.
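A sketch of what that looks like in practice (the module name is made up; in a real project it would be its own package with its own tests and readme):

    // slugify/index.js - one tiny module, easy to test and document in isolation
    module.exports = function slugify(title) {
      return title.toLowerCase().trim().replace(/[^a-z0-9]+/g, '-');
    };

    // elsewhere in the app - moving the code around is just a matter of
    // requiring it from somewhere else
    var slugify = require('./slugify');
    console.log(slugify('Yahoo Mail in 2015'));   // "yahoo-mail-in-2015"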
I don't think you can necessarily overcome the deep-seated knowledge that the language is hot garbage. You can learn tools that reduce the pain, and understand the language well enough to know where its truly horrific spots are.
I don't find it fragile but like Java, PHP, and Perl it's also just not a pleasure to use the way Python is due to Python's design and quality of its community packages.
I see a ton of discussion about the NodeJS decision and almost nothing about the more interesting industry paradigm shift to reactive programming and using functional-style programming in an imperative language.
They're joining the likes of Facebook, Netflix, Square, Microsoft, Instagram, Khan Academy, SoundCloud, Trello, New York Times, and others in adopting reactive extensions.
They are also moving to a Flux-inspired application structure which is supposedly very similar to functional reactive programming, which is what the OP was referencing.
What exactly do you mean by reactive here? As one of the other posters said, react's 're-render the virtual dom' concept doesn't have any direct relationship with the reactive manifesto, though flux might be a better candidate.
Yahoo's React/Flux projects mentioned at the end are intriguing. I have tried implementing Flux architecture in an app and found it really tedious and boilerplatey. I'm still looking out for well-implemented Flux dispatcher libraries.
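For what it's worth, the pattern itself is small; most of the boilerplate comes from wiring stores and views up around something like this. This is not Facebook's Dispatcher, just the bare shape of it:

    // Minimal dispatcher: stores register callbacks, every action goes to every store.
    function Dispatcher() {
      this.callbacks = [];
    }
    Dispatcher.prototype.register = function (callback) {
      this.callbacks.push(callback);
      return this.callbacks.length - 1;   // registration token
    };
    Dispatcher.prototype.dispatch = function (action) {
      this.callbacks.forEach(function (callback) { callback(action); });
    };

    var dispatcher = new Dispatcher();
    var todoStore = { items: [] };

    dispatcher.register(function (action) {
      if (action.type === 'ADD_TODO') {
        todoStore.items.push(action.text);
        // a real store would now emit a change event for the views to re-render
      }
    });

    dispatcher.dispatch({ type: 'ADD_TODO', text: 'find a less boilerplatey dispatcher' });
    console.log(todoStore.items);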
I was hoping to see a conversation around the points addressed in the slides (e.g. how to handle async data in Flux). Instead, it's a bunch of neckbeards whining about people using Node.
With the Opera browser it takes over a second to load the GUI whenever I click something, and the overall design looks like it has been made by someone making his/her first homepage.
No wonder they decided to use something that encourages in-line HTML, where code and design get entangled like a pile of spaghetti - making it almost impossible to maintain.
In the last few years JavaScript has exploded with new frameworks and "compile to JS" languages. But I have yet to see anything close to usable. Maybe it's because I've become "speed blind" after coding JS for over 15 years.
I do not see all the problems people see in JavaScript, until I look at code written by beginners who seem to use every framework out there, over-complicate the code, and name everything with one-letter variables or the names of their favorite pizzas.
It's funny that this showed up on the front page, because yesterday I logged into an old Yahoo email account for the first time in a few years and was absolutely astounded at how awful the UI was.
True, if you adhere to Bootstrap's methods, you'll create a less terrible static UI, but you can still create awful data representation and transitions.
Well, this submission is also about switching to React. The headline on the React website is "A JavaScript library for building user interfaces", so I figured my anecdote was relevant.
The exciting part of this for me would be around improvements in Yahoo - removing old cruft code. There's lot of strangeness and/or flat out errors I run into on Yahoo regularly. Any improvement in that regard could be a big benefit.
On the topic of JavaScript - I love Node as a glue. For larger applications - I just don't know how to structure (architect) a large application on a language like JS. Maybe that's just my ignorance.
Yea, I'm not saying it's a miracle cure, but in general I have found the following to be useful:
1) Break any functionality possible into a separate npm module - not just a separate file with module.exports. Keep the npm module small and break it up further if it gets too big.
2) This module will be small enough that it is trivial to debug, refactor, test, and document. If it gets to a size where these tasks are hard for any reason and you wish for strong typing or something, the module is too big.
3) Please make sure that you do test and document the module. This will make your team way faster in the future.
Yahoo seems to be championing React more than Facebook! With the flux-router-component they are trying to solve something that FB hasn't really addressed and it shows that they're actively trying to share the solutions to practical problems people come across when working with Flux.
Just curious, what are the reasons for 'isomorphic' apps these days? SEO used to be the big reason, but since google and bing can now render most javascript, and since you can use pushState for SEO, that's not really a sufficient reason anymore.
Speed on first load, I'm guessing? Compatibility with screen readers and readability-type things? There are many reasons, and it is simply conceptually better.
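The first-load argument in a nutshell: render the same component to a string on the server, send real HTML, and let client-side React take over from there. A rough sketch against the React API of that era (the Hello component is made up; in later versions renderToString moved to ReactDOMServer):

    var React = require('react');
    var http = require('http');

    // A made-up component, written without JSX so this stays plain node.
    var Hello = React.createClass({
      render: function () {
        return React.createElement('p', null, 'Hello, ' + this.props.name);
      }
    });

    http.createServer(function (req, res) {
      // Same component the browser would mount, rendered to markup up front.
      var markup = React.renderToString(React.createElement(Hello, { name: 'Yahoo' }));
      res.setHeader('Content-Type', 'text/html');
      res.end('<!doctype html><div id="app">' + markup + '</div>');
    }).listen(3000);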
Is finding Clojure people that hard (honest question)?
The Clojure + ClojureScript approach has many more batteries included. You get all the benefits of reusing the codebase on both sides of the fence while the language is solving the "Transactional Store", efficient dirty checks, nice server-side concurrency primitives and many unrelated problems for you.
As someone that's recently started working with Clojure (and soon ClojureScript), I wonder the same thing. My first guess is that people are unfortunately and unjustly turned off by its Lisp pedigree.
I've been using and enjoying React and am excited to learn more about Reagent. It seems like Clojure + Reagent will make development quicker and more enjoyable.
I think that his main issue is/was that node isn't robust enough. I agree, but personally think that promises largely solve that issue. Streams are still a bit icky though.
He also says that node isn't moving fast anymore, which is unfortunately true. The node core team was stuck trying to release 0.12 for quite a while because of some significant changes to internals (V8, AsyncListener). Once 0.12 is out, things should be much easier. On the plus side, the ecosystem is still moving at the same speed which is pretty great, and JS gets new advanced tooling every day: typescript, flow, 5 (!) es6 compilers...
Do you guys think that we are now moving into a new trend on the client side now? React + Flux vs. Angularjs, Backbone.js?
It seems that every job, even backend positions, requires Angularjs or Backbone.js knowledge. Having largely ignored the two and hoped they would die, I am now ready to learn React + Flux to help that along.