No, I think criticism is a good thing. Rants are good. They highlight issues with our current tools and over time that's what leads to progress. Using a plow isn't just different from digging in the dirt with a stick. It's better.
For instance, nulls are clearly a problem. After countless rants we're seeing languages that have better ways to deal with missing or optional values enter the mainstream (Swift, Scala, Rust).
J2EE's original idea to configure a system through separate XML files was heavily criticised for being too bureaucratic. After countless rants we got configuration by convention, better defaults and annotations as part of the source language.
Of course progress is not a straight line and quite often it's not clear what is and isn't progress because there are many trade-offs. But where would we be without a constant process of criticising our tools?
Totally not against solid critique. But it becomes a culture, a signal of intellectual superiority, which is too easy. You can be, and I have been, highly productive in plain ole JavaScript. Same with Java, same with Ruby, same with Python. Are the language debates irrelevant? Absolutely not. But...
“Node.js is one of the worst things to happen to the software industry.” Hey, if it is, then I fully submit that I'm just some stupid fool who should rage-quit the internet and give up, because... wait, no, that would be stupid.
Instead of responding to the fact that JavaScript:
1. Has no adequate threading model.
2. Has no reasonable type system.
3. Has better alternatives on the server side.
...you're making it personal, and about the people. Look at your post; you're just accusing the critics of Node of feeling "intellectually superior". That's just an ad hominem attack. And you think that critics want you to rage quit the internet and give up? No, that's not what critics of Node want. Or at least, that's not what I want.
What I want is for people to either acknowledge the problems with Node and fix them (unlikely), or start using better alternatives on the server side, and start investing more in WebAssembly and languages targeting WebAssembly, so we don't have to use JavaScript any more. I want this because as long as Node/JavaScript are around, I have to either deal with those horrible tools or not take those jobs.
This isn't about feeling superior, it's about improving my life by using tools that aren't godawful.
1. For an async-first environment, which node mostly is, the lack of threading is hardly an issue. The only threading node should ever have is the web worker w/ shared memory & atomic ops that's already going into the Web Platform[1]. I'd say this is an advantage.
2. Typescript is great, it even has strict null checking via union types. Way better than the half-hearted attempt at optional typing you’ll find in Python 3. So if you think Python is a good ecosystem, then node is miles better. Sure, it’s not Scala, but on the other hand interop w/ Javascript and Web Platform is seamless. It’s a trade-off, but one that I think is very much worth taking. Also it compiles in seconds, not hours.
3. You can share code w/ client which is actually useful considering web app logic is moving client-side. See: React.
4. WebAssembly is nowhere near ready. No idea if using it, its tooling, or debugging will be any good for anything other than what emscripten enables right now. It's a very risky investment to be considering that early.
Even if you think WebAssembly will pan out, the languages that will target it already target Javascript. It’s just alternative bytecode, really.
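Point 1's event-loop model can be sketched in a few lines; this is a minimal illustration using standard promises, not anything Node-specific:

```typescript
// Single-threaded concurrency: both simulated requests overlap on the
// event loop, with no locks or shared-memory threads involved.
const delay = (ms: number) => new Promise<void>(r => setTimeout(r, ms));

async function handle(id: number): Promise<string> {
  await delay(10); // simulated non-blocking I/O
  return `done ${id}`;
}

// Both handlers start immediately; total time is ~one delay, not two.
Promise.all([handle(1), handle(2)]).then(results => {
  console.log(results); // ["done 1", "done 2"]
});
```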
2. JS's duck typing is thought to be a "feature" to some and a "bug" to others, so if you think it's a bug, use TypeScript or Flow. They'll give you "reasonable type system[s]".
Process spawning and threading are two different but related mechanisms. The former is much more expensive and hard to use with optimization techniques like pooling. It also forces message passing for IPC rather than shared memory.
> 2. JS's duck typing is thought to be a "feature" to some and a "bug" to others, so if you think it's a bug, use TypeScript or Flow. They'll give you "reasonable type system[s]"
Nothing that transcompiles into JavaScript can fix JavaScript's lack of a native integer.
Not that this should be necessary, but it'd actually be pretty simple for a compiler to fix #2: transform `const i: int = 3;` into `var i = new Int32Array([3]);` and change future references from `i` to `i[0]`.
No clue what the performance implications would be for really heavy uses of this, but it'd at least be a workable solution if you absolutely required a true integer type at run-time.
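The transform described above can be tried by hand; a one-element Int32Array really does behave like a 32-bit integer cell at run time:

```typescript
// Stores into an Int32Array are coerced with ToInt32, so the cell
// truncates fractions and wraps on overflow like a real int32.
const i = new Int32Array([3]); // the "declared int", initialized to 3
i[0] = i[0] + 2.7;             // 5.7 is truncated on store
console.log(i[0]);             // 5
i[0] = 2147483647 + 1;         // overflow wraps around
console.log(i[0]);             // -2147483648
```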
asm.js has valid, correctly-behaving integers, and asm.js works correctly on non-asm.js-aware implementations, so the lack of a native integer doesn't seem like a problem.
The emitted code might have lots of "|0"s in it, but I don't see many people complaining about the beauty of their C compiler's generated object code and the lack of native anything-other-than-integers.
> The emitted code might have lots of "|0"s in it, but I don't see many people complaining about the beauty of their C compiler's generated object code
What are the performance implications of that? It's basically adding an extra operation. Also, having been one to dive into intermediate assembly from time to time, there certainly are people who complain about the obtuseness of object code. Particularly since the generated object code is (out-of-order execution notwithstanding) how the CPU is going to actually execute the instructions.
> the lack of native anything-other-than-integers
Um, what? C has native floating-point types on every platform with an IEEE-754-compliant FPU, and probably on some that don't as well. Pointers are also not integers, though bad programmers frequently coerce them into such because most compilers will let them.
Regarding "|0", there are absolutely no performance implications. An ASM.js optimising compiler will recognize these code patterns and interpret them completely differently (it will not execute a bit OR operation, and instead will only treat |0 as a type hint)
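Concretely, the `|0` idiom pins a value to int32 semantics whether or not an asm.js-aware engine is running the code; a minimal sketch:

```typescript
// Bitwise OR with 0 applies ToInt32: fractions are truncated and the
// result wraps at 32 bits. asm.js compilers read it as a type hint.
function add(a: number, b: number): number {
  a = a | 0;          // coerce arguments to int32
  b = b | 0;
  return (a + b) | 0; // coerce the result so it never stays a double
}
console.log(add(2.9, 3.9));      // 5 — fractions truncated
console.log(add(0x7fffffff, 1)); // -2147483648 — int32 wraparound
```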
There are certain classes of programming and mathematics that really need integers, such as cryptography and fields that need big-number libraries.
That's totally valid, but they shouldn't use Node.js if that's a mission-critical part of the project. Nobody is saying that Node.js is the holy grail, but I do see expectations set that high.
I'll bite on this. Javascript's type system is without question my favorite part of it! JS's object model is incredibly powerful, which is why ES6 classes are implemented basically as syntax sugar, and why so many languages can be transpiled to Javascript.
I also don't think that the lack of threading is as much of a problem as you might think it is. For one, it means you can sidestep a lot of the data ordering problems that you get in a threaded environment.
There's no question that Javascript has shortcomings, but so many of the problems that happen in Javascript come from people treating it like Java, when it's a lot more like Lisp. In many ways, the architecture of Javascript revolves around a very simple, powerful object model in a way that's tough to parallel in most other languages.
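The "classes are sugar" point above is easy to check directly; a minimal sketch, assuming nothing beyond standard semantics:

```typescript
// An ES6/TypeScript class is still a constructor function plus a
// prototype chain underneath.
class Point {
  constructor(public x: number) {}
  double(): number { return this.x * 2; }
}

const p = new Point(21);
console.log(typeof Point);                                 // "function"
console.log(Object.getPrototypeOf(p) === Point.prototype); // true
console.log(p.double());                                   // 42
```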
I'm guessing he meant a static type system. This is the formal meaning of "type" in programming language theory. What you describe is a system of runtime tags.
I'd actually like to list the worst things (IMO) to happen to the software industry.
(1) The marketing of computers at boys, resulting in a generation of girls being excluded. (2) Software patents suffocating innovators. (3) The DMCA & its international equivalents. (4) Daylight savings time. (5) A generation of programmers being taught that anything other than "OOP best practice" was heresy.
I'm sure there are plenty I'm missing.
Does one particular programming environment deserve to be listed with that stuff? Personally I think it's a pretty ludicrous suggestion.
It always annoys me whenever programmers complain about time zones and daylight savings time because of the difficulty and inconvenience of handling them in software. It gets things backwards. Software should model the (human) world*; the human world shouldn't be changed to model what's most convenient for software.
* That doesn't mean you can't have a simpler model that you translate to/from.
> Software should model the (human) world* ; the human world shouldn't be changed to model what's most convenient for software.
Agreed, but my point is that daylight savings time is a huge human-world complexity that brings a marginal boost to some sectors. There are a ton of negative effects and I question whether there's a net benefit, even before all the software complexity needed to support it.
Oh man, "OOP as gospel"... If there's one thing that I like about Node.js, it's that it made me actually think about when I should take an object oriented approach vs a functional approach.
Dealing with time zones and daylight savings time (which countries have it and which don't) can be a very challenging and messy problem that is really easy to get wrong.
Most do, but the problem is less about the library and more about understanding how to use it.
For example, if you want an event to happen in 12 hours, and it happens to be 9pm the day before daylight savings time, do you schedule it using the timezone-aware API (so it happens at 9am, which is actually 13 hours away) or at 8am (which is 12 hours away, but unintuitive)? What about an event that's supposed to happen every 12 hours in perpetuity?
What happens when the user/device is mobile, and crossing timezones? Which times do you use?
What happens when you're scheduling something far in advance, and the timezone definition itself changes (as happens a few times a year) between the time you scheduled the event and the time it's actually supposed to happen? Does the event adjust to the new definition or follow the original time?
Luckily for many problem domains, the details around this don't matter too much, but this is just the tip of the iceberg with timezone challenges.
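The 9pm fall-back scenario above can be made concrete with plain UTC arithmetic; the offsets are hypothetical (UTC-4 before the change, UTC-5 after), chosen only to illustrate the gap:

```typescript
const HOUR = 3600 * 1000;

// 9pm local the night before fall-back, at UTC-4:
const ninePm = Date.UTC(2016, 10, 6, 1, 0);  // 01:00 UTC
// Exactly 12 elapsed hours later (8am local, now UTC-5):
const twelveHoursLater = ninePm + 12 * HOUR; // 13:00 UTC
// "9am local" the next morning, at UTC-5:
const nineAm = Date.UTC(2016, 10, 6, 14, 0); // 14:00 UTC

// The wall-clock reading is 13 elapsed hours away, not 12:
console.log((nineAm - ninePm) / HOUR); // 13
```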
E.g. a rather trivial example of displaying a hourly graph/table of some measurement, including comparison with yesterday (because there are daily patterns of fluctuation).
DST means that some days have 23 hours and some days have 25 hours. The libraries will help you make the relevant calculations, but now you have to decide whether the "yesterday equivalent" of today's 11:00 is yesterday's 11:00 (23 hours ago) or yesterday's 10:00 (24 hours ago).
For another example, accounting of hours worked: you may have a person who has worked 25 hours in a single day, and such events break some systems.
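The 23/25-hour-day wrinkle is the same arithmetic; again the offsets are hypothetical (UTC-4 falling back to UTC-5):

```typescript
const H = 3600 * 1000;
// Local midnight to local midnight across a fall-back change:
const midnightBefore = Date.UTC(2016, 10, 6, 4, 0); // 00:00 local at UTC-4
const midnightAfter  = Date.UTC(2016, 10, 7, 5, 0); // 00:00 local at UTC-5
console.log((midnightAfter - midnightBefore) / H);  // 25 — a 25-hour "day"
```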
DST adds complexity to the already ugly mess timezones are. Timezones are literally driven by real-world politics, and they can be as messy as only humans can invent.
As well as adding to the already-confusing mess of timezones and date calculations, daylight savings can change frequently, and sometimes at very short notice.
This year, Azerbaijan decided to cancel DST, and agreed the cancellation just 10 days before the scheduled clock change [1].
Egypt cancelled DST with even less notice - just 3 days! [2]
It has to be accounted for in time and date calculations and that poses a new set of challenges.
It is also just one of many geographical, cultural, and temporal challenges that need to be addressed by any business system relying on accurate international dates and timing.
Time and scheduling are a bit tricky, to say the least. It would help if there were some reasonable assumptions you could make - for example, that a day contains 24 hours. DST makes such assumptions false.
Wow! You haven't run into DST issues in software yet?! How I envy you! If you work in the industry, you have a lot of fun unintuitive but sort of neat bugs to look forward to!
Rants that don't propose any alternative solutions have no redeeming use at all. Their only reason to exist is to wrongly make their authors feel smart or skilled.
He proposed a list of languages. And really, there are a lot of good languages to use on the server. JS was never a good language, and we use it on the client side just because it's the only language browsers know.
He doesn't explain why... other than ranting about the callback model (which has been on our desktops forever; most event systems rely on callbacks). The quote in the post also has misleading statements: Node.js does scale in load and performance, which is one reason why people use it instead of Python (which was one of the quote's suggestions).
That was not one of his suggestions, unless you count Stackless Python. It was an example (i.e. Twisted in Python) in support of his point that callbacks are a bad way to structure concurrent programming. His suggestions were Erlang and Go, which are arguably better approaches than Node.js along a purely technical dimension.
What his rant misses is that most technical decisions aren't made on purely technical merit for a host of different reasons.
> His suggestions were Erlang and Go, which are arguably better approaches than Node.js along a purely technical dimension.
I would say that it is based on a subset of the technical dimension. Maybe Erlang and Go might have nicer ways to handle concurrency flows but if it doesn't have a library X to communicate with backend component Y then there is a technical reason not to use it.
No, J2EE's original idea was to separate wheat from chaff, i.e. good programmers from mediocre ones. Good programmers would then work on containers, and mediocre ones would write standardized apps to deploy into those containers, with containers being easily interchangeable. In this worldview it's irrelevant whether those standardized apps are developed with lots of XML, or convention over configuration, or whatever, because mediocre programmers won't complain.
Now as much as this worldview is flawed, this is just one of the manifestations of an attempt to make software development easier, which is, of course, a noble goal that even Node.js aspires to.
> For instance, nulls are clearly a problem. After countless rants we're seeing languages that have better ways to deal with missing or optional values enter the mainstream (Swift, Scala, Rust).
No, they are not. They are a state of an object. The problem is a lack of documentation.
One thing I've seen coming up, at least in some code I've looked at, is the @Nullable annotation, and I've not seen any complaining about null from those who use it.
The idea is to always document when a variable or type can be null. This way you know, with 100% certainty (given a strict type system adherence) that you're not going to erroneously see a null object where it isn't expected.
From this, you have a huge benefit: a performant error state or a better method of representing an issue without throwing an exception.
Riddle me this: how would a Map be implemented if null were stripped from Java? Should it throw an exception if a key is not found? If that is the case, then you would always need to check for this exception, otherwise you're prone to bugs. You've also increased the execution time of your code, in some cases by a huge margin [0], and made it more cumbersome.
This is why I think rants are a big problem. They don't describe the opposition to a statement accurately. They are one person's opinions, and after reading them, if it all sounds nice, you're usually willing to take it all as fact. "Hey, they're writing an article! They must be smarter than me. I'll take their word for it." As a result, our communities don't really evolve; we just spew memes back and forth dictating what someone else thought.
What really matters, in every sense, is reading a discussion had between two experts in opposing camps. That way you can see both sides of the arguments, their weak points, and make an educated choice on what side to agree with.
That being said, what do you think of my points? Can you address them? My opinion is that NPEs can be easily avoided by using very simple measures AND null more closely resembles most problems while being more performant and less cumbersome than throwing exceptions willy-nilly. I'd love to hear how you feel about this.
Also...
> J2EE's original idea to configure a system through separate XML files was heavily criticised for being too bureaucratic. After countless rants we got configuration by convention, better defaults and annotations as part of the source language.
I'd say that is a good example of discussion, brain storming, and coming up with a much more accurate representation of the ideas we are trying to convey (the use of annotations to configure methods).
The problem with null as implemented is that it type-checks as a value of any type. This means that anywhere you have a value of some type, it may be a valid value of that type, or it may be null. If you think about it, this is very odd. What if the number 42 were considered a valid value of every type, such that it were usually necessary to check whether you actually have 42 before using a value? That seems ridiculous, but it's almost exactly how null works.
I'm someone who uses @Nullable and complains about null, so now you've met one of us! The problem with @Nullable is that it's mostly useful when all your code uses it, but the compiler will not force you to do so. It is a Sisyphean task. (But at least Java has @Nullable - most languages with null don't have anything like that.)
In a parallel universe version of Java with no nulls, a Map implementation would return Optional.<Foo>empty(). This seems similar to null, but with uglier syntax, but critically, it is not of type Foo. In order to get a valid Foo, for instance to pass to a method that takes one, you must unwrap the Optional you have and if it is empty you simply can't call that method with a Foo because you don't have one. The advantage here is that this method taking a Foo now knows for sure it has one and does not need to bother checking that assumption. How pleasant!
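The same contract can be sketched in TypeScript, where `T | undefined` under strictNullChecks plays the role of Optional; the names here are illustrative:

```typescript
// A lookup that can fail returns `number | undefined`; the compiler
// refuses to pass it where a plain number is required until you check.
function lookup(m: Map<string, number>, key: string): number | undefined {
  return m.get(key);
}

function double(n: number): number { // knows for sure it has a number
  return n * 2;
}

const m = new Map([["a", 21]]);
const v = lookup(m, "a");
// double(v);               // compile error: v might be undefined
if (v !== undefined) {
  console.log(double(v));   // 42 — v is narrowed to number here
}
```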
Which would have been a great idea if it had been in Java 1.0 :-(
Alas, the horse has left the barn.
Much like immutability: if it had been the default, with a "DANGER: mutation ahead" keyword required otherwise, that would arguably have been good. But it's too late, now. (Java) Beans, beans, the magical fruit...
I mean when people say Nulls are bad I don't think they necessarily mean they are categorically bad, only that allowing every single object in a system to have a null value without any supporting language features requires a lot of discipline. I don't know anyone that's used something like Maybe[String] and felt it was categorically no better than 'null' values, but on some level they are both features that express the same thing. Just one of them is more coherent and targeted.
When people say that null was a billion dollar mistake I don't think they were referring to any possible implementation of a value that indicates nonexistence.
It's not "removing a possible value from your map", it's having your type system explicitly check that you're handling the edge case at compile time. This works in practice in a wide variety of languages.
Maps should do whatever ArrayList does when your index is out of bounds. And so long as null is a value that could be in the collection, lookup should definitely not return null.
If a lookup fails, it is often a bug or a violation of a constraint that you shouldn't be handling. Throw an exception. If a lookup is expected to fail in normal execution, you can use an Optional to mash together a check and lookup in a typesafe way. Lookup failures should not be allowed to propagate as nulls only to be caught as NullPointerExceptions in disparate parts of the code.
Tony Hoare, the computer scientist who introduced null, called it a billion dollar mistake. He's pretty accomplished in his field, and there are better options than null for signifying that a Map doesn't have a particular value, like Option types.